Search Results

Search found 11015 results on 441 pages for 'explain plan'.


  • How does mod_cache work with "must-revalidate" and "max-age"?

    - by Dmitriy Sosunov
    A quick question before I explain my flow: can mod_cache revalidate with If-None-Match only once max-age has expired, when it is configured in reverse proxy mode? My goal is to reduce the number of revalidation requests to our origin server. For instance: the first request goes to the origin server and mod_cache saves the response into the cache according to the Cache-Control: max-age header. Only when max-age has expired should mod_cache revalidate with If-None-Match. Currently, mod_cache revalidates on every request, regardless of whether max-age is defined or not. My configuration is Apache 2.4.3 (Windows); on Linux I see the same behavior that I will show below.
    ServerName proxy.lo
    ProxyRequests Off
    ProxyPreserveHost Off
    Header set Vary "Accept, Content-Type, Content-Encoding, Accept-Language"
    RequestHeader set X-Forwarded-Proto "http"
    # modify headers for user agents
    Header set Cache-Control "private, no-cache, no-store, no-transform"
    CacheQuickHandler off
    CacheDefaultExpire 300
    # the origin server does not provide Last-Modified
    CacheIgnoreNoLastMod On
    CacheIgnoreCacheControl On
    # the origin server defines cache-control: private, no-store only for user agents,
    # therefore I would like to ignore those headers on the proxy server.
    CacheStorePrivate On
    CacheStoreNoStore On
    CacheEnable disk /
    CacheRoot "C:/Apache.Cache"
    CacheDirLevels 5
    CacheDirLength 4
    CacheMinExpire 15
    CacheDetailHeader on
    CacheHeader on
    KeepAlive Off
    ProxyPass / http://origin.lo/
    ProxyPassReverse / http://origin.lo/
    I have also turned on the debug log level to see how mod_cache handles the content for caching. I am including this to show that mod_proxy always decides that the content isn't fresh. Why? max-age was provided (see below).
    [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(624): [client 192.168.1.100:63741] AH00698: cache: Key for entity /testpage?(null) is http://proxy.lo/testpage?
    [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(569): [client 192.168.1.100:63741] AH00709: Recalled cached URL info header http://proxy.lo/testpage?
    [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(865): [client 192.168.1.100:63741] AH00720: Recalled headers for URL http://proxy.lo/testpage?
    [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(320): [client 192.168.1.100:63741] AH00695: Cached response for /testpage isn't fresh. Adding/replacing conditional request headers.
    [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(414): [client 192.168.1.100:63741] AH00757: Adding CACHE_SAVE filter for /testpage
    [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(448): [client 192.168.1.100:63741] AH00759: Adding CACHE_REMOVE_URL filter for /testpage
    [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] mod_proxy.c(1068): [client 192.168.1.100:63741] AH01143: Running scheme http handler (attempt 0)
    [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1976): AH00942: HTTP: has acquired connection for (origin.lo)
    [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2029): [client 192.168.1.100:63741] AH00944: connecting http://origin.lo/testpage to origin.lo:80
    [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2151): [client 192.168.1.100:63741] AH00947: connected /testpage to origin.lo:80
    [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2554): AH00962: HTTP: connection complete to 192.168.1.100:80 (origin.lo)
    [Sun Nov 04 11:58:42.903890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1991): AH00943: http: has released connection for (origin.lo)
    [Sun Nov 04 11:58:42.903890 2012] [headers:debug] [pid 6492:tid 1400] mod_headers.c(800): AH01502: headers: ap_headers_output_filter()
    [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1190): [client 192.168.1.100:63741] AH00769: cache: Caching url: /testpage
    [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1196): [client 192.168.1.100:63741] AH00770: cache: Removing CACHE_REMOVE_URL filter.
    [Sun Nov 04 11:58:42.904890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(1318): [client 192.168.1.100:63741] AH00737: commit_entity: Headers and body for URL http://proxy.lo/testpage? cached.
    The first request to the origin server, without mod_proxy, to http://origin.lo/:
    GET http://origin.lo/testpage HTTP/1.1
    Host: origin.lo
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
    Accept: application/json
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    The first response from the origin, without mod_proxy:
    HTTP/1.1 200 OK
    Cache-Control: must-revalidate, proxy-revalidate, max-age=30
    Content-Type: application/json; charset=utf-8
    ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET
    Date: Sun, 04 Nov 2012 10:11:01 GMT
    Content-Length: 1877
    So I assumed that revalidation should occur only 30 seconds after the successful response. Is that right? Let's check. Within 30 seconds, Google Chrome didn't make any request to the origin server to revalidate and returned the response from its local cache.
    When max-age expires, Google Chrome performs a request to revalidate:
    GET http://origin.lo/testpage HTTP/1.1
    Host: origin.lo
    Connection: keep-alive
    Cache-Control: max-age=0
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
    Accept: application/xml
    If-None-Match: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    and the response:
    HTTP/1.1 304 Not Modified
    Cache-Control: must-revalidate, proxy-revalidate, max-age=30
    ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET
    Date: Sun, 04 Nov 2012 10:16:20 GMT
    As you can see, everything works as expected: the user agent revalidates only when max-age has expired. Now let's try the same flow through mod_proxy (see the configuration above). The first request:
    GET http://proxy.lo/testpage HTTP/1.1
    Host: proxy.lo
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
    Accept: application/json
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    and the response was:
    HTTP/1.1 200 OK
    Date: Sun, 04 Nov 2012 10:23:36 GMT
    Server: Apache
    Cache-Control: private, no-cache, no-store, no-transform
    Content-Type: application/json; charset=utf-8
    ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
    Content-Length: 1932
    Vary: Accept,Content-Type,Content-Encoding,Accept-Language
    X-Cache: MISS from proxy.lo
    X-Cache-Detail: "cache miss: attempting entity save" from proxy.lo
    Connection: close
    OK, let's look at the disk cache and see how the request and response were stored (I cut the binary data):
    http://proxy.lo/testpage?
    Cache-Control: private, no-cache, no-store, no-transform
    Content-Type: application/json; charset=utf-8
    ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
    Date: Sun, 04 Nov 2012 10:27:15 GMT
    Content-Length: 1932
    Vary: Accept, Content-Type, Content-Encoding, Accept-Language
    Host: proxy.lo
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
    Accept: application/json
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    X-Forwarded-Proto: http
    Cache-Control: max-age=300, must-revalidate
    X-Forwarded-For: 192.168.1.100
    X-Forwarded-Host: proxy.lo
    X-Forwarded-Server: origin.lo
    OK, what do we see?
    We see that the first request was stored with max-age=300 and must-revalidate. That looks good to me, so let's make the next call:
    GET http://proxy.lo/testpage HTTP/1.1
    Host: proxy.lo
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
    Accept: application/json
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    and the second response from mod_proxy:
    HTTP/1.1 200 OK
    Date: Sun, 04 Nov 2012 10:31:58 GMT
    Server: Apache
    Cache-Control: private, no-cache, no-store, no-transform
    ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
    Content-Length: 1932
    Vary: Accept,Content-Type,Content-Encoding,Accept-Language
    X-Cache: REVALIDATE from proxy.lo
    X-Cache-Detail: "conditional cache hit: entity refreshed" from proxy.lo
    Connection: close
    Content-Type: application/json; charset=utf-8
    SO, MY QUESTION IS: WHY does mod_proxy perform revalidation on every request, even though max-age is defined? N.B. Apache 2.4.3. Thanks, I would be grateful for any help.
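    For reference, the flow the question expects (serve from the cached copy while max-age is unexpired, then send a conditional GET with If-None-Match and refresh on a 304) can be written down in a few lines. The sketch below is not mod_cache and does not explain the proxy's behaviour; it is only a minimal C#/HttpClient illustration of that expected freshness-and-revalidation logic. The URL is taken from the question; the in-memory cache fields and the class name are assumptions made purely for the example.
    // Minimal sketch of max-age freshness plus If-None-Match revalidation (illustrative only).
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class RevalidationDemo
    {
        // The cached entity: body, validator, freshness lifetime and the moment it was stored.
        static string _cachedBody;
        static EntityTagHeaderValue _etag;
        static TimeSpan _maxAge;
        static DateTimeOffset _storedAt;

        static readonly HttpClient Client = new HttpClient();

        static async Task<string> GetAsync(string url)
        {
            // While the copy is fresh (age < max-age) no request should leave at all.
            if (_cachedBody != null && DateTimeOffset.UtcNow - _storedAt < _maxAge)
                return _cachedBody;

            var request = new HttpRequestMessage(HttpMethod.Get, url);
            if (_etag != null)
                request.Headers.IfNoneMatch.Add(_etag);   // conditional GET once stale

            var response = await Client.SendAsync(request);
            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // 304: keep the body and restart the freshness clock
                // (a real cache would also merge the 304 response headers).
                _storedAt = DateTimeOffset.UtcNow;
                return _cachedBody;
            }

            // 200: store the body, ETag and max-age for the next call.
            _cachedBody = await response.Content.ReadAsStringAsync();
            _etag = response.Headers.ETag;
            _maxAge = response.Headers.CacheControl?.MaxAge ?? TimeSpan.Zero;
            _storedAt = DateTimeOffset.UtcNow;
            return _cachedBody;
        }

        static async Task Main()
        {
            var first = await GetAsync("http://origin.lo/testpage");   // unconditional GET, 200
            var second = await GetAsync("http://origin.lo/testpage");  // within max-age: served locally
            Console.WriteLine(first == second);
        }
    }
    If mod_cache honoured the stored max-age in the same way, the second request in the walkthrough above would be answered from the disk cache with no conditional request to origin.lo, which is the behaviour the question is after.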


  • tmux: Suddenly, cannot horizontally split

    - by A__A__0
    As root, using a reasonably default .profile and .shrc and an empty tmux.conf, I am unable to split the window horizontally. There are a number of cases to consider so I'll list them clearly. Using the keybinding + empty configuration: nothing happens Using the keybinding + my configuration: a bell is generated, nothing else; occasionally, the split will appear and disappear immediately (maybe it always does this, but I'm connecting over ssh so it may not make it through) Using tmux split-window -h with any config: tmux immediately exits I've posted here in order the server and client verbose logs generated by tmux -v during the third case: server started, pid 9523 socket path /tmp/tmux-0/default new client 7 got 100 from client 7 got 101 from client 7 got 102 from client 7 got 103 from client 7 got 104 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 106 from client 7 got 200 from client 7 cmdq 0x801c6e080: new-session (client 7) new term: xterm xterm override: XT xterm override: Ms ]52;%p1%s;%p2%s xterm override: Cs ]12;%p1%s xterm override: Cr ]112 xterm override: Ss [%p1%d q xterm override: Se [2 q new key Oo: 0x1021 (KP/) new key Oj: 0x1022 (KP*) new key Om: 0x1023 (KP-) new key Ow: 0x1024 (KP7) new key Ox: 0x1025 (KP8) new key Oy: 0x1026 (KP9) new key Ok: 0x1027 (KP+) new key Ot: 0x1028 (KP4) new key Ou: 0x1029 (KP5) new key Ov: 0x102a (KP6) new key Oq: 0x102b (KP1) new key Or: 0x102c (KP2) new key Os: 0x102d (KP3) new key OM: 0x102e (KPEnter) new key Op: 0x102f (KP0) new key On: 0x1030 (KP.) 
new key OA: 0x101d (Up) new key OB: 0x101e (Down) new key OC: 0x1020 (Right) new key OD: 0x101f (Left) new key [A: 0x101d (Up) new key [B: 0x101e (Down) new key [C: 0x1020 (Right) new key [D: 0x101f (Left) new key OH: 0x1018 (Home) new key OF: 0x1019 (End) new key [H: 0x1018 (Home) new key [F: 0x1019 (End) new key Oa: 0x501d (C-Up) new key Ob: 0x501e (C-Down) new key Oc: 0x5020 (C-Right) new key Od: 0x501f (C-Left) new key [a: 0x901d (S-Up) new key [b: 0x901e (S-Down) new key [c: 0x9020 (S-Right) new key [d: 0x901f (S-Left) new key [11^: 0x5002 (C-F1) new key [12^: 0x5003 (C-F2) new key [13^: 0x5004 (C-F3) new key [14^: 0x5005 (C-F4) new key [15^: 0x5006 (C-F5) new key [17^: 0x5007 (C-F6) new key [18^: 0x5008 (C-F7) new key [19^: 0x5009 (C-F8) new key [20^: 0x500a (C-F9) new key [21^: 0x500b (C-F10) new key [23^: 0x500c (C-F11) new key [24^: 0x500d (C-F12) new key [25^: 0x500e (C-F13) new key [26^: 0x500f (C-F14) new key [28^: 0x5010 (C-F15) new key [29^: 0x5011 (C-F16) new key [31^: 0x5012 (C-F17) new key [32^: 0x5013 (C-F18) new key [33^: 0x5014 (C-F19) new key [34^: 0x5015 (C-F20) new key [2^: 0x5016 (C-IC) new key [3^: 0x5017 (C-DC) new key [7^: 0x5018 (C-Home) new key [8^: 0x5019 (C-End) new key [6^: 0x501a (C-NPage) new key [5^: 0x501b (C-PPage) new key [11$: 0x9002 (S-F1) new key [12$: 0x9003 (S-F2) new key [13$: 0x9004 (S-F3) new key [14$: 0x9005 (S-F4) new key [15$: 0x9006 (S-F5) new key [17$: 0x9007 (S-F6) new key [18$: 0x9008 (S-F7) new key [19$: 0x9009 (S-F8) new key [20$: 0x900a (S-F9) new key [21$: 0x900b (S-F10) new key [23$: 0x900c (S-F11) new key [24$: 0x900d (S-F12) new key [25$: 0x900e (S-F13) new key [26$: 0x900f (S-F14) new key [28$: 0x9010 (S-F15) new key [29$: 0x9011 (S-F16) new key [31$: 0x9012 (S-F17) new key [32$: 0x9013 (S-F18) new key [33$: 0x9014 (S-F19) new key [34$: 0x9015 (S-F20) new key [2$: 0x9016 (S-IC) new key [3$: 0x9017 (S-DC) new key [7$: 0x9018 (S-Home) new key [8$: 0x9019 (S-End) new key [6$: 0x901a (S-NPage) new key [5$: 0x901b (S-PPage) new key [11@: 0xd002 (C-S-F1) new key [12@: 0xd003 (C-S-F2) new key [13@: 0xd004 (C-S-F3) new key [14@: 0xd005 (C-S-F4) new key [15@: 0xd006 (C-S-F5) new key [17@: 0xd007 (C-S-F6) new key [18@: 0xd008 (C-S-F7) new key [19@: 0xd009 (C-S-F8) new key [20@: 0xd00a (C-S-F9) new key [21@: 0xd00b (C-S-F10) new key [23@: 0xd00c (C-S-F11) new key [24@: 0xd00d (C-S-F12) new key [25@: 0xd00e (C-S-F13) new key [26@: 0xd00f (C-S-F14) new key [28@: 0xd010 (C-S-F15) new key [29@: 0xd011 (C-S-F16) new key [31@: 0xd012 (C-S-F17) new key [32@: 0xd013 (C-S-F18) new key [33@: 0xd014 (C-S-F19) new key [34@: 0xd015 (C-S-F20) new key [2@: 0xd016 (C-S-IC) new key [3@: 0xd017 (C-S-DC) new key [7@: 0xd018 (C-S-Home) new key [8@: 0xd019 (C-S-End) new key [6@: 0xd01a (C-S-NPage) new key [5@: 0xd01b (C-S-PPage) new key [I: 0x1031 ((null)) new key [O: 0x1032 ((null)) new key OP: 0x1002 (F1) new key OQ: 0x1003 (F2) new key OR: 0x1004 (F3) new key OS: 0x1005 (F4) new key [15~: 0x1006 (F5) new key [17~: 0x1007 (F6) new key [18~: 0x1008 (F7) new key [19~: 0x1009 (F8) new key [20~: 0x100a (F9) new key [21~: 0x100b (F10) new key [23~: 0x100c (F11) new key [24~: 0x100d (F12) new key [2~: 0x1016 (IC) new key [3~: 0x1017 (DC) replacing key OH: 0x1018 (Home) replacing key OF: 0x1019 (End) new key [6~: 0x101a (NPage) new key [5~: 0x101b (PPage) new key [Z: 0x101c (BTab) replacing key OA: 0x101d (Up) replacing key OB: 0x101e (Down) replacing key OD: 0x101f (Left) replacing key OC: 0x1020 (Right) spawn: /bin/sh -- session 0 created writing 207 to client 7 
got 208 from client 7 input_parse: '#' ground input_parse: ' ' ground keys are 7 ([?1;2c) received service class 1 complete key [?1;2c 0xfff keys are 1 (t) complete key t 0x74 input_parse: 't' ground keys are 1 (m) complete key m 0x6d input_parse: 'm' ground keys are 1 (u) complete key u 0x75 input_parse: 'u' ground keys are 1 (x) complete key x 0x78 input_parse: 'x' ground keys are 1 ( ) complete key 0x20 input_parse: ' ' ground keys are 1 (s) complete key s 0x73 input_parse: 's' ground keys are 1 (p) complete key p 0x70 input_parse: 'p' ground keys are 1 (l) complete key l 0x6c input_parse: 'l' ground keys are 1 (i) complete key i 0x69 input_parse: 'i' ground keys are 1 (t) complete key t 0x74 input_parse: 't' ground keys are 1 (-) complete key - 0x2d input_parse: '-' ground keys are 1 (d) complete key d 0x64 input_parse: 'd' ground keys are 1 () complete key 0x7f input_parse: '' ground input_c0_dispatch: ' input_parse: '' ground input_parse: '[' esc_enter input_parse: 'K' csi_enter input_csi_dispatch: 'K' "" "" keys are 1 (w) complete key w 0x77 input_parse: 'w' ground keys are 1 (i) complete key i 0x69 input_parse: 'i' ground keys are 1 (n) complete key n 0x6e input_parse: 'n' ground keys are 1 (d) complete key d 0x64 input_parse: 'd' ground keys are 1 (o) complete key o 0x6f input_parse: 'o' ground keys are 1 (w) complete key w 0x77 input_parse: 'w' ground keys are 1 ( ) complete key 0x20 input_parse: ' ' ground keys are 1 (-) complete key - 0x2d input_parse: '-' ground keys are 1 (h) complete key h 0x68 input_parse: 'h' ground keys are 1 ( ) complete key 0xd input_parse: ' ' ground input_c0_dispatch: ' input_parse: ' ' ground input_c0_dispatch: ' new client 13 got 100 from client 13 got 101 from client 13 got 102 from client 13 got 103 from client 13 got 104 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 106 from client 13 got 200 from client 13 cmdq 0x801c6e160: split-window -h (client 13) spawn: /bin/sh -- writing 203 to client 13 input_parse: '#' ground input_parse: ' ' ground input_parse: '#' ground input_parse: ' ' ground lost client 13 session 0 destroyed writing 203 to client 7 got 205 from client 7 writing 204 to client 7 lost client 7 got 207 from server got 203 from server got 204 from server There are some other peculiarities: With a newly created user (from which I overwrote root's .profile and .shrc, tmux works perfectly. Occasionally (twice out of the 50 or so times I've tested it), the splitting will work fine once in a session. (This happened for example when I ran ktrace on tmux, which I can also post) To explain the 'suddenly' part of the title: when I started my newly updated mysql56-server, tmux immediately exited and lost the session. Recently I changed architectures, from FreeBSD 10.0 i386 to amd64, and I am still working through shared library incompatibilities. I suspect that this could be involved, but I can't imagine how an incompatibility of this sort could result in such a specific, isolated failure.


  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!
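    Since the whole point is to run these checks before code goes back into source control, it can be handy to wrap one of them in a small console tool. The sketch below is a hedged C# (ADO.NET) take on the first listing, simplified so that it only compares the base type names of parameters and ignores length, precision and XML schema collections; the connection string is a placeholder to replace with your own, and the class name and query text are invented for the example.
    // Minimal pre-check-in sniffer: parameters that share a name but not a datatype.
    using System;
    using System.Data.SqlClient;

    class ParameterSmellCheck
    {
        // Simplified version of the parameter check: base type names only.
        const string Query = @"
            WITH userParameter AS (
                SELECT  REPLACE(p.name, '@', '') AS ParameterName,
                        OBJECT_SCHEMA_NAME(p.object_id) + '.' + OBJECT_NAME(p.object_id) AS ObjectName,
                        t.name AS DataType
                FROM    sys.parameters p
                JOIN    sys.types t ON p.user_type_id = t.user_type_id
                WHERE   OBJECT_SCHEMA_NAME(p.object_id) <> 'sys' AND p.parameter_id > 0)
            SELECT  ObjectName, ParameterName, DataType
            FROM    userParameter
            WHERE   ParameterName IN (SELECT ParameterName
                                      FROM   userParameter
                                      GROUP  BY ParameterName
                                      HAVING MIN(DataType) <> MAX(DataType))
            ORDER BY ParameterName;";

        static void Main()
        {
            // Placeholder connection string; point it at the database you want to sniff.
            var connectionString = "Server=.;Database=AdventureWorks;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(Query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each row is a parameter whose name is reused with a different base type.
                        Console.WriteLine("{0}.@{1}  ->  {2}",
                            reader.GetString(0), reader.GetString(1), reader.GetString(2));
                    }
                }
            }
        }
    }
    Swapping the embedded query for the columns-and-parameters version above turns the same little harness into the broader check.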


  • CodePlex Daily Summary for Friday, November 26, 2010

    CodePlex Daily Summary for Friday, November 26, 2010Popular ReleasesMath.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.02: Whats NewNew Tax Filter, You can now select which Terms you want to Use.TextGen - Another Template Based Text Generator: TextGen v0.2: This is the first version of TextGen exposing its core functionality to COM. See the Access demo (Access 2000 file format) included in the package. For installation and usage instructions see ReadMe.txt. Have fun and provide feedback!Minecraft GPS: Minecraft GPS 1.1: 1.1 Release New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010Typps (formerly jiffycms) wysiwyg rich text HTML editor for ASP.NET AJAX: Typps 2.9: -When uploading files (not images), through the file uploader and the multi-file uploader, FileUploaded and MultiFileUploaded event handlers were reporting an empty event argument, this is fixed now. -Fixed also url field not updating when uploading a file ( not image)Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.SQL Monitor: SQL Monitor 1.4: 1.added automatically load sql server instances 2.added friendly wait cursor 3.fixed problem with 4.0 fx 4.added exception handlingLateBindingApi.Excel: LateBindingApi.Excel Release 0.7f (fixed): Unterschiede zur Vorgängerversion: - XlConverter.ToRgb umbenannt zu XlConverter.ToDouble - XlConverter.GetFileExtension hinzugefügt (.xls oder .xlsx) - Insert Methoden+Overloads für Range und ShapeNodes - Xml Doku im Code entfernt Release+Samples V0.7f: - Enthält Laufzeit DLL und Beispielprojekte Beispielprojekte: COMAddinExample - Demonstriert ein versionslos angebundenes COMAddin Example01 - Background Colors und Borders für Cells Example02 - Font Attributes undAlignment für Cells Examp...Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. 
If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please Note: There is a new option in the setup program to install for "Just Me" or "Everyone." Most people...VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - Released 2010.11.22 FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for all Visual FoxPro Community. Whats is new? Added a third parameter to the BarcodeImage() method Fixed some minor bugs History FoxBarcode v.0.10 - Released 2010.11.19 - 85 Downloads Project page: FoxBarcodeDotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5 ?Document Library features and template added. Resolve issues of templates Improving publishing service performance Opml support added. What is new in DotNetAge 1.1 ? D.N.A Core updatesImprove runtime performance , more stabilize. The DNA core objects model added. Personalization features added that allows users create the personal website, manage their resources, store personal data DynamicUIFixed the PageManager could not move page node bug. ...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager tested on mozilla, safari, chrome, opera, ie 9b/8/7/6MDownloader: MDownloader-0.15.24.6966: Fixed Updater; Fixed minor bugs;WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.01: Added new extensions for - object.CountLoopsToNull Added new extensions for DateTime: - DateTime.IsWeekend - DateTime.AddWeeks Added new extensions for string: - string.Repeat - string.IsNumeric - string.ExtractDigits - string.ConcatWith - string.ToGuid - string.ToGuidSave Added new extensions for Exception: - Exception.GetOriginalException Added new extensions for Stream: - Stream.Write (overload) And other new methods ... Release as of dotnetpro 01/2011Free language translator and file converter: Free Language Translator 2.2: Starting with version 2.0, the translator encountered a major redesign that uses MEF based plugins and .net 4.0. 
I've also fixed some bugs and added support for translating subtitles that can show up in video media players. Version 2.1 shows the context menu 'Translate' in Windows Explorer on right click. Version 2.2 has links to start the media file with its associated subtitle. Download the zip file and expand it in a temporary location on your local disk. At a minimum , you should uninstal...New Projects.NET DroneController: The .NET DroneController makes it easy to write applications that allow you to control an ARDrone quadricopter. BELT (A PowerShell Snapin for IE Browser Automation): BELT is a PowerShell snapin for IE browser automation. BELT makes it easier to control IE by PowerShell. "BELT" originally stands for "Browser Element Locating Tool".BusStationInfo: BusStationInfoDadaist: Dadaist is a random natural language generator that allows creation of random yet understandable (and often crazy) placeholder text for web designers. It could one day replace "Lorem ipsum" altogether! It is developed in C# and is a console application.Darskade LMS: Darskade is a kind of Learning Management System (LMS as a branch of CMS) and its main goal is to help professors and teacher assistants to manage classes, communicate with students, upload course contents ,put assignments and grades on it and more.ESRI for TableTops: An extension of the ESRI Api for use on digital tabletops and multitouch surfaces.Execute a SQL Server Agent Job - SSIS Package: The focus of SSMSAGENTJOBVBS is to explain how you can execute a SQL Server Agent Job remotely with the SSIS package, or DTSX file, as a job step. Look at the source code and dissect. Format.NET: Format.NET is an easy to use library to enable advanced and smart object formatting in .NET projects. Extends the default String.Format(...) allowing property resolution, custom formatting and text alignment.Gallery.net: Gallery.net is a tag-based image management solution, initially targeting for High Schools to manage and organize their large digital image assets. It is developed in C#, ASP.Net MVC3. GroupChallenge: GroupChallenge is a trivia game for one or more simultaneous players to interactively answer questions and submit questions to a hosted game server. Source code includes a WCF Data Service (OData) server, Windows Phone client, and a Silverlight client. Great for User Groups.GSWork: Manage clients and events in your company in a fast and efficient, without relying on a physical file. Ideal for companies with workers with no need to be in the same place.hermesystest1: ????? test ???.HTC Sense Util: Windows Mobile application for managing HTC Sense Tabs. The application will control sense and manage the tab control file.IdentityChecker: IdentityCheckerIndexed Material Splatting for Terrain Rendering Demo: This project is a demo using Indexed Material Splatting technique for terrain renderingIntegra: PFCKinet SDK: Kinect SDKLyra 3: Lyra is a small Windows Application to create and manage song-books. Each song-book may contain an arbitrary number of songs which can be created or edited within Lyra and projected using any screen device (e.g. a beamer). Support for full text search and presentation templates.NETWorking: NETWorking allows developers to easily integrate Internet connectivity into their applications. 
Be it a fault-tolerant distributed cluster or a peer-to-peer system, NETWorking exposes high-performance classes to facilitate application needs in a concise and understandable manner.RAD Platform: Rapid Application Development (RAD) Platform is a free run time report/form/chart designer and generator engine and multi-platform web/windows application servers. New generation of database independent .Net software framework. Design once, run at once on web and windows. Enjoy!Razor Repository Engine: <project name>RazorRepository</project> <programming language>C#</programming language> <activity>Finished</activity>Scientific Calculator: HTML 5 and CSS 3: Online SCIENTIFIC CALCULATOR implements latest features, available in HTML 5, CSS 3 and jQuery. Developed as Rich Internet Application (RIA) w/extremely small footprint (<20kb), it demonstrates best scripting techniques and coding practices; does not require any image files.Seguimiento Facon: Control de seguimiento faconSharePoint Social: Have you ever needed to show the facebook, twitter updates of your organization on your sharepoint portal ? If yes, this project is for you. Simple auto update tool: Makes a easy way to auto update your client software.SteelBattalion.NET: .NET-based library for interfacing with the original Steel Battalion X-Box controller. Utilizes LibUSB drivers. Works on 32-bit and 64-bit platforms, including Windows 7.SystemTimeFreezer: Freezes system datetime to some value you've selectedTeam Explorer Remover: Utility to completely remove TFS Team Explorer from VS.NET 2010. * Removes all registry entries regarding Team Explorer. * Removes all files related to Team Explorer. * Removes all VS.NET commands, menu items and hyperlinks related to Team Explorer (incl. one on StartUp page)TIMESHARE-SELLERS: TIMESHARE-SELLERSVsi Builder 2010: Vsi Builder is an extension for Visual Studio 2010 which allows packaging code snippets and old-style add-ins into .Vsi redistributable packagesWCF Peer Resolver: WCF Peer Resolver is an very extendable and simplified resolver framework to use instead of the built in CurstomPeerResolver in .Net Framework. It is completely developed in C# with .Net Framework 3.5.wsPDV2010: Primeiro desenvolvimento experimental de PDV.


  • CodePlex Daily Summary for Friday, June 15, 2012

    CodePlex Daily Summary for Friday, June 15, 2012Popular ReleasesStackBuilder: StackBuilder 1.0.8.0: + Corrected a few bugs in batch processor + Corrected a bug in layer thumbnail generatorAutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes *New feature added that allows user to select remind later interval.SharePoint KnowledgeBase (ISC 2012): ISC.KBase.Advanced: A full-trust version of the Base application. This version uses a Managed Metadata column for categories and contains other enhancements.Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...HigLabo: HigLabo_20120613: Bug fix HigLabo.Mail Decode header encoded by CP1252WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. 
Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsWebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine added proper exception handling code added state support for callback added property AllowUnstrustedCertificate for JsonWebSocket added properties for sending ping automatically improved JsBridge fixed a uri compatibility issueXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! 
Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingSP Sticky Notes: SPSticky Solution Package: Install steps:Install the sandbox solution to the site collection Activate the solution Add a SPSticky web part (from a 'Custom' group) to a page that resides on the same web as the StickyList list Initial version, supported functionality: Moving a sticky around (position is saved in list) Resizing of the sticky (size is not saved) Changing the text of the sticky (saved in list) Saving occurs when you click away from the sticky being edited Stickys are filtered based on the curre...New ProjectsAdvanced Utility Libs: This project provides many basic and advanced utility libs for developer.ATopSearchEngine: ATopSearchEngine, a class projectAudio Player 3D: A WPF 3D Audio Player.beginABC: day la project test :)BookmarkSave For VB6: An addin, written in VB.net, for VB6 that preserves Bookmarks and Breakpoints between runs of the IDE.Centos 5 & 6 Managements Packs For System Center Operations Manager 2012: Centos 5 & 6 Managements Packs For System Center Operations Manager 2012ChildProcesses - Child Process Management for .NET: ChildProcesses.NET is a child process management library for the .NET framework. It allows to create child processes, and provides bidirectional extendable interprocess communication based on WCF and NamedPipes out of the box Child and parent processes monitor each other and notify about termination or other events. Child processes are an alternative to AppDomains when parent must continue if the child crashes. It is also poossible to mix 32Bit and 64 Bit processes. Cookie Free Analytics: Cookie Free Analytics (CFA) is a free server side Google Analytics tracking solution for Windows based websites running IIS & ASP.NET. 100% javascript & cookie free - with CFA you can track your visitors & file downloads in Google Analytics without cookies or JavaScript. Ideal for mobile websites or those affected by the EU Cookie law.F# (FSharp) based Windows Phone 7.5 software to answer: Where am I?: Location Finder F# (FSharp) based Windows Phone 7.5 software to answer the simple question: - Where am I? Henyo Word: The summary is required.Houma-Thibodaux .NET User Group: The CodePlex home of the Houma-Thibodaux .NET user group.JqGridHelper: some code for using jqgrid 1. javacsript function for customer mutil-search 2. C# function for parse querystring and build sql query command 3. a sql procedure templete for searching NetFluid.IRC: Mibbit clone in NetFluid with the possibilities to use permanent eggdrop in C# nGo: nGo frameworkNinja Client-Server Game: Naruto based online client-server game.NoLinEq: NoLinEq is a mathematical tool for solving nonlinear equations using different methods.Oculus: Oculus is a real-time ETW listening service that is plugin-basedPinoy Henyo: Response.Redirect (https://henyoword.codeplex.com)Razorblade: Razorblade is intended to be a template for WebDesign/WebDevelopment. It is based on HTML5Boilerplate and uses ASP.NET wtih the Razor Syntax (C#). 
Original HTML5Boilerplate comments are intact with some added comments to explain some of the Razor Syntax.ServiceFramework: The ServiceFramework project is a framework built using Microsoft WCF to address architectural governance issues by tracking the activity of services.Sharp Console: Sharp Console is a Windows Command Line (WCL)alternative written in C#. It targets those who lack access to the WCL or simply wish to use the NET framework instead. It aims to provide the same (and more!) functionality as the WCL. Contribute anything you feel will make it better!SharpKit.on{X}: This project allows you to write on{x} rules using C#shequ01: shequ01 websiteSilena: Porta lectus ac sed scelerisque, pellentesque, cras a et urna enim ultricies, tristique magnis nec proin. Velit tristique, aliquet quis ut mid lorem eros? Dolor magna. Aliquet. Et? Sociis pellentesque, tincidunt aenean cursus, pulvinar! Placerat etiam! Mid eu? Vut a. Placerat duis, risus adipiscing lacus, sagittis magna vut etiam, sed. Et ridiculus! Et sit, enim scelerisque velit tristique vel? Adipiscing vel adipiscing eu. Enim proin egestas lorem, aenean lectus enim vel nisi, augue duis pla...Silverlight 5 Chat: Full-duplex Publish/Subscribe -pattern on TCP/IP: F-Sharp (F#) WCF "PollingDuplex" WebService with SL5 FSharp clientSimple CQRS implemented with F#: Simple CQRS on F# (F-Sharp) 3.0 SOD: sustainable office designerSolvis Control II Viewer: SolvisSC2Viewer kann die Logdaten der Solvis Heizung Steuerung SolvisControl 2 auslesen und grafisch darstellen. testdd061412012tfs01: bvtestddgit061420123: eswrtestddgit061420124: fgtestddhg061420121: sdtestgit061420121: xctesthg06142012hg3: dfsTurismo Digital: Turismo DigitlUmbraco 5 File Picker: ????Umbraco 5 ???????。????????????????????。WeatherFrame: Windows Phone weather app.


  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the values of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog post with ample reasons why you may, or may not, need to set the values of these fields to historic dates. In this blog post I'll show you how to:
    - Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
    - Change the value of the field 'System.CreatedDate' to a historic date
    - Simulate the historic burn down of a task type work item in a sprint
    - Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart
    Rules of Play
    1. You need to be a member of the Project Collection Service Accounts group.
    2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:
    // Instantiate the Work Item Store with the BypassRules flag
    _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);
    3. You cannot set the ChangedDate
       - less than the changed date of the previous revision
       - greater than the current date
    Walkthrough
    The walkthrough contains 5 parts:
    00 – Required References
    01 – Connect to TFS Programmatically
    02 – Create a Work Item Programmatically
    03 – Set the values of the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
    04 – Results of our experiment
    Let's get started…
    00 – Required References
    Microsoft.TeamFoundation.dll
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll
    01 – Connect to TFS Programmatically
    I have an in-depth blog post on how to connect to TFS programmatically, in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker.
    // Services I need access to globally
    private static TfsTeamProjectCollection _tfs;
    private static ProjectInfo _selectedTeamProject;
    private static WorkItemStore _wis;

    // Connect to TFS using the Team Project Picker
    public static bool ConnectToTfs()
    {
        var isSelected = false;
        // The user is allowed to select only one project
        var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
        tfsPp.ShowDialog();
        // The TFS project collection
        _tfs = tfsPp.SelectedTeamProjectCollection;
        if (tfsPp.SelectedProjects.Any())
        {
            // The selected Team Project
            _selectedTeamProject = tfsPp.SelectedProjects[0];
            isSelected = true;
        }
        return isSelected;
    }
    02 – Create a Work Item Programmatically
    In the code snippet below I create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try to set the ChangedDate to a value earlier than the last assigned one, you will receive the following exception:
    TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.
    You'll notice below that I add a few seconds each time I modify the 'ChangedDate', just to avoid running into the exception listed above.
// Create Linked Work Items and return Ids private static List<int> CreateWorkItemsProgrammatically() { // Instantiate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules); // List of work items to return var listOfWorkItems = new List<int>(); // Create a new Product Backlog Item var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]); p.Title = "This is a new PBI"; p.Description = "Description"; p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); p.AreaPath = _selectedTeamProject.Name; p["Effort"] = 10; // Just double checking that ByPassRules is set to true if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (p.Validate().Count == 0) { p.Save(); listOfWorkItems.Add(p.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]); t.Title = "This is a task"; t.Description = "Task Description"; t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); t.AreaPath = _selectedTeamProject.Name; t["Remaining Work"] = 10; if (_wis.BypassRules) { t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (t.Validate().Count == 0) { t.Save(); listOfWorkItems.Add(t.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in t.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"]; p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) {ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20)}); if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20); } if (p.Validate().Count == 0) { p.Save(); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } return listOfWorkItems; } 03 – Set the value of “Created Date” and Change the value of “Changed Date” to Historic Dates The CreatedDate can only be changed after a work item has been created. If you try and set the CreatedDate to a historic date at the time of creation of a work item, it will not work. 
// Lets do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic Values private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems) { foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); switch (wi.Type.Name) { case "ProductBacklogItem": if (wi.State.ToLower() == "new") wi.State = "Approved"; // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed Date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; case "Task": // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; } } // A mock sprint start date var sprintStart = DateTime.Today.AddDays(-5); // A mock sprint end date var sprintEnd = DateTime.Today.AddDays(5); // What is the total Sprint duration var totalSprintDuration = (sprintEnd - sprintStart).Days; // How much of the sprint have we already covered var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days; // Get the effort assigned to our tasks var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems); // Defining how much effort to burn every day decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration; // we have just created one task var totalNoOfTasks = 1; var simulation = sprintStart; var currentDate = DateTime.Today.Date; // Carry on till effort has been burned down from sprint start to today while (simulation.Date != currentDate.Date) { var dailyBurnRate1 = dailyBurnRate; // A fixed amount needs to be burned down each day while (dailyBurnRate1 > 0) { // burn down bit by bit from all unfinished task type work items foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); var isDirty = false; // Set the status to in progress if (wi.State.ToLower() == "to do") { wi.State = "In Progress"; isDirty = true; } // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1) { // If there is less than 1 unit of effort left in the task, burn it all if (Convert.ToDecimal(wi["Remaining Work"]) <= 1) { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } else { // How much to burn from each task? var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 
1 : (dailyBurnRate / totalNoOfTasks); // Check that the task has enough effort to allow burnForTask effort if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn) { wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn; dailyBurnRate1 = dailyBurnRate1 - toBurn; isDirty = true; } else { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } } } else { dailyBurnRate1 = 0; } if (isDirty) { if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date) { wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20); } else { wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20); } wi.Save(); } } } // Increase date by 1 to perform daily burn down by day simulation = Convert.ToDateTime(simulation).AddDays(1); } } // Get the Total effort remaining in the current sprint private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems) { var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition( new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value)); var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } }; var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters); var results = q.RunLinkQuery(); var wis = new List<WorkItem>(); foreach (var result in results) { var _wi = _wis.GetWorkItem(result.TargetId); if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id)) wis.Add(_wi); } return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"])); }   04 – The Results If you are still reading, the results are beautiful! Image 1 – Create work item with Changed Date pre-set to historic date Image 2 – Set the CreatedDate to historic date (Same as the ChangedDate) Image 3 – Simulate of effort burn down on a task via the TFS API   Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day Sprint Burn Down Chart – What’s not possible? The Sprint burn down chart is calculated from the System.AuthorizedDate and not the System.ChangedDate/System.CreatedDate. So, though you can change the System.ChangedDate and System.CreatedDate to historic dates you will not be able to synthesize the sprint burn down chart. Image 1 – By changing the Created Date and Changed Date to ‘18/Oct/2012’ you would have expected the burn down to have been impacted, but it won’t be, because the sprint burn down chart uses the value of field ‘System.AuthorizedDate’ to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field ‘System.AuthorizedDate’. Image 2 – Using the above code I burned down 1 hour effort per day over 5 days from the task work item, I would have expected the sprint burn down to show a constant burn down, instead the burn down shows the effort exhausted on the 24th itself. Simply because the burn down is calculated using the ‘System.AuthorizedDate’. Now you would ask… “Can I change the value of the field System.AuthorizedDate to a historic date” Unfortunately that’s not possible! 
    You will run into the exception ValidationException – "TF26194: The value for field 'Authorized Date' cannot be changed."
    Conclusion
    - You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates.
    - You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules.
    - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date later than the current date and time.
    - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
    - You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use the System.AuthorizedDate and NOT the System.ChangedDate & System.CreatedDate.
    - System.AuthorizedDate cannot be set to a historic date using the TFS API.
    Read other posts on using the TFS API here… Enjoy!
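    To tie the rules together, here is a minimal, condensed sketch of the pattern from the walkthrough above: create the work item with BypassRules and a historic ChangedDate, save it, and only then push the CreatedDate back. The type, field and flag names come from this post; the project name and dates are placeholders.
    // Minimal sketch: assumes _tfs is a connected TfsTeamProjectCollection and the caller
    // is a member of the Project Collection Service Accounts group.
    var wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);
    var pbi = new WorkItem(wis.Projects["MyProject"].WorkItemTypes["Product Backlog Item"])
    {
        Title = "Backdated PBI"
    };
    // 1. The ChangedDate has to be set to the historic date while the work item is being created.
    pbi.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
    pbi.Save();
    // 2. The CreatedDate can only be pushed back after the work item exists. Keep the new
    //    ChangedDate slightly later than the previous revision and earlier than "now".
    pbi.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(10);
    pbi.Fields["System.CreatedDate"].Value = Convert.ToDateTime("2012-01-01");
    pbi.Save();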

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backups of InnoDB data and log files. Up to MEB 3.6.1, the user backed up only the InnoDB tables in a 3-step process:
    STEP 1. Take the backup using the --only-innodb option.
    STEP 2. Temporarily make the tables read only by executing "FLUSH TABLES WITH READ LOCK".
    STEP 3. Manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.
    MEB 3.7.0 has an enhancement to InnoDB file copying: the .frm files get copied along with the hot backup done for the InnoDB files. I would like to make the blog a little interactive by explaining the feature in question-and-answer form:
    1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup.
    2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to copy the .frm files without locking the tables during the backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files for InnoDB tables during the backup operation.
    3. How is data consistency ensured? MEB validates the .frm files after copying by comparing them with the server directory, to see whether the timestamp of any of the .frm files is greater than the saved system time (check .frm time). A change in the timestamp of a .frm file shows that a table was altered during the backup. The total number of .frm files in the server directory is also verified against the copied contents. If the number of copied .frm files is lower than in the server directory, it shows that a table or tables were dropped during the backup. If the number of .frm files is higher than in the server directory, it shows that a new table or tables were created during the backup operation.
    4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations, does the validation, and throws a WARNING if any inconsistency is found in the .frm files at the end of the backup operation. This means the user is warned of DDL operations that occurred during the backup, and has to either manually copy the .frm files or run the backup again.
    5. What is the option, and how is it used? The option introduced is --only-innodb-with-frm, which does an optimistic copy of the .frm files without locking. It can be used when the user wants to back up only InnoDB tables along with their .frm files. The option can take one of 2 values: all | related. --only-innodb-with-frm=all copies the .frm files of all InnoDB tables. --only-innodb-with-frm=related works in conjunction with the --include option; this allows a partial backup of the .frm files corresponding to the tables specified in --include.
    Let me show the usage with example output:
    ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup
    MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll        --only-innodb-with-frm=all backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". 
--------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          =  /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb/  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrmAll/datadir  innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:36:22 mysqlbackup: INFO: Copying log... 120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/' 120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/ mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:36:25 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrmAll/datadir/innodb1/ table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd Here the backup directory contains all the .frm files of all the innodb tables. ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... 
./mysqlbackup -uroot --backup-dir=/logs/backup371frm        --include=innodb1.table3.* --only-innodb-with-frm=related backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". --------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          = /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrm/datadir  innodb_data_home_dir             =  /logs/backupWithFrm/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: The --include option specified: innodb1.table3.* mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:25:47 mysqlbackup: INFO: Copying log... 120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format). 120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb' 120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:25:51 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrm/datadir/innodb1/ table3.frm table3.ibd Thus the backup directory contains only the .frm file matching the innodb table name specified in --include option. In a nutshell, we present our great new option --only-innodb-with-frm which is a true hot InnoDB-only backup with .frm files, but with an additional check, if any DDL happened during the backup. 
If a DDL statement has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users that have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html
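    For contrast with the new option, here is a rough sketch of the manual 3-step procedure described at the top of this post. The paths are illustrative, and in practice the FLUSH TABLES WITH READ LOCK has to be held open in one client session while the .frm files are copied from another shell:
    # STEP 1: hot backup of the InnoDB data only (no .frm files yet)
    ./mysqlbackup -uroot --backup-dir=/logs/innodbOnly --only-innodb backup
    # STEP 2: in a mysql session that stays open, make the tables read only
    mysql> FLUSH TABLES WITH READ LOCK;
    # STEP 3: from a second shell, copy the .frm files next to the backed-up data
    cp /mysql/trydb/innodb1/*.frm /logs/innodbOnly/datadir/innodb1/
    # release the lock in the first session once the copy is done
    mysql> UNLOCK TABLES;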

    Read the article

  • Remove Box2D bodies after collision detection on Android?

    - by jubin
    Can any one explain me how to destroy box2d body when collide i have tried but my application crashed.First i have checked al collisions then add all the bodies in array who i want to destroy.I am trying to learning this tutorial My all the bodies are falling i want these bodies should destroy when these bodies will collide my actor monkey but when it collide it destroy but my aplication crashed.I have googled and from google i got the application crash reasons we should not destroy body in step funtion but i am removing body in the last of tick method. could any one help me or provide me code aur check my code why i am getting this prblem or how can i destroy box2d bodies. This is my code what i am doing. Please could any one check my code and tell me what is i am doing wrong for removing bodies. The code is for multiple box2d objects falling on my actor monkey it should be destroy when it will fall on the monkey.It is destroing but my application crahes. static class Box2DLayer extends CCLayer { protected static final float PTM_RATIO = 32.0f; protected static final float WALK_FACTOR = 3.0f; protected static final float MAX_WALK_IMPULSE = 0.2f; protected static final float ANIM_SPEED = 0.3f; int isLeft=0; String dir=""; int x =0; float direction; CCColorLayer objectHint; // protected static final float PTM_RATIO = 32.0f; protected World _world; protected static Body spriteBody; CGSize winSize = CCDirector.sharedDirector().winSize(); private static int count = 200; protected static Body monkey_body; private static Body bodies; CCSprite monkey; float animDelay; int animPhase; CCSpriteSheet danceSheet = CCSpriteSheet.spriteSheet("phases.png"); CCSprite _block; List<Body> toDestroy = new ArrayList<Body>(); //CCSpriteSheet _spriteSheet; private static MyContactListener _contactListener = new MyContactListener(); public Box2DLayer() { this.setIsAccelerometerEnabled(true); CCSprite bg = CCSprite.sprite("jungle.png"); addChild(bg,0); bg.setAnchorPoint(0,0); bg.setPosition(0,0); CGSize s = CCDirector.sharedDirector().winSize(); // Use scaled width and height so that our boundaries always match the current screen float scaledWidth = s.width/PTM_RATIO; float scaledHeight = s.height/PTM_RATIO; Vector2 gravity = new Vector2(0.0f, -30.0f); boolean doSleep = false; _world = new World(gravity, doSleep); // Create edges around the entire screen // Define the ground body. BodyDef bxGroundBodyDef = new BodyDef(); bxGroundBodyDef.position.set(0.0f, 0.0f); // The body is also added to the world. Body groundBody = _world.createBody(bxGroundBodyDef); // Register our contact listener // Define the ground box shape. 
PolygonShape groundBox = new PolygonShape(); Vector2 bottomLeft = new Vector2(0f,0f); Vector2 topLeft = new Vector2(0f,scaledHeight); Vector2 topRight = new Vector2(scaledWidth,scaledHeight); Vector2 bottomRight = new Vector2(scaledWidth,0f); // bottom groundBox.setAsEdge(bottomLeft, bottomRight); groundBody.createFixture(groundBox,0); // top groundBox.setAsEdge(topLeft, topRight); groundBody.createFixture(groundBox,0); // left groundBox.setAsEdge(topLeft, bottomLeft); groundBody.createFixture(groundBox,0); // right groundBox.setAsEdge(topRight, bottomRight); groundBody.createFixture(groundBox,0); CCSprite floorbg = CCSprite.sprite("grassbehind.png"); addChild(floorbg,1); floorbg.setAnchorPoint(0,0); floorbg.setPosition(0,0); CCSprite floorfront = CCSprite.sprite("grassfront.png"); floorfront.setTag(2); this.addBoxBodyForSprite(floorfront); addChild(floorfront,3); floorfront.setAnchorPoint(0,0); floorfront.setPosition(0,0); addChild(danceSheet); //CCSprite monkey = CCSprite.sprite(danceSheet, CGRect.make(0, 0, 48, 73)); //addChild(danceSprite); monkey = CCSprite.sprite("arms_up.png"); monkey.setTag(2); monkey.setPosition(200,100); BodyDef spriteBodyDef = new BodyDef(); spriteBodyDef.type = BodyType.DynamicBody; spriteBodyDef.bullet=true; spriteBodyDef.position.set(200 / PTM_RATIO, 300 / PTM_RATIO); monkey_body = _world.createBody(spriteBodyDef); monkey_body.setUserData(monkey); PolygonShape spriteShape = new PolygonShape(); spriteShape.setAsBox(monkey.getContentSize().width/PTM_RATIO/2, monkey.getContentSize().height/PTM_RATIO/2); FixtureDef spriteShapeDef = new FixtureDef(); spriteShapeDef.shape = spriteShape; spriteShapeDef.density = 2.0f; spriteShapeDef.friction = 0.70f; spriteShapeDef.restitution = 0.0f; monkey_body.createFixture(spriteShapeDef); //Vector2 force = new Vector2(10, 10); //monkey_body.applyLinearImpulse(force, spriteBodyDef.position); addChild(monkey,10000); this.schedule(tickCallback); this.schedule(createobjects, 2.0f); objectHint = CCColorLayer.node(ccColor4B.ccc4(255,0,0,128), 200f, 100f); addChild(objectHint, 15000); objectHint.setVisible(false); _world.setContactListener(_contactListener); } private UpdateCallback tickCallback = new UpdateCallback() { public void update(float d) { tick(d); } }; private UpdateCallback createobjects = new UpdateCallback() { public void update(float d) { secondUpdate(d); } }; private void secondUpdate(float dt) { this.addNewSprite(); } public void addBoxBodyForSprite(CCSprite sprite) { BodyDef spriteBodyDef = new BodyDef(); spriteBodyDef.type = BodyType.StaticBody; //spriteBodyDef.bullet=true; spriteBodyDef.position.set(sprite.getPosition().x / PTM_RATIO, sprite.getPosition().y / PTM_RATIO); spriteBody = _world.createBody(spriteBodyDef); spriteBody.setUserData(sprite); Vector2 verts[] = { new Vector2(-11.8f / PTM_RATIO, -24.5f / PTM_RATIO), new Vector2(11.7f / PTM_RATIO, -24.0f / PTM_RATIO), new Vector2(29.2f / PTM_RATIO, -14.0f / PTM_RATIO), new Vector2(28.7f / PTM_RATIO, -0.7f / PTM_RATIO), new Vector2(8.0f / PTM_RATIO, 18.2f / PTM_RATIO), new Vector2(-29.0f / PTM_RATIO, 18.7f / PTM_RATIO), new Vector2(-26.3f / PTM_RATIO, -12.2f / PTM_RATIO) }; PolygonShape spriteShape = new PolygonShape(); spriteShape.set(verts); //spriteShape.setAsBox(sprite.getContentSize().width/PTM_RATIO/2, //sprite.getContentSize().height/PTM_RATIO/2); FixtureDef spriteShapeDef = new FixtureDef(); spriteShapeDef.shape = spriteShape; spriteShapeDef.density = 2.0f; spriteShapeDef.friction = 0.70f; spriteShapeDef.restitution = 0.0f; spriteShapeDef.isSensor=true; 
spriteBody.createFixture(spriteShapeDef); } public void addNewSprite() { count=0; Random rand = new Random(); int Number = rand.nextInt(10); switch(Number) { case 0: _block = CCSprite.sprite("banana.png"); break; case 1: _block = CCSprite.sprite("backpack.png");break; case 2: _block = CCSprite.sprite("statue.png");break; case 3: _block = CCSprite.sprite("pineapple.png");break; case 4: _block = CCSprite.sprite("bananabunch.png");break; case 5: _block = CCSprite.sprite("hat.png");break; case 6: _block = CCSprite.sprite("canteen.png");break; case 7: _block = CCSprite.sprite("banana.png");break; case 8: _block = CCSprite.sprite("statue.png");break; case 9: _block = CCSprite.sprite("hat.png");break; } int padding=20; //_block.setPosition(CGPoint.make(100, 100)); // Determine where to spawn the target along the Y axis CGSize winSize = CCDirector.sharedDirector().displaySize(); int minY = (int)(_block.getContentSize().width / 2.0f); int maxY = (int)(winSize.width - _block.getContentSize().width / 2.0f); int rangeY = maxY - minY; int actualY = rand.nextInt(rangeY) + minY; // Create block and add it to the layer float xOffset = padding+_block.getContentSize().width/2+((_block.getContentSize().width+padding)*count); _block.setPosition(CGPoint.make(actualY, 750)); _block.setTag(1); float w = _block.getContentSize().width; objectHint.setVisible(true); objectHint.changeWidth(w); objectHint.setPosition(actualY-w/2, 460); this.addChild(_block,10000); // Create ball body and shape BodyDef ballBodyDef1 = new BodyDef(); ballBodyDef1.type = BodyType.DynamicBody; ballBodyDef1.position.set(actualY/PTM_RATIO, 480/PTM_RATIO); bodies = _world.createBody(ballBodyDef1); bodies.setUserData(_block); PolygonShape circle1 = new PolygonShape(); Vector2 verts[] = { new Vector2(-11.8f / PTM_RATIO, -24.5f / PTM_RATIO), new Vector2(11.7f / PTM_RATIO, -24.0f / PTM_RATIO), new Vector2(29.2f / PTM_RATIO, -14.0f / PTM_RATIO), new Vector2(28.7f / PTM_RATIO, -0.7f / PTM_RATIO), new Vector2(8.0f / PTM_RATIO, 18.2f / PTM_RATIO), new Vector2(-29.0f / PTM_RATIO, 18.7f / PTM_RATIO), new Vector2(-26.3f / PTM_RATIO, -12.2f / PTM_RATIO) }; circle1.set(verts); FixtureDef ballShapeDef1 = new FixtureDef(); ballShapeDef1.shape = circle1; ballShapeDef1.density = 10.0f; ballShapeDef1.friction = 0.0f; ballShapeDef1.restitution = 0.1f; bodies.createFixture(ballShapeDef1); count++; //Remove(); } @Override public void ccAccelerometerChanged(float accelX, float accelY, float accelZ) { //Apply the directional impulse /*float impulse = monkey_body.getMass()*accelY*WALK_FACTOR; Vector2 force = new Vector2(impulse, 0); monkey_body.applyLinearImpulse(force, monkey_body.getWorldCenter());*/ walk(accelY); //Remove(); } private void walk(float accelY) { // TODO Auto-generated method stub direction = accelY; } private void Remove() { for (Iterator<MyContact> it1 = _contactListener.mContacts.iterator(); it1.hasNext();) { MyContact contact = it1.next(); Body bodyA = contact.fixtureA.getBody(); Body bodyB = contact.fixtureB.getBody(); // See if there's any user data attached to the Box2D body // There should be, since we set it in addBoxBodyForSprite if (bodyA.getUserData() != null && bodyB.getUserData() != null) { CCSprite spriteA = (CCSprite) bodyA.getUserData(); CCSprite spriteB = (CCSprite) bodyB.getUserData(); // Is sprite A a cat and sprite B a car? If so, push the cat // on a list to be destroyed... 
if (spriteA.getTag() == 1 && spriteB.getTag() == 2) { //Log.v("dsfds", "dsfsd"+bodyA); //_world.destroyBody(bodyA); // removeChild(spriteA, true); toDestroy.add(bodyA); } // Is sprite A a car and sprite B a cat? If so, push the cat // on a list to be destroyed... else if (spriteA.getTag() == 2 && spriteB.getTag() == 1) { //Log.v("dsfds", "dsfsd"+bodyB); toDestroy.add(bodyB); } } } // Loop through all of the box2d bodies we want to destroy... for (Iterator<Body> it1 = toDestroy.iterator(); it1.hasNext();) { Body body = it1.next(); // See if there's any user data attached to the Box2D body // There should be, since we set it in addBoxBodyForSprite if (body.getUserData() != null) { // We know that the user data is a sprite since we set // it that way, so cast it... CCSprite sprite = (CCSprite) body.getUserData(); // Remove the sprite from the scene _world.destroyBody(body); removeChild(sprite, true); } // Destroy the Box2D body as well // _contactListener.mContacts.remove(0); } } public synchronized void tick(float delta) { synchronized (_world) { _world.step(delta, 8, 3); //_world.clearForces(); //addNewSprite(); } CCAnimation danceAnimation = CCAnimation.animation("dance", 1.0f); // Iterate over the bodies in the physics world Iterator<Body> it = _world.getBodies(); while(it.hasNext()) { Body b = it.next(); Object userData = b.getUserData(); if (userData != null && userData instanceof CCSprite) { //Synchronize the Sprites position and rotation with the corresponding body CCSprite sprite = (CCSprite)userData; if(sprite.getTag()==1) { //b.applyLinearImpulse(force, pos); sprite.setPosition(b.getPosition().x * PTM_RATIO, b.getPosition().y * PTM_RATIO); sprite.setRotation(-1.0f * ccMacros.CC_RADIANS_TO_DEGREES(b.getAngle())); } else { //Apply the directional impulse float impulse = monkey_body.getMass()*direction*WALK_FACTOR; Vector2 force = new Vector2(impulse, 0); b.applyLinearImpulse(force, b.getWorldCenter()); sprite.setPosition(b.getPosition().x * PTM_RATIO, b.getPosition().y * PTM_RATIO); animDelay -= 1.0f/60.0f; if(animDelay <= 0) { animDelay = ANIM_SPEED; animPhase++; if(animPhase > 2) { animPhase = 1; } } if(direction < 0 ) { isLeft=1; } else { isLeft=0; } if(isLeft==1) { dir = "left"; } else { dir = "right"; } float standingLimit = (float) 0.1f; float vX = monkey_body.getLinearVelocity().x; if((vX > -standingLimit)&& (vX < standingLimit)) { // Log.v("sasd", "standing"); } else { } } } } Remove(); } } Sorry for my english. Thanks in advance.
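    For reference, a commonly used way to structure deferred destruction with the libgdx Box2D bindings is sketched below: collect bodies in a Set during the contact callbacks, destroy each of them exactly once after world.step() has returned, and clear the set so nothing gets destroyed a second time on a later frame. The class and method names here are illustrative, not taken from the question.
    import java.util.HashSet;
    import java.util.Set;
    import com.badlogic.gdx.physics.box2d.Body;
    import com.badlogic.gdx.physics.box2d.World;
    // Sketch: defer Box2D body destruction until after the physics step.
    public class DeferredBodyRemover {
        // A Set rather than a List, so a body queued by several contacts in the
        // same frame is still destroyed only once.
        private final Set<Body> scheduled = new HashSet<Body>();
        // Call this from the ContactListener instead of destroying immediately.
        public void schedule(Body body) {
            scheduled.add(body);
        }
        // Call this once per frame, strictly after world.step(...).
        public void destroyScheduled(World world) {
            for (Body body : scheduled) {
                body.setUserData(null);   // drop the sprite reference first
                world.destroyBody(body);  // safe here: the solver is not running
            }
            scheduled.clear();            // otherwise the same bodies are destroyed again next frame
        }
    }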

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload. # vmstat 1  kthr      memory            page            disk          faults      cpu  r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id  1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9  12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9  11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7  8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27  0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97  0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95  0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92  14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8  9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11 This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including: Unexpected JMeter behavior Network contention Java Garbage Collection Application Server thread pool problems Connection pool problems Database transaction processing Database I/O contention Graphing the CPU %idle led to a breakthrough: Several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again and looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:                     extended device statistics     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0     0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0     0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0     0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0     0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0     0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0     0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0 Here is a little more information about my database configuration: The database and application server were running on two different SPARC servers. Storage for the database was on a storage array connected via 8 gigabit Fibre Channel Data storage and log file were on different physical disk drives Reliable low latency I/O is provided by battery backed NVRAM Highly available: Two Fibre Channel links accessed via MPxIO Two Mirrored cache controllers The log file physical disks were mirrored in the storage device Database log files on a ZFS Filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming Why would I be getting service time spikes in my high-end storage? 
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did: At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device: # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0 The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs: Here is the zpool configuration: # zpool status ZFS-db-41   pool: ZFS-db-41  state: ONLINE  scan: none requested config:         NAME                                     STATE         ZFS-db-41                                ONLINE           c3t60080E5...F4F6d0  ONLINE         logs           c3t60080E5...F8A7d0  ONLINE Now, the I/O spikes look like this:                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0     0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0     0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0 We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide. References: The ZFS Intent Log Solaris ZFS, Synchronous Writes and the ZIL Explained ZFS Evil Tuning Guide: Cache Flushes ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
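    On the item saved for another day: ZFS tunables like zfs_immediate_write_sz are normally set in /etc/system on Solaris. The value below is only a placeholder to show the syntax; check the ZFS Evil Tuning Guide for an appropriate value before applying it.
    * /etc/system sketch -- placeholder value, takes effect after a reboot
    set zfs:zfs_immediate_write_sz = 0x8000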

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my where condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Very table is related to each other on the join via foreign-key constraints and have indexes for all the mentioned columns. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)" " Merge Cond: (t6.id = t7.movie_id)" " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)" " Sort Key: t6.id" " Sort Method: quicksort Memory: 17kB" " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)" " Hash Cond: (t2.movie_id = t6.id)" " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)" " Hash Cond: (t2.role_id = t5.id)" " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)" " Hash Cond: (t2.person_role_id = t4.id)" " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)" " Hash Cond: (n1.id = t3.person_id)" " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)" " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)" " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)" " Index Cond: (id = 2003)" " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)" " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))" " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)" " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))" " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)" " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)" " Index Cond: (person_id = 2003)" " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)" " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)" " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)" " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)" " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)" " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)" " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)" " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)" " Sort Key: t7.movie_id" " Sort Method: external merge Disk: 24880kB" " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)" " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)" " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)" " Sort Key: t8.movie_id" " Sort Method: external merge Disk: 3136kB" " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)" " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)" " -> Seq Scan on 
kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)" " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)" " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)" " Sort Key: t10.movie_id" " Sort Method: external merge Disk: 90960kB" " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)" " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)" " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)" " Sort Key: t11.movie_id" " Sort Method: external merge Disk: 735512kB" " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)" " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)" " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)" " Sort Key: t19.movie_id" " Sort Method: external merge Disk: 31720kB" " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)" " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)" " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)" " Sort Key: t12.movie_id" " Sort Method: external merge Disk: 78888kB" " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)" " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)" " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)" " Sort Key: t13.linked_movie_id" " Sort Method: external merge Disk: 29632kB" " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)" " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)" " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)" " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)" " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)" " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)" " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)" " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)" " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)" "Total runtime: 147055.016 ms" Is there anyway to force the name.id = 2003 before it tries to join all the tables together? 
    As you can see, the end result is only 4 tuples, yet it seems like this should be a fast join: once the WHERE clause on name limits things to a single row, the available indexes could be used, however complex the query is.
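    For what it's worth, one rewrite that is often suggested for this shape of query: because the WHERE clause pins n1.id to a single value, any row that is NULL-extended on the name side is discarded anyway, so the FULL JOINs can normally be written as LEFT JOINs. That lets the planner start from the one matching name row and follow the foreign-key indexes instead of hashing and sorting entire tables. A sketch, with only the first few joins shown (the rest follow the same pattern):
    SELECT *
    FROM name AS n1
    LEFT JOIN aka_name           ON n1.id = aka_name.person_id
    LEFT JOIN cast_info   AS t2  ON n1.id = t2.person_id
    LEFT JOIN person_info AS t3  ON n1.id = t3.person_id
    LEFT JOIN char_name   AS t4  ON t2.person_role_id = t4.id
    LEFT JOIN role_type   AS t5  ON t2.role_id = t5.id
    LEFT JOIN title       AS t6  ON t2.movie_id = t6.id
    -- ... remaining tables joined with LEFT JOIN in the same way ...
    WHERE n1.id = 2003;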

    Read the article

  • testing Clojure in Maven

    - by Ralph
    I am new at Maven and even newer at Clojure. As an exercise to learn the language, I am writing a spider solitaire player program. I also plan on writing a similar program in Scala to compare the implementations (see my post http://stackoverflow.com/questions/2571267/modern-java-alternatives-closed). I have configured a Maven directory structure containing the usual src/main/clojure and src/test/clojure directories. My pom.xml file includes the clojure-maven-plugin. When I run "mvn test", it displays "No tests to run", despite my having test code in the src/test/clojure directory. As I misnaming something? Here is my pom.xml file: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>SpiderPlayer</groupId> <artifactId>SpiderPlayer</artifactId> <version>1.0.0-SNAPSHOT</version> <inceptionYear>2010</inceptionYear> <packaging>jar</packaging> <properties> <maven.build.timestamp.format>yyMMdd.HHmm</maven.build.timestamp.format> <main.dir>org/dogdaze/spider_player</main.dir> <main.package>org.dogdaze.spider_player</main.package> <main.class>${main.package}.Main</main.class> </properties> <build> <sourceDirectory>src/main/clojure</sourceDirectory> <testSourceDirectory>src/main/clojure</testSourceDirectory> <plugins> <plugin> <groupId>com.theoryinpractise</groupId> <artifactId>clojure-maven-plugin</artifactId> <version>1.3.1</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.3</version> <executions> <execution> <goals> <goal>run</goal> </goals> <phase>generate-sources</phase> <configuration> <tasks> <echo file="${project.build.sourceDirectory}/${main.dir}/Version.clj" message="(ns ${main.package})${line.separator}"/> <echo file="${project.build.sourceDirectory}/${main.dir}/Version.clj" append="true" message="(def version &quot;${maven.build.timestamp}&quot;)${line.separator}"/> </tasks> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>2.1</version> <executions> <execution> <goals> <goal>single</goal> </goals> <phase>package</phase> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> <archive> <manifest> <mainClass>${main.class}</mainClass> </manifest> </archive> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <redirectTestOutputToFile>true</redirectTestOutputToFile> <skipTests>false</skipTests> <skip>false</skip> </configuration> <executions> <execution> <id>surefire-it</id> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> <configuration> <skip>false</skip> </configuration> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> <scope>compile</scope> </dependency> </dependencies> </project> Here is my Clojure source file (src/main/clojure/org/dogdaze/spider_player/Deck.clj): ; Copyright 2010 Dogdaze (ns org.dogdaze.spider_player.Deck (:use [clojure.contrib.seq-utils :only (shuffle)])) (def suits [:clubs :diamonds :hearts :spades]) (def ranks [:ace :two :three :four 
:five :six :seven :eight :nine :ten :jack :queen :king]) (defn suit-seq "Return 4 suits: if number-of-suits == 1: :clubs :clubs :clubs :clubs if number-of-suits == 2: :clubs :diamonds :clubs :diamonds if number-of-suits == 4: :clubs :diamonds :hearts :spades." [number-of-suits] (take 4 (cycle (take number-of-suits suits)))) (defstruct card :rank :suit) (defn unshuffled-deck "Create an unshuffled deck containing all cards from the number of suits specified." [number-of-suits] (for [rank ranks suit (suit-seq number-of-suits)] (struct card rank suit))) (defn deck "Create a shuffled deck containing all cards from the number of suits specified." [number-of-suits] (shuffle (unshuffled-deck number-of-suits))) Here is my test case (src/test/clojure/org/dogdaze/spider_player/TestDeck.clj): ; Copyright 2010 Dogdaze (ns org.dogdaze.spider_player (:use clojure.set clojure.test org.dogdaze.spider_player.Deck)) (deftest test-suit-seq (is (= (suit-seq 1) [:clubs :clubs :clubs :clubs])) (is (= (suit-seq 2) [:clubs :diamonds :clubs :diamonds])) (is (= (suit-seq 4) [:clubs :diamonds :hearts :spades]))) (def one-suit-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :ace, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :two, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :three, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :four, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :five, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :six, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :seven, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :eight, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :nine, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :ten, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :jack, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :queen, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs} {:rank :king, :suit :clubs}]) (def two-suits-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank 
:nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds}]) (def four-suits-deck [{:rank :ace, :suit :clubs} {:rank :ace, :suit :diamonds} {:rank :ace, :suit :hearts} {:rank :ace, :suit :spades} {:rank :two, :suit :clubs} {:rank :two, :suit :diamonds} {:rank :two, :suit :hearts} {:rank :two, :suit :spades} {:rank :three, :suit :clubs} {:rank :three, :suit :diamonds} {:rank :three, :suit :hearts} {:rank :three, :suit :spades} {:rank :four, :suit :clubs} {:rank :four, :suit :diamonds} {:rank :four, :suit :hearts} {:rank :four, :suit :spades} {:rank :five, :suit :clubs} {:rank :five, :suit :diamonds} {:rank :five, :suit :hearts} {:rank :five, :suit :spades} {:rank :six, :suit :clubs} {:rank :six, :suit :diamonds} {:rank :six, :suit :hearts} {:rank :six, :suit :spades} {:rank :seven, :suit :clubs} {:rank :seven, :suit :diamonds} {:rank :seven, :suit :hearts} {:rank :seven, :suit :spades} {:rank :eight, :suit :clubs} {:rank :eight, :suit :diamonds} {:rank :eight, :suit :hearts} {:rank :eight, :suit :spades} {:rank :nine, :suit :clubs} {:rank :nine, :suit :diamonds} {:rank :nine, :suit :hearts} {:rank :nine, :suit :spades} {:rank :ten, :suit :clubs} {:rank :ten, :suit :diamonds} {:rank :ten, :suit :hearts} {:rank :ten, :suit :spades} {:rank :jack, :suit :clubs} {:rank :jack, :suit :diamonds} {:rank :jack, :suit :hearts} {:rank :jack, :suit :spades} {:rank :queen, :suit :clubs} {:rank :queen, :suit :diamonds} {:rank :queen, :suit :hearts} {:rank :queen, :suit :spades} {:rank :king, :suit :clubs} {:rank :king, :suit :diamonds} {:rank :king, :suit :hearts} {:rank :king, :suit :spades}]) (deftest test-unshuffled-deck (is (= (unshuffled-deck 1) one-suit-deck)) (is (= (unshuffled-deck 2) two-suits-deck)) (is (= (unshuffled-deck 4) four-suits-deck))) (deftest test-shuffled-deck (is (= (set (deck 1)) (set one-suit-deck))) (is (= (set (deck 2)) (set two-suits-deck))) (is (= (set (deck 4)) (set four-suits-deck)))) (run-tests) Any idea why the test is not running? BTW, feel free to suggest improvements to the Clojure code. Thanks, Ralph
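    Two details in the pom above are worth a second look: testSourceDirectory points at src/main/clojure rather than src/test/clojure, so Maven is never told where the Clojure tests live, and the clojure-maven-plugin is declared without any executions, so none of its Clojure goals are bound to the test phase. The snippet below is a minimal sketch of how the build section might be wired up; the compile and test goal names are the plugin's standard goals, but the exact configuration for version 1.3.1 is an assumption rather than a verified fix. It may also matter that TestDeck.clj declares the namespace org.dogdaze.spider_player while living at org/dogdaze/spider_player/TestDeck.clj, since the plugin generally expects namespaces to match file paths.

        <!-- Sketch only: point Maven at the real test sources and bind the Clojure goals
             to the lifecycle. Treat the 1.3.1 specifics as an assumption. -->
        <testSourceDirectory>src/test/clojure</testSourceDirectory>
        ...
        <plugin>
          <groupId>com.theoryinpractise</groupId>
          <artifactId>clojure-maven-plugin</artifactId>
          <version>1.3.1</version>
          <executions>
            <execution>
              <id>clojure-compile</id>
              <phase>compile</phase>
              <goals>
                <goal>compile</goal>
              </goals>
            </execution>
            <execution>
              <id>clojure-test</id>
              <phase>test</phase>
              <goals>
                <goal>test</goal>
              </goals>
            </execution>
          </executions>
        </plugin>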

    Read the article

  • When I scroll the layout it takes too long before I can scroll between the galleries' pictures. Is there any way to reduce this time?

    - by Mateo
    Hello, this is my first question here, though I've been reading this forum for quite a while. Most of the answers to my doubts are from here :) Getting back on topic. I'm developing an Android application. I'm drawing a dynamic layout that is basically Galleries, inside a LinearLayout, inside a ScrollView, inside a RelativeLayout. The ScrollView is a must, because I'm drawing a dynamic amount of galleries that most probably will not fit on the screen. When I scroll inside the layout, I have to wait 3/4 seconds until the ScrollView "deactivates" to be able to scroll inside the galleries. What I want to do is to reduce this time to a minimum. Preferably I would like to be able to scroll inside the galleries as soon as I lift my finger from the screen, though anything lower than 2 seconds would be great as well. I've been googling around for a solution but all I could find until now were layout tutorials that didn't tackle this particular issue. I was hoping someone here knows if this is possible and if so to give me some hints on how to do so. I would prefer not to do my own ScrollView to solve this. But if that is the only way I would appreciate some help because I'm not really sure how I would solve this issue by doing that. This is my layout: public class PicturesL extends Activity implements OnClickListener, OnItemClickListener, OnItemLongClickListener { private ArrayList<ImageView> imageView = new ArrayList<ImageView>(); private StringBuilder PicsDate = new StringBuilder(); private CaWaApplication application; private long ListID; private ArrayList<Gallery> gallery = new ArrayList<Gallery>(); private ArrayList<Bitmap> Thumbails = new ArrayList<Bitmap>(); private String idioma; private ArrayList<Long> Days = new ArrayList<Long>(); private long oldDay; private long oldThumbsLoaded; private ArrayList<Long> ThumbailsDays = new ArrayList<Long>(); private ArrayList<ArrayList<Long>> IDs = new ArrayList<ArrayList<Long>>(); @Override public void onCreate(Bundle savedInstancedState) { super.onCreate(savedInstancedState); RelativeLayout layout = new RelativeLayout(this); ScrollView scroll = new ScrollView(this); LinearLayout realLayout = new LinearLayout(this); ArrayList<TextView> texts = new ArrayList<TextView>(); Button TakePic = new Button(this); idioma = com.mateloft.cawa.prefs.getLang(this); if (idioma.equals("en")) { TakePic.setText("Take Picture"); } else if (idioma.equals("es")) { TakePic.setText("Sacar Foto"); } RelativeLayout.LayoutParams scrollLP = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.FILL_PARENT, RelativeLayout.LayoutParams.FILL_PARENT); layout.addView(scroll, scrollLP); realLayout.setOrientation(LinearLayout.VERTICAL); realLayout.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT)); scroll.addView(realLayout); TakePic.setId(67); TakePic.setOnClickListener(this); application = (CaWaApplication) getApplication(); ListID = getIntent().getExtras().getLong("listid"); getAllThumbailsOfID(); LinearLayout.LayoutParams TakeLP = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); realLayout.addView(TakePic); oldThumbsLoaded = 0; int galler = 100; for (int z = 0; z < Days.size(); z++) { ThumbailsManager croppedThumbs = new ThumbailsManager(Thumbails, oldThumbsLoaded, ThumbailsDays.get(z)); oldThumbsLoaded = ThumbailsDays.get(z); texts.add(new TextView(this)); texts.get(z).setText("Day " + Days.get(z).toString()); gallery.add(new Gallery(this)); gallery.get(z).setAdapter(new
ImageAdapter(this, croppedThumbs.getGallery(), 250, 175, true, ListID)); gallery.get(z).setOnItemClickListener(this); gallery.get(z).setOnItemLongClickListener(this); gallery.get(z).setId(galler); galler++; realLayout.addView(texts.get(z)); realLayout.addView(gallery.get(z)); } Log.d("PicturesL", "ListID: " + ListID); setContentView(layout); } private void getAllThumbailsOfID() { ArrayList<ModelPics> Pictures = new ArrayList<ModelPics>(); ArrayList<String> ThumbailsPath = new ArrayList<String>(); Pictures = application.dataManager.selectAllPics(); long thumbpathloaded = 0; int currentID = 0; for (int x = 0; x < Pictures.size(); x++) { if (Pictures.get(x).walkname == ListID) { if (Days.size() == 0) { Days.add(Pictures.get(x).day); oldDay = Pictures.get(x).day; IDs.add(new ArrayList<Long>()); currentID = 0; } if (oldDay != Pictures.get(x).day) { oldDay = Pictures.get(x).day; ThumbailsDays.add(thumbpathloaded); Days.add(Pictures.get(x).day); IDs.add(new ArrayList<Long>()); currentID++; } StringBuilder tpath = new StringBuilder(); tpath.append(Pictures.get(x).path.substring(0, Pictures.get(x).path.length() - 4)); tpath.append("-t.jpg"); IDs.get(currentID).add(Pictures.get(x).id); ThumbailsPath.add(tpath.toString()); thumbpathloaded++; if (x == Pictures.size() - 1) { Log.d("PicturesL", "El ultimo de los arrays, tamaño: " + Days.size()); ThumbailsDays.add(thumbpathloaded); } } } for (int y = 0; y < ThumbailsPath.size(); y++) { Thumbails.add(BitmapFactory.decodeFile(ThumbailsPath.get(y))); } } I had a memory leak on another activity when screen orientation changed that was making it slower, now it is working better. The scroller is not locking up. But sometimes, when it stops scrolling, it takes a few seconds (2/3) to disable itself. I just want it to be a little more dynamic, is there any way to override the listener and make it stop scrolling ON_ACTION_UP or something like that? I don't want to use the listview because I want to have each gallery separated by other views, now I just have text, but I will probably separate them with images with a different size than the galleries. I'm not really sure if this is possible with a listadapter and a listview, I assumed that a view can only handle only one type of object, so I'm using a scrollview of a layout, if I'm wrong please correct me :) Also this activity works as a preview or selecting the pictures you want to view in full size and manage their values. So its working only with thumbnails. Each one weights 40 kb. Guessing that is very unlikely that a user gets more than 1000~1500 pictures in this view, i thought that the activity wouldn't use more than 40~50 mb of ram in this case, adding 10 more if I open the fullsized view. So I guessed as well most devices are able to display this view in full size. If it doesn't work on low-end devices my plan was to add an option in the app preferences to let user chop this view according to some database values. And a last reason is that during most of this activity "life-cycle" (the app has pics that are relevant to the view, when it ends the value that selects which pictures are displayed has to change and no more pictures are added inside this instance of this activity); the view will be unpopulated, so most of the time showing everything wont cost much, just at the end of its cycle That was more or less what I thought at the time i created this layout. 
    I'm open to any sort of suggestion or opinion. I just created this layout a few days ago and I'm trying to see if it can work right, because it suits my app's needs. Though if there is a better way I would love to hear it. Thanks, Mateo
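    One approach that matches the behaviour described above (the enclosing ScrollView keeps ownership of the touch stream for a while) is to ask the parent not to intercept touches while the finger is on a Gallery. The sketch below uses the standard View.OnTouchListener and requestDisallowInterceptTouchEvent APIs; where exactly to attach it in PicturesL (for example, inside the loop that builds each Gallery) is an assumption, not something taken from the original code.

        // Sketch: keep the enclosing ScrollView from stealing touch events while the user
        // is dragging a Gallery. Needs android.view.View and android.view.MotionEvent imports.
        gallery.get(z).setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN:
                    case MotionEvent.ACTION_MOVE:
                        // While a drag or fling is in progress, the ScrollView must not intercept.
                        v.getParent().requestDisallowInterceptTouchEvent(true);
                        break;
                    case MotionEvent.ACTION_UP:
                    case MotionEvent.ACTION_CANCEL:
                        v.getParent().requestDisallowInterceptTouchEvent(false);
                        break;
                }
                return false; // let the Gallery handle the gesture as usual
            }
        });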

    Read the article

  • JavaFX 2.1, 2.2 TableView update issue

    - by Lewis Liu
    My application uses JPA to read data into a TableView, then modifies and displays the records. The table refreshed modified records under JavaFX 2.0.3. Under JavaFX 2.1 and 2.2, the table no longer refreshes the update. I found other people have a similar issue. My plan was to continue using 2.0.3 until someone fixed the issue under 2.1 and 2.2, but now I know it is not considered a bug and won't be fixed, and I don't know how to deal with this. The following code is modified from the sample demo to show the issue. If I add a new record or delete an old record from the table, the table refreshes fine. If I modify a record, the table won't refresh the change until an add, delete, or sort action is taken. If I remove the modified record and add it again, the table refreshes, but the modified record ends up at the bottom of the table. And if I remove the modified record, add the same record, and then move it back to its original spot, the table doesn't refresh anymore. Below is the complete code; please shed some light on this. import javafx.application.Application; import javafx.beans.property.SimpleStringProperty; import javafx.collections.FXCollections; import javafx.collections.ObservableList; import javafx.event.ActionEvent; import javafx.event.EventHandler; import javafx.geometry.HPos; import javafx.geometry.Insets; import javafx.geometry.Pos; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.control.*; import javafx.scene.control.cell.PropertyValueFactory; import javafx.scene.layout.GridPane; import javafx.scene.layout.HBox; import javafx.scene.layout.VBox; import javafx.scene.text.Font; import javafx.stage.Modality; import javafx.stage.Stage; import javafx.stage.StageStyle; public class Main extends Application { private TextField firtNameField = new TextField(); private TextField lastNameField = new TextField(); private TextField emailField = new TextField(); private Stage editView; private Person fPerson; public static class Person { private final SimpleStringProperty firstName; private final SimpleStringProperty lastName; private final SimpleStringProperty email; private Person(String fName, String lName, String email) { this.firstName = new SimpleStringProperty(fName); this.lastName = new SimpleStringProperty(lName); this.email = new SimpleStringProperty(email); } public String getFirstName() { return firstName.get(); } public void setFirstName(String fName) { firstName.set(fName); } public String getLastName() { return lastName.get(); } public void setLastName(String fName) { lastName.set(fName); } public String getEmail() { return email.get(); } public void setEmail(String fName) { email.set(fName); } } private TableView<Person> table = new TableView<Person>(); private final ObservableList<Person> data = FXCollections.observableArrayList( new Person("Jacob", "Smith", "[email protected]"), new Person("Isabella", "Johnson", "[email protected]"), new Person("Ethan", "Williams", "[email protected]"), new Person("Emma", "Jones", "[email protected]"), new Person("Michael", "Brown", "[email protected]")); public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) { Scene scene = new Scene(new Group()); stage.setTitle("Table View Sample"); stage.setWidth(535); stage.setHeight(535); editView = new Stage(); final Label label = new Label("Address Book"); label.setFont(new Font("Arial", 20)); TableColumn firstNameCol = new TableColumn("First Name"); firstNameCol.setCellValueFactory( new PropertyValueFactory<Person, String>("firstName")); firstNameCol.setMinWidth(150);
TableColumn lastNameCol = new TableColumn("Last Name"); lastNameCol.setCellValueFactory( new PropertyValueFactory<Person, String>("lastName")); lastNameCol.setMinWidth(150); TableColumn emailCol = new TableColumn("Email"); emailCol.setMinWidth(200); emailCol.setCellValueFactory( new PropertyValueFactory<Person, String>("email")); table.setItems(data); table.getColumns().addAll(firstNameCol, lastNameCol, emailCol); //--- create a edit button and a editPane to edit person Button addButton = new Button("Add"); addButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { fPerson = null; firtNameField.setText(""); lastNameField.setText(""); emailField.setText(""); editView.show(); } }); Button editButton = new Button("Edit"); editButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { if (table.getSelectionModel().getSelectedItem() != null) { fPerson = table.getSelectionModel().getSelectedItem(); firtNameField.setText(fPerson.getFirstName()); lastNameField.setText(fPerson.getLastName()); emailField.setText(fPerson.getEmail()); editView.show(); } } }); Button deleteButton = new Button("Delete"); deleteButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { if (table.getSelectionModel().getSelectedItem() != null) { data.remove(table.getSelectionModel().getSelectedItem()); } } }); HBox addEditDeleteButtonBox = new HBox(); addEditDeleteButtonBox.getChildren().addAll(addButton, editButton, deleteButton); addEditDeleteButtonBox.setAlignment(Pos.CENTER_RIGHT); addEditDeleteButtonBox.setSpacing(3); GridPane editPane = new GridPane(); editPane.getStyleClass().add("editView"); editPane.setPadding(new Insets(3)); editPane.setHgap(5); editPane.setVgap(5); Label personLbl = new Label("Person:"); editPane.add(personLbl, 0, 1); GridPane.setHalignment(personLbl, HPos.LEFT); firtNameField.setPrefWidth(250); lastNameField.setPrefWidth(250); emailField.setPrefWidth(250); Label firstNameLabel = new Label("First Name:"); Label lastNameLabel = new Label("Last Name:"); Label emailLabel = new Label("Email:"); editPane.add(firstNameLabel, 0, 3); editPane.add(firtNameField, 1, 3); editPane.add(lastNameLabel, 0, 4); editPane.add(lastNameField, 1, 4); editPane.add(emailLabel, 0, 5); editPane.add(emailField, 1, 5); GridPane.setHalignment(firstNameLabel, HPos.RIGHT); GridPane.setHalignment(lastNameLabel, HPos.RIGHT); GridPane.setHalignment(emailLabel, HPos.RIGHT); Button saveButton = new Button("Save"); saveButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { if (fPerson == null) { fPerson = new Person( firtNameField.getText(), lastNameField.getText(), emailField.getText()); data.add(fPerson); } else { int k = -1; if (data.size() > 0) { for (int i = 0; i < data.size(); i++) { if (data.get(i) == fPerson) { k = i; } } } fPerson.setFirstName(firtNameField.getText()); fPerson.setLastName(lastNameField.getText()); fPerson.setEmail(emailField.getText()); data.set(k, fPerson); table.setItems(data); // The following will work, but edited person has to be added to the button // // data.remove(fPerson); // data.add(fPerson); // add and remove refresh the table, but now move edited person to original spot, // it failed again with the following code // while (data.indexOf(fPerson) != k) { // int i = data.indexOf(fPerson); // Collections.swap(data, i, i - 1); // } } editView.close(); } }); Button cancelButton = new Button("Cancel"); 
cancelButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { editView.close(); } }); HBox saveCancelButtonBox = new HBox(); saveCancelButtonBox.getChildren().addAll(saveButton, cancelButton); saveCancelButtonBox.setAlignment(Pos.CENTER_RIGHT); saveCancelButtonBox.setSpacing(3); VBox editBox = new VBox(); editBox.getChildren().addAll(editPane, saveCancelButtonBox); Scene editScene = new Scene(editBox); editView.setTitle("Person"); editView.initStyle(StageStyle.UTILITY); editView.initModality(Modality.APPLICATION_MODAL); editView.setScene(editScene); editView.close(); final VBox vbox = new VBox(); vbox.setSpacing(5); vbox.getChildren().addAll(label, table, addEditDeleteButtonBox); vbox.setPadding(new Insets(10, 0, 0, 10)); ((Group) scene.getRoot()).getChildren().addAll(vbox); stage.setScene(scene); stage.show(); } }
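    The behaviour described above is consistent with how PropertyValueFactory resolves values: if the model class exposes a firstNameProperty()-style accessor, the cell stays bound to the observable property and in-place edits show up immediately; if it only finds getFirstName(), the value is wrapped in a one-off ReadOnlyObjectWrapper and the cell never hears about later changes. Below is a minimal sketch of the Person class with property accessors added, assuming that is the cause here; with this change the data.set(k, fPerson)/setItems workaround in the save handler should no longer be needed.

        // Sketch: expose the SimpleStringProperty fields via xxxProperty() accessors so
        // PropertyValueFactory can bind cells to them and edits refresh the table.
        public static class Person {
            private final SimpleStringProperty firstName;
            private final SimpleStringProperty lastName;
            private final SimpleStringProperty email;

            private Person(String fName, String lName, String email) {
                this.firstName = new SimpleStringProperty(fName);
                this.lastName = new SimpleStringProperty(lName);
                this.email = new SimpleStringProperty(email);
            }

            public String getFirstName() { return firstName.get(); }
            public void setFirstName(String fName) { firstName.set(fName); }
            public SimpleStringProperty firstNameProperty() { return firstName; } // added

            public String getLastName() { return lastName.get(); }
            public void setLastName(String lName) { lastName.set(lName); }
            public SimpleStringProperty lastNameProperty() { return lastName; } // added

            public String getEmail() { return email.get(); }
            public void setEmail(String email) { this.email.set(email); }
            public SimpleStringProperty emailProperty() { return email; } // added
        }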

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.
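    Before reworking the my.ini wholesale, it is worth confirming where mysqld is actually spending its time. The statements below are standard MySQL commands (nothing specific to this server) that narrow the problem down to the key cache, on-disk temporary tables, or unindexed scans; treat the thresholds in the comments as rules of thumb, and note that toggling the slow query log at runtime only works on 5.1 and later - on older servers set log-slow-queries/long_query_time in my.ini instead.

        -- Key cache efficiency: Key_reads should stay a small fraction of Key_read_requests.
        SHOW GLOBAL STATUS LIKE 'Key_read%';

        -- Temporary tables spilling to disk suggest raising tmp_table_size / max_heap_table_size.
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';

        -- Joins and sorts that cannot use an index.
        SHOW GLOBAL STATUS LIKE 'Select_full_join';
        SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';

        -- What the busy threads are doing right now.
        SHOW FULL PROCESSLIST;

        -- MySQL 5.1+: capture anything slower than 2 seconds for later EXPLAIN.
        SET GLOBAL slow_query_log = ON;
        SET GLOBAL long_query_time = 2;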

    Read the article

  • Linux (NAS) Permissions problem (Permission Denied)

    - by calumbrodie
    This is probably easier to show than to explain... -bash-3.2$ id uid=501(admin) gid=503(admin) groups=100(users),501(admins),503(admin) -bash-3.2$ groups admin users admins -bash-3.2$ ls -l total 8 drwxrwxrwx 78 admin www 4096 Dec 9 09:02 Inbox drwxrwxrwx 21 admin www 4096 Dec 8 21:45 Movies drwxrwx--- 3 admin www 52 Dec 9 07:57 TV -bash-3.2$ cd Movies -bash-3.2$ ls -l total 20 drwxrwx--- 7 admin www 4096 Dec 8 00:04 Action drwxrwx--- 6 admin www 4096 Dec 8 00:05 Animation drwxrwx--- 4 admin www 4096 Dec 8 00:17 Comedy drwxrwx--- 4 admin www 4096 Dec 8 00:14 Drama drwxrwx--- 4 admin www 4096 Dec 8 00:14 Family drwxrwx--- 6 admin www 58 Dec 6 19:10 Foreign Language drwxrwx--- 2 admin www 31 Dec 7 23:58 Horror drwxrwx--- 3 admin www 50 Dec 8 00:15 Science Fiction drwxrwx--- 2 admin www 6 Dec 8 00:16 Thriller -bash-3.2$ cd ../Inbox -bash: cd: ../Inbox: Permission denied Filesystem is XFS. Are there permissions on the directories that ls -l wouldn't show? I'm the owner of all directories and files inside them. I can sudo to modify the file permissions or view the contents of the folders but I need them to be accessible by 'admin'. Any ideas? I'll be checking the question regularly so let me know if I need to update this with more information. Thanks Edit : Added strace execve("/bin/ls", ["ls", "Inbox"], [/* 21 vars */]) = 0 brk(0) = 0x26000 uname({sys="Linux", node="axentraserver.the-brodie-stora.mystora.com", ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001c000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=17972, ...}) = 0 mmap2(NULL, 17972, PROT_READ, MAP_PRIVATE, 3, 0) = 0x4001d000 close(3) = 0 open("/lib/librt.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0P\25\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=39776, ...}) = 0 mmap2(NULL, 57816, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40025000 mprotect(0x4002b000, 28672, PROT_NONE) = 0 mmap2(0x40032000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5) = 0x40032000 close(3) = 0 open("/lib/libacl.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\0\24\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=134375, ...}) = 0 mmap2(NULL, 54368, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40034000 mprotect(0x4003a000, 28672, PROT_NONE) = 0 mmap2(0x40041000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5) = 0x40041000 close(3) = 0 open("/lib/libselinux.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\2147\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=297439, ...}) = 0 mmap2(NULL, 117504, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40042000 mprotect(0x40056000, 28672, PROT_NONE) = 0 mmap2(0x4005d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13) = 0x4005d000 close(3) = 0 open("/lib/libgcc_s.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\10\"\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=43164, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40022000 mmap2(NULL, 74572, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x4005f000 mprotect(0x4006a000, 28672, PROT_NONE) = 0 mmap2(0x40071000, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xa) = 0x40071000 close(3) = 0 open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0XI\1\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=1517948, ...}) = 0 mmap2(NULL, 1245628, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40072000 mprotect(0x40195000, 32768, PROT_NONE) = 0 mmap2(0x4019d000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x123) = 0x4019d000 mmap2(0x401a0000, 8636, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x401a0000 close(3) = 0 open("/lib/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\230A\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=121044, ...}) = 0 mmap2(NULL, 115184, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401a3000 mprotect(0x401b5000, 28672, PROT_NONE) = 0 mmap2(0x401bc000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x11) = 0x401bc000 mmap2(0x401be000, 4592, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x401be000 close(3) = 0 open("/lib/libattr.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\364\f\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=40571, ...}) = 0 mmap2(NULL, 45512, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401c0000 mprotect(0x401c3000, 32768, PROT_NONE) = 0 mmap2(0x401cb000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3) = 0x401cb000 close(3) = 0 open("/lib/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\254\10\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=15344, ...}) = 0 mmap2(NULL, 41116, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401cc000 mprotect(0x401ce000, 28672, PROT_NONE) = 0 mmap2(0x401d5000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0x401d5000 close(3) = 0 open("/lib/libsepol.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\330/\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=228044, ...}) = 0 mmap2(NULL, 301748, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401d7000 mprotect(0x4020f000, 28672, PROT_NONE) = 0 mmap2(0x40216000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x37) = 0x40216000 mmap2(0x40217000, 39604, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x40217000 close(3) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40221000 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40222000 set_tls(0x40221d00, 0x40221d00, 0x40024000, 0x402223e8, 0x41) = 0 mprotect(0x401d5000, 4096, PROT_READ) = 0 mprotect(0x401bc000, 4096, PROT_READ) = 0 mprotect(0x4019d000, 8192, PROT_READ) = 0 mprotect(0x4005d000, 4096, PROT_READ) = 0 mprotect(0x40032000, 4096, PROT_READ) = 0 mprotect(0x40023000, 4096, PROT_READ) = 0 munmap(0x4001d000, 17972) = 0 set_tid_address(0x402218a8) = 9539 set_robust_list(0x402218b0, 0xc) = 0 rt_sigaction(SIGRTMIN, {0x401a6d90, [], SA_SIGINFO|0x4000000}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x401a6c64, [], SA_RESTART|SA_SIGINFO|0x4000000}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0 brk(0) = 0x26000 brk(0x47000) = 0x47000 open("/proc/mounts", O_RDONLY|O_LARGEFILE) = 3 fstat64(3, 
{st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "rootfs / rootfs rw 0 0\nubi0:root"..., 1024) = 1024 read(3, "fs.xino,noplink,create=mfs,sum,b"..., 1024) = 428 read(3, "", 1024) = 0 close(3) = 0 munmap(0x4001d000, 4096) = 0 access("/etc/selinux/", F_OK) = 0 open("/etc/selinux/config", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, TIOCGWINSZ, {ws_row=52, ws_col=153, ws_xpixel=918, ws_ypixel=728}) = 0 stat64("Inbox", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=1696, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1696 read(3, "", 4096) = 0 close(3) = 0 munmap(0x4001d000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=17972, ...}) = 0 mmap2(NULL, 17972, PROT_READ, MAP_PRIVATE, 3, 0) = 0x4001d000 close(3) = 0 open("/lib/libnss_files.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\304\27\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=49256, ...}) = 0 mmap2(NULL, 70316, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40223000 mprotect(0x4022c000, 28672, PROT_NONE) = 0 mmap2(0x40233000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 0x40233000 close(3) = 0 mprotect(0x40233000, 4096, PROT_READ) = 0 munmap(0x4001d000, 17972) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=1661, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1661 close(3) = 0 munmap(0x4001d000, 4096) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/group", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=700, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "root:x:0:root\nbin:x:1:root,bin,d"..., 4096) = 700 close(3) = 0 munmap(0x4001d000, 4096) = 0 open("Inbox", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = -1 EACCES (Permission denied) write(2, "ls: ", 4ls: ) = 4 write(2, "Inbox", 5Inbox) = 5 write(2, ": Permission denied", 19: Permission denied) = 19 write(2, "\n", 1 ) = 1 close(1) = 0 
exit_group(2) = ? 2nd edit: Elaboration for Mike. The Inbox sits at the following location /home/admin/MyLibrary/MyVideos/Inbox /home/admin/MyLibrary/MyVideos/Movies The system is a Netgear Stora NAS box that I have root access to. The /home/ folder is mounted as an smb share on various computers around the house. The folder /Inbox cannot be opened on any of those machines (they all connect as 'admin'). When I ssh into the box using the 'admin' credentials I am also unable to access the folder. The folder was created via a Web Admin page hosted on the NAS. The user/group for the Inbox folder was previously apache:www (expected as this folder was created by the web application), but I chmod/chowned the folder as the root user in an attempt to grant the admin user (therefore the rest of the connected machines) access to the files. Sorry for not including this earlier, I wasn't sure if it was relevant and didn't want to confuse the situation. -Thanks 3rd Edit Sorry again - It looks like this NAS is running some custom version of Red Hat, not Debian as previously stated - I'm not sure if this makes a difference
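    Given that the mode bits on Inbox are 0777 yet the open() in the strace still returns EACCES, the restriction is probably coming from something ls -l does not display. Two things worth checking are POSIX ACLs / extended attributes on the directory, and the union-style (aufs) mount whose options show up in the /proc/mounts read in the strace: if the Inbox path lives on a union mount, the permissions on the underlying branch directories can differ from what the merged view reports. The commands below are standard tools from the acl and attr packages; whether they are installed on the NAS, and the exact branch paths, are assumptions.

        # ACLs and extended attributes that a plain ls -l will not show
        getfacl /home/admin/MyLibrary/MyVideos/Inbox
        getfattr -d -m - /home/admin/MyLibrary/MyVideos/Inbox

        # Find the union mount and its branches, then check the Inbox directory on each branch
        grep -i aufs /proc/mounts
        # ls -ld <branch-path>/MyLibrary/MyVideos/Inbox   (branch paths are placeholders)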

    Read the article

  • Wireless access point -> Powerline -> Router -> Internet, should this work?

    - by Anthony
    My network at home used to be a laptop and desktop connected wirelessly to a single Wireless ADSL router, a Cisco 877W. Wireless reception around the house with this setup was quite unreliable, so I've gone about looking to improve it. I purchased some Belkin Gigabit powerline adapters and I've got these working fine. I can hook a computer up to one of the powerline adapters, and with the other one plugged into the ADSL router the computer has internet access. Additionally I can hook a Netgear DG834G Wireless ADSL router into it with the adsl not plugged in, and after turning off DHCP can RJ45 a computer up to the network. Everything works fine. However, if I setup a wireless network on the Netgear then any computer that connects wirelessly to it cannot access the internet. It gets an IP address very slowly via DHCP which is a good one, but it cannot access the internet. It can however communicate with the RJ45'd computer also connected to the Netgear. I wondered whether this could be a problem with the Netgear so I've borrowed a Cisco Aironet 1200 and got this working fine when it's attached directly to the primary ADSL router. I can connect to it wireless and get onto the internet. However, if I then plug it into the Netgear I can communicate with other devices attached to the Netgear, but can't get any further than the Netgear. All the while though the other devices RJ45'd to the Netgear are communicating with the internet just fine. I'm starting to suspect it's one of two things causing the problem: 1) For some reason the belkin powerline adapters don't like carrying wireless-originating signals. Could this be possible? 2) The primary Cisco ADSL router doesn't want to communicate with other devices on my network more than one hop away from it. I'm making an assumption here that within the Netgear box the wireless and wired sides are handled differently. Could this be true? Has anyone successfully setup something similar to what I'm trying, with a wireless device on the otherside of a pair of powerline connectors? Update 06/07/2010 - Response to irrational John 28 June Thanks for the answer John - and for clearing up some of my questions. The model number of the belkin powerline adapters are F5D4076. Security was apparently enabled by default on them, and I didn't change them from their default setting. The network diagram in your answer shows exactly what I'm trying to setup: I've followed that guide and I'm still not able to get things working properly. The thing that perplexes me is that wired network traffic works just fine - it's only the wireless traffic that doesn't. This is with the same laptop, and the same DHCP or static IPs. "1. What IP addresses did you assign to each router? What subnet masks are you using?" - subnet is 255.255.255.0, the router connected to the adsl is 192.168.153.1 and that has the DHCP server. The access point on the other side of the powerline adapters I've tried both a static IP of 192.168.153.110, same subnet, and a DHCP-assigned IP. The other devices are DHCP, although I also tried manually entering IP settings. "2. Have you correctly enabled DHCP on only one of the routers and disabled it on all the others?" Yes I have - only the internet-connected router has DHCP enabled. The IP range for the DHCP is from 192.168.153.11 - 192.168.153.200. The strange thing is that wired connections work fine on the LAN, plugged into any router, work fine - it's only the wireless connections that aren't working when they're plugged into the non-primary AP. 
"Since the routers you are using appear to integrate an ADSL modem I'm assuming there is no WAN port on them." There's no NAT within the LAN, and all wired connections are connected to LAN ports. It's something wrong with the wireless - wired works fine throughout the whole LAN. Update 06/07/2010 - Response to irrational John 29 June The diagram you've drawn in your answer shows pretty much exactly what I'm trying to do. I've spent another evening trying different things and made some progress but I'm still scratching my head. I've borrowed a Netgear access point and been trying with this, and the strange thing is that my PC is working now - this is a Windows 7 PC connected to the access point in the position of where the DG834G is in the diagram. Meanwhile, however, I have an old Powerbook G4 12" I use for music, and while that has a DHCP-assigned IP address, it's not getting any network throughput to either LAN or internet addresses. To make matters more strange, my phone appears to be intermittently working when it's on the wifi. The access point is a Netgear WPN802v1, DHCP, NAT both switched off, running firmware 2.0.9.0. Last night I set it up with exactly the same settings, and similar to tonight I could get a couple of devices to work, and a couple not to. By the morning, however, everything had stopped working - nothing could get a DHCP IP address. I rebooted the 877W earlier this evening and I'm wondering whether this is why a few things are working now. "Could it be possible that the issue could be with the 877W?" I didn't configure this - is it possible that the DHCP server only likes assigning devices that are immediately attached to it? Or similar, could a firewall be stopping too many addresses that are coming through one device? (ie. the Access Point) This could explain why devices are working at the start but then not by the end. In reply to your questions, "1. I looked at the Netgear DG834G support page. There are five versions of this router. Which version do you have? Netgear usually lists this on the label on the bottom of the router. What version of the firmware does it have?" It's a DG834Gv3, and the firmware is the last on the netgear site version 4.01.40. "3. Not knowing which version you have, I glanced at the reference manual for the DG834G v3. In the section for Wireless Settings under the subsection Wireless Access Point there is a check box for a Wireless Isolation setting. If you have this setting it should be off/unchecked. If it is checked then any device connected via wireless would not be able to talk to any other device on the LAN. This sounds like your problem so maybe this is the cause?" I've checked this and it's switched off. I've made a change to the IP of the access point to something outside the DHCP range - it's now 192.158.153.5, with DHCP starting at 11 and going up to 254. Thanks for the tip about this - I only have a few devices so wouldn't anticipate the DHCP server assigning up to 110, but better safe than sorry. Finally one more thing I thought I should add, is with the Powerbook G4 that's not working - it's getting a DHCP IP address and it can communicate with the WPN802 as I can visit the administration page. Anything further than this, however, it can't reach; I can't administrate the 192.168.153.1 (877W router). Strangely, however, when I open Finder on the same powerbook it's detecting my NAS which is attached directly via wire to the 877W. If I try to browse it, it says connection failed. 
RE: "Perhaps the problem with your Powerbook is with DNS?.." The IP settings on the powerbook are identical to that of the PC with the exception of the IP address; the PC is 192.168.153.17 and the powerbook is 192.168.153.12. Subnets are the same, 255.255.255.0 and default gateway is the same, .1, and the DNS servers are the same. I administrate the 877W by going to 192.168.153.1 in the browser. This is what isn't working from the Powerbook, despite the PC working fine when I do the same. Meanwhile, however, I can administrate the AP on 192.168.153.5 from both PC and Powerbook Update 06/07/2010 - FINAL RESOLUTION of sorts: First off, sorry for the length of this question. I need start to practice a more concise writing style, so I'm going to try to keep this bit brief. After much fiddling, and with the hugely-appreciated help of irrational John, I have come to the conclusion that it's something wrong with the powerbook. I believe that this was perhaps the reason I doubted things worked at the very beginning. I now have the original DG834Gv3 running both wirelessly and wired, and both wired devices and wireless devices get internet connectivity. The only anomaly is the powerbook which I've had to keep wired, as no matter what I do it refuses to work wirelessly. I still have suspicions that the 877W isn't quite right; I'm fairly sure that if I RJ45 the powerline adapter into a different LAN port on it then everything will break. I've just about run out of patience to test this further, and I think I need to go into the 877W's config to match the 877w's lan port's settings. I'm accepting irrational John's answer as he's been enormously helpful, way above the call of duty, and for this line he wrote: Beats the heck out of me. which in the midst of great frustration made me chuckle, and for a sentence in one of his comments to the same answer: If it is specific to the Powerbook I would put that issue aside until after you feel you have the rest of your LAN and the additional WAP all working together correctlyt It was this second sentence that made me put the powerbook aside and concentrate on the other devices that ultimately led me to getting things working.

    Read the article

  • Agile Development

    - by James Oloo Onyango
    Alot of literature has and is being written about agile developement and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, i have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read! Rufus Inc Project Kick Off Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting. "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.   His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts.   "Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere.   "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."   "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working. 
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawed; no, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."   So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.     The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrink-wrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."   You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. 
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more, preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing; the marketing folks are complaining that the product isn't anything like they specified; and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowerment, bah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months' worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfect week for every 3 real weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now." Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, "can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation," says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is better to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say. "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper," says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements, Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method." "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hey!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant, whether a whole task or simply an important part of a task, they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane, but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • Is there a Math.atan2 substitute for j2ME? Blackberry development

    - by Kai
    I have a wide variety of locations stored in my persistent object that contain latitudes and longitudes in double(43.7389, 7.42577) format. I need to be able to grab the user's latitude and longitude and select all items within, say 1 mile. Walking distance. I have done this in PHP so I snagged my PHP code and transferred it to Java, where everything plugged in fine until I figured out J2ME doesn't support atan2(double, double). So, after some searching, I find a small snippet of code that is supposed to be a substitute for atan2. Here is the code: public double atan2(double y, double x) { double coeff_1 = Math.PI / 4d; double coeff_2 = 3d * coeff_1; double abs_y = Math.abs(y)+ 1e-10f; double r, angle; if (x >= 0d) { r = (x - abs_y) / (x + abs_y); angle = coeff_1; } else { r = (x + abs_y) / (abs_y - x); angle = coeff_2; } angle += (0.1963f * r * r - 0.9817f) * r; return y < 0.0f ? -angle : angle; } I am getting odd results from this. My min and max latitude and longitudes are coming back as incredibly low numbers that can't possibly be right. Like 0.003785746 when I am expecting something closer to the original lat and long values (43.7389, 7.42577). Since I am no master of advanced math, I don't really know what to look for here. Perhaps someone else may have an answer. Here is my complete code: package store_finder; import java.util.Vector; import javax.microedition.location.Criteria; import javax.microedition.location.Location; import javax.microedition.location.LocationException; import javax.microedition.location.LocationListener; import javax.microedition.location.LocationProvider; import javax.microedition.location.QualifiedCoordinates; import net.rim.blackberry.api.invoke.Invoke; import net.rim.blackberry.api.invoke.MapsArguments; import net.rim.device.api.system.Bitmap; import net.rim.device.api.system.Display; import net.rim.device.api.ui.Color; import net.rim.device.api.ui.Field; import net.rim.device.api.ui.Graphics; import net.rim.device.api.ui.Manager; import net.rim.device.api.ui.component.BitmapField; import net.rim.device.api.ui.component.RichTextField; import net.rim.device.api.ui.component.SeparatorField; import net.rim.device.api.ui.container.HorizontalFieldManager; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.container.VerticalFieldManager; public class nearBy extends MainScreen { private HorizontalFieldManager _top; private VerticalFieldManager _middle; private int horizontalOffset; private final static long animationTime = 300; private long animationStart = 0; private double latitude = 43.7389; private double longitude = 7.42577; private int _interval = -1; private double max_lat; private double min_lat; private double max_lon; private double min_lon; private double latitude_in_degrees; private double longitude_in_degrees; public nearBy() { super(); horizontalOffset = Display.getWidth(); _top = new HorizontalFieldManager(Manager.USE_ALL_WIDTH | Field.FIELD_HCENTER) { public void paint(Graphics gr) { Bitmap bg = Bitmap.getBitmapResource("bg.png"); gr.drawBitmap(0, 0, Display.getWidth(), Display.getHeight(), bg, 0, 0); subpaint(gr); } }; _middle = new VerticalFieldManager() { public void paint(Graphics graphics) { graphics.setBackgroundColor(0xFFFFFF); graphics.setColor(Color.BLACK); graphics.clear(); super.paint(graphics); } protected void sublayout(int maxWidth, int maxHeight) { int displayWidth = Display.getWidth(); int displayHeight = Display.getHeight(); super.sublayout( displayWidth, displayHeight); setExtent( displayWidth, 
displayHeight); } }; add(_top); add(_middle); Bitmap lol = Bitmap.getBitmapResource("logo.png"); BitmapField lolfield = new BitmapField(lol); _top.add(lolfield); Criteria cr= new Criteria(); cr.setCostAllowed(true); cr.setPreferredResponseTime(60); cr.setHorizontalAccuracy(5000); cr.setVerticalAccuracy(5000); cr.setAltitudeRequired(true); cr.isSpeedAndCourseRequired(); cr.isAddressInfoRequired(); try{ LocationProvider lp = LocationProvider.getInstance(cr); if( lp!=null ){ lp.setLocationListener(new LocationListenerImpl(), _interval, 1, 1); } } catch(LocationException le) { add(new RichTextField("Location exception "+le)); } //_middle.add(new RichTextField("this is a map " + Double.toString(latitude) + " " + Double.toString(longitude))); int lat = (int) (latitude * 100000); int lon = (int) (longitude * 100000); String document = "<location-document>" + "<location lon='" + lon + "' lat='" + lat + "' label='You are here' description='You' zoom='0' />" + "<location lon='742733' lat='4373930' label='Hotel de Paris' description='Hotel de Paris' address='Palace du Casino' postalCode='98000' phone='37798063000' zoom='0' />" + "</location-document>"; // Invoke.invokeApplication(Invoke.APP_TYPE_MAPS, new MapsArguments( MapsArguments.ARG_LOCATION_DOCUMENT, document)); _middle.add(new SeparatorField()); surroundingVenues(); _middle.add(new RichTextField("max lat: " + max_lat)); _middle.add(new RichTextField("min lat: " + min_lat)); _middle.add(new RichTextField("max lon: " + max_lon)); _middle.add(new RichTextField("min lon: " + min_lon)); } private void surroundingVenues() { double point_1_latitude_in_degrees = latitude; double point_1_longitude_in_degrees= longitude; // diagonal distance + error margin double distance_in_miles = (5 * 1.90359441) + 10; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, 45); double lat_limit_1 = latitude_in_degrees; double lon_limit_1 = longitude_in_degrees; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, 135); double lat_limit_2 = latitude_in_degrees; double lon_limit_2 = longitude_in_degrees; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, -135); double lat_limit_3 = latitude_in_degrees; double lon_limit_3 = longitude_in_degrees; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, -45); double lat_limit_4 = latitude_in_degrees; double lon_limit_4 = longitude_in_degrees; double mx1 = Math.max(lat_limit_1, lat_limit_2); double mx2 = Math.max(lat_limit_3, lat_limit_4); max_lat = Math.max(mx1, mx2); double mm1 = Math.min(lat_limit_1, lat_limit_2); double mm2 = Math.min(lat_limit_3, lat_limit_4); min_lat = Math.max(mm1, mm2); double mlon1 = Math.max(lon_limit_1, lon_limit_2); double mlon2 = Math.max(lon_limit_3, lon_limit_4); max_lon = Math.max(mlon1, mlon2); double minl1 = Math.min(lon_limit_1, lon_limit_2); double minl2 = Math.min(lon_limit_3, lon_limit_4); min_lon = Math.max(minl1, minl2); //$qry = "SELECT DISTINCT zip.zipcode, zip.latitude, zip.longitude, sg_stores.* FROM zip JOIN store_finder AS sg_stores ON sg_stores.zip=zip.zipcode WHERE zip.latitude<=$lat_limit_max AND zip.latitude>=$lat_limit_min AND zip.longitude<=$lon_limit_max AND zip.longitude>=$lon_limit_min"; } private void getCords(double point_1_latitude, double point_1_longitude, double distance, int degs) { double m_EquatorialRadiusInMeters = 6366564.86; double m_Flattening=0; double distance_in_meters = distance * 1609.344 ; double 
direction_in_radians = Math.toRadians( degs ); double eps = 0.000000000000005; double r = 1.0 - m_Flattening; double point_1_latitude_in_radians = Math.toRadians( point_1_latitude ); double point_1_longitude_in_radians = Math.toRadians( point_1_longitude ); double tangent_u = (r * Math.sin( point_1_latitude_in_radians ) ) / Math.cos( point_1_latitude_in_radians ); double sine_of_direction = Math.sin( direction_in_radians ); double cosine_of_direction = Math.cos( direction_in_radians ); double heading_from_point_2_to_point_1_in_radians = 0.0; if ( cosine_of_direction != 0.0 ) { heading_from_point_2_to_point_1_in_radians = atan2( tangent_u, cosine_of_direction ) * 2.0; } double cu = 1.0 / Math.sqrt( ( tangent_u * tangent_u ) + 1.0 ); double su = tangent_u * cu; double sa = cu * sine_of_direction; double c2a = ( (-sa) * sa ) + 1.0; double x= Math.sqrt( ( ( ( 1.0 /r /r ) - 1.0 ) * c2a ) + 1.0 ) + 1.0; x= (x- 2.0 ) / x; double c= 1.0 - x; c= ( ( (x * x) / 4.0 ) + 1.0 ) / c; double d= ( ( 0.375 * (x * x) ) -1.0 ) * x; tangent_u = distance_in_meters /r / m_EquatorialRadiusInMeters /c; double y= tangent_u; boolean exit_loop = false; double cosine_of_y = 0.0; double cz = 0.0; double e = 0.0; double term_1 = 0.0; double term_2 = 0.0; double term_3 = 0.0; double sine_of_y = 0.0; while( exit_loop != true ) { sine_of_y = Math.sin(y); cosine_of_y = Math.cos(y); cz = Math.cos( heading_from_point_2_to_point_1_in_radians + y); e = (cz * cz * 2.0 ) - 1.0; c = y; x = e * cosine_of_y; y = (e + e) - 1.0; term_1 = ( sine_of_y * sine_of_y * 4.0 ) - 3.0; term_2 = ( ( term_1 * y * cz * d) / 6.0 ) + x; term_3 = ( ( term_2 * d) / 4.0 ) -cz; y= ( term_3 * sine_of_y * d) + tangent_u; if ( Math.abs(y - c) > eps ) { exit_loop = false; } else { exit_loop = true; } } heading_from_point_2_to_point_1_in_radians = ( cu * cosine_of_y * cosine_of_direction ) - ( su * sine_of_y ); c = r * Math.sqrt( ( sa * sa ) + ( heading_from_point_2_to_point_1_in_radians * heading_from_point_2_to_point_1_in_radians ) ); d = ( su * cosine_of_y ) + ( cu * sine_of_y * cosine_of_direction ); double point_2_latitude_in_radians = atan2(d, c); c = ( cu * cosine_of_y ) - ( su * sine_of_y * cosine_of_direction ); x = atan2( sine_of_y * sine_of_direction, c); c = ( ( ( ( ( -3.0 * c2a ) + 4.0 ) * m_Flattening ) + 4.0 ) * c2a * m_Flattening ) / 16.0; d = ( ( ( (e * cosine_of_y * c) + cz ) * sine_of_y * c) + y) * sa; double point_2_longitude_in_radians = ( point_1_longitude_in_radians + x) - ( ( 1.0 - c) * d * m_Flattening ); heading_from_point_2_to_point_1_in_radians = atan2( sa, heading_from_point_2_to_point_1_in_radians ) + Math.PI; latitude_in_degrees = Math.toRadians( point_2_latitude_in_radians ); longitude_in_degrees = Math.toRadians( point_2_longitude_in_radians ); } public double atan2(double y, double x) { double coeff_1 = Math.PI / 4d; double coeff_2 = 3d * coeff_1; double abs_y = Math.abs(y)+ 1e-10f; double r, angle; if (x >= 0d) { r = (x - abs_y) / (x + abs_y); angle = coeff_1; } else { r = (x + abs_y) / (abs_y - x); angle = coeff_2; } angle += (0.1963f * r * r - 0.9817f) * r; return y < 0.0f ? 
-angle : angle; } private Vector fetchVenues(double max_lat, double min_lat, double max_lon, double min_lon) { return new Vector(); } private class LocationListenerImpl implements LocationListener { public void locationUpdated(LocationProvider provider, Location location) { if(location.isValid()) { nearBy.this.longitude = location.getQualifiedCoordinates().getLongitude(); nearBy.this.latitude = location.getQualifiedCoordinates().getLatitude(); //double altitude = location.getQualifiedCoordinates().getAltitude(); //float speed = location.getSpeed(); } } public void providerStateChanged(LocationProvider provider, int newState) { // MUST implement this. Should probably do something useful with it as well. } } } please excuse the mess. I have the user lat long hard coded since I do not have GPS functional yet. You can see the SQL query commented out to know how I plan on using the min and max lat and long values. Any help is appreciated. Thanks
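A note on the code above: the very small results (for example 0.003785746 where something near 43.7 was expected) look consistent with a unit slip at the end of getCords, where Math.toRadians is applied to values that are already in radians (Math.toDegrees appears to be the intended call); the min_lat and min_lon lines also use Math.max where Math.min looks intended. Those are worth checking before blaming the atan2 approximation itself. For completeness, below is a sketch of a somewhat more accurate self-contained atan2 substitute; it uses nothing beyond double arithmetic and Math.PI (so it should compile on CLDC 1.1), the class and method names are illustrative, and it has not been tested on BlackBerry hardware.

// Hypothetical helper class; names are illustrative, not from the post above.
public final class MathUtil {

    // Polynomial approximation of atan for |z| <= 1.
    // Commonly cited maximum error is roughly 0.005 radians on that range.
    private static double atanApprox(double z) {
        return z / (1.0 + 0.28 * z * z);
    }

    // atan over the full range, using atan(z) = pi/2 - atan(1/z) for |z| > 1.
    public static double atan(double z) {
        if (z > 1.0) {
            return (Math.PI / 2.0) - atanApprox(1.0 / z);
        }
        if (z < -1.0) {
            return (-Math.PI / 2.0) - atanApprox(1.0 / z);
        }
        return atanApprox(z);
    }

    // Quadrant-aware atan2 built on the atan above.
    public static double atan2(double y, double x) {
        if (x > 0.0) {
            return atan(y / x);
        }
        if (x < 0.0) {
            return (y >= 0.0) ? atan(y / x) + Math.PI
                              : atan(y / x) - Math.PI;
        }
        // x == 0: result is +/- pi/2, or 0 by convention when y is also 0.
        if (y > 0.0) return Math.PI / 2.0;
        if (y < 0.0) return -Math.PI / 2.0;
        return 0.0;
    }
}

Even with a better atan2, the bounding-box values will stay off by orders of magnitude while the degrees/radians conversion at the end of getCords is inverted, so that fix should come first.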

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we invoke these functions one by one using async.js. Once all functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as one blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That's all on the client side. The outline of our logic is: - Calculate the start and end byte index for each chunk based on the block size. - Define a function for each chunk that reads it from the file and uploads the content to the backend service through ajax. - Execute the functions defined in the previous step with "async.js". - Finally, commit the chunks by invoking the backend service, which combines them into one blob in Windows Azure Storage.   Save Chunks as Blocks into Blob Storage Above we finished the client side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller which only accepts HTTP POST requests. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12: 13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name and chunk index from the Request.Form collection, and the chunk content from Request.Files, all of which were posted from the client side.
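Pulled out on its own, that retrieval is only a few lines. Below is a minimal sketch of it (the field names "name", "index" and "file" are the ones we appended to the FormData on the client), with a note on why the block ID is Base64-encoded before it is sent to Blob Storage:

// read the metadata fields appended to the FormData on the client
var name = Request.Form["name"];
var index = int.Parse(Request.Form["index"]);
// the chunk itself arrives as an uploaded file part
var file = Request.Files[0];
// block IDs in Blob Storage must be Base64 strings, and all block IDs within one blob
// must have the same length, so converting the int index through BitConverter
// (always 4 bytes) gives us uniform, valid IDs
var id = Convert.ToBase64String(BitConverter.GetBytes(index));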
And then I used the Windows Azure SDK to create a blob container (in this case we use the container named "test") and create a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with its index, since in Blob Storage each block must have an ID associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11: 12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21: 22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from Request.Form. I also retrieved the chunk index list, which becomes the block ID list, from Request.Form as a string, split it into a list, and then invoked the CloudBlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be downloaded. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14: 15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24: 25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we have finished all the code we need. Below is the full client side JavaScript code.
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions in parallel 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server at the same time to maximize bandwidth usage. This works well as long as the file is not very large and the chunk size is not very small. But for a large file it might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions in parallel with a concurrency limit of 4 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage as blocks. We focused on the frontend side and leveraged three new features introduced in HTML5: - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which contains part of the file content. - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide the maximum upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.   Regarding the chunk size and the parallel limit, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth).
The test cases were: - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binary. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel upload performs better than sequential upload. - There is no significant performance difference between parallel with 4 threads and with 8 threads. - Transferring binary chunks performs better than base64 strings. - In all cases, a chunk size of 1MB - 2MB gives the best performance.   Hope this helps, Shaun All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How can I create a weekly calendar view for an Android Honeycomb application?

    - by BVB
    I am working on an Android (v3.0) application that has a requirement of mimicking the weekly calendar layout found on Google Calendar: The events will be based on external requests through the Google Calendar API (I already have this part working). Using the API, I can obtain a list of events for the week, with each event having a starting and and ending datetime. I would like to use this data to show the scheduled events to the application's users in a view similar to the one above. Here's what I have so far: The XML appears below: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="800dp" android:layout_height="match_parent" android:orientation="vertical" > <TextView android:id="@+id/textView1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Calendar Title" android:textAppearance="?android:attr/textAppearanceLarge" /> <RelativeLayout android:id="@+id/relativeLayout1" android:layout_width="match_parent" android:layout_height="wrap_content" > <LinearLayout android:id="@+id/linearLayout1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:layout_alignParentRight="true" android:layout_alignParentTop="true" > <TextView android:id="@+id/textView2" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:text="" /> <TextView android:id="@+id/textView3" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Sunday" /> <TextView android:id="@+id/textView4" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Monday" /> <TextView android:id="@+id/textView5" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Tuesday" /> <TextView android:id="@+id/textView6" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Wednesday" /> <TextView android:id="@+id/textView7" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Thursday" /> <TextView android:id="@+id/textView8" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Friday" /> <TextView android:id="@+id/textView9" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="2" android:gravity="center" android:text="Saturday" /> </LinearLayout> </RelativeLayout> <ScrollView android:id="@+id/scrollView1" android:layout_width="match_parent" android:layout_height="match_parent" android:padding="0dp" android:scrollbars="none" >" <RelativeLayout android:id="@+id/relativeLayout242" android:layout_width="match_parent" android:layout_height="wrap_content" android:padding="0dp" > <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="0dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="40dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="80dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" 
android:layout_height="1dp" android:layout_marginTop="120dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="160dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="200dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="240dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="280dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="320dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="360dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="400dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="440dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="480dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="520dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="560dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="600dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="640dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="680dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="720dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="760dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="800dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="840dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="880dp"/> <View android:background="#aaa" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="920dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="20dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="60dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="100dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="140dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="180dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="220dp"/> <View android:background="#777" android:layout_width = 
"fill_parent" android:layout_height="1dp" android:layout_marginTop="260dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="300dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="340dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="380dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="420dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="460dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="500dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="540dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="580dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="620dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="660dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="700dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="740dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="780dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="820dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="860dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="900dp"/> <View android:background="#777" android:layout_width = "fill_parent" android:layout_height="1dp" android:layout_marginTop="940dp"/> <LinearLayout android:id="@+id/linearLayout2" android:layout_width="match_parent" android:layout_height="wrap_content" android:padding="0dp" > <RelativeLayout android:id="@+id/relativeLayout2" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" android:padding="0dp" > <View android:background="#aaa" android:layout_width = "1dp" android:layout_height="fill_parent" android:layout_alignParentRight="true"/> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="0dp" android:gravity="center" android:text="12am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="40dp" android:gravity="center" android:text="1am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="80dp" android:gravity="center" android:text="2am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="120dp" android:gravity="center" android:text="3am" /> <TextView 
android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="160dp" android:gravity="center" android:text="4am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="200dp" android:gravity="center" android:text="5am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="240dp" android:gravity="center" android:text="6am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="280dp" android:gravity="center" android:text="7am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="320dp" android:gravity="center" android:text="8am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="360dp" android:gravity="center" android:text="9am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="400dp" android:gravity="center" android:text="10am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="440dp" android:gravity="center" android:text="11am" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="480dp" android:gravity="center" android:text="12pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="520dp" android:gravity="center" android:text="1pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="560dp" android:gravity="center" android:text="2pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="600dp" android:gravity="center" android:text="3pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="640dp" android:gravity="center" android:text="4pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="680dp" android:gravity="center" android:text="5pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="720dp" android:gravity="center" android:text="6pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="760dp" android:gravity="center" android:text="7pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="800dp" android:gravity="center" android:text="8pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="840dp" android:gravity="center" android:text="9pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" 
android:layout_height="wrap_content" android:layout_marginTop="880dp" android:gravity="center" android:text="10pm" /> <TextView android:id="@+id/textView10" android:layout_width="match_parent" android:layout_height="40dp" android:layout_marginTop="920dp" android:gravity="center|top" android:text="11pm" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout3" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="14" android:padding="0dp" > <LinearLayout android:id="@+id/linearLayout3" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:layout_alignParentRight="true" android:layout_alignParentTop="true" android:padding="0dp" > <RelativeLayout android:id="@+id/relativeLayout4" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="180dp" android:layout_marginTop="180dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="180dp" android:layout_marginTop="180dp" android:text="Some Event" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout5" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="180dp" android:layout_marginTop="280dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="180dp" android:layout_marginTop="280dp" android:text="Some Event" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout6" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="60dp" android:layout_marginTop="40dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="60dp" android:layout_marginTop="40dp" android:text="Some Event" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout7" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="90dp" android:layout_marginTop="60dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="90dp" android:layout_marginTop="60dp" android:text="Some Event" /> <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="120dp" android:layout_marginTop="340dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="120dp" android:layout_marginTop="340dp" android:text="Some Event" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout8" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="180dp" android:layout_marginTop="380dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="180dp" android:layout_marginTop="380dp" android:text="Some Event" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout9" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="180dp" 
android:layout_marginTop="480dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="180dp" android:layout_marginTop="480dp" android:text="Some Event" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeLayout10" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" > <View android:background="#00f" android:layout_width = "fill_parent" android:layout_height="180dp" android:layout_marginTop="340dp"/> <Button android:id="@+id/button1" android:layout_width="fill_parent" android:layout_height="180dp" android:layout_marginTop="340dp" android:text="Some Event" /> </RelativeLayout> </LinearLayout> </RelativeLayout> </LinearLayout> </RelativeLayout> </ScrollView> </LinearLayout> My approach was to make 40dp equal to 1 hr of time. Thus, whenever I would like to add an event that has a duration of 1.5 hours, I will make an 60dp button that I will place at the exact location that the time begins (12am = 0dp from the top, 1pm = 40dp from the top, 2pm = 80d from the top, etc). My questions are: Is there a better way of doing this? How can I convert my XML to be stand-alone view that could be added to any Android project? (I plan on perhaps making a blog post about the end product) Thank you!

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always to be quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews

    Read the article

  • .NET file Decryption - Bad Data

    - by Jon
    I am in the process of rewriting an old application. The old app stored data in a scoreboard file that was encrypted with the following code: private const String SSecretKey = @"?B?n?Mj?"; public DataTable GetScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardLocation); if (!f.Exists) { return setupNewScoreBoard(); } DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } This works fine. I have copied the code into my new app so that I can create a legacy loader and convert the data into the new format. The problem is I get a "Bad Data" error: System.Security.Cryptography.CryptographicException was unhandled Message="Bad Data.\r\n" Source="mscorlib" The error fires at this line: dTable.ReadXml(new StreamReader(cryptostreamDecr)); The encrypted file was created today on the same machine with the old code. I guess that maybe the encryption / decryption process uses the application name / file or something and therefore means I can not open it. Does anyone have an idea as to: A) Be able explain why this isn't working? B) Offer a solution that would allow me to be able to open files that were created with the legacy application and be able to convert them please? Here is the whole class that deals with loading and saving the scoreboard: using System; using System.Collections.Generic; using System.Text; using System.Security.Cryptography; using System.Runtime.InteropServices; using System.IO; using System.Data; using System.Xml; using System.Threading; namespace JawBreaker { [Serializable] class ScoreBoardLoader { private Jawbreaker jawbreaker; private String sSecretKey = @"?B?n?Mj?"; private String scoreBoardFileLocation = ""; private bool keepScoreBoardUpdated = true; private int intTimer = 180000; public ScoreBoardLoader(Jawbreaker jawbreaker, String scoreBoardFileLocation) { this.jawbreaker = jawbreaker; this.scoreBoardFileLocation = scoreBoardFileLocation; } // Call this function to remove the key from memory after use for security [System.Runtime.InteropServices.DllImport("KERNEL32.DLL", EntryPoint = "RtlZeroMemory")] public static extern bool ZeroMemory(IntPtr Destination, int Length); // Function to Generate a 64 bits Key. private string GenerateKey() { // Create an instance of Symetric Algorithm. Key and IV is generated automatically. DESCryptoServiceProvider desCrypto = (DESCryptoServiceProvider)DESCryptoServiceProvider.Create(); // Use the Automatically generated key for Encryption. return ASCIIEncoding.ASCII.GetString(desCrypto.Key); } public void writeScoreboardToFile() { DataTable tempScoreBoard = getScoreboardFromFile(); //add in the new scores to the end of the file. 
for (int i = 0; i < jawbreaker.Scoreboard.Rows.Count; i++) { DataRow row = tempScoreBoard.NewRow(); row.ItemArray = jawbreaker.Scoreboard.Rows[i].ItemArray; tempScoreBoard.Rows.Add(row); } //before it is written back to the file make sure we update the sync info if (jawbreaker.SyncScoreboard) { //connect to webservice, login and update all the scores that have not been synced. for (int i = 0; i < tempScoreBoard.Rows.Count; i++) { try { //check to see if that row has been synced to the server if (!Boolean.Parse(tempScoreBoard.Rows[i].ItemArray[7].ToString())) { //sync info to server //update the row to say that it has been updated object[] tempArray = tempScoreBoard.Rows[i].ItemArray; tempArray[7] = true; tempScoreBoard.Rows[i].ItemArray = tempArray; tempScoreBoard.AcceptChanges(); } } catch (Exception ex) { jawbreaker.writeErrorToLog("ERROR OCCURED DURING SYNC TO SERVER UPDATE: " + ex.Message); } } } FileStream fsEncrypted = new FileStream(scoreBoardFileLocation, FileMode.Create, FileAccess.Write); DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); DES.Key = ASCIIEncoding.ASCII.GetBytes(sSecretKey); DES.IV = ASCIIEncoding.ASCII.GetBytes(sSecretKey); ICryptoTransform desencrypt = DES.CreateEncryptor(); CryptoStream cryptostream = new CryptoStream(fsEncrypted, desencrypt, CryptoStreamMode.Write); MemoryStream ms = new MemoryStream(); tempScoreBoard.WriteXml(ms, XmlWriteMode.WriteSchema); ms.Position = 0; byte[] bitarray = new byte[ms.Length]; ms.Read(bitarray, 0, bitarray.Length); cryptostream.Write(bitarray, 0, bitarray.Length); cryptostream.Close(); ms.Close(); //now the scores have been added to the file remove them from the datatable jawbreaker.Scoreboard.Rows.Clear(); } public void startPeriodicScoreboardWriteToFile() { while (keepScoreBoardUpdated) { //three minute sleep. Thread.Sleep(intTimer); writeScoreboardToFile(); } } public void stopPeriodicScoreboardWriteToFile() { keepScoreBoardUpdated = false; } public int IntTimer { get { return intTimer; } set { intTimer = value; } } public DataTable getScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardFileLocation); if (!f.Exists) { jawbreaker.writeInfoToLog("Scoreboard not there so creating new one"); return setupNewScoreBoard(); } else { DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(sSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(sSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardFileLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. 
CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } } public DataTable setupNewScoreBoard() { //scoreboard info into dataset DataTable scoreboard = new DataTable("scoreboard"); scoreboard.Columns.Add(new DataColumn("playername", System.Type.GetType("System.String"))); scoreboard.Columns.Add(new DataColumn("score", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("ballnumber", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("xsize", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("ysize", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("gametype", System.Type.GetType("System.String"))); scoreboard.Columns.Add(new DataColumn("date", System.Type.GetType("System.DateTime"))); scoreboard.Columns.Add(new DataColumn("synced", System.Type.GetType("System.Boolean"))); scoreboard.AcceptChanges(); return scoreboard; } private void Run() { // For additional security Pin the key. GCHandle gch = GCHandle.Alloc(sSecretKey, GCHandleType.Pinned); // Remove the Key from memory. ZeroMemory(gch.AddrOfPinnedObject(), sSecretKey.Length * 2); gch.Free(); } } }
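A possible way to narrow this down (a minimal diagnostic sketch, not from the original post; the helper name DecryptRaw and the file path are made up): a CryptographicException with "Bad Data" from DESCryptoServiceProvider is typically raised while the final block's padding is validated, which usually points at mismatched key/IV bytes or a truncated file rather than at DataTable.ReadXml itself. Decrypting the whole file with the padding check disabled lets you inspect the raw plaintext: if it starts with readable XML, the key and IV match and the writing side is suspect; if it is garbage, the key bytes differ from the ones the legacy application used (for example because the '?' characters in the copied key literal no longer match the original characters).

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    class ScoreboardDiagnostics
    {
        // Hypothetical helper: decrypt the scoreboard file with PaddingMode.None so the
        // padding validation that raises "Bad Data" is skipped and the raw bytes can be inspected.
        static byte[] DecryptRaw(string path, string secretKey)
        {
            byte[] keyBytes = Encoding.ASCII.GetBytes(secretKey); // DES needs exactly 8 key bytes

            using (DESCryptoServiceProvider des = new DESCryptoServiceProvider())
            {
                des.Key = keyBytes;
                des.IV = keyBytes;
                des.Padding = PaddingMode.None; // inspect first, validate later

                using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
                using (CryptoStream cs = new CryptoStream(fs, des.CreateDecryptor(), CryptoStreamMode.Read))
                using (MemoryStream ms = new MemoryStream())
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = cs.Read(buffer, 0, buffer.Length)) > 0)
                        ms.Write(buffer, 0, read);
                    return ms.ToArray();
                }
            }
        }

        static void Main()
        {
            // "scoreboard.dat" is a placeholder for the legacy scoreBoardLocation path.
            byte[] plain = DecryptRaw("scoreboard.dat", @"?B?n?Mj?");
            Console.WriteLine(Encoding.ASCII.GetString(plain, 0, Math.Min(plain.Length, 64)));
        }
    }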

    Read the article

  • Silverlight for Windows Embedded tutorial (step 4)

    - by Valter Minute
I’m back with my Silverlight for Windows Embedded tutorial. Sorry for the long delay between step 3 and step 4: the MVP summit and some work-related issues prevented me from working on the tutorial during the last weeks. In our first, second and third tutorial steps we implemented some very simple applications, just to understand the basic structure of a Silverlight for Windows Embedded application, learn how to handle events and how to operate on images. In this fourth step our sample application will be slightly more complicated, to introduce two new topics: list boxes and custom controls. We will also learn how to create controls at runtime. I chose to explain those topics together and provide a sample a bit more complicated than usual just to start to give the feeling of how a “real” Silverlight for Windows Embedded application is organized. As usual we can start using Expression Blend to define our main page. In this case we will have a listbox and a textblock. Here’s the XAML code: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="ListDemo.Page" Width="640" Height="480" x:Name="ListPage" xmlns:ListDemo="clr-namespace:ListDemo">   <Grid x:Name="LayoutRoot" Background="White"> <ListBox Margin="19,57,19,66" x:Name="FileList" SelectionChanged="Filelist_SelectionChanged"/> <TextBlock Height="35" Margin="19,8,19,0" VerticalAlignment="Top" TextWrapping="Wrap" x:Name="CurrentDir" Text="TextBlock" FontSize="20"/> </Grid> </UserControl> In our listbox we will load a list of directories, starting from the filesystem root (there are no drives in Windows CE, the filesystem has a single root named “\”). When the user clicks on an item inside the list, the corresponding directory path will be displayed in the TextBlock object and the subdirectories of the selected branch will be shown inside the list. As you can see we declared an event handler for the SelectionChanged event of our listbox. We also used a different font size for the TextBlock, to make it more readable. XAML and Expression Blend allow you to customize your UI pretty heavily; experiment with the tools and discover how you can completely change the aspect of your application without changing a single line of code! Inside our ListBox we want to present each directory with a nice icon and its name, just like you are used to seeing them inside the Windows 7 file explorer, for example. To get this we will define a user control. This is a custom object that will behave like "regular" Silverlight for Windows Embedded objects inside our application. 
First of all we have to define the look of our custom control, named DirectoryItem, using XAML: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" x:Class="ListDemo.DirectoryItem" Width="500" Height="80">   <StackPanel x:Name="LayoutRoot" Orientation="Horizontal"> <Canvas Width="31.6667" Height="45.9583" Margin="10,10,10,10" RenderTransformOrigin="0.5,0.5"> <Canvas.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform/> <RotateTransform Angle="-31.27"/> <TranslateTransform/> </TransformGroup> </Canvas.RenderTransform> <Rectangle Width="31.6667" Height="45.8414" Canvas.Left="0" Canvas.Top="0.116943" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.569519" Canvas.Top="1.05249" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142632,0.753441" EndPoint="1.01886,0.753441"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142632" CenterY="0.753441" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142632" CenterY="0.753441" Angle="-35.3437"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="2.28036" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="1.34485" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle 
Width="26.4269" Height="45.8414" Canvas.Left="0.227798" Canvas.Top="0" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="1.25301" Height="45.8414" Canvas.Left="1.70862" Canvas.Top="0.116943" Stretch="Fill" Fill="#FFEBFF07"/> </Canvas> <TextBlock Height="80" x:Name="Name" Width="448" TextWrapping="Wrap" VerticalAlignment="Center" FontSize="24" Text="Directory"/> </StackPanel> </UserControl> As you can see, this XAML contains many graphic elements. Those elements are used to design the folder icon. The original drawing has been designed in Expression Design and then exported as XAML. In Silverlight for Windows Embedded you can use vector images. This means that your images will look good even when scaled or rotated. In our DirectoryItem custom control we have a TextBlock named Name, that will be used to display….(suspense)…. the directory name (I’m too lazy to invent fancy names for controls, and using “boring” intuitive names will make code more readable, I hope!). Now that we have some XAML code, we may execute XAML2CPP to generate part of the aplication code for us. We should then add references to our XAML2CPP generated resource file and include in our code and add a reference to the XAML runtime library to our sources file (you can follow the instruction of the first tutorial step to do that), To generate the code used in this tutorial you need XAML2CPP ver 1.0.1.0, that is downloadable here: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2010/03/08/xaml2cpp-1.0.1.0.aspx We can now create our usual simple Win32 application inside Platform Builder, using the same step described in the first chapter of this tutorial (http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2009/10/01/silverlight-for-embedded-tutorial.aspx). We can declare a class for our main page, deriving it from the template that XAML2CPP generated for us: class ListPage : public TListPage<ListPage> { ... } We will see the ListPage class code in a short time, but before we will see the code of our DirectoryItem user control. This object will be used to populate our list, one item for each directory. To declare a user control things are a bit more complicated (but also in this case XAML2CPP will write most of the “boilerplate” code for use. To interact with a user control you should declare an interface. An interface defines the functions of a user control that can be called inside the application code. Our custom control is currently quite simple and we just need some member functions to store and retrieve a full pathname inside our control. The control will display just the last part of the path inside the control. An interface is declared as a C++ class that has only abstract virtual members. It should also have an UUID associated with it. UUID means Universal Unique IDentifier and it’s a 128 bit number that will identify our interface without the need of specifying its fully qualified name. 
UUIDs are used to identify COM interfaces and, as we discovered in chapter one, Silverlight for Windows Embedded is based on COM or, at least, provides a COM-like Application Programming Interface (API). Here’s the declaration of the DirectoryItem interface: class __declspec(novtable,uuid("{D38C66E5-2725-4111-B422-D75B32AA8702}")) IDirectoryItem : public IXRCustomUserControl { public:   virtual HRESULT SetFullPath(BSTR fullpath) = 0; virtual HRESULT GetFullPath(BSTR* retval) = 0; }; The interface is derived from IXRCustomControl, this will allow us to add our object to a XAML tree. It declares the two functions needed to set and get the full path, but don’t implement them. Implementation will be done inside the control class. The interface only defines the functions of our control class that are accessible from the outside. It’s a sort of “contract” between our control and the applications that will use it. We must support what’s inside the contract and the application code should know nothing else about our own control. To reference our interface we will use the UUID, to make code more readable we can declare a #define in this way: #define IID_IDirectoryItem __uuidof(IDirectoryItem) Silverlight for Windows Embedded objects (like COM objects) use a reference counting mechanism to handle object destruction. Every time you store a pointer to an object you should call its AddRef function and every time you no longer need that pointer you should call Release. The object keeps an internal counter, incremented for each AddRef and decremented on Release. When the counter reaches 0, the object is destroyed. Managing reference counting in our code can be quite complicated and, since we are lazy (I am, at least!), we will use a great feature of Silverlight for Windows Embedded: smart pointers.A smart pointer can be connected to a Silverlight for Windows Embedded object and manages its reference counting. To declare a smart pointer we must use the XRPtr template: typedef XRPtr<IDirectoryItem> IDirectoryItemPtr; Now that we have defined our interface, it’s time to implement our user control class. XAML2CPP has implemented a class for us, and we have only to derive our class from it, defining the main class and interface of our new custom control: class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem> { ... } XAML2CPP has generated some code for us to support the user control, we don’t have to mind too much about that code, since it will be generated (or written by hand, if you like) always in the same way, for every user control. But knowing how does this works “under the hood” is still useful to understand the architecture of Silverlight for Windows Embedded. Our base class declaration is a bit more complex than the one we used for a simple page in the previous chapters: template <class A,class B> class DirectoryItemUserControlRegister : public XRCustomUserControlImpl<A,B>,public TDirectoryItem<A,XAML2CPPUserControl> { ... } This class derives from the XAML2CPP generated template class, like the ListPage class, but it uses XAML2CPPUserControl for the implementation of some features. This class shares the same ancestor of XAML2CPPPage (base class for “regular” XAML pages), XAML2CPPBase, implements binding of member variables and event handlers but, instead of loading and creating its own XAML tree, it attaches to an existing one. The XAML tree (and UI) of our custom control is created and loaded by the XRCustomUserControlImpl class. 
This class is part of the Silverlight for Windows Embedded framework and implements most of the functions needed to build-up a custom control in Silverlight (the guys that developed Silverlight for Windows Embedded seem to care about lazy programmers!). We have just to initialize it, providing our class (DirectoryItem) and interface (IDirectoryItem). Our user control class has also a static member: protected:   static HINSTANCE hInstance; This is used to store the HINSTANCE of the modules that contain our user control class. I don’t like this implementation, but I can’t find a better one, so if somebody has good ideas about how to handle the HINSTANCE object, I’ll be happy to hear suggestions! It also implements two static members required by XRCustomUserControlImpl. The first one is used to load the XAML UI of our custom control: static HRESULT GetXamlSource(XRXamlSource* pXamlSource) { pXamlSource->SetResource(hInstance,TEXT("XAML"),IDR_XAML_DirectoryItem); return S_OK; }   It initializes a XRXamlSource object, connecting it to the XAML resource that XAML2CPP has included in our resource script. The other method is used to register our custom control, allowing Silverlight for Windows Embedded to create it when it load some XAML or when an application creates a new control at runtime (more about this later): static HRESULT Register() { return XRCustomUserControlImpl<A,B>::Register(__uuidof(B), L"DirectoryItem", L"clr-namespace:DirectoryItemNamespace"); } To register our control we should provide its interface UUID, the name of the corresponding element in the XAML tree and its current namespace (namespaces compatible with Silverlight must use the “clr-namespace” prefix. We may also register additional properties for our objects, allowing them to be loaded and saved inside XAML. In this case we have no permanent properties and the Register method will just register our control. An additional static method is implemented to allow easy registration of our custom control inside our application WinMain function: static HRESULT RegisterUserControl(HINSTANCE hInstance) { DirectoryItemUserControlRegister::hInstance=hInstance; return DirectoryItemUserControlRegister<A,B>::Register(); } Now our control is registered and we will be able to create it using the Silverlight for Windows Embedded runtime functions. But we need to bind our members and event handlers to have them available like we are used to do for other XAML2CPP generated objects. To bind events and members we need to implement the On_Loaded function: virtual HRESULT OnLoaded(__in IXRDependencyObject* pRoot) { HRESULT retcode; IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode; return ((A*)this)->Init(pRoot,hInstance,app); } This function will call the XAML2CPPUserControl::Init member that will connect the “root” member with the XAML sub tree that has been created for our control and then calls BindObjects and BindEvents to bind members and events to our code. 
Now we can go back to our application code (the code that you’ll have to actually write) to see the contents of our DirectoryItem class: class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem> { protected:   WCHAR fullpath[_MAX_PATH+1];   public:   DirectoryItem() { *fullpath=0; }   virtual HRESULT SetFullPath(BSTR fullpath) { wcscpy_s(this->fullpath,fullpath);   WCHAR* p=fullpath;   for(WCHAR*q=wcsstr(p,L"\\");q;p=q+1,q=wcsstr(p,L"\\")) ;   Name->SetText(p); return S_OK; }   virtual HRESULT GetFullPath(BSTR* retval) { *retval=SysAllocString(fullpath); return S_OK; } }; It’s pretty easy and contains a fullpath member (used to store that path of the directory connected with the user control) and the implementation of the two interface members that can be used to set and retrieve the path. The SetFullPath member parses the full path and displays just the last branch directory name inside the “Name” TextBlock object. As you can see, implementing a user control in Silverlight for Windows Embedded is not too complex and using XAML also for the UI of the control allows us to re-use the same mechanisms that we learnt and used in the previous steps of our tutorial. Now let’s see how the main page is managed by the ListPage class. class ListPage : public TListPage<ListPage> { protected:   // current path TCHAR curpath[_MAX_PATH+1]; It has a member named “curpath” that is used to store the current directory. It’s initialized inside the constructor: ListPage() { *curpath=0; } And it’s value is displayed inside the “CurrentDir” TextBlock inside the initialization function: virtual HRESULT Init(HINSTANCE hInstance,IXRApplication* app) { HRESULT retcode;   if (FAILED(retcode=TListPage<ListPage>::Init(hInstance,app))) return retcode;   CurrentDir->SetText(L"\\"); return S_OK; } The FillFileList function is used to enumerate subdirectories of the current dir and add entries for each one inside the list box that fills most of the client area of our main page: HRESULT FillFileList() { HRESULT retcode; IXRItemCollectionPtr items; IXRApplicationPtr app;   if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode; // retrieves the items contained in the listbox if (FAILED(retcode=FileList->GetItems(&items))) return retcode;   // clears the list if (FAILED(retcode=items->Clear())) return retcode;   // enumerates files and directory in the current path WCHAR filemask[_MAX_PATH+1];   wcscpy_s(filemask,curpath); wcscat_s(filemask,L"\\*.*");   WIN32_FIND_DATA finddata; HANDLE findhandle;   findhandle=FindFirstFile(filemask,&finddata);   // the directory is empty? 
if (findhandle==INVALID_HANDLE_VALUE) return S_OK;   do { if (finddata.dwFileAttributes&=FILE_ATTRIBUTE_DIRECTORY) { IXRListBoxItemPtr listboxitem;   // add a new item to the listbox if (FAILED(retcode=app->CreateObject(IID_IXRListBoxItem,&listboxitem))) { FindClose(findhandle); return retcode; }   if (FAILED(retcode=items->Add(listboxitem,NULL))) { FindClose(findhandle); return retcode; }   IDirectoryItemPtr directoryitem;   if (FAILED(retcode=app->CreateObject(IID_IDirectoryItem,&directoryitem))) { FindClose(findhandle); return retcode; }   WCHAR fullpath[_MAX_PATH+1];   wcscpy_s(fullpath,curpath); wcscat_s(fullpath,L"\\"); wcscat_s(fullpath,finddata.cFileName);   if (FAILED(retcode=directoryitem->SetFullPath(fullpath))) { FindClose(findhandle); return retcode; }   XAML2CPPXRValue value((IXRDependencyObject*)directoryitem);   if (FAILED(retcode=listboxitem->SetContent(&value))) { FindClose(findhandle); return retcode; } } } while (FindNextFile(findhandle,&finddata));   FindClose(findhandle); return S_OK; } This functions retrieve a pointer to the collection of the items contained in the directory listbox. The IXRItemCollection interface is used by listboxes and comboboxes and allow you to clear the list (using Clear(), as our function does at the beginning) and change its contents by adding and removing elements. This function uses the FindFirstFile/FindNextFile functions to enumerate all the objects inside our current directory and for each subdirectory creates a IXRListBoxItem object. You can insert any kind of control inside a list box, you don’t need a IXRListBoxItem, but using it will allow you to handle the selected state of an item, highlighting it inside the list. The function creates a list box item using the CreateObject function of XRApplication. The same function is then used to create an instance of our custom control. The function returns a pointer to the control IDirectoryItem interface and we can use it to store the directory full path inside the object and add it as content of the IXRListBox item object, adding it to the listbox contents. The listbox generates an event (SelectionChanged) each time the user clicks on one of the items contained in the listbox. We implement an event handler for that event and use it to change our current directory and repopulate the listbox. The current directory full path will be displayed in the TextBlock: HRESULT Filelist_SelectionChanged(IXRDependencyObject* source,XRSelectionChangedEventArgs* args) { HRESULT retcode;   IXRListBoxItemPtr listboxitem;   if (!args->pAddedItem) return S_OK;   if (FAILED(retcode=args->pAddedItem->QueryInterface(IID_IXRListBoxItem,(void**)&listboxitem))) return retcode;   XRValue content; if (FAILED(retcode=listboxitem->GetContent(&content))) return retcode;   if (content.vType!=VTYPE_OBJECT) return E_FAIL;   IDirectoryItemPtr directoryitem;   if (FAILED(retcode=content.pObjectVal->QueryInterface(IID_IDirectoryItem,(void**)&directoryitem))) return retcode;   content.pObjectVal->Release(); content.pObjectVal=NULL;   BSTR fullpath=NULL;   if (FAILED(retcode=directoryitem->GetFullPath(&fullpath))) return retcode;   CurrentDir->SetText(fullpath);   wcscpy_s(curpath,fullpath); FillFileList(); SysFreeString(fullpath);     return S_OK; } }; The function uses the pAddedItem member of the XRSelectionChangedEventArgs object to retrieve the currently selected item, converts it to a IXRListBoxItem interface using QueryInterface, and then retrives its contents (IDirectoryItem object). 
Using the GetFullPath method we can get the full path of our selected directory and assign it to the curpath member. A call to FillFileList will update the listbox contents, displaying the list of subdirectories of the selected folder. To build our sample we just need to add code to our WinMain function: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { if (!XamlRuntimeInitialize()) return -1;   HRESULT retcode;   IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return -1;   if (FAILED(retcode=DirectoryItem::RegisterUserControl(hInstance))) return retcode;   ListPage page;   if (FAILED(page.Init(hInstance,app))) return -1;   page.FillFileList();   UINT exitcode;   if (FAILED(page.GetVisualHost()->StartDialog(&exitcode))) return -1;   return 0; } This code is very similar to the WinMain functions of our previous samples. The main differences are that we register our custom control (you should do that as soon as you have initialized the XAML runtime) and call FillFileList after the initialization of our ListPage object to load the contents of the root folder of our device inside the listbox. As usual you can download the full sample source code from here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/ListBoxTest.zip

    Read the article

  • Core Data error when assigning variable with one-to-one relationship

    - by Hoang Pham
    I tried to assign a managed object (C) with its property another managed object (B) (a one-to-one relationship) in which this other managed object (B) has a to-many relationship with one other managed object (A). There is an error from this assignment in which I copied as follows: #0 0x020e53a7 in ___forwarding___ #1 0x020c16c2 in __forwarding_prep_0___ #2 0x02078988 in CFRetain #3 0x0207a728 in CFSetAddValue #4 0x020c2fb2 in CFSetCreate #5 0x01e51ce8 in -[_NSFaultingMutableSet copyWithZone:] #6 0x020afcca in -[NSObject copy] #7 0x01e50d22 in -[NSManagedObject(_NSInternalMethods) _newPropertiesForRetainedTypes:andCopiedTypes:preserveFaults:] #8 0x01e51aa0 in -[NSManagedObject(_NSInternalMethods) _newAllPropertiesWithRelationshipFaultsIntact__] #9 0x01e519b4 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _establishEventSnapshotsForObject:] #10 0x01e51866 in _PFFastMOCObjectWillChange #11 0x01e516c5 in _PF_ManagedObject_WillChangeValueForKeyIndex #12 0x01e51525 in _sharedIMPL_setvfk_core #13 0x01e51483 in _PF_Handler_Public_SetProperty #14 0x01e546d1 in -[NSManagedObject(_NSInternalMethods) _didChangeValue:forRelationship:named:withInverse:] #15 0x0030ec1e in NSKVONotify #16 0x002aae2a in -[NSObject(NSKeyValueObserverNotification) didChangeValueForKey:] #17 0x01e5212f in _PF_ManagedObject_DidChangeValueForKeyIndex #18 0x01e515b1 in _sharedIMPL_setvfk_core #19 0x01e55827 in _svfk_5 I don't understand very well what the exact description of this error is. Can someone explain to me what it is and how to solve this one. Note that all other assignments in which the managed object B does not have any A items do not raise this error. ObjectC *objectC = [NSEntityDescription insertNewObjectForEntityForName:@"ObjectC" inManagedObjectContext:managedObjectContext]; objectC.objectB = objectB; Thank you in advance. I added some more NSZombieEnabled/MallocStackLogging generated log: 2010-05-18 17:28:05.327 Foo[2069:207] *** -[CFSet retain]: message sent to deallocated instance 0x800c880 (gdb) shell malloc_history 207 0x800c880 malloc_history cannot examine process 207 because the process does not exist. 
(gdb) shell malloc_history 2069 0x800c880 ALLOC 0x800c880-0x800c884 [size=5]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _endElementNs | -[Parser parser:didEndElement:namespaceURI:qualifiedName:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_set_query | strdup | malloc | malloc_zone_malloc ---- FREE 0x800c880-0x800c884 [size=5]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _endElementNs | -[Parser parser:didEndElement:namespaceURI:qualifiedName:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_free | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_set_query | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asl_set_query | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser 
downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | asprintf | malloc | malloc_zone_malloc ---- FREE 0x800c860-0x800c8df [size=128]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | 
GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlParseCharData | _characters | -[Parser parser:foundCharacters:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | asl_send | _asl_send_level_message | free ALLOC 0x800c700-0x800c893 [size=404]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _startElementNs | -[Parser parser:didStartElement:namespaceURI:qualifiedName:attributes:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | CFCalendarDecomposeAbsoluteTime | _CFCalendarDecomposeAbsoluteTimeV | __CFCalendarSetupCal | __CFCalendarCreateUCalendar | ucal_open | icu::Calendar::createInstance(icu::TimeZone*, icu::Locale const&, UErrorCode&) | malloc | malloc_zone_malloc ---- FREE 0x800c700-0x800c893 [size=404]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _startElementNs | -[Parser parser:didStartElement:namespaceURI:qualifiedName:attributes:] | NSLog | NSLogv | _CFLogvEx | __CFLogCString | _CFRelease | free ALLOC 0x800c880-0x800c8c7 [size=72]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[Step2ViewController downloadData] | -[Parser downloadVariantsWithPin:forTerminal:] | -[Parser parseByNSXMLParser:] | -[NSXMLParser parse] | xmlParseChunk | xmlIOParseDTD | _startElementNs | -[Parser parser:didStartElement:namespaceURI:qualifiedName:attributes:] | +[NSEntityDescription insertNewObjectForEntityForName:inManagedObjectContext:] | +[NSManagedObject(_PFDynamicAccessorsAndPropertySupport) allocWithEntity:] | _PFAllocateObject | malloc_zone_calloc ---- FREE 0x800c880-0x800c8c7 [size=72]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | _performRunLoopAction | -[_PFManagedObjectReferenceQueue _processReferenceQueue:] | _PFDeallocateObject | malloc_zone_free ALLOC 0x800c880-0x800c8a7 [size=40]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerDisplayIfNeeded | -[TileLayer display] | -[CALayer _display] | CABackingStoreUpdate | backing_callback(CGContext*, void*) | WebCore::TiledSurface::drawLayer(CALayer*, CGContext*) | WKWindowDrawRect | WKViewDisplayRect | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | -[WebHTMLView drawSingleRect:] | -[WebFrame(WebInternal) _drawRect:contentsOnly:] | 
WebCore::FrameView::paintContents(WebCore::GraphicsContext*, WebCore::IntRect const&) | WebCore::RenderLayer::paint(WebCore::GraphicsContext*, WebCore::IntRect const&, WebCore::PaintRestriction, WebCore::RenderObject*) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderFlow::paintLines(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RootInlineBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineFlowBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineTextBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::paintTextWithShadows(WebCore::GraphicsContext*, WebCore::Font const&, WebCore::TextRun const&, int, int, WebCore::IntPoint const&, int, int, int, int, WebCore::ShadowData*, bool) | WebCore::GraphicsContext::drawText(WebCore::Font const&, WebCore::TextRun const&, WebCore::IntPoint const&, int, int) | WebCore::Font::drawSimpleText(WebCore::GraphicsContext*, WebCore::TextRun const&, WebCore::FloatPoint const&, int, int) const | WebCore::Font::drawGlyphBuffer(WebCore::GraphicsContext*, WebCore::GlyphBuffer const&, WebCore::TextRun const&, WebCore::FloatPoint&) const | WebCore::Font::drawGlyphs(WebCore::GraphicsContext*, WebCore::SimpleFontData const*, WebCore::GlyphBuffer const&, int, int, WebCore::FloatPoint const&, bool) const | CGGStateSetFont | maybeCopyTextState | calloc | malloc_zone_calloc ---- FREE 0x800c880-0x800c8a7 [size=40]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerDisplayIfNeeded | -[TileLayer display] | -[CALayer _display] | CABackingStoreUpdate | backing_callback(CGContext*, void*) | WebCore::TiledSurface::drawLayer(CALayer*, CGContext*) | WKWindowDrawRect | WKViewDisplayRect | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | _WKViewDraw(CGContext*, WKView*, CGRect) | -[WebHTMLView drawSingleRect:] | -[WebFrame(WebInternal) _drawRect:contentsOnly:] | WebCore::FrameView::paintContents(WebCore::GraphicsContext*, WebCore::IntRect const&) | 
WebCore::RenderLayer::paint(WebCore::GraphicsContext*, WebCore::IntRect const&, WebCore::PaintRestriction, WebCore::RenderObject*) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderLayer::paintLayer(WebCore::RenderLayer*, WebCore::GraphicsContext*, WebCore::IntRect const&, bool, WebCore::PaintRestriction, WebCore::RenderObject*, bool, bool) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintChildren(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderBlock::paintObject(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RenderFlow::paintLines(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::RootInlineBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineFlowBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::InlineTextBox::paint(WebCore::RenderObject::PaintInfo&, int, int) | WebCore::paintTextWithShadows(WebCore::GraphicsContext*, WebCore::Font const&, WebCore::TextRun const&, int, int, WebCore::IntPoint const&, int, int, int, int, WebCore::ShadowData*, bool) | WebCore::GraphicsContext::restorePlatformState() | CGContextRestoreGState | CGGStackRestore | CGGStateRelease | textStateRelease | free ALLOC 0x800c880-0x800c8bf [size=64]: thread_a0a8c4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | CA::timer_callback(__CFRunLoopTimer*, void*) | run_animation_callbacks(double, void*) | -[UIViewAnimationState animationDidStop:finished:] | -[UIViewAnimationState sendDelegateAnimationDidStop:finished:] | -[UINavigationTransitionView _navigationTransitionDidStop] | -[UIView(Hierarchy) removeFromSuperview] | -[UITextField resignFirstResponder] | -[UIFieldEditor resignFirstResponder] | -[UIKeyboardImpl setDelegate:] | -[UIKeyboardImpl setDelegate:force:] | -[UITextInteractionAssistant setGestureRecognizers] | -[UITextInteractionAssistant addTwoFingerRangedSelectRecognizer] | -[UILongPressGestureRecognizer initWithTarget:action:] | -[__NSPlaceholderSet init] | -[__NSPlaceholderSet initWithCapacity:] | __CFSetInit | _CFRuntimeCreateInstance | malloc_zone_malloc

    Read the article

  • memory leak error when using an iterator

    - by Adnane Jaafari
    please i'm having this error if any one can explain it : while using an iterator in my methode public void createDemandeP() { if (demandep.getDateDebut().after(demandep.getDateFin())) { FacesContext .getCurrentInstance() .addMessage( null, new FacesMessage(FacesMessage.SEVERITY_WARN, "Attention aux dates", "la date de debut doit être avant la date de fin!")); } else if (demandep.getDateDebut().before(demandep.getDateFin())) { List<DemandeP> list = new ArrayList<DemandeP>(); list.addAll(chaletService.getChaletBylibelle(chaletChoisi).get(0) .getListDemandesP()); Iterator<DemandeP> it = list.iterator(); DemandeP d = it.next(); while (it.hasNext()) { if ((d.getDateDebut().compareTo(demandep.getDateDebut()) == 0) || (d.getDateFin().compareTo(demandep.getDateDebut()) == 0) || (d.getDateFin().compareTo(demandep.getDateFin()) == 0) || (d.getDateDebut().compareTo(demandep.getDateDebut()) == 0) || (d.getDateDebut().before(demandep.getDateDebut()) && d .getDateFin().after(demandep.getDateFin())) || (d.getDateDebut().before(demandep.getDateFin()) && d .getDateDebut().after(demandep.getDateDebut())) || (d.getDateFin().after(demandep.getDateDebut()) && d .getDateFin().before(demandep.getDateFin()))) { FacesContext.getCurrentInstance().getMessageList().clear(); FacesContext .getCurrentInstance() .addMessage( null, new FacesMessage( FacesMessage.SEVERITY_FATAL, "Periode Ou chalet indisponicle ", "Veillez choisir une autre marge de date !")); } } } else { demandep.setEtat("En traitement"); DateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy"); Date date = new Date(); try { demandep.setDateDemande(dateFormat.parse(dateFormat .format(date))); } catch (ParseException e) { System.out.println("errooor date"); e.printStackTrace(); } nameUser = auth.getName(); // System.out.println(nameUser); adherent = utilisateurService.findAdherentByNom(nameUser).get(0); demandep.setUtilisateur(adherent); // System.out.println(chaletService.getChaletBylibelle(chaletChoisi).get(0).getLibelle()); demandep.setChalet(chaletService.getChaletBylibelle(chaletChoisi) .get(0)); demandep.setNouvelleDemande(true); demandePService.ajouterDemandeP(demandep); } } oct. 23, 2013 7:19:30 PM org.apache.catalina.core.StandardContext reload INFO: Le rechargement du contexte [/ONICLFINAL] a démarré oct. 23, 2013 7:19:30 PM org.apache.catalina.core.StandardWrapper unload INFO: Waiting for 1 instance(s) to be deallocated oct. 23, 2013 7:19:31 PM org.apache.catalina.core.StandardWrapper unload INFO: Waiting for 1 instance(s) to be deallocated oct. 23, 2013 7:19:32 PM org.apache.catalina.core.StandardWrapper unload INFO: Waiting for 1 instance(s) to be deallocated oct. 23, 2013 7:19:32 PM org.apache.catalina.core.ApplicationContext log INFO: Closing Spring root WebApplicationContext oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc SEVERE: The web application [/ONICLFINAL] registered the JDBC driver [com.mysql.jdbc.Driver] b but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/ONICLFINAL] appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak. oct. 
23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SE VERE: The web application [/ONICLFINAL] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Hibernate Sessions registered for deferred close]) and a value of type [java.util.HashMap] (value [{org.hibernate.impl.SessionFactoryImpl@f6e256=[SessionImpl(PersistenceContext[entityKeys=[EntityKey[bo.DemandeP#1], EntityKey[bo.Utilisateur#3], EntityKey[bo.Chalet#1], EntityKey[bo.Role#2], EntityKey[bo.DemandeP#2]],collectionKeys=[CollectionKey[bo.Role.ListeUsers#2], CollectionKey[bo.Chalet.listPeriodes#1], CollectionKey[bo.Utilisateur.demandes#3], CollectionKey[bo.Utilisateur.demandesP#3], CollectionKey[bo.Chalet.listDemandesP#1]]];ActionQueue[insertions=[] updates=[] deletions=[] collectionCreations=[] collectionRemovals=[] collectionUpdates=[]])]}]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Request attributes]) and a value of type [org.springframework.web.context.request.ServletRequestAttributes] (value [org.apache.catalina.connector.RequestFacade@17f3488]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@51f78b]) and a value of type [org.springframework.security.core.context.SecurityContextImpl] (value [org.springframework.security.core.context.SecurityContextImpl@8e463c8b: Authentication: org.springframework.security.authentication.UsernamePasswordAuthenticationToken@8e463c8b: Principal: org.springframework.security.core.userdetails.User@311aa119: Username: maatouf; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true; credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities: ROLE_ADHER; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails@ffff4c9c: RemoteIpAddress: 0:0:0:0:0:0:0:1; SessionId: 14CD5D4E8E0E3AEB0367AB7115038FED; Granted Authorities: ROLE_ADHER]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@152e9b7]) and a value of type [net.sf.cglib.proxy.Callback[]] (value [[Lnet.sf.cglib.proxy.Callback;@6e1f4c]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 
23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [javax.faces.context.FacesContext$1] (value [javax.faces.context.FacesContext$1@9ecc6d]) and a value of type [com.sun.faces.context.FacesContextImpl] (value [com.sun.faces.context.FacesContextImpl@1c8bbed]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@1a9e75f]) and a value of type [com.sun.faces.context.FacesContextImpl] (value [com.sun.faces.context.FacesContextImpl@1c8bbed]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Locale context]) and a value of type [org.springframework.context.i18n.SimpleLocaleContext] (value [fr_FR]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [com.sun.faces.application.ApplicationAssociate$1] (value [com.sun.faces.application.ApplicationAssociate$1@195266b]) and a value of type [com.sun.faces.application.ApplicationAssociate] (value [com.sun.faces.application.ApplicationAssociate@10d595c]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:33 PM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(D:\newWorkSpace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\ONICLF INAL\WEB-INF\lib\servlet-api-2.5.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class oct. 23, 2013 7:19:33 PM org.apache.catalina.core.ApplicationContext log
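One thing that stands out in the posted method (an observation plus a minimal sketch, not a confirmed diagnosis): d is read from it.next() once before the loop, and it.next() is never called inside while (it.hasNext()), so the loop can spin forever adding FacesMessages whenever the list has more than one element. The Tomcat messages above about a request that is "still processing" and the forcibly unregistered JDBC driver are consistent with a reload happening while that request never finishes. Advancing the iterator on every pass, or using a for-each loop, avoids it; overlaps() below is a hypothetical helper standing in for the chain of date comparisons in the original if, and the fragment would replace the iterator/while block inside the same else-if branch:

    // Iterate every element instead of testing hasNext() on a never-advancing iterator.
    for (DemandeP d : list) {
        if (overlaps(d, demandep)) {   // overlaps(): hypothetical helper wrapping the date comparisons above
            FacesContext.getCurrentInstance().addMessage(null,
                    new FacesMessage(FacesMessage.SEVERITY_FATAL,
                            "Periode ou chalet indisponible",
                            "Veillez choisir une autre marge de date !"));
            break;                     // one message is enough, and the loop is guaranteed to terminate
        }
    }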

    Read the article
