Search Results

Search found 24498 results on 980 pages for 'lock pages in memory'.


  • Use variables entered on a login page across multiple pages?

    - by deception1
    I have a login page that captures user input like this:

        MD5calc ss = new DBCon.MD5calc();
        string gs = ss.CalculateMD5Hash(password.Password);
        int unitID = Convert.ToInt32(Unit_ID.Text);
        logBO.UnitID = unitID;
        logBO.UserID = User_name.Text;
        logBO.UserPass = gs;

    How would I make these values available to any other page I create? My common sense says that creating a static class would be enough, but will it? If I do create a static class, where would I put it and how would I call it? I actually need those variables for my SQL stored procedures.
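    A minimal sketch of the static-class approach the question proposes (class and property names are made up). One caution: in a desktop app a static class like this works, since there is a single user; in a web app, statics are shared across all users of the application, so per-user values like these normally belong in Session state instead.

        // Hypothetical holder class; place it in a shared folder or library
        // so every page can reference it.
        public static class LoginContext
        {
            public static int UnitID { get; set; }
            public static string UserID { get; set; }
            public static string UserPass { get; set; }
        }

        // On the login page, after validating the input:
        //     LoginContext.UnitID = Convert.ToInt32(Unit_ID.Text);
        //     LoginContext.UserID = User_name.Text;
        //     LoginContext.UserPass = gs;
        //
        // Any other page can then read LoginContext.UnitID etc. when
        // building the parameters for the SQL stored procedures.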

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in almost all situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified.

    So I set out to see what the improvements would be for the ConcurrentDictionary: would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

    But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache. So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so we will have some duplicates and some uniques
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that, so I could test different implementations, I defined a GetDelegate and an AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

    - Dictionary with mutex - just your standard generic Dictionary with a simple lock construct on an internal object.
    - Dictionary with ReaderWriterLockSlim - the same Dictionary, but now using a lock designed to let multiple readers access simultaneously, and then locked when a writer needs access.
    - ConcurrentDictionary - the new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

    The approach to each of these is also fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };

        Func<int, string> getDelegate = key =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then a query/insert into the Dictionary.
    Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };

        Func<int, string> getDelegate = key =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };

        Func<int, string> getDelegate = key =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above), let them fly, and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1                 7916           3285
        10                8293           3481
        100               8799           3532
        1000              8815           3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1                 8445           3624
        10               11002           4119
        100              11076           3992
        1000             14794           4861

        Concurrent Dictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1                17443           3726
        10               14181           1897
        100              15141           1994
        1000             17209           2128

    The first test I ran across the board is the Mostly Misses category. The mostly-misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend: at every accessor count the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is optimized more for concurrent "gets" than "adds". Since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results.

    So I tweaked the data so that the number of keys was much smaller than the number of iterations, giving me about a 2 to 1 ratio of hits to misses (twice as likely to find the item already in the cache than to need to add it). And yes, indeed, here we see that the ConcurrentDictionary is faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well. This makes sense, since the ConcurrentDictionary is read-optimized.

    Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement. I think this is largely because on the 10,000,000-access test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

    So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each?
    Use System.Collections.Generic.Dictionary when:

    - You need a single-threaded Dictionary (no locking needed).
    - You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    - You need a multi-threaded Dictionary in which writes are far more prevalent than reads (locking needed).

    And use System.Collections.Concurrent.ConcurrentDictionary when:

    - You need a multi-threaded Dictionary in which reads are far more prevalent than writes.
    - You need to be able to iterate over the collection without locking it, even if it's being modified.

    Both Dictionaries have their strong suits. I have a feeling this is just one of those cases where you need to know from the design what you hope to use it for, and make your decision based on that criterion.
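    One thing the post's test doesn't show: ConcurrentDictionary also has a GetOrAdd method that collapses the separate "try to get, then add on miss" steps of the cache pattern into a single call. A minimal sketch, not from the original post:

        using System;
        using System.Collections.Concurrent;

        class CacheDemo
        {
            static void Main()
            {
                var cache = new ConcurrentDictionary<int, string>();

                // GetOrAdd returns the existing value if the key is present;
                // otherwise it runs the factory and inserts the result.
                // Under contention the factory may run more than once,
                // but only one result is ever stored for the key.
                string value = cache.GetOrAdd(42, key => key.ToString());

                Console.WriteLine(value); // prints "42"
            }
        }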

    Read the article

  • Badly performing function in PHP: with large files, memory blows up! How can I refactor?

    - by André
    Hi. I have a function that strips out lines from files. I'm working with large files (more than 100 MB). The PHP memory limit is 256 MB, but the function that strips out lines blows up with a 100 MB CSV file. What the function must do is this: originally I have a CSV like:

        Copyright (c) 2007 MaxMind LLC. All Rights Reserved.
        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    When I pass the CSV file to this function I get:

        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files: it blows up the memory. The function is:

        public function deleteLine($line_no, $csvFileName) {
            // this function strips a specific line from a file
            // if a line is stripped, the function returns TRUE, else FALSE
            //
            // e.g.
            // deleteLine(-1, xyz.csv); // strip last line
            // deleteLine(1, xyz.csv);  // strip first line

            // Assign the file name
            $filename = $csvFileName;

            $strip_return = FALSE;

            $data = file($filename);
            $pipe = fopen($filename, 'w');
            $size = count($data);

            if ($line_no == -1) $skip = $size - 1;
            else $skip = $line_no - 1;

            for ($line = 0; $line < $size; $line++)
                if ($line != $skip)
                    fputs($pipe, $data[$line]);
                else
                    $strip_return = TRUE;

            return $strip_return;
        }

    Is it possible to refactor this function so that it does not blow past the 256 MB PHP memory limit? Give me some clues. Best regards,
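    A hedged sketch of the streaming refactor: instead of file(), which loads the whole file into an array, read line by line and write every kept line to a temporary file, then swap the files, so memory use stays constant regardless of file size. Shown here in C# for consistency with the rest of this page; the PHP equivalent uses fopen/fgets/fputs and rename(). The file names are made up, and the original's $line_no == -1 "strip last line" case would need a first pass to count lines.

        using System;
        using System.IO;

        class StripLine
        {
            // Copies the file line by line, skipping lineToSkip (1-based),
            // so only one line is ever held in memory at a time.
            static bool DeleteLine(int lineToSkip, string path)
            {
                string tempPath = path + ".tmp"; // hypothetical temp file name
                bool stripped = false;
                int lineNo = 0;

                using (var reader = new StreamReader(path))
                using (var writer = new StreamWriter(tempPath))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        lineNo++;
                        if (lineNo == lineToSkip) { stripped = true; continue; }
                        writer.WriteLine(line);
                    }
                }

                File.Delete(path);
                File.Move(tempPath, path);
                return stripped;
            }

            static void Main()
            {
                // e.g. strip the copyright line from the CSV
                Console.WriteLine(DeleteLine(1, "GeoLiteCity.csv"));
            }
        }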

    Read the article

  • Doing XML extracts with XSLT without having to read the whole DOM tree into memory?

    - by Thorbjørn Ravn Andersen
    I have a situation where I want to extract some information from some very large but regular XML files (I just had to do it with a 500 Mb file), and where XSLT would be perfect. Unfortunately, those XSLT implementations I am aware of (except the most expensive version of Saxon) do not support reading in only the necessary part of the DOM; they read in the whole tree. This causes the computer to swap to death. The XPath in question is //m/e[contains(.,'foobar')], so it is essentially just a grep. Is there an XSLT implementation which can do this? Or an XSLT implementation which, given suitable "advice", can do this trick of pruning away the parts in memory which will not be needed again? I'd prefer a Java implementation, but both Windows and Linux are viable native platforms.

    EDIT: The input XML looks like:

        <log>
        <!-- Fri Jun 26 12:09:27 CEST 2009 -->
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
        <m>Registering Catalina:type=Manager,path=/axsWHSweb-20090626,host=localhost</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
        <m>Force random number initialization starting</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
        <m>Getting message digest component for algorithm MD5</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
        <m>Completed getting message digest component</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
        <m>getDigest() 0</m></e>
        ......
        </log>

    Essentially I want to select some m-nodes (and I know the XPath is wrong for that, it was just a quick hack), but maintain the XML layout.

    EDIT: It appears that STX may be what I am looking for (I can live with another transformation language), and that Joost is an implementation hereof. Any experiences?

    EDIT: I found that Saxon 6.5.4 with -Xmx1500m could load my XML, so this allowed me to use my XPaths right now. This is just a lucky stroke, so I'd still like to solve this generically - and that means scriptable, which in turn means no handcrafted Java filtering first.

    EDIT: Oh, by the way, this is a log file very similar to what is generated by the log4j XMLLayout. The reason for XML is to be able to do exactly this, namely do queries on the log. This is the initial try, hence the simple question. Later I'd like to be able to ask more complex questions - therefore I'd like the query language to be able to handle the input file.
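    For illustration only (this is neither XSLT nor STX, which the question asks about): the grep-style extraction can be done with a forward-only pull parser, so that only one <e> element is materialized in memory at a time. A C# sketch using XmlReader, with the input file name assumed:

        using System;
        using System.Xml;
        using System.Xml.Linq;

        class XmlGrep
        {
            static void Main()
            {
                using (var reader = XmlReader.Create("log.xml"))
                {
                    Console.WriteLine("<log>");
                    reader.MoveToContent();
                    while (!reader.EOF)
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "e")
                        {
                            // ReadFrom materializes just this <e> element and
                            // advances the reader past it - the rest of the
                            // document is never held in memory.
                            var e = (XElement)XNode.ReadFrom(reader);
                            var m = e.Element("m");
                            if (m != null && m.Value.Contains("foobar"))
                                Console.WriteLine(e);
                        }
                        else
                        {
                            reader.Read();
                        }
                    }
                    Console.WriteLine("</log>");
                }
            }
        }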

    Read the article

  • How to read oom-killer syslog messages?

    - by Grant
    I have a Ubuntu 12.04 server which sometimes dies completely - no SSH, no ping, nothing until it is physically rebooted. After the reboot, I see in syslog that the oom-killer killed, well, pretty much everything. There's a lot of detailed memory usage information in them. How do I read these logs to see what caused the OOM issue? The server has far more memory than it needs, so it shouldn't be running out of memory. Oct 25 07:28:04 nldedip4k031 kernel: [87946.529511] oom_kill_process: 9 callbacks suppressed Oct 25 07:28:04 nldedip4k031 kernel: [87946.529514] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529516] irqbalance cpuset=/ mems_allowed=0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529518] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu Oct 25 07:28:04 nldedip4k031 kernel: [87946.529519] Call Trace: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529525] [] dump_header.isra.6+0x85/0xc0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529528] [] oom_kill_process+0x5c/0x80 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529530] [] out_of_memory+0xc5/0x1c0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529532] [] __alloc_pages_nodemask+0x72c/0x740 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529535] [] __get_free_pages+0x1c/0x30 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529537] [] get_zeroed_page+0x12/0x20 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529541] [] fill_read_buffer.isra.8+0xaa/0xd0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529543] [] sysfs_read_file+0x7d/0x90 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529546] [] vfs_read+0x8c/0x160 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529548] [] ? fill_read_buffer.isra.8+0xd0/0xd0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529550] [] sys_read+0x3d/0x70 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529554] [] sysenter_do_call+0x12/0x28 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529555] Mem-Info: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529556] DMA per-cpu: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529557] CPU 0: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529558] CPU 1: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529560] CPU 2: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529561] CPU 3: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529562] CPU 4: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529563] CPU 5: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529564] CPU 6: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529565] CPU 7: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529566] Normal per-cpu: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529567] CPU 0: hi: 186, btch: 31 usd: 179 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529568] CPU 1: hi: 186, btch: 31 usd: 182 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529569] CPU 2: hi: 186, btch: 31 usd: 132 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529570] CPU 3: hi: 186, btch: 31 usd: 175 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529571] CPU 4: hi: 186, btch: 31 usd: 91 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529572] CPU 5: hi: 186, btch: 31 usd: 173 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529573] CPU 6: hi: 186, btch: 31 usd: 159 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529574] CPU 7: hi: 186, btch: 31 usd: 164 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529575] HighMem per-cpu: Oct 25 07:28:04 nldedip4k031 kernel: 
[87946.529576] CPU 0: hi: 186, btch: 31 usd: 165 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529577] CPU 1: hi: 186, btch: 31 usd: 183 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529578] CPU 2: hi: 186, btch: 31 usd: 185 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529579] CPU 3: hi: 186, btch: 31 usd: 138 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529580] CPU 4: hi: 186, btch: 31 usd: 155 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529581] CPU 5: hi: 186, btch: 31 usd: 104 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529582] CPU 6: hi: 186, btch: 31 usd: 133 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529583] CPU 7: hi: 186, btch: 31 usd: 170 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529586] active_anon:5523 inactive_anon:354 isolated_anon:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529586] active_file:2815 inactive_file:6849119 isolated_file:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529587] unevictable:0 dirty:449 writeback:10 unstable:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529587] free:1304125 slab_reclaimable:104672 slab_unreclaimable:3419 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529588] mapped:2661 shmem:138 pagetables:313 bounce:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529591] DMA free:4252kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:4kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15756kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:11564kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1 all_unreclaimable? yes Oct 25 07:28:04 nldedip4k031 kernel: [87946.529594] lowmem_reserve[]: 0 869 32460 32460 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529599] Normal free:44052kB min:44216kB low:55268kB high:66324kB active_anon:0kB inactive_anon:0kB active_file:616kB inactive_file:568kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:407124kB slab_unreclaimable:13672kB kernel_stack:992kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:2083 all_unreclaimable? yes Oct 25 07:28:04 nldedip4k031 kernel: [87946.529602] lowmem_reserve[]: 0 0 252733 252733 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529606] HighMem free:5168196kB min:512kB low:402312kB high:804112kB active_anon:22092kB inactive_anon:1416kB active_file:10640kB inactive_file:27395920kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32349872kB mlocked:0kB dirty:1796kB writeback:40kB mapped:10640kB shmem:552kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Oct 25 07:28:04 nldedip4k031 kernel: [87946.529609] lowmem_reserve[]: 0 0 0 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529611] DMA: 6*4kB 6*8kB 6*16kB 5*32kB 5*64kB 4*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4232kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529616] Normal: 297*4kB 180*8kB 119*16kB 73*32kB 67*64kB 47*128kB 35*256kB 13*512kB 5*1024kB 1*2048kB 1*4096kB = 44052kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529622] HighMem: 1*4kB 6*8kB 27*16kB 11*32kB 2*64kB 1*128kB 0*256kB 0*512kB 4*1024kB 1*2048kB 1260*4096kB = 5168196kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529627] 6852076 total pagecache pages Oct 25 07:28:04 nldedip4k031 kernel: [87946.529628] 0 pages in swap cache Oct 25 07:28:04 nldedip4k031 kernel: [87946.529629] Swap cache stats: add 0, delete 0, find 0/0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529630] Free swap = 3998716kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529631] Total swap = 3998716kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.571914] 8437743 pages RAM Oct 25 07:28:04 nldedip4k031 kernel: [87946.571916] 8209409 pages HighMem Oct 25 07:28:04 nldedip4k031 kernel: [87946.571917] 159556 pages reserved Oct 25 07:28:04 nldedip4k031 kernel: [87946.571917] 6862034 pages shared Oct 25 07:28:04 nldedip4k031 kernel: [87946.571918] 123540 pages non-shared Oct 25 07:28:04 nldedip4k031 kernel: [87946.571919] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Oct 25 07:28:04 nldedip4k031 kernel: [87946.571927] [ 421] 0 421 709 152 3 0 0 upstart-udev-br Oct 25 07:28:04 nldedip4k031 kernel: [87946.571929] [ 429] 0 429 773 326 5 -17 -1000 udevd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571931] [ 567] 0 567 772 224 4 -17 -1000 udevd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571932] [ 568] 0 568 772 231 7 -17 -1000 udevd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571934] [ 764] 0 764 712 103 1 0 0 upstart-socket- Oct 25 07:28:04 nldedip4k031 kernel: [87946.571936] [ 772] 103 772 815 164 5 0 0 dbus-daemon Oct 25 07:28:04 nldedip4k031 kernel: [87946.571938] [ 785] 0 785 1671 600 1 -17 -1000 sshd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571940] [ 809] 101 809 7766 380 1 0 0 rsyslogd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571942] [ 869] 0 869 1158 213 3 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571943] [ 873] 0 873 1158 214 6 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571945] [ 911] 0 911 1158 215 3 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571947] [ 912] 0 912 1158 214 2 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571949] [ 914] 0 914 1158 213 1 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571950] [ 916] 0 916 618 86 1 0 0 atd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571952] [ 917] 0 917 655 226 3 0 0 cron Oct 25 07:28:04 nldedip4k031 kernel: [87946.571954] [ 948] 0 948 902 159 3 0 0 irqbalance Oct 25 07:28:04 nldedip4k031 kernel: [87946.571956] [ 993] 0 993 1145 363 3 0 0 master Oct 25 07:28:04 nldedip4k031 kernel: [87946.571957] [ 1002] 104 1002 1162 333 1 0 0 qmgr Oct 25 07:28:04 nldedip4k031 kernel: [87946.571959] [ 1016] 0 1016 730 149 2 0 0 mdadm Oct 25 07:28:04 nldedip4k031 kernel: [87946.571961] [ 1057] 0 1057 6066 2160 3 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571963] [ 1086] 0 1086 1158 213 3 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571965] [ 1088] 33 1088 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571967] [ 1089] 33 1089 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: 
[87946.571969] [ 1090] 33 1090 6175 1451 3 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571971] [ 1091] 33 1091 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571972] [ 1092] 33 1092 6191 1451 0 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571974] [ 1109] 33 1109 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571976] [ 1151] 33 1151 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571978] [ 1201] 104 1201 1803 652 1 0 0 tlsmgr Oct 25 07:28:04 nldedip4k031 kernel: [87946.571980] [ 2475] 0 2475 2435 812 0 0 0 sshd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571982] [ 2494] 0 2494 1745 839 1 0 0 bash Oct 25 07:28:04 nldedip4k031 kernel: [87946.571984] [ 2573] 0 2573 3394 1689 0 0 0 sshd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571986] [ 2589] 0 2589 5014 457 3 0 0 rsync Oct 25 07:28:04 nldedip4k031 kernel: [87946.571988] [ 2590] 0 2590 7970 522 1 0 0 rsync Oct 25 07:28:04 nldedip4k031 kernel: [87946.571990] [ 2652] 104 2652 1150 326 5 0 0 pickup Oct 25 07:28:04 nldedip4k031 kernel: [87946.571992] Out of memory: Kill process 421 (upstart-udev-br) score 1 or sacrifice child Oct 25 07:28:04 nldedip4k031 kernel: [87946.572407] Killed process 421 (upstart-udev-br) total-vm:2836kB, anon-rss:156kB, file-rss:452kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.573107] init: upstart-udev-bridge main process (421) killed by KILL signal Oct 25 07:28:04 nldedip4k031 kernel: [87946.573126] init: upstart-udev-bridge main process ended, respawning Oct 25 07:28:34 nldedip4k031 kernel: [87976.461570] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461573] irqbalance cpuset=/ mems_allowed=0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461576] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu Oct 25 07:28:34 nldedip4k031 kernel: [87976.461578] Call Trace: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461585] [] dump_header.isra.6+0x85/0xc0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461588] [] oom_kill_process+0x5c/0x80 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461591] [] out_of_memory+0xc5/0x1c0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461595] [] __alloc_pages_nodemask+0x72c/0x740 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461599] [] __get_free_pages+0x1c/0x30 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461602] [] get_zeroed_page+0x12/0x20 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461606] [] fill_read_buffer.isra.8+0xaa/0xd0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461609] [] sysfs_read_file+0x7d/0x90 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461613] [] vfs_read+0x8c/0x160 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461616] [] ? 
fill_read_buffer.isra.8+0xd0/0xd0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461619] [] sys_read+0x3d/0x70 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461624] [] sysenter_do_call+0x12/0x28 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461626] Mem-Info: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461628] DMA per-cpu: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461629] CPU 0: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461631] CPU 1: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461633] CPU 2: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461634] CPU 3: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461636] CPU 4: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461638] CPU 5: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461639] CPU 6: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461641] CPU 7: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461642] Normal per-cpu: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461644] CPU 0: hi: 186, btch: 31 usd: 61 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461646] CPU 1: hi: 186, btch: 31 usd: 49 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461647] CPU 2: hi: 186, btch: 31 usd: 8 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461649] CPU 3: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461651] CPU 4: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461652] CPU 5: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461654] CPU 6: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461656] CPU 7: hi: 186, btch: 31 usd: 30 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461657] HighMem per-cpu: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461658] CPU 0: hi: 186, btch: 31 usd: 4 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461660] CPU 1: hi: 186, btch: 31 usd: 204 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461662] CPU 2: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461663] CPU 3: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461665] CPU 4: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461667] CPU 5: hi: 186, btch: 31 usd: 31 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461668] CPU 6: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461670] CPU 7: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461674] active_anon:5441 inactive_anon:412 isolated_anon:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461674] active_file:2668 inactive_file:6922842 isolated_file:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461675] unevictable:0 dirty:836 writeback:0 unstable:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461676] free:1231664 slab_reclaimable:105781 slab_unreclaimable:3399 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461677] mapped:2649 shmem:138 pagetables:313 bounce:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461682] DMA free:4248kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15756kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:11560kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5687 all_unreclaimable? 
yes Oct 25 07:28:34 nldedip4k031 kernel: [87976.461686] lowmem_reserve[]: 0 869 32460 32460 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461693] Normal free:44184kB min:44216kB low:55268kB high:66324kB active_anon:0kB inactive_anon:0kB active_file:20kB inactive_file:1096kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:4kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:411564kB slab_unreclaimable:13592kB kernel_stack:992kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1816 all_unreclaimable? yes Oct 25 07:28:34 nldedip4k031 kernel: [87976.461697] lowmem_reserve[]: 0 0 252733 252733 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461703] HighMem free:4878224kB min:512kB low:402312kB high:804112kB active_anon:21764kB inactive_anon:1648kB active_file:10652kB inactive_file:27690268kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32349872kB mlocked:0kB dirty:3340kB writeback:0kB mapped:10592kB shmem:552kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Oct 25 07:28:34 nldedip4k031 kernel: [87976.461708] lowmem_reserve[]: 0 0 0 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461711] DMA: 8*4kB 7*8kB 6*16kB 5*32kB 5*64kB 4*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4248kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461719] Normal: 272*4kB 178*8kB 76*16kB 52*32kB 42*64kB 36*128kB 23*256kB 20*512kB 7*1024kB 2*2048kB 1*4096kB = 44176kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461727] HighMem: 1*4kB 45*8kB 31*16kB 24*32kB 5*64kB 3*128kB 1*256kB 2*512kB 4*1024kB 2*2048kB 1188*4096kB = 4877852kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461736] 6925679 total pagecache pages Oct 25 07:28:34 nldedip4k031 kernel: [87976.461737] 0 pages in swap cache Oct 25 07:28:34 nldedip4k031 kernel: [87976.461739] Swap cache stats: add 0, delete 0, find 0/0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461740] Free swap = 3998716kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461741] Total swap = 3998716kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.524951] 8437743 pages RAM Oct 25 07:28:34 nldedip4k031 kernel: [87976.524953] 8209409 pages HighMem Oct 25 07:28:34 nldedip4k031 kernel: [87976.524954] 159556 pages reserved Oct 25 07:28:34 nldedip4k031 kernel: [87976.524955] 6936141 pages shared Oct 25 07:28:34 nldedip4k031 kernel: [87976.524956] 124602 pages non-shared Oct 25 07:28:34 nldedip4k031 kernel: [87976.524957] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Oct 25 07:28:34 nldedip4k031 kernel: [87976.524966] [ 429] 0 429 773 326 5 -17 -1000 udevd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524968] [ 567] 0 567 772 224 4 -17 -1000 udevd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524971] [ 568] 0 568 772 231 7 -17 -1000 udevd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524973] [ 764] 0 764 712 103 3 0 0 upstart-socket- Oct 25 07:28:34 nldedip4k031 kernel: [87976.524976] [ 772] 103 772 815 164 2 0 0 dbus-daemon Oct 25 07:28:34 nldedip4k031 kernel: [87976.524979] [ 785] 0 785 1671 600 1 -17 -1000 sshd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524981] [ 809] 101 809 7766 380 1 0 0 rsyslogd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524983] [ 869] 0 869 1158 213 3 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524986] [ 873] 0 873 1158 214 6 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524988] [ 911] 0 911 1158 215 3 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524990] [ 912] 0 
912 1158 214 2 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524992] [ 914] 0 914 1158 213 1 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524995] [ 916] 0 916 618 86 1 0 0 atd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524997] [ 917] 0 917 655 226 3 0 0 cron Oct 25 07:28:34 nldedip4k031 kernel: [87976.524999] [ 948] 0 948 902 159 5 0 0 irqbalance Oct 25 07:28:34 nldedip4k031 kernel: [87976.525002] [ 993] 0 993 1145 363 3 0 0 master Oct 25 07:28:34 nldedip4k031 kernel: [87976.525004] [ 1002] 104 1002 1162 333 1 0 0 qmgr Oct 25 07:28:34 nldedip4k031 kernel: [87976.525007] [ 1016] 0 1016 730 149 2 0 0 mdadm Oct 25 07:28:34 nldedip4k031 kernel: [87976.525009] [ 1057] 0 1057 6066 2160 3 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525012] [ 1086] 0 1086 1158 213 3 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.525014] [ 1088] 33 1088 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525017] [ 1089] 33 1089 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525019] [ 1090] 33 1090 6175 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525021] [ 1091] 33 1091 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525024] [ 1092] 33 1092 6191 1451 0 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525026] [ 1109] 33 1109 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525029] [ 1151] 33 1151 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525031] [ 1201] 104 1201 1803 652 1 0 0 tlsmgr Oct 25 07:28:34 nldedip4k031 kernel: [87976.525033] [ 2475] 0 2475 2435 812 0 0 0 sshd Oct 25 07:28:34 nldedip4k031 kernel: [87976.525036] [ 2494] 0 2494 1745 839 1 0 0 bash Oct 25 07:28:34 nldedip4k031 kernel: [87976.525038] [ 2573] 0 2573 3394 1689 3 0 0 sshd Oct 25 07:28:34 nldedip4k031 kernel: [87976.525040] [ 2589] 0 2589 5014 457 3 0 0 rsync Oct 25 07:28:34 nldedip4k031 kernel: [87976.525043] [ 2590] 0 2590 7970 522 1 0 0 rsync Oct 25 07:28:34 nldedip4k031 kernel: [87976.525045] [ 2652] 104 2652 1150 326 5 0 0 pickup Oct 25 07:28:34 nldedip4k031 kernel: [87976.525048] [ 2847] 0 2847 709 89 0 0 0 upstart-udev-br Oct 25 07:28:34 nldedip4k031 kernel: [87976.525050] Out of memory: Kill process 764 (upstart-socket-) score 1 or sacrifice child Oct 25 07:28:34 nldedip4k031 kernel: [87976.525484] Killed process 764 (upstart-socket-) total-vm:2848kB, anon-rss:204kB, file-rss:208kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.526161] init: upstart-socket-bridge main process (764) killed by KILL signal Oct 25 07:28:34 nldedip4k031 kernel: [87976.526180] init: upstart-socket-bridge main process ended, respawning Oct 25 07:28:44 nldedip4k031 kernel: [87986.439671] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439674] irqbalance cpuset=/ mems_allowed=0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439676] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu Oct 25 07:28:44 nldedip4k031 kernel: [87986.439678] Call Trace: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439684] [] dump_header.isra.6+0x85/0xc0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439686] [] oom_kill_process+0x5c/0x80 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439688] [] out_of_memory+0xc5/0x1c0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439691] [] __alloc_pages_nodemask+0x72c/0x740 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439694] [] 
__get_free_pages+0x1c/0x30 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439696] [] get_zeroed_page+0x12/0x20 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439699] [] fill_read_buffer.isra.8+0xaa/0xd0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439702] [] sysfs_read_file+0x7d/0x90 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439704] [] vfs_read+0x8c/0x160 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439707] [] ? fill_read_buffer.isra.8+0xd0/0xd0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439709] [] sys_read+0x3d/0x70 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439712] [] sysenter_do_call+0x12/0x28 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439714] Mem-Info: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439714] DMA per-cpu: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439716] CPU 0: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439717] CPU 1: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439718] CPU 2: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439719] CPU 3: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439720] CPU 4: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439721] CPU 5: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439722] CPU 6: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439723] CPU 7: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439724] Normal per-cpu: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439725] CPU 0: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439726] CPU 1: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439727] CPU 2: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439728] CPU 3: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439729] CPU 4: hi: 186, btch: 31 usd: 0 Oct 25 07:33:48 nldedip4k031 kernel: imklog 5.8.6, log source = /proc/kmsg started. Oct 25 07:33:48 nldedip4k031 rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="2880" x-info="http://www.rsyslog.com"] start Oct 25 07:33:48 nldedip4k031 rsyslogd: rsyslogd's groupid changed to 103 Oct 25 07:33:48 nldedip4k031 rsyslogd: rsyslogd's userid changed to 101 Oct 25 07:33:48 nldedip4k031 rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]

    Read the article

  • jQuery memory game: run a function when $('.opened').length reaches a given number

    - by Carl Papworth
    So I'm trying to change the a.heart when there are 24 td.opened elements. I'm not sure what's going wrong, though, since nothing's happening.

    HTML:

        <body>
            <header>
                <div id="headerTitle"><a href="index.html">&lt;html<span class="heart">&hearts;</span>ve&gt;</a></div>
                <div id="help">
                    <h2>?</h2>
                    <div id="helpInfo">
                        <p>How many tiles are there? Let's see [calculating] 25...</p>
                    </div>
                </div>
            </header>
            <div id="reward">
                <div id="rewardContainer">
                    <div id="rewardBG" class="heart">&hearts;</div>
                    <p>OMG, this must be luv<br><a href="index.html" class="exit">x</a></p>
                </div>
            </div>
            <div id="pageWrap">
                <div id="mainContent">
                    <!-- DON'T BE A CHEATER !-->
                    <table id="memory">
                        <tr>
                            <td class="pair1"><a>&Psi;</a></td>
                            <td class="pair2"><a>&para;</a></td>
                            <td class="pair3"><a>&Xi;</a></td>
                            <td class="pair1"><a>&Psi;</a></td>
                            <td class="pair4"><a>&otimes;</a></td>
                        </tr>
                        <tr>
                            <td class="pair5"><a>&spades;</a></td>
                            <td class="pair6"><a>&Phi;</a></td>
                            <td class="pair7"><a>&sect;</a></td>
                            <td class="pair8"><a>&clubs;</a></td>
                            <td class="pair4"><a>&otimes;</a></td>
                        </tr>
                        <tr>
                            <td class="pair9"><a>&Omega;</a></td>
                            <td class="pair2"><a>&para;</a></td>
                            <td id="goal"><a href="#reward" class="heart">&hearts;</a></td>
                            <td class="pair10"><a>&copy;</a></td>
                            <td class="pair9"><a>&Omega;</a></td>
                        </tr>
                        <tr>
                            <td class="pair11"><a>&there4;</a></td>
                            <td class="pair8"><a>&clubs;</a></td>
                            <td class="pair12"><a>&dagger;</a></td>
                            <td class="pair6"><a>&Phi;</a></td>
                            <td class="pair11"><a>&there4;</a></td>
                        </tr>
                        <tr>
                            <td><a class="pair12">&dagger;</a></td>
                            <td><a class="pair5">&spades;</a></td>
                            <td><a class="pair10">&copy;</a></td>
                            <td><a class="pair3">&Xi;</a></td>
                            <td><a class="pair7">&sect;</a></td>
                        </tr>
                    </table>
                    <!-- DON'T BE A CHEATER !-->
                </div>
            </div>
            <!-- END Page Wrap -->
            <footer>
                <div class="heartCollection">
                    <p>collect us if u need luv:</p>
                    <ul>
                        <li><a id="collection1">&hearts;</a></li>
                        <li><a id="collection2">&hearts;</a></li>
                        <li><a id="collection3">&hearts;</a></li>
                        <li><a id="collection4">&hearts;</a></li>
                        <li><a id="collection5">&hearts;</a></li>
                        <li><a id="collection6">&hearts;</a></li>
                    </ul>
                </div>
                <div class="credits">with love from Popm0uth ©2012</div>
            </footer>
        </body>
        </html>

    JavaScript:

        var thisCard = $(this).text();
        var activeCard = $('.active').text();
        var openedCards = $('.opened').length;

        $(document).ready(function() {
            $('a.heart').css('color', '#CCCCCC');
            $('a.heart').off('click');

            function reset() {
                $('td').removeClass('opened');
                $('a').removeClass('visible');
                $('td').removeClass('active');
            }

            $('td').click(openCard);

            function openCard() {
                $(this).addClass('opened');
                $(this).find('a').addClass('visible');
                if ($(".active")[0]) {
                    if ($(this).text() != $('.active').text()) {
                        setTimeout(function() {
                            reset();
                        }, 1000);
                    } else {
                        $('.active').removeClass('active');
                    }
                } else {
                    $(this).addClass("active");
                }
                if (openedCards == 24) {
                    $(".active").removeClass("active");
                    $("a.heart").css('color', '#ff63ff');
                    $("a.heart").off('click');
                }
            }
        });

    Read the article

  • Can rel="next" and rel="prev" can be ignored in blog listings

    - by Saahil Sinha
    We have a blog which is currently spread across 9 pages; every page has a unique title - page 1, page 2, page 3 and so on. Also, as it's a blog, every page has 10 unique listing entries. Can rel="prev" and rel="next" be safely ignored, since these are listing pages rather than the content pages of an article? What I read all through Google's documentation is that rel="next" and rel="prev" should apply where content is spread across multiple pages. But, as it's a blog, it has blog listings, and every listing has unique content. This is the blog: http://www.mycarhelpline.com/index.php?option=com_easyblog&view=latest&Itemid=91. Please advise: by ignoring rel="next" and rel="prev", are we inviting Google to treat the blog listing pages as duplicates?

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

    - Data source parameters - these parameters will be used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side; only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on queried data only, rather than on all data. As a result, only the queried data needs to be stored in memory vs. the whole dataset, which was the case with the old approach.

    - Support for stored procedures - they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but they will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures will also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries will be handled through the data source parameters and evaluated at run time. Using parameterized SQL queries will improve performance and decrease the memory footprint of your application, because they are applied directly on the database server and only the necessary data is downloaded to the middle tier or client machine.

    - Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.

    - Improved performance and an optimized in-memory OLAP engine - the new data source will come with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.

    - Full design-time support through wizards - declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design-time.

    More features will be revealed on the product's what's new page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th that will be dedicated to the updates in Telerik Reporting Q1 2010.
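    The post shows no code, but the mechanism behind data source parameters is ordinary parameterized SQL evaluated on the database server, so only matching rows ever reach the reporting tier. A minimal, hedged illustration of that underlying idea in plain ADO.NET (connection string, table and column names are made up; this is not Telerik's API):

        using System;
        using System.Data.SqlClient;

        class ParameterizedQueryDemo
        {
            static void Main()
            {
                // The WHERE clause runs on the database server, so only the
                // rows matching @region travel to the client - the same
                // principle that keeps the reporting engine's memory use low.
                using (var conn = new SqlConnection(
                    "Server=.;Database=Sales;Integrated Security=true"))
                using (var cmd = new SqlCommand(
                    "SELECT OrderId, Total FROM Orders WHERE Region = @region", conn))
                {
                    cmd.Parameters.AddWithValue("@region", "EMEA");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["OrderId"], reader["Total"]);
                    }
                }
            }
        }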

    Read the article

  • Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

    - by Brian
    Oracle's Sun Fire X4270 M3 server achieved 8,320 SAP SD Benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with unicode software, using Oracle Database 11g and Oracle Solaris 10.

    - The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both the IBM Flex System x240 and the IBM System x3650 M4 running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition.
    - The Sun Fire X4270 M3 server beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%.
    - The Sun Fire X4270 M3 server beat the Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%.
    - The Sun Fire X4270 M3 server beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.

    Performance Landscape

    SAP-SD two-tier performance table, in decreasing performance order. SAP ERP 6.0 Enhancement Pack 4 (Unicode) results (benchmark version from January 2009 to April 2012). All systems: 2 x Intel Xeon E5-2690 @2.90 GHz, 128 GB memory; SAP ERP/ECC release 2009 6.0 EP4 (Unicode).

        System                     OS                         Database             Users   SAPS    SAPS/Proc  Date
        Sun Fire X4270 M3          Oracle Solaris 10          Oracle Database 11g  8,320   45,570  22,785     10-Apr-12
        IBM Flex System x240       Windows Server 2008 R2 EE  DB2 9.7              7,960   43,520  21,760     11-Apr-12
        HP ProLiant BL460c Gen8    Windows Server 2008 R2 EE  SQL Server 2008      7,865   42,920  21,460     29-Mar-12
        IBM System x3650 M4        Windows Server 2008 R2 EE  DB2 9.7              7,855   42,880  21,440     06-Mar-12
        Cisco UCS C240 M3          Windows Server 2008 R2 DE  SQL Server 2008      7,635   41,800  20,900     06-Mar-12
        Fujitsu PRIMERGY RX300 S7  Windows Server 2008 R2 EE  SQL Server 2008      7,570   41,320  20,660     06-Mar-12

    Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

    Configuration and Results Summary

    Hardware configuration:
    - Sun Fire X4270 M3
    - 2 x 2.90 GHz Intel Xeon E5-2690 processors
    - 128 GB memory
    - Sun StorageTek 6540 with 4 * 16 * 300 GB 15Krpm 4Gb FC-AL

    Software configuration:
    - Oracle Solaris 10
    - Oracle Database 11g
    - SAP enhancement package 4 for SAP ERP 6.0 (Unicode)

    Certified results (published by SAP):
    - Number of benchmark users: 8,320
    - Average dialog response time: 0.95 seconds
    - Throughput: fully processed order lines: 911,330; dialog steps/hour: 2,734,000; SAPS: 45,570
    - SAP certification: 2012014

    Benchmark Description

    The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments. SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.
    See Also
    - SAP Benchmark Website
    - Sun Fire X4270 M3 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement

    Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12: Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014. IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016. IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010. Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011. Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008. HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012.

    SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More info: www.sap.com/benchmark

    Read the article

  • How to: Check which table is the biggest in SQL Server

    - by AngelEyes
    The company I work with had its DB double in size lately, so I needed to find out which tables were the biggest. I found this on the web and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database; the code is from http://www.sqlteam.com/downloads/BigTables.sql

        /**************************************************************************************
        *
        *  BigTables.sql
        *  Bill Graziano (SQLTeam.com)
        *  [email protected]
        *  v1.1
        *
        **************************************************************************************/

        DECLARE @id INT
        DECLARE @type CHARACTER(2)
        DECLARE @pages INT
        DECLARE @dbname SYSNAME
        DECLARE @dbsize DEC(15, 0)
        DECLARE @bytesperpage DEC(15, 0)
        DECLARE @pagesperMB DEC(15, 0)

        CREATE TABLE #spt_space
        (
            objid    INT NULL,
            ROWS     INT NULL,
            reserved DEC(15) NULL,
            data     DEC(15) NULL,
            indexp   DEC(15) NULL,
            unused   DEC(15) NULL
        )

        SET NOCOUNT ON

        -- Create a cursor to loop through the user tables
        DECLARE c_tables CURSOR FOR
            SELECT id
            FROM   sysobjects
            WHERE  xtype = 'U'

        OPEN c_tables

        FETCH NEXT FROM c_tables INTO @id

        WHILE @@FETCH_STATUS = 0
        BEGIN
            /* Code from sp_spaceused */
            INSERT INTO #spt_space (objid, reserved)
            SELECT objid = @id,
                   SUM(reserved)
            FROM   sysindexes
            WHERE  indid IN (0, 1, 255)
                   AND id = @id

            SELECT @pages = SUM(dpages)
            FROM   sysindexes
            WHERE  indid < 2
                   AND id = @id

            SELECT @pages = @pages + ISNULL(SUM(used), 0)
            FROM   sysindexes
            WHERE  indid = 255
                   AND id = @id

            UPDATE #spt_space
            SET    data = @pages
            WHERE  objid = @id

            /* index: sum(used) where indid in (0, 1, 255) - data */
            UPDATE #spt_space
            SET    indexp = (SELECT SUM(used)
                             FROM   sysindexes
                             WHERE  indid IN (0, 1, 255)
                                    AND id = @id) - data
            WHERE  objid = @id

            /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
            UPDATE #spt_space
            SET    unused = reserved - (SELECT SUM(used)
                                        FROM   sysindexes
                                        WHERE  indid IN (0, 1, 255)
                                               AND id = @id)
            WHERE  objid = @id

            UPDATE #spt_space
            SET    ROWS = i.ROWS
            FROM   sysindexes i
            WHERE  i.indid < 2
                   AND i.id = @id
                   AND objid = @id

            FETCH NEXT FROM c_tables INTO @id
        END

        SELECT TOP 25
               table_name = (SELECT LEFT(name, 25)
                             FROM   sysobjects
                             WHERE  id = objid),
               ROWS = CONVERT(CHAR(11), ROWS),
               reserved_kb = LTRIM(STR(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
               data_kb = LTRIM(STR(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
               index_size_kb = LTRIM(STR(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
               unused_kb = LTRIM(STR(unused * d.low / 1024., 15, 0) + ' ' + 'KB')
        FROM   #spt_space,
               MASTER.dbo.spt_values d
        WHERE  d.NUMBER = 1
               AND d.TYPE = 'E'
        ORDER  BY reserved DESC

        DROP TABLE #spt_space
        CLOSE c_tables
        DEALLOCATE c_tables

    Read the article

  • ASP.NET MVC 3: Layouts and Sections with Razor

    - by ScottGu
    This is another in a series of posts I'm doing that cover some of the new ASP.NET MVC 3 features:

    - Introducing Razor (July 2nd)
    - New @model keyword in Razor (Oct 19th)
    - Layouts with Razor (Oct 22nd)
    - Server-Side Comments with Razor (Nov 12th)
    - Razor's @: and <text> syntax (Dec 15th)
    - Implicit and Explicit code nuggets with Razor (Dec 16th)
    - Layouts and Sections with Razor (Today)

    In today's post I'm going to go into more detail about how Layout pages work with Razor. In particular, I'm going to cover how you can have multiple, non-contiguous, replaceable "sections" within a layout file - and enable views based on layouts to optionally "fill in" these different sections at runtime. The Razor syntax for doing this is clean and concise. I'll also show how you can dynamically check at runtime whether a particular layout section has been defined, and how you can provide alternate content (or even an alternate layout) in the event that a section isn't specified within a view template. This provides a powerful and easy way to customize the UI of your site and make it clean and DRY from an implementation perspective.

    What are Layouts?

    You typically want to maintain a consistent look and feel across all of the pages within your web site/application. ASP.NET 2.0 introduced the concept of "master pages", which helps enable this when using .aspx-based pages or templates. Razor also supports this concept with a feature called "layouts" - which allow you to define a common site template, and then inherit its look and feel across all the views/pages on your site. I previously discussed the basics of how layout files work with Razor in my "ASP.NET MVC 3: Layouts with Razor" blog post. Today's post will go deeper and discuss how you can define multiple, non-contiguous, replaceable regions within a layout file that you can then optionally "fill in" at runtime.

    Site Layout Scenario

    Let's look at how we can implement a common site layout scenario with ASP.NET MVC 3 and Razor. Specifically, we'll implement some site UI where we have a common header and footer on all of our pages. We'll also add a "sidebar" section to the right of our common site layout. On some pages we'll customize the sidebar to contain content specific to the page it is included on, and on other pages (that do not have custom sidebar content) we will fall back and provide some "default content" for the sidebar. We'll use ASP.NET MVC 3 and Razor to enable this customization in a nice, clean way. Below are step-by-step tutorial instructions on how to build the above site with ASP.NET MVC 3 and Razor.

    Part 1: Create a New Project with a Layout for the "Body" section

    We'll begin by using the "File->New Project" menu command within Visual Studio to create a new ASP.NET MVC 3 project, using the "Empty" template option. This creates a new project that has no default controllers in it.

    Creating a HomeController

    We will then right-click on the "Controllers" folder of our newly created project and choose the "Add->Controller" context menu command, which brings up the "Add Controller" dialog. We'll name the new controller we create "HomeController". When we click the "Add" button, Visual Studio adds a HomeController class to our project with a default "Index" action method that returns a view. We won't need to write any controller logic to implement this sample - so we'll leave the default code as-is.
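    For reference (the post's screenshot of this step is not available), the scaffolded controller is just an Index action that returns a view; a reconstruction, with the namespace assumed:

        using System.Web.Mvc;

        namespace LayoutsSample.Controllers
        {
            public class HomeController : Controller
            {
                // GET: /Home/
                public ActionResult Index()
                {
                    return View();
                }
            }
        }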
Creating a View Template

Our next step will be to implement the view template associated with the HomeController’s Index action method. To implement the view template, we will right-click within the “HomeController.Index()” method and select the “Add View” command to create a view template for our home page. This will bring up the “Add View” dialog within Visual Studio. We do not need to change any of the default settings within the dialog (the name of the template was auto-populated to Index because we invoked the “Add View” context menu command within the Index method). When we click the “Add” button within the dialog, a Razor-based “Index.cshtml” view template will be added to the \Views\Home\ folder within our project, and we can add some simple default static content to it.

Notice how we don’t have an <html> or <body> section defined within our view template. This is because we are going to rely on a layout template to supply these elements and use it to define the common site layout and structure for our site (ensuring that it is consistent across all pages and URLs within the site).

Customizing our Layout File

Let’s open and customize the default “_Layout.cshtml” file that was automatically added to the \Views\Shared folder when we created our new project. The default layout file is pretty basic and simply outputs a title (if specified in either the Controller or the View template) and adds links to a stylesheet and jQuery. The call to “RenderBody()” indicates where the main body content of our Index.cshtml file will be merged into the output sent back to the browser.

Let’s modify the Layout template to add a common header, footer and sidebar to the site. We’ll then edit the “Site.css” file within the \Content folder of our project and add 4 CSS rules to it. And now when we run the project and browse to the home “/” URL of our project, we’ll see that the content of the HomeController’s Index view template and the site’s Shared Layout template have been merged together into a single HTML response.

Part 2: Adding a “SideBar” Section

Our site so far has a layout template that has only one “section” in it – what we call the main “body” section of the response. Razor also supports the ability to add additional "named sections” to layout templates as well. These sections can be defined anywhere in the layout file (including within the <head> section of the HTML), and allow you to output dynamic content to multiple, non-contiguous, regions of the final response.

Defining the “SideBar” section in our Layout

Let’s update our Layout template to define an additional “SideBar” section of content that will be rendered within the <div id=”sidebar”> region of our HTML. We can do this by calling the RenderSection(string sectionName, bool required) helper method within our Layout.cshtml file. The first parameter to the “RenderSection()” helper method specifies the name of the section we want to render at that location in the layout template. The second parameter is optional, and allows us to define whether the section we are rendering is required or not. If a section is “required”, then Razor will throw an error at runtime if that section is not implemented within a view template that is based on the layout file (which can make it easier to track down content errors).
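Since the original post's screenshots aren't reproduced in this excerpt, here is a minimal sketch of what such a _Layout.cshtml might look like – the markup and id names are illustrative assumptions, not the original code:

    <!DOCTYPE html>
    <html>
    <head>
        <title>@ViewBag.Title</title>
        <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
    </head>
    <body>
        <div id="header">Common Site Header</div>
        <div id="main">
            @* the view's main body content is merged in here *@
            @RenderBody()
        </div>
        <div id="sidebar">
            @* optional named section that views may fill in *@
            @RenderSection("SideBar", required: false)
        </div>
        <div id="footer">Common Site Footer</div>
    </body>
    </html>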
If a section is not required, then its presence within a view template is optional, and the above RenderSection() code will render nothing at runtime if it isn’t defined. Now that we’ve made the above change to our layout file, let’s hit refresh in our browser and see what our Home page now looks like. We currently have no content within our SideBar <div> – that is because the Index.cshtml view template doesn’t implement our new “SideBar” section yet.

Implementing the “SideBar” Section in our View Template

Let’s change our home-page so that it has a SideBar section that outputs some custom content. We can do that by opening up the Index.cshtml view template, and by adding a new “SideBar” section to it. We’ll do this using Razor’s @section SectionName { } syntax. We could have put our SideBar @section declaration anywhere within the view template. I think it looks cleaner when defined at the top or bottom of the file – but that is simply personal preference. You can include any content or code you want within @section declarations. In the example, I have a C# code nugget that outputs the current time at the bottom of the SideBar section (see the sketch below). I could have also written code that used ASP.NET MVC’s HTML/AJAX helper methods and/or accessed any strongly-typed model objects passed to the Index.cshtml view template.

Now that we’ve made the above template changes, when we hit refresh in our browser again we’ll see that our SideBar content – content that is specific to the Home Page of our site – is now included in the page response sent back from the server; the SideBar section content has been merged into the proper location of the HTML response.

Part 3: Conditionally Detecting if a Layout Section Has Been Implemented

Razor provides the ability for you to conditionally check (from within a layout file) whether a section has been defined within a view template, and enables you to output an alternative response in the event that the section has not been defined. This provides a convenient way to specify default UI for optional layout sections. Let’s modify our Layout file to take advantage of this capability. Below we conditionally check whether the “SideBar” section has been defined within the view template being rendered (using the IsSectionDefined() method), and if so we render the section. If the section has not been defined, then we instead render some default content for the SideBar.

Note: you want to make sure you prefix calls to the RenderSection() helper method with a @ character – which will tell Razor to execute the HelperResult it returns and merge in the section content in the appropriate place of the output. Notice how we wrote @RenderSection(“SideBar”) instead of just RenderSection(“SideBar”). Otherwise you’ll get an error.

Here we are simply rendering an inline static string (<p>Default SideBar Content</p>) if the section is not defined. A real-world site would more likely refactor this default content to be stored within a separate partial template (which we’d render using the Html.RenderPartial() helper method within the else block) or alternatively use the Html.Action() helper method within the else block to encapsulate both the logic and rendering of the default sidebar.
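To make both halves concrete, here is a hedged sketch of the two pieces described above – the content strings are illustrative, not the post's exact code. First the section implementation in Index.cshtml, then the conditional fallback in _Layout.cshtml:

    @* Index.cshtml: fill in the optional SideBar section *@
    @section SideBar {
        <p>Custom sidebar content for the home page.</p>
        <p>Current time: @DateTime.Now</p>
    }

    @* _Layout.cshtml: render the section if the view defines it, else default content *@
    @if (IsSectionDefined("SideBar")) {
        @RenderSection("SideBar")
    }
    else {
        <p>Default SideBar Content</p>
    }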
When we hit refresh on our home-page, we will still see the same custom SideBar content we had before. This is because we implemented the SideBar section within our Index.cshtml view template (and so our Layout rendered it).

Let’s now implement a “/Home/About” URL for our site by adding a new “About” action method to our HomeController. The About() action method simply renders a view back to the client when invoked. We can implement the corresponding view template for this action by right-clicking within the “About()” method and using the “Add View” menu command (like before) to create a new About.cshtml view template. We’ll implement the About.cshtml view template without defining a “SideBar” section within it. When we browse the /Home/About URL, we’ll see the content we supplied in the main body section of our response, and the default SideBar content will be rendered: the layout file determined at runtime that a custom SideBar section wasn’t present in the About.cshtml view template, and instead rendered the default sidebar content.

One Last Tweak…

Let’s suppose that at a later point we decide that instead of rendering default side-bar content, we just want to hide the side-bar entirely from pages that don’t have any custom sidebar content defined. We could implement this change simply by making a small modification to our layout so that the sidebar content (and its surrounding HTML chrome) is only rendered if the SideBar section is defined. Razor is flexible enough that we can make a change like this and not have to modify any of our view templates (nor make any Controller logic changes) to accommodate it. We can instead make just this one modification to our Layout file and the rest happens cleanly. This type of flexibility makes Razor incredibly powerful and productive.

Summary

Razor’s layout capability enables you to define a common site template, and then inherit its look and feel across all the views/pages on your site. Razor enables you to define multiple, non-contiguous, “sections” within layout templates that can be “filled-in” by view templates. The @section {} syntax for doing this is clean and concise. Razor also supports the ability to dynamically check at runtime whether a particular section has been defined, and to provide alternate content (or even an alternate layout) in the event that it isn’t specified. This provides a powerful and easy way to customize the UI of your site - and make it clean and DRY from an implementation perspective.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

Performance tuning basics and rules

Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I solely look at the Linq query, the code consuming the resultset of the 10 rows, and the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem found -> no solution.

One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level, but in general these are minor.

Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships one. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area contributing the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

Fix

After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

- SELECT N+1 (lazy-loading specific): lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

- Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

- Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

- Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch on server-side paging on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

- Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

- Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in-memory to calculate a value.

- Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory (see the sketch after this list). Example:

      var q = from c in metaData.Customers.ToList()
              where c.Country == "Norway"
              select c;

  This will actually fetch all customers in-memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

- Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

- Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop, and afterwards saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's a RDBMS somewhere.
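To make the .ToList() pitfall from the list above concrete, here is a minimal C# sketch; the metaData context and Customer entity follow the naming used in the example and are assumptions, not a specific framework's API:

    // In-memory filtering: ToList() materializes ALL customers first,
    // so the where clause then runs as Linq-to-Objects on the client.
    var inMemory = from c in metaData.Customers.ToList()
                   where c.Country == "Norway"
                   select c;

    // Set-based filtering: the query stays an IQueryable<T>, so the O/R mapper
    // can translate the predicate into a SQL WHERE clause executed by the RDBMS.
    var onServer = from c in metaData.Customers
                   where c.Country == "Norway"
                   select c;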
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.

    Read the article

  • Transparency and AlphaBlending

    - by TechTwaddle
In this post we'll look at the AlphaBlend() api and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context and a destination device context (DC) and combines the bits in such a way that it gives a transparent effect. Follow the links for the msdn documentation. So let's take an image and AlphaBlend() it on our window. The code to do so is below (under the WM_PAINT message of WndProc):

    HBITMAP hBitmap=NULL, hBitmapOld=NULL;
    HDC hMemDC=NULL;
    BLENDFUNCTION bf;

    hdc = BeginPaint(hWnd, &ps);

    hMemDC = CreateCompatibleDC(hdc);
    hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1));
    hBitmapOld = SelectObject(hMemDC, hBitmap);

    bf.BlendOp = AC_SRC_OVER;
    bf.BlendFlags = 0;
    bf.SourceConstantAlpha = 80; //transparency value between 0-255
    bf.AlphaFormat = 0;

    AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf);

    SelectObject(hMemDC, hBitmapOld);
    DeleteDC(hMemDC);
    DeleteObject(hBitmap);
    EndPaint(hWnd, &ps);

The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC and AlphaBlends it on the device DC (hdc), with a transparency value of 80. Pretty simple till now.

Now let's try to do something a little more exciting. Let's get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking on the buttons. Since this is the first time I played around with GDI apis, I ran into something that everybody runs into sometime or the other: flickering. When clicking the buttons the images would flicker a lot. I figured out why, and used something called double buffering to avoid the flickering. We will look at both my first implementation and the second implementation, just to give the concept a little more depth and perspective.

A few pre-conditions before I dive into the code:

- hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(); these variables are global and are initialized under WM_CREATE
- The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent)
- DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. This is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked the 'step' value passed to DrawPics() is +20 and when Opaque-- is clicked the 'step' value is -20.
The default value of 'step' is 0.

Now let's take a look at my first implementation:

    //this function causes flicker, because it draws directly to the screen several times
    void DrawPics(HWND hWnd, int step)
    {
        HDC hdc=NULL, hMemDC=NULL;
        BLENDFUNCTION bf;
        static UINT32 transparency = 100;

        //no point in drawing when transparency is 0 and user clicks Opaque--
        if (transparency == 0 && step < 0)
            return;

        //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++
        if (transparency == 240 && step > 0)
            return;

        hdc = GetDC(hWnd);
        if (!hdc)
            return;

        //create a memory DC
        hMemDC = CreateCompatibleDC(hdc);
        if (!hMemDC)
        {
            ReleaseDC(hWnd, hdc);
            return;
        }

        //while increasing transparency, clear the contents of screen
        if (step < 0)
        {
            RECT rect = {0, 0, 240, 200};
            FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
        }

        SelectObject(hMemDC, hBitmap2);
        BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);

        SelectObject(hMemDC, hBitmap);

        transparency += step;
        if (transparency >= 240)
            transparency = 240;
        if (transparency <= 0)
            transparency = 0;

        bf.BlendOp = AC_SRC_OVER;
        bf.BlendFlags = 0;
        bf.SourceConstantAlpha = transparency;
        bf.AlphaFormat = 0;

        AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);

        DeleteDC(hMemDC);
        ReleaseDC(hWnd, hdc);
    }

In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it on the window DC (hdc). Next, we select the other image, hBitmap, onto the memory DC and AlphaBlend() it over the window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. The video below shows what happens when the buttons were clicked rapidly. Well, the video recording tool I use captures only 15 frames per second and so the flickering is not visible in the video. So you're gonna have to trust me on this, it flickers (;

To solve this problem we make sure that the drawing to the screen happens only once, and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC and finally, when it is ready, we Blt hTempDC on hdc, and the images are displayed in one go.
Here is the code for our new DrawPics() function:

    //no flicker
    void DrawPics(HWND hWnd, int step)
    {
        HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;
        BLENDFUNCTION bf;
        HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;
        static UINT32 transparency = 100;

        //no point in drawing when transparency is 0 and user clicks Opaque--
        if (transparency == 0 && step < 0)
            return;

        //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++
        if (transparency == 240 && step > 0)
            return;

        hdc = GetDC(hWnd);
        if (!hdc)
            return;

        hMemDC = CreateCompatibleDC(hdc);
        hTempDC = CreateCompatibleDC(hdc);
        hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);
        hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);

        if (!hMemDC)
        {
            ReleaseDC(hWnd, hdc);
            return;
        }

        //while increasing transparency, clear the contents
        if (step < 0)
        {
            RECT rect = {0, 0, 240, 150};
            FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
        }

        SelectObject(hMemDC, hBitmap2);

        //Blt hBitmap2 directly to hTempDC
        BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);

        SelectObject(hMemDC, hBitmap);

        transparency += step;
        if (transparency >= 240)
            transparency = 240;
        if (transparency <= 0)
            transparency = 0;

        bf.BlendOp = AC_SRC_OVER;
        bf.BlendFlags = 0;
        bf.SourceConstantAlpha = transparency;
        bf.AlphaFormat = 0;

        AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);

        //now hTempDC is ready, blt it directly on hdc
        BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);

        SelectObject(hTempDC, hBitmapOld);
        DeleteObject(hBitmapTemp);
        DeleteDC(hMemDC);
        DeleteDC(hTempDC);
        ReleaseDC(hWnd, hdc);
    }

This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide. So in order for us to draw anything onto hTempDC, we first have to select a bitmap into it. We use CreateCompatibleBitmap() to create a bitmap of the required dimensions (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas: drawing everything on the canvas and finally transferring the contents to the display in one scoop. And with this version the flickering is gone, video follows.

If you want the entire solution's source code then leave a message, I will share the code over SkyDrive.

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses the process of creating an in-memory file and reading/writing data from/to that in-memory file utilizing the new MemoryMappedFile feature.

I have a database of users, where I need to send a notice to all my users as a PDF document. The sending mail part of it is not covered in this article. The PDF document will contain the company letterhead, to make it more official. I have a list of users stored in a database table named “tblusers”. For each user I need to send a customized message addressed to them personally. The database structure for the users is given below:

    id    Title    Full Name
    1     Mr.      Sreeju Nair K. G.
    2     Dr.      Alberto Mathews
    3     Prof.    Venketachalam

Now I am going to generate the pdf document that contains some message to the user, in the following format:

    Dear <Title> <FullName>,

    The message for the user.

    Regards,
    Administrator

Also I have an image, bg.jpg, that contains the background for the generated document. I have created a .Net 4.0 empty web application project named “iTextSharpSample”. First thing I need to do is to download the iTextSharp dll from SourceForge. You can find the url for the download here: http://sourceforge.net/projects/itextsharp/files/

I have extracted the Zip file and added itextsharp.dll as a reference to my project. Also I have added a web form named default.aspx to my project.

In the default.aspx page, I inserted one grid view and associated it with a SQL Data source control that binds data from tblusers. I have added a button column in the grid view with the text “generate pdf”.

Now I am going to create a pdf document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying the onrowcommand event handler. My gridview source looks like:

    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
        DataSourceID="SqlDataSource1" Width="481px" CellPadding="4"
        ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF">
        …………………………………………………………………………..
        …………………………………………………………………………..
    </asp:GridView>

In the code behind, I wrote the corresponding event handler:

    protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
    {
        // The button click event handler code.
        // I am going to explain the code for this section in the remaining part of the article
    }

The Generate_PDF method is straightforward: it gets the title, fullname and message into some variables, then creates the pdf using these variables. The code for getting data from the grid view is as follows:

    // get the row index stored in the CommandArgument property
    int index = Convert.ToInt32(e.CommandArgument);
    // get the GridViewRow where the command is raised
    GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
    string title = selectedRow.Cells[1].Text;
    string fullname = selectedRow.Cells[2].Text;
    string msg = @"There are some changes in the company policy, due to this matter you
    need to submit your latest address to us. Please update your contact details /
    personal details by visiting the member area of the website. ...................................";

Since I don’t want to save the file to disk, I am going to use a new feature introduced in .Net Framework 4, called Memory-Mapped Files. Using a memory-mapped file, you can create non-persisted memory-mapped files that are not associated with a file on disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article: http://msdn.microsoft.com/en-us/library/dd997372.aspx

The below portion of the code uses a MemoryMappedFile object to create a pdf document in memory and perform read/write operations on the file. The CreateViewStream() method will give you a stream that can be used to read or write data to/from the file. The code is very straightforward and I included comments so that you can understand it.

    using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000))
    {
        // Create a new pdf document object using the constructor. The parameters passed are
        // document size, left margin, right margin, top margin and bottom margin.
        iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72, 72, 172, 72);

        // get an instance of the memory mapped file as a stream object so that we can write to it
        using (MemoryMappedViewStream stream = mmf.CreateViewStream())
        {
            // associate the document with the stream.
            PdfWriter.GetInstance(d, stream);

            /* add an image as bg */
            iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png"));
            jpg.Alignment = iTextSharp.text.Image.UNDERLYING;
            jpg.SetAbsolutePosition(0, 0);
            // this is the size of my background letterhead image, in points;
            // it will fit an A4 size document.
            jpg.ScaleToFit(595, 842);

            d.Open();
            d.Add(jpg);
            d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
            d.Add(new Paragraph("\n"));
            d.Add(new Paragraph(msg));
            d.Add(new Paragraph("\n"));
            d.Add(new Paragraph(String.Format("Administrator")));
            d.Close();
        }

        // read the file data
        byte[] b;
        using (MemoryMappedViewStream stream = mmf.CreateViewStream())
        {
            BinaryReader rdr = new BinaryReader(stream);
            b = new byte[mmf.CreateViewStream().Length];
            rdr.Read(b, 0, (int)mmf.CreateViewStream().Length);
        }

        Response.Clear();
        Response.ContentType = "Application/pdf";
        Response.BinaryWrite(b);
        Response.End();
    }

Press Ctrl + F5 to run the application. First I got the user list; click on the generate pdf icon and the document is created.

Summary: Creating a pdf document using iTextSharp is easy. You will get lots of information while surfing the www. Some useful resources and references are mentioned below:

    http://itextsharp.com/
    http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs
    http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/

Hope you enjoyed the article.

    Read the article

  • Does Google consider my blog page a duplicate if that page URL and the same URL with ‘showComment’ are cached separately?

    - by John Sanjay
    While searching through the indexed pages of my blog, I found that Google has cached one of my blog pages, http://example.com/page.html, as well as http://example.com/page.html?showComment=1372054729698. Both pages show up when I search site:http://example.com. I'm worried because these two pages have the same content. Does Google consider these two pages duplicates? If so, what can I do now? Is it really a big problem for my blog?

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now. But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them. This made me think that writing up a few articles discussing the different features would be a good idea. Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007. Those documents are very recommendable - especially the Beginners Guide which, although based on LDoms 1.0, is still a good place to begin. However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today. In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).

So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms. LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware. This virtual hardware is then used to create a number of guest systems which each behave very much like a system running on bare metal: each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it. Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set as well as the overall CPU architecture to fulfill its function.

The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS. For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket. To the OS, this looks like 64 CPUs. The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do. The hypervisor only assigns CPUs and then steps aside. It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests. Likewise, memory is assigned exclusively to individual guests. Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory. Again, the hypervisor is not involved in the actual memory access; it only maintains isolation between guests.

During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain. Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources. If you were running the system un-virtualized, this would be what you'd be working with. To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory. This frees up most of the available CPU and memory resources for guest domains. IO is a little more complex, but very straightforward. When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services.
In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later. For now, let's have a short look at the initial way IO was virtualized in LDoms.

For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch. You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction. These IO services now connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains. For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest. That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device. For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers.

The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor. It uses so called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services. These LDCs work very much like high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.

To see all this in action, let's now look at a first example. I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.

    root@sun # ldm list
    NAME      STATE    FLAGS    CONS    VCPU   MEMORY    UTIL   UPTIME
    primary   active   -n-c--   UART    512    261632M   0.3%   2d 13h 58m
    root@sun # ldm add-vconscon port-range=5000-5100 \
        primary-console primary
    root@sun # svcadm enable vntsd
    root@sun # svcs vntsd
    STATE     STIME     FMRI
    online    9:53:21   svc:/ldoms/vntsd:default
    root@sun # ldm set-vcpu 16 primary
    root@sun # ldm set-mau 1 primary
    root@sun # ldm start-reconf primary
    root@sun # ldm set-memory 7680m primary
    root@sun # ldm add-config initial
    root@sun # shutdown -y -g0 -i6

So what have I done: I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service. The vnts will later provide console connections to guest systems, very much like serial NTS's do in the physical world.

Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems. I also assigned one MAU to this domain. A MAU is a crypto unit in the T3 CPU. These need to be explicitly assigned to domains, just like CPU or memory. (This is no longer the case with T4 systems, where crypto is always available everywhere.)

Before I reassigned the memory, I started what's called a "delayed reconfiguration" session.
That avoids actually doing the change right away, which would take a considerable amount of time in this case. Instead, I'll need to reboot once I'm all done. I've assigned 7680MB of RAM to the primary. That's 8GB less the 512MB which the hypervisor uses for its own private purposes. You can, depending on your needs, work with less. I'll devote a dedicated article to sizing, discussing the pros and cons in detail. Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box. (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)

Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real world storage and network.

    root@sun # ldm add-vds primary-vds
    root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary

You are free to choose whatever names you like for the virtual disk service and the virtual switch. I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation. For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.

This already concludes the configuration of the control domain. We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world. The system is now ready to create guests, which I'll describe in the next section.

For further reading, here are some recommendable links:

    - The LDoms 2.2 Admin Guide
    - The "Beginners Guide to LDoms"
    - The LDoms Information Center on MOS
    - LDoms on OTN

    Read the article

  • How to find Microsoft.SharePoint.ApplicationPages.dll and some other assemblies

    - by KunaalKapoor
If you are creating a new SharePoint application page, you may be wondering where to find Microsoft.SharePoint.ApplicationPages.dll. Don't worry: it resides in the _app_bin folder of your SharePoint site's virtual directory. Assuming your IIS inetpub is on C:, the exact path of Microsoft.SharePoint.ApplicationPages.dll is:

C:\Inetpub\wwwroot\wss\VirtualDirectories\<Your Virtual Server>\_app_bin\Microsoft.SharePoint.ApplicationPages.dll

Here is the full list of assemblies in the _app_bin folder:

Microsoft.Office.DocumentManagement.Pages.dll
Microsoft.Office.officialfileSoap.dll
Microsoft.Office.Policy.Pages.dll
Microsoft.Office.SlideLibrarySoap.dll
Microsoft.Office.Workflow.Pages.dll
Microsoft.Office.WorkflowSoap.dll
Microsoft.SharePoint.ApplicationPages.dll
STSSOAP.DLL
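To use the assembly from your own application page markup, you would typically reference it by its strong name. A minimal sketch of such a directive follows; the Version and PublicKeyToken shown are assumptions for a WSS 3.0 installation and should be verified against the actual DLL in your _app_bin folder:

<%@ Assembly Name="Microsoft.SharePoint.ApplicationPages, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>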

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 21 (sys.dm_db_partition_stats)

    - by Tamarick Hill
The sys.dm_db_partition_stats DMV returns page count and row count information for each table or index within your database. Let's have a quick look at this DMV so we can review some of the results. **NOTE: I am going to create an ‘ObjectName’ column in our result set so that we can more easily identify tables. SELECT object_name(object_id) ObjectName, * FROM sys.dm_db_partition_stats As stated above, the first column in our result set is an object name based on the object_id column of this result set. The partition_id column refers to the partition_id of the index in question. Each index will have at least 1 unique partition_id and will have more depending on whether the object has been partitioned. The index_id column relates back to the sys.indexes table and uniquely identifies an index on a given object. A value of 0 (zero) in this column indicates the object is a HEAP and a value of 1 (one) signifies the Clustered Index. Next is the partition_number, which signifies the number of the partition for a particular object_id. Since none of the tables in my result set have been partitioned, they all display 1 for the partition_number. Next we have the in_row_data_page_count, which tells us the number of data pages used to store in-row data for a given index. The in_row_used_page_count is the number of pages used to store and manage the in-row data. If we look at the first row in the result set, we will see we have 700 for this column and 680 for the previous. This means that just to manage the data (not store it) requires 20 pages. The next column, in_row_reserved_page_count, is how many pages have been reserved, regardless of whether they are being used or not. The next 2 columns are used for storing LOB (Large Object) data, which could be text, image, varchar(max), or varbinary(max) columns. The next two columns, the row_overflow ones, represent pages used for data that exceeds the 8,060-byte row size limit for in-row data pages. The next columns, used_page_count and reserved_page_count, represent the sum of the in_row, lob, and row_overflow columns discussed earlier. Lastly, the row_count column displays the number of rows in a particular index. This DMV is a very powerful resource for identifying page and row count information. By knowing the page counts for indexes within your database, you can easily calculate the size of indexes. For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms187737.aspx Follow me on Twitter @PrimeTimeDBA
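Since each page is 8 KB, those page counts translate directly into sizes. As a minimal sketch using only the columns named above, the following query estimates used and reserved space per index in MB:

-- estimate index sizes from page counts (8 KB per page, so pages * 8 / 1024 = MB)
SELECT object_name(object_id) AS ObjectName,
       index_id,
       used_page_count * 8 / 1024.0 AS UsedSpaceMB,
       reserved_page_count * 8 / 1024.0 AS ReservedSpaceMB,
       row_count
FROM sys.dm_db_partition_stats
ORDER BY used_page_count DESC;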

    Read the article

  • IBM System x3850 X5 TPC-H Benchmark

    - by jchang
IBM just published a TPC-H SF 1000 result for their x3850 X5, a 4-way Xeon 7560 system featuring a special MAX5 memory expansion board to support 1.5TB memory. In Dec 2010, IBM also published a TPC-H SF1000 for their Power 780 system, 8-way, quad-core (4 logical processors per physical core). In Feb 2011, Ingres published a TPC-H SF 100 on a 2-way Xeon 5680 for their VectorWise column-store engine (plus enhancements for memory architecture, SIMD and compression). The table below shows TPC-H...(read more)

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
Once again we consider some of the lesser-known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally, next week we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

Recap

As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

ConcurrentDictionary – the fully thread-safe dictionary

The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning-fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list, respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering, and still greatly outperform a linear search. 
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have.

ConcurrentDictionary and Dictionary differences

For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples:

Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary.

TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation that checks whether the item exists and, if not, adds it while still under an atomic lock.

TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item in one atomic step.

TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock.

AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update.

GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes you want to query whether an item exists in the cache, and if it doesn’t, insert a starting value for it.  This allows you to get the value if it exists and insert it if not.

The Count, Keys, and Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution.

ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked and then copied to an array as an O(n) operation.

GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads.

Dirty reads during iteration

The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example.  Say you load up a concurrent dictionary like this:

1: // load up a dictionary.
2: var dictionary = new ConcurrentDictionary<string, int>();
3: 
4: dictionary["A"] = 1;
5: dictionary["B"] = 2;
6: dictionary["C"] = 3;
7: dictionary["D"] = 4;
8: dictionary["E"] = 5;
9: dictionary["F"] = 6;

Then you have one task (using the wonderful TPL!) to iterate using dirty reads:

1: // attempt iteration in a separate thread
2: var iterationTask = new Task(() =>
3: {
4: // iterates using a dirty read
5: foreach (var pair in dictionary)
6: {
7: Console.WriteLine(pair.Key + ":" + pair.Value);
8: }
9: });

And one task to attempt updates in a separate thread (probably):

1: // attempt updates in a separate thread
2: var updateTask = new Task(() =>
3: {
4: // iterates, and updates the value by one
5: foreach (var pair in dictionary)
6: {
7: dictionary[pair.Key] = pair.Value + 1;
8: }
9: });

Now that we’ve done this, we can fire up both tasks and wait for them to complete:

1: // start both tasks
2: updateTask.Start();
3: iterationTask.Start();
4: 
5: // wait for both to complete.
6: Task.WaitAll(updateTask, iterationTask);

Now, if you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result:

1: F:6
2: E:6
3: D:5
4: C:4
5: B:3
6: A:2

Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead:

1: // attempt iteration in a separate thread
2: var iterationTask = new Task(() =>
3: {
4: // iterates over a static snapshot taken by ToArray()
5: foreach (var pair in dictionary.ToArray())
6: {
7: Console.WriteLine(pair.Key + ":" + pair.Value);
8: }
9: });

The atomic Try…() methods

As you can imagine, TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary:

1: // try add attempts an add and returns false if it already exists
2: if (dictionary.TryAdd("G", 7))
3: Console.WriteLine("G did not exist, now inserted with 7");
4: else
5: Console.WriteLine("G already existed, insert failed.");

TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key:

1: // attempt to remove the value, if it exists it is removed and the original is returned
2: int removedValue;
3: if (dictionary.TryRemove("C", out removedValue))
4: Console.WriteLine("Removed C and its value was " + removedValue);
5: else
6: Console.WriteLine("C did not exist, remove failed.");

Now TryUpdate() is an interesting creature.  
You might think from its name that TryUpdate() first checks for an item’s existence and then updates if the item exists, otherwise returning false.  Well, not quite... It turns out that when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned.

1: // attempt to update the value, if it exists and if it has the expected original value
2: if (dictionary.TryUpdate("G", 42, 7))
3: Console.WriteLine("G existed and was 7, now it's 42.");
4: else
5: Console.WriteLine("G either didn't exist, or wasn't 7.");

The composite Add methods

The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed:

1: public sealed class SubscriptionManager
2: {
3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>();
4: 
5: // adds a new subscription, or increments the count of the existing one.
6: public void AddSubscription(string tickerKey)
7: {
8: // add a new subscription with count of 1, or update existing count by 1 if exists
9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1);
10: 
11: // now check the result to see if we just incremented the count, or inserted first count
12: if (resultCount == 1)
13: {
14: // subscribe to symbol...
15: }
16: }
17: }

Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where you wish to cache data: you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time:

1: public sealed class PriceCache
2: {
3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>();
4: 
5: // looks up a price, or computes and caches it on the first query.
6: public double QueryPrice(string tickerKey)
7: {
8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value.
9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol));
10: }
11: 
12: private double GetCurrentPrice(string tickerKey)
13: {
14: // do code to calculate actual true price. 
15: }
16: }

There are other variations of these two methods which vary in whether a value is provided or a factory delegate, but otherwise they work much the same.

Oddities with the composite Add methods

The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: thread A calls GetOrAdd() and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd() and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary.  Now thread A finishes and goes to get the lock, only to see that the item already exists.  In this case, even though it called the delegate to create the item, it will pitch that value because an item arrived between the time it attempted to create one and the time it attempted to add it. Let’s illustrate: assume this totally contrived example program with a dictionary of char to int, in which we want to store each char and its ordinal (that is, A = 1, B = 2, etc.).  For our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked):

1: public static class Program
2: {
3: private static int _nextNumber = 0;
4: 
5: // the holder of the char to ordinal
6: private static ConcurrentDictionary<char, int> _dictionary
7: = new ConcurrentDictionary<char, int>();
8: 
9: // get the next id value
10: public static int NextId
11: {
12: get { return Interlocked.Increment(ref _nextNumber); }
13: }

Then, we add a method that will perform our insert:

1: public static void Inserter()
2: {
3: for (int i = 0; i < 26; i++)
4: {
5: _dictionary.GetOrAdd((char)('A' + i), key => NextId);
6: }
7: }

Finally, we run our test by starting two tasks to do this work and getting the results…

1: public static void Main()
2: {
3: // 2 tasks attempting to get/insert
4: var tasks = new List<Task>
5: {
6: new Task(Inserter),
7: new Task(Inserter)
8: };
9: 
10: tasks.ForEach(t => t.Start());
11: Task.WaitAll(tasks.ToArray());
12: 
13: foreach (var pair in _dictionary.OrderBy(p => p.Key))
14: {
15: Console.WriteLine(pair.Key + ":" + pair.Value);
16: }
17: }

If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results:

1: A:1
2: B:3
3: C:4
4: D:5
5: E:6
6: F:7
7: G:8
8: H:9
9: I:10
10: J:11
11: K:12
12: L:13
13: M:14
14: N:15
15: O:16
16: P:17
17: Q:18
18: R:19
19: S:20
20: T:21
21: U:22
22: V:23
23: W:24
24: X:25
25: Y:26
26: Z:27

Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won, the 3 was inserted, and the 2 got discarded.  
This is why, on these methods, your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible.

Summary

The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.

    Read the article

  • How to fix some damage from a site hack?

    - by Towhid
    My site had been hacked. I found the vulnerability, fixed it, and removed the shell scripts. But the hacker had uploaded thousands of web pages to my web server. After I removed those pages, I got over 4 thousand "Not Found" pages on my site (all linked from an external free domain and host, which is now removed). Also, hundreds of keywords had been added to my site. After 3 weeks I can still see keywords from the removed pages in my Google Webmaster Tools. I had the 1st result in Google search for certain keywords, but now I am on the 3rd page for the same keywords. 50% of my traffic was from Google, which is now reduced to 6%. How can I fix both the "Not Found" pages problem and the new useless keywords? And will that be enough to get me back to the first result on Google? P.S: 1) Both the vulnerability and the uploaded files are certainly removed. 2) My site is not infected, checked on Google Webmaster and a few other security web scan tools. 3) All files had been uploaded to one directory, so I got something like site.com/hacked/page1.html and site.com/hacked/webpage2.html

    Read the article

  • keybinding issues with xmodmap across synergy

    - by Rick
    I've got two systems I use across Synergy. On the main one I have a normal keyboard whose Caps Lock and Ctrl keys I swap. So I do:

    xmodmap -e 'keycode 66 = Control_L'
    xmodmap -e 'clear lock'
    xmodmap -e 'add Control = Control_L'

    where keycode 66 is my Caps Lock key. The trouble is that I can't get this key to act as a Control key on the other machine I connect to with Synergy. The strange thing is that if I plug a keyboard into that machine and run xev, the Control key there is keycode 37. When I then hit my modified Control key (keycode 66 on the master), it registers as keycode 37 on the remote machine. So according to xev, it should be picked up as a Control keypress. Anyone have any hints on whether Synergy is doing something overly helpful for me?
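    For reference, the same remap can be collected into an ~/.Xmodmap file and loaded with xmodmap ~/.Xmodmap on each machine. A minimal sketch, assuming keycode 66 is the Caps Lock key on both systems (this is an illustration of the poster's commands, not a fix for the Synergy issue):

    ! make Caps Lock act as a left Control key
    ! (assumes keycode 66 is Caps Lock; verify with xev)
    keycode 66 = Control_L
    clear lock
    add Control = Control_L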

    Read the article

  • How to control admission policy in vmware HA?

    - by John
    Simple question: I have 3 hosts running 4.1 Essentials Plus with VMware HA. I tried to create several virtual machines that filled 90% of each server's memory capacity. I know that VMware has really sophisticated memory management within virtual machines, but I do not understand how vCenter can even allow me to power on virtual machines that exceed the critical memory level at which a host failover can still be handled. Is it because the virtual machines are not actually using the memory, so it is still considered free and the virtual machines can be powered on? But what would happen if all VMs were really using their RAM before a host failure - they could not be migrated to the other hosts after the failure. The default behaviour in XenServer is that it automatically calculates the maximum memory level that can be used within the cluster so that a host failure is still protected against. Does VMware do the same thing? The admission policy is enabled. VMware HA is enabled.

    Read the article

  • How Microsoft Lost the API War - by Joel Spolsky

    - by TechTwaddle
    Came across another gem of an article by Joel Spolsky. It's a pretty old article, written in June of 2004, but it has lots of tidbits and I really enjoyed reading it, so much in fact that I read it twice! So hit the link below and give it a read if you haven't already: How Microsoft Lost the API War - Joel Spolsky. Excerpt: "I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it."

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
    I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed, which requires Page_Load to fire, which means the page can't be cached at that point. A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

    We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if(!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.

    if (!IsPostBack)
    {
        ...
        SetCacheSettings();
        ...
    }

    protected void SetCacheSettings()
    {
        Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
        Response.Cache.SetExpires(DateTime.Now.AddHours(1));
        Response.Cache.SetSlidingExpiration(true);
        Response.Cache.SetValidUntilExpires(true);
        Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
    }

    The AddValidationCallback sets up an HttpCacheValidateHandler method called Validate which runs logic when a cached page is requested. The Validate method signature is standard for this method type.

    public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
    {
        string visited = context.Request.QueryString["v"];
        if (visited != null && "1".Equals(visited))
        {
            status = HttpValidationStatus.IgnoreThisRequest; //force a page load
        }
        else
        {
            status = HttpValidationStatus.Valid; //load from cache
        }
    }

    I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page the button click is taking the user to has any data on it or not.

    bool hasData = HasPageBeenVisited(url);
    if (hasData)
    {
        url += VISITED;
    }
    Response.Redirect(url);

    The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited. 
    The reason this logic lives in the "Next" and "Previous" button click event methods is twofold: 1) the Validate method is static, which doesn't allow it to access non-static data such as the data fields for a particular page, and 2) at the time the Validate method runs, the data either has not yet been deserialized from Session state or is not available (a different AppDomain?); anytime I accessed the Session state information from the Validate method, it was always empty.
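    Purely as a hypothetical illustration (the author deliberately omits the real implementation because it is system-dependent), one shape HasPageBeenVisited might take is sketched below, assuming the required field for each destination page is kept in Session state under a known key. The GetRequiredFieldKeyFor helper is an assumed name, not part of the described solution:

    private bool HasPageBeenVisited(string url)
    {
        // hypothetical: map the destination url to the Session key of its required field
        string key = GetRequiredFieldKeyFor(url); // assumed helper, system-specific

        // if the required field holds data, the user has already visited the page
        return Session[key] != null;
    }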

    Read the article
