Search Results

Search found 2309 results on 93 pages for 'caching'.


  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
I am currently on a MediaTemple DV server (Basic) with 512 MB of dedicated RAM; it is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it from day one has been bad, and I could only soothe my server issues with several caching "Band-Aids," but my sites are not as small as they were a year ago either, so the issues have worsened. I have 3 Drupal installs running on separate (Plesk) domains. One of those Drupal installs is a multisite consisting of 5-6 sites, 2 of which bring in actual traffic. The caching "Band-Aids" I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish: it makes all my pages static for anonymous users. Last 30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews.

Issue: a lot of downtime. I am continually checking whether my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache brings them back up, for some time. I have Googled every error message and looked up ways to optimize my DV server, and I am beyond stumped about my next move. Is this server bad? Have I hit an impossibly low restriction such as the 12 MB kernel memory barrier (kmemsize)? Is it on my end — do I need to optimize some more? I have provided as much information as I can below; any help or suggestions will be appreciated.

Common error messages I see in the log:

    [error] (12)Cannot allocate memory: fork: Unable to fork new process
    [error] make_obcallback: could not import mod_python.apache.
      Traceback (most recent call last):
        File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback
        File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache
      ImportError: No module named linecache
    [error] python_handler: no interpreter callback found.
    [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***)
    [alert] Child 8125 returned a Fatal error... Apache is exiting!
    [emerg] (43)Identifier removed: couldn't grab the accept mutex
    [emerg] (22)Invalid argument: couldn't release the accept mutex

cat /proc/user_beancounters:

    Version: 2.5
    uid 41548:
      resource      held     maxheld  barrier   limit       failcnt
      kmemsize      4582652  5306699  12288832  13517715    21105036
      lockedpages   0        0        600       600         0
      privvmpages   38151    42676    229036    249036      0
      shmpages      16274    16274    17237     17237       2
      dummy         0        0        0         0           0
      numproc       43       46       300       300         0
      physpages     27260    29528    0         2147483647  0
      vmguarpages   0        0        131072    2147483647  0
      oomguarpages  27270    29538    131072    2147483647  0
      numtcpsock    21       29       300       300         0
      numflock      8        8        480       528         0
      numpty        1        1        30        30          0
      numsiginfo    0        1        1024      1024        0
      tcpsndbuf     648440   675272   2867477   4096277     1711499
      tcprcvbuf     301620   359716   2867477   4096277     0
      othersockbuf  4472     4472     1433738   2662538     0
      dgramrcvbuf   0        0        1433738   1433738     0
      numothersock  12       12       300       300         0
      dcachesize    0        0        2684271   2764800     0
      numfile       3447     3496     6300      6300        3872
      dummy         0        0        0         0           0
      dummy         0        0        0         0           0
      dummy         0        0        0         0           0
      numiptent     14       14       200       200         0

top (in January the load average was really high, 3-10; I was able to bring it down to where it currently is by giving APC more memory to play around with):

    top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20
    Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie
    Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
    Mem: 916144k total, 156668k used, 759476k free, 0k buffers
    Swap: 0k total, 0k used, 0k free, 0k cached

MySQLTuner (after optimizing every table and repairing any table with overage, I got the fragmented count down to 86):

    [--] Data in MyISAM tables: 285M (Tables: 1105)
    [!!] Total fragmented tables: 86
    [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M)
    [--] Reads / Writes: 79% / 21%
    [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads)
    [!!] Query cache prunes per day: 675307
    [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)

    Read the article

  • Puppet gives SSL error because master is not running?

    - by Daniel Huger
I started with two clean machines this time.

My master is running 12.04:

    Version: 2.7.11-1ubuntu2
    Depends: ruby1.8, puppetmaster-common (= 2.7.11-1ubuntu2)

My client is 10.04:

    Version: 2.6.3-0ubuntu1~lucid1
    Depends: puppet-common (= 2.6.3-0ubuntu1~lucid1), ruby1.8

To set up Puppet I followed this tutorial: http://shapeshed.com/setting-up-puppet-on-ubuntu-10-04/ and, to connect master and client, this one: http://shapeshed.com/connecting-clients-to-a-puppet-master/

The first time I tried to connect the client to the master it failed with an SSL_connect error, so I did rm -rf /etc/puppet/ssl/ to remove all the keys inside the ssl folder. It looked like it worked... BUT:

    client# puppet agent --server puppet --waitforcert 60 --test
    /usr/lib/ruby/1.8/facter/util/resolution.rb:46: warning: Insecure world writable dir /etc/condor in PATH, mode 040777
    /usr/lib/ruby/1.8/puppet/defaults.rb:67: warning: Insecure world writable dir /etc/condor in PATH, mode 040777
    info: Creating a new SSL key for giab10
    warning: peer certificate won't be verified in this SSL session
    info: Caching certificate for ca
    warning: peer certificate won't be verified in this SSL session
    warning: peer certificate won't be verified in this SSL session
    info: Creating a new SSL certificate request for mybox123
    info: Certificate Request fingerprint (md5): XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    warning: peer certificate won't be verified in this SSL session
    warning: peer certificate won't be verified in this SSL session
    warning: peer certificate won't be verified in this SSL session
    warning: peer certificate won't be verified in this SSL session
    info: Caching certificate for mybox123
    err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
    warning: Not using cache on failed catalog

It cached the certificate but then it couldn't retrieve the catalog. Let me stop here... I'm worried I'll mess something up. But let's check the master's status:

     * master is not running

Wow... ???

    master# service puppetmaster start
     * Starting puppet master                [OK]
    master# service puppetmaster status
     * master is not running

I think the time is in sync. Well, we are behind a firewall so the port to sync time is disabled, but I checked with date and the clocks seem okay. What about the master not running — is that the cause? Any help is appreciated. Thanks!

/var/lib/puppet/log/masterhttp.log:

    [2012-06-30 00:13:25] INFO  WEBrick 1.3.1
    [2012-06-30 00:13:25] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
    [2012-06-30 00:13:25] WARN  TCPServer Error: Address already in use - bind(2)
    [2012-06-30 00:19:40] INFO  WEBrick 1.3.1
    [2012-06-30 00:19:40] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
    [2012-06-30 00:19:40] WARN  TCPServer Error: Address already in use - bind(2)
    [2012-06-30 00:28:58] INFO  WEBrick 1.3.1
    [2012-06-30 00:28:58] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
    [2012-06-30 00:28:58] WARN  TCPServer Error: Address already in use - bind(2)
    [2012-06-30 15:31:25] INFO  WEBrick 1.3.1
    [2012-06-30 15:31:25] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
    [2012-06-30 15:31:25] WARN  TCPServer Error: Address already in use - bind(2)

    1 S puppet 5186    1 0 80 0 - 29410 poll_s 15:44 ?     00:00:00 /usr/bin/ruby1.8 /usr/bin/puppet master --masterport=8140
    4 S root   5235 5005 0 80 0 -  2344 pipe_w 15:45 pts/0 00:00:00 grep --color=auto puppet

    kill -9 5186
    puppet master
    service puppetmaster status
     * master is not running

I always get this error, but I always ignored it: http://pastebin.com/exbpArjv What could it mean? Time sync? A package not installed? But then how could we run puppetca in the first place?

    Read the article

  • ProLiant server will not accept new hard disks in RAID 1+0?

    - by Leigh
I have an HP ProLiant DL380 G5 with two logical drives configured in RAID. One logical drive is RAID 1+0 with two 72 GB 10k SAS 1-port drives (spare no. 376597-001). I had one hard disk fail and ordered a replacement. The configuration utility showed an error and would not rebuild the RAID, so I presumed a faulty replacement disk and ordered another one. In the meantime I put the original failed disk back in the server and it started rebuilding. It currently shows OK status; however, in the log I can see hardware errors. The new disk has arrived and again the server will not accept it. I have updated the P400 controller with the latest firmware (7.24), but still no luck. The only difference I can see is that the original drive has firmware 0103 (the same as the drive still in the RAID) and the new one has HPD2. Any advice would be appreciated. Thanks in advance.

Logs from the server:

ctrl all show config

    Smart Array P400 in Slot 1 (sn: PAFGK0P9VWO0UQ)

    array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (68.5 GB, RAID 1, Interim Recovery Mode)
        physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 73.5 GB, OK)
        physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, Failed)

    array B (SAS, Unused Space: 0 MB)
      logicaldrive 2 (558.7 GB, RAID 5, OK)
        physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 300 GB, OK)
        physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 300 GB, OK)
        physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 300 GB, OK)

ctrl all show config detail

    Smart Array P400 in Slot 1
      Bus Interface: PCI
      Slot: 1
      Serial Number: PAFGK0P9VWO0UQ
      Cache Serial Number: PA82C0J9VWL8I7
      RAID 6 (ADG) Status: Disabled
      Controller Status: OK
      Hardware Revision: E
      Firmware Version: 7.24
      Rebuild Priority: Medium
      Expand Priority: Medium
      Surface Scan Delay: 15 secs
      Surface Scan Mode: Idle
      Wait for Cache Room: Disabled
      Surface Analysis Inconsistency Notification: Disabled
      Post Prompt Timeout: 0 secs
      Cache Board Present: True
      Cache Status: OK
      Cache Status Details: A cache error was detected. Run more information.
      Cache Ratio: 100% Read / 0% Write
      Drive Write Cache: Disabled
      Total Cache Size: 256 MB
      Total Cache Memory Available: 208 MB
      No-Battery Write Cache: Disabled
      Battery/Capacitor Count: 0
      SATA NCQ Supported: True

      Array: A
        Interface Type: SAS
        Unused Space: 0 MB
        Status: Failed Physical Drive
        Array Type: Data
        One of the drives on this array have failed or has

        Logical Drive: 1
          Size: 68.5 GB
          Fault Tolerance: RAID 1
          Heads: 255
          Sectors Per Track: 32
          Cylinders: 17594
          Strip Size: 128 KB
          Full Stripe Size: 128 KB
          Status: Interim Recovery Mode
          Caching: Enabled
          Unique Identifier: 600508B10010503956574F305551
          Disk Name: \\.\PhysicalDrive0
          Mount Points: C:\ 68.5 GB
          Logical Drive Label: A0100539PAFGK0P9VWO0UQ0E93
          Mirror Group 0: physicaldrive 2I:1:2 (port 2I:box 1:bay 2, S
          Mirror Group 1: physicaldrive 2I:1:1 (port 2I:box 1:bay 1, S
          Drive Type: Data

        physicaldrive 2I:1:1
          Port: 2I
          Box: 1
          Bay: 1
          Status: OK
          Drive Type: Data
          Drive Interface Type: SAS
          Size: 73.5 GB
          Rotational Speed: 10000
          Firmware Revision: 0103
          Serial Number: B379P8C006RK
          Model: HP DG072A9B7
          PHY Count: 2
          PHY Transfer Rate: Unknown, Unknown

        physicaldrive 2I:1:2
          Port: 2I
          Box: 1
          Bay: 2
          Status: Failed
          Drive Type: Data
          Drive Interface Type: SAS
          Size: 72 GB
          Rotational Speed: 15000
          Firmware Revision: HPD9
          Serial Number: D5A1PCA04SL01244
          Model: HP EH0072FARUA
          PHY Count: 2
          PHY Transfer Rate: Unknown, Unknown

      Array: B
        Interface Type: SAS
        Unused Space: 0 MB
        Status: OK
        Array Type: Data

        Logical Drive: 2
          Size: 558.7 GB
          Fault Tolerance: RAID 5
          Heads: 255
          Sectors Per Track: 32
          Cylinders: 65535
          Strip Size: 64 KB
          Full Stripe Size: 128 KB
          Status: OK
          Caching: Enabled
          Parity Initialization Status: Initialization Co
          Unique Identifier: 600508B10010503956574F305551
          Disk Name: \\.\PhysicalDrive1
          Mount Points: E:\ 558.7 GB
          Logical Drive Label: AF14FD12PAFGK0P9VWO0UQD007
          Drive Type: Data

        physicaldrive 1I:1:5
          Port: 1I
          Box: 1
          Bay: 5
          Status: OK
          Drive Type: Data
          Drive Interface Type: SAS
          Size: 300 GB
          Rotational Speed: 10000
          Firmware Revision: HPD4
          Serial Number: 3SE07QH300009923X1X3
          Model: HP DG0300BALVP
          Current Temperature (C): 32
          Maximum Temperature (C): 45
          PHY Count: 2
          PHY Transfer Rate: Unknown, Unknown

        physicaldrive 2I:1:3
          Port: 2I
          Box: 1
          Bay: 3
          Status: OK
          Drive Type: Data
          Drive Interface Type: SAS
          Size: 300 GB
          Rotational Speed: 10000
          Firmware Revision: HPD4
          Serial Number: 3SE0AHVH00009924P8F3
          Model: HP DG0300BALVP
          Current Temperature (C): 34
          Maximum Temperature (C): 47
          PHY Count: 2
          PHY Transfer Rate: Unknown, Unknown

        physicaldrive 2I:1:4
          Port: 2I
          Box: 1
          Bay: 4
          Status: OK
          Drive Type: Data
          Drive Interface Type: SAS
          Size: 300 GB
          Rotational Speed: 10000
          Firmware Revision: HPD4
          Serial Number: 3SE08NAK00009924KWD6
          Model: HP DG0300BALVP
          Current Temperature (C): 35
          Maximum Temperature (C): 47
          PHY Count: 2
          PHY Transfer Rate: Unknown, Unknown

    Read the article

  • Amplified reflected attack on DNS

    - by Mike Janson
The term is new to me, so I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know if your servers can be used as a victim? This is a configuration issue, right?

My named.conf file:

    include "/etc/rndc.key";

    controls {
        inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
    };

    options {
        /* make named use port 53 for the source of all queries, to allow
         * firewalls to block all ports except 53:
         */
        // query-source port 53;
        /* We no longer enable this by default as the dns poison exploit has
           forced many providers to open up their firewalls a bit */

        // Put files that named is allowed to write in the data/ directory:
        directory "/var/named"; // the default
        pid-file "/var/run/named/named.pid";
        dump-file "data/cache_dump.db";
        statistics-file "data/named_stats.txt";
        /* memstatistics-file "data/named_mem_stats.txt"; */
        allow-transfer {"none";};
    };

    logging {
        /* If you want to enable debugging, eg. using the 'rndc trace' command,
         * named will try to write the 'named.run' file in the $directory (/var/named).
         * By default, SELinux policy does not allow named to modify the /var/named directory,
         * so put the default debug log file in data/ :
         */
        channel default_debug {
            file "data/named.run";
            severity dynamic;
        };
    };

    view "localhost_resolver" {
        /* This view sets up named to be a localhost resolver ( caching only nameserver ).
         * If all you want is a caching-only nameserver, then you need only define this view:
         */
        match-clients { 127.0.0.0/24; };
        match-destinations { localhost; };
        recursion yes;

        zone "." IN {
            type hint;
            file "/var/named/named.ca";
        };

        /* these are zones that contain definitions for all the localhost
         * names and addresses, as recommended in RFC1912 - these names should
         * ONLY be served to localhost clients:
         */
        include "/var/named/named.rfc1912.zones";
    };

    view "internal" {
        /* This view will contain zones you want to serve only to "internal" clients
           that connect via your directly attached LAN interfaces - "localnets" . */
        match-clients { localnets; };
        match-destinations { localnets; };
        recursion yes;

        zone "." IN {
            type hint;
            file "/var/named/named.ca";
        };

        // include "/var/named/named.rfc1912.zones";
        // you should not serve your rfc1912 names to non-localhost clients.

        // These are your "authoritative" internal zones, and would probably
        // also be included in the "localhost_resolver" view above :

    Read the article

  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
Ran into a nasty issue last week when all of a sudden many of my old applications that use WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to:

    WinInet Error 2: The system cannot find the file specified

Now this error can pop up in many legitimate scenarios with WinInet, such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer's options) is misconfigured. The error typically means that the server in question cannot be found or, more specifically, that an Internet connection can't be established.

In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library), all Adobe AIR applications (which apparently use WinInet for their basic HTTP stack), and a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters – worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working.

So the problem seemed to be isolated to those 'classic' applications using WinInet. WinInet's base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options, find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Toggling auto-configuration on and off can also often fix 'real' configuration errors. This time, however, that wasn't the problem – nothing in the LAN configuration was set (all default). I also played with the automatic detection of settings, which had no effect.

I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy.

And the Culprit is: Internet Explorer's Work Offline Option

The culprit in this situation was Internet Explorer, which at some point, unknown to me, switched into Offline Mode and was then shut down. When this Offline mode is checked while IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that they're running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously, you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients).

What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn't cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely, if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode.

I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. The problem is that if you're not using IE all the time (as I do – rarely and just for testing, usually for a few commonly used URLs) and you leave it in offline mode when you exit, offline mode stays set, which results in the above head scratcher. Ack. This isn't new behavior in IE 9, BTW – this behavior has always been there, but I think what's different is that IE now automatically switches between online and offline modes without notifying you at all, so it's hard to tell when you are offline.

Fixing the Issue in your Code

If you have an application that is using WinInet, there's a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it's been broken for some older releases (I can't confirm how far back, though) – lots of posts seem to suggest the flag doesn't work. However, in IE 9 at least it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the HTTP session handle. In FoxPro code I use:

    DECLARE INTEGER InternetSetOption ;
       IN WININET.DLL ;
       INTEGER HINTERNET,;
       INTEGER dwFlags,;
       INTEGER @dwValue,;
       INTEGER cbSize

    lnOptionValue = 1   && BOOL TRUE pass by reference

    *** Set needed SSL flags
    lnResult=InternetSetOption(this.hHttpSession,;
       INTERNET_OPTION_IGNORE_OFFLINE ,;  && 77
       @lnOptionValue ,4)

    DECLARE INTEGER HttpOpenRequest ;
       IN WININET.DLL ;
       INTEGER hHTTPHandle,;
       STRING lpzReqMethod,;
       STRING lpzPage,;
       STRING lpzVersion,;
       STRING lpzReferer,;
       STRING lpzAcceptTypes,;
       INTEGER dwFlags,;
       INTEGER dwContextw

    hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;
       lcVerb,;
       tcPage,;
       NULL,NULL,NULL,;
       INTERNET_FLAG_RELOAD + ;
       IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;
       this.nHTTPServiceFlags,0)

And this fixes the issue, at least for IE 9. In my FoxPro wwHttp class I now call this by default so I never get bitten by this again (a C# version of the same call is sketched below). This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it's just too unpredictable in handling errors, and the last thing you want is getting unpredictably stale data. Problem solved, but this behavior is – well, ugly. But then that's to be expected from an API that's based on Internet Explorer, eh?

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTTP, Windows
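For code on the managed side, the same option can be applied through P/Invoke before the request handles are created. The snippet below is a minimal C# sketch of the same idea, not code from this post: the constant value 77 for INTERNET_OPTION_IGNORE_OFFLINE is the one noted in the FoxPro listing above, and hSession stands for whatever handle your own InternetOpen() call returned.

    using System;
    using System.Runtime.InteropServices;

    static class WinInetOfflineFix
    {
        // WinInet option that tells the session to ignore IE's global "Work Offline" flag.
        // The numeric value (77) matches the INTERNET_OPTION_IGNORE_OFFLINE constant above.
        const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

        [DllImport("wininet.dll", SetLastError = true)]
        static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                             ref int lpBuffer, int dwBufferLength);

        // Call this with the session handle returned by InternetOpen(),
        // before HttpOpenRequest() is called.
        public static bool IgnoreOfflineMode(IntPtr hSession)
        {
            int enable = 1; // BOOL TRUE, passed by reference
            return InternetSetOption(hSession, INTERNET_OPTION_IGNORE_OFFLINE,
                                     ref enable, sizeof(int));
        }
    }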

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers, that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    /// <returns></returns>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT:
    /// You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("gzip"))
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "gzip");
            }
            else
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "deflate");
            }
        }

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");
    }

The first method checks whether the client sending the request includes an Accept-Encoding header for either gzip or deflate, and if it does it returns true. The second function uses IsGZipSupported() to decide whether it should encode content and uses a Response.Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

    public ActionResult List(string keyword=null, int category=0)
    {
        WebUtils.GZipEncodePage();
        …
    }

to encode my content. And that works just fine.

The proper way: Create an ActionFilterAttribute

However, in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:

    /// <summary>
    /// Attribute that can be added to controller methods to force content
    /// to be GZip encoded if the client supports it
    /// </summary>
    public class CompressContentAttribute : ActionFilterAttribute
    {
        /// <summary>
        /// Override to compress the content that is generated by
        /// an action method.
        /// </summary>
        /// <param name="filterContext"></param>
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            GZipEncodePage();
        }

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("gzip"))
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                            System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                            System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
            }

            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Content-Encoding");
        }
    }

It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the controller action is fired. With the CompressContentAttribute created, it can now be applied to either the controller as a whole:

    [CompressContent]
    public class ClassifiedsController : ClassifiedsBaseController
    { … }

or to one of the action methods:

    [CompressContent]
    public ActionResult List(string keyword=null, int category=0)
    { … }

The former applies compression to every action method, while the latter is selective and only applies it to the individual action method (a global registration option is sketched at the end of this post). Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact, you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

Compression Caveats

HTTP compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it – especially if your content might get transformed or redirected inside of ASP.NET. A good example is when an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but they do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output GZip encoded.
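One more usage note that is not in the original post: if compression should apply to every response rather than per controller or per action, the attribute can also be registered as a global action filter (MVC 3 and later). GlobalFilterCollection and GlobalFilters.Filters are the stock MVC registration APIs; CompressContentAttribute is the class defined above, and the caveats just described then apply site-wide. A minimal sketch:

    using System.Web.Mvc;

    public class FilterConfig
    {
        public static void RegisterGlobalFilters(GlobalFilterCollection filters)
        {
            filters.Add(new HandleErrorAttribute());

            // Applies the CompressContentAttribute from this post to every action.
            // Every cached, transformed or error response now flows through the
            // compression filter too, so keep the caveats above in mind.
            filters.Add(new CompressContentAttribute());
        }
    }

    // Wired up once at startup, typically from Application_Start() in Global.asax:
    //   FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);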
None of these caveats are show stoppers, but you have to be aware of the issues.

Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production ready. Following is a small list I have compiled from experience, Sitecore documentation, communication with Sitecore engineers, etc. It is not supposed to be technically complete and might not fit all environments.

Simple configurations that can make a difference:

1) Configure Sitecore caches. This is the most straightforward and sure way of increasing the performance of your website. Data and item cache sizes (/databases/database/ [id=web]) should be configured as needed. You may start with a smaller number and tune them as needed.

    <cacheSizes hint="setting">
      <data>300MB</data>
      <items>300MB</items>
      <paths>5MB</paths>
      <standardValues>5MB</standardValues>
    </cacheSizes>

Tune the html, registry, etc. cache sizes for your website.

    <cacheSizes>
      <sites>
        <website>
          <html>300MB</html>
          <registry>1MB</registry>
          <viewState>10MB</viewState>
          <xsl>5MB</xsl>
        </website>
      </sites>
    </cacheSizes>

Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config:

    <configuration>
      <cacheSize>300MB</cacheSize>
      <!--preload items that use this template-->
      <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template>
      <!--preload this item-->
      <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</item>
      <!--preload children of this item-->
      <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children>
    </configuration>

Break your page into sublayouts so you can cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf

2) Disable Analytics for the shell site.

    <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />

3) Increase the check interval for the MemoryMonitorHook so it doesn't run every 5 seconds (the default).

    <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel">
      <param desc="Threshold">800MB</param>
      <param desc="Check interval">00:05:00</param>
      <param desc="Minimum time between log entries">00:01:00</param>
      <ClearCaches>false</ClearCaches>
      <GarbageCollect>false</GarbageCollect>
      <AdjustLoadFactor>false</AdjustLoadFactor>
    </hook>

4) Set Analytics.PerformLookup (Sitecore.Analytics.config) to false if your environment doesn't have access to the internet or you don't intend to use reverse DNS lookup.

    <setting name="Analytics.PerformLookup" value="false" />

5) Set the value of the "Media.MediaLinkPrefix" setting to "-/media":

    <setting name="Media.MediaLinkPrefix" value="-/media" />

Add the following line to the customHandlers section:

    <customHandlers>
      <handler trigger="-/media/" handler="sitecore_media.ashx" />
      <handler trigger="~/media/" handler="sitecore_media.ashx" />
      <handler trigger="~/api/" handler="sitecore_api.ashx" />
      <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" />
      <handler trigger="~/icon/" handler="sitecore_icon.ashx" />
      <handler trigger="~/feed/" handler="sitecore_feed.ashx" />
    </customHandlers>

Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/

6) Performance counters should be disabled in production if they are not being monitored.

    <setting name="Counters.Enabled" value="false" />

7) Disable Item/Memory/Timing threshold warnings. Due to the nature of this component, it brings no value in production.

    <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />-->
    <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel">
      <TimingThreshold desc="Milliseconds">1000</TimingThreshold>
      <ItemThreshold desc="Item count">1000</ItemThreshold>
      <MemoryThreshold desc="KB">10000</MemoryThreshold>
    </processor>-->

8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments.

    <setting name="ContentEditor.RenderCollapsedSections" value="false" />

9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx

10) If you get errors in the log files similar to:

    WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System')
    Exception: System.UnauthorizedAccessException
    Message: Access to the registry key 'Global' is denied.

make sure the application pool user is a member of the system "Performance Monitor Users" group on the server.

11) Disable WebDAV configurations on the CD server if WebDAV is not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html

12) Change the log4net settings to only log errors on content delivery environments to avoid unnecessary logging.

    <root>
      <priority value="ERROR" />
      <appender-ref ref="LogFileAppender" />
    </root>

13) Disable Analytics for any content item that doesn't add value, for example a page that redirects to another page.

14) When using web user controls, avoid registering them on the page the plain ASP.NET way:

    <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %>

Use the Sublayout web control instead – this way Sitecore caching can be leveraged:

    <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />

15) Avoid querying for all children recursively when all items are direct children.

    // Avoid a recursive descendant query:
    Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*");
    // Use the item (and its direct children) instead:
    Sitecore.Context.Database.GetItem("/sitecore/content/Home");

16) On IIS, enable static & dynamic content compression on CM and CD. More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx

17) Enable HTTP keep-alive and content expiration in IIS.

18) Use GUIDs when accessing items and fields instead of names or paths. It's faster and won't break your code when things get moved or renamed.

    Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}")
    Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"];
    // is better than
    Context.Database.GetItem("/Home/MyItem")
    Context.Item.Fields["FieldName"]

Hope this helps.

    Read the article

  • Integrating Flickr with an ASP.NET application

    - by sreejukg
Flickr is the popular photo management and sharing application offered by Yahoo. The services from Flickr allow you to store and share photos and videos online. Flickr offers strong API support for almost all the services it provides. Using this API, developers can integrate photos into their public websites. Since 2005, developers have collaborated on top of Flickr's APIs to build fun, creative, and gorgeous experiences around photos that extend beyond Flickr. In this article I am going to demonstrate how easily you can bring the photos stored on Flickr to your website.

Let me explain the scenario this article is trying to address. I have a Flickr account where I upload photos and share them in the many ways Flickr offers. Now I have a public website; instead of re-uploading the photos to the public website, I want to show them from Flickr. I also need complete control over which photos to display. So I went and read the Flickr documentation, and there is API support ready to address my scenario (and more…).

Flickr API for ASP.Net

To integrate Flickr with ASP.NET applications, there is a library available on CodePlex. You can find it here: http://flickrnet.codeplex.com/ Visit the URL and download the latest version. The download is a Zip file; when you unzip it you will get a number of DLLs. Since I am going to build an ASP.NET application, I need FlickrNet.dll. See the screenshot of all the DLLs; there is also a help file (.chm) available in the download for your reference. Once you have the DLL, you can use the Flickr API from your website. I assume you have a Flickr account and are familiar with Flickr services.

Arrange your photos using Sets in Flickr

In Flickr, you can define sets and add your uploaded photos to them. You can compare a set to a photo album. A set is a logical collection of photos, which is an excellent option for categorizing your photos. Typically you will have a number of sets, each set having a few photos. You can write an application that brings photos from sets to your website. For the purpose of this article I have already created a set in Flickr and added some photos to it.

Once you are logged in to Flickr, you can see Sets under the menu. The Sets page lists all the sets you have created. As you will notice, these are sample images I uploaded just to test the functionality; I wish I could take better photos, so please bear with me. I have created two photo sets named "Blue Album" and "Red Album". Clicking on a set's image takes you to the corresponding set page. The set "Red Album" contains 4 photos, and the set has a unique ID (highlighted in the URL). You can simply retrieve the photos for that set ID from your application. In this article I am going to retrieve the images from the Red Album in my ASP.NET page. For that, I first need to set up the Flickr API for my usage.

Configure Flickr API Key

As I mentioned, we are going to use the Flickr API to retrieve the photos stored in Flickr. In order to get access to the Flickr API, you need an API key. To create an API key, navigate to http://www.flickr.com/services/apps/create/ and click on the "Request an API key" link. Now you need to tell Flickr whether your application is commercial or non-commercial; I have selected a non-commercial key. Next you need to enter certain information about your application. Once you enter the details, click on the Submit button and Flickr will create the API key for your application.

Generating a non-commercial API key is very easy; in a couple of steps the key is generated and you can use it in your application immediately.

ASP.Net application for retrieving photos

Now we need to write an ASP.NET application that displays pictures from Flickr. Create an empty web application (I named it FlickerIntegration) and add a reference to FlickrNet.dll. Add a web form page to the application where you will retrieve and display photos (I have named it Gallery.aspx). After doing all this, the Solution Explorer will look similar to the following.

I have used the code below in the Gallery.aspx page (a sketch of the full code-behind appears at the end of this section), and the output for that code follows it. I am going to explain the code line by line here.

First it adds a reference to the FlickrNet namespace:

    using FlickrNet;

Then create a Flickr object by using your API key:

    Flickr f = new Flickr("<yourAPIKey>");

Now, when you retrieve photos, you can decide which fields you need to retrieve from Flickr. Every photo in Flickr carries a lot of information, and retrieving all of it will affect performance. For demonstration purposes, I have retrieved all the available fields as follows:

    PhotoSearchExtras.All

But if you want to specify the fields, you can use the logical OR operator (|). For example, the following statement will retrieve the owner name and the date taken:

    PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken;

Then retrieve all the photos from a photo set using the PhotosetsGetPhotos method. I have passed the PhotoSearchExtras object created earlier:

    PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo);

The PhotosetsGetPhotos method will return a collection of Photo objects. You can just navigate through the collection using a foreach statement:

    foreach (Photo p in photos)
    {
        // access each photo's properties
    }

The Photo class has a lot of properties that map to the properties from Flickr. The .chm documentation that comes along with the CodePlex download is a great asset for understanding the fields. In the above code I just used the following:

    p.LargeUrl – retrieves the large image URL for the photo.
    p.ThumbnailUrl – retrieves the thumbnail URL for the photo.
    p.Title – retrieves the title of the photo.
    p.DateUploaded – retrieves the date of upload.

Visual Studio IntelliSense will give you all the properties, so it is easy to find the right ones you are looking for; most of them are self-explanatory. In the above code, I just pushed the photos to the page. In a real application you can use the retrieved photos along with jQuery libraries to create animated photo galleries, slideshows, etc.

Configuration and Troubleshooting

If you get an access denied error while executing the code, you need to disable caching in the Flickr API. FlickrNet caches the photos to your local disk when they are retrieved. You can specify a cache folder, where the application needs write permission. You can specify the cache folder in code as follows:

    Flickr.CacheLocation = Server.MapPath("./FlickerCache/");

If the application doesn't have write permission to the cache folder, it will throw an access denied error. If you cannot give write permission to the cache folder, then you must disable caching. You can do this from code as follows:

    Flickr.CacheDisabled = true;

Disabling the cache will have an impact on performance. Take care!
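Here is the minimal sketch of the Gallery.aspx code-behind promised above, assembled from the snippets just explained. It is only an approximation rather than the author's original listing: the page class name, the Response.Write-based markup, the use of PhotoSearchExtras.All, and the assumption that DateUploaded is a DateTime are mine; the set ID is the "Red Album" ID used in the article and the API key is a placeholder.

    using System;
    using System.Web;
    using FlickrNet;

    public partial class Gallery : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Create the Flickr client with your API key (placeholder).
            Flickr f = new Flickr("<yourAPIKey>");

            // Ask for all extra fields; combine PhotoSearchExtras flags with | to narrow this down.
            PhotoSearchExtras extraInfo = PhotoSearchExtras.All;

            // Retrieve the photos in the "Red Album" set by its ID.
            PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo);

            // Push a thumbnail, title and upload date for each photo onto the page.
            foreach (Photo p in photos)
            {
                Response.Write("<a href=\"" + p.LargeUrl + "\">");
                Response.Write("<img src=\"" + p.ThumbnailUrl + "\" alt=\"" +
                               HttpUtility.HtmlEncode(p.Title) + "\" /></a>");
                Response.Write("<p>" + HttpUtility.HtmlEncode(p.Title) + " - " +
                               p.DateUploaded.ToShortDateString() + "</p>");
            }
        }
    }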
You can also define the Flickr settings in the web.config file; you can find the documentation here: http://flickrnet.codeplex.com/wikipage?title=ExampleConfigFile&ProjectName=flickrnet

Flickr is a great place for storing and sharing photos, and the API access allows developers to integrate seamlessly with the photos uploaded on Flickr.

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to enable the CDN for the static content of his website, which had just been published on Windows Azure. The approach would be:

    1. Move the static content – the images, CSS files, etc. – into blob storage.
    2. Enable the CDN on his storage account.
    3. Change the URLs of those static files to the CDN URL.

I think these are the common steps when using the CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been published, as announced on the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account, but for a hosted service as well. With this new feature, the steps I mentioned above become a lot simpler.

Enable CDN for Hosted Service

To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left-hand side labeled "CDN", where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed in my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button at the top.

In this dialog we can select the subscription and the storage account, or the hosted service, we want the CDN enabled for. If we selected the hosted service, as I did in the image above, the "Source URL for the CDN endpoint" will be shown automatically. This means the Windows Azure platform will treat all content under the "/cdn" folder as CDN-enabled; we cannot change this value at the moment. The three checkboxes next to the URL are:

    Enable CDN: Enable or disable the CDN.
    HTTPS: If we need to use HTTPS connections, check it.
    Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it.

Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN should be available within 60 minutes, as Microsoft mentions. My experience is that after about 15 minutes the CDN could be used, and the CDN URL appears in the portal as well.

Put the Content in CDN in Hosted Service

Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. As we saw when we created the CDN above, the source URL of the CDN endpoint maps to the "/cdn" folder, so in Visual Studio we create a folder under the website named "cdn" and put some static files there. All these files will then be cached by the CDN if we use the CDN endpoint.

The CDN of the hosted service can also cache a kind of "dynamic" result when the Query String feature is enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL says it's under the "/cdn" folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    namespace MvcWebRole1.Controllers
    {
        public class CdnController : Controller
        {
            //
            // GET: /Cdn/

            public ActionResult GetNumber(int number)
            {
                return View(number);
            }
        }
    }

And we add a view to display the number, which is super simple:

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        GetNumber
    </asp:Content>

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>The number is: <%: Model.ToString() %></h2>
    </asp:Content>

Since this action is under the CdnController, the URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached by its query string. So if I use the CDN URL http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check whether there's any content cached with the key "GetNumber?number=2". If yes, the CDN returns the content directly; otherwise it connects to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, sends the result back to the browser, and caches it in the CDN. Note, though, that the query string is treated as a plain string when used as the key of a CDN element. This means the URLs below would be cached as 2 separate elements in the CDN:

    http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
    http://az25311.vo.msecnd.net/GetNumber?page=1&number=2

The final step is to upload the project to Azure.

Test the Hosted Service CDN

After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's try the sample.htm and c_great_wall.jpg static files. We can also request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it is shown very quickly, since the content comes from the CDN without any MVC server-side processing. The style of this page is missing, though: the CSS file was not included in the "/cdn" folder, so the page cannot retrieve it from the CDN URL (a small URL helper sketch follows at the end of this post).

Summary

In this post I introduced the new CDN feature in Windows Azure that came with the release of Windows Azure SDK 1.4 and the new Developer Portal. With the CDN for a hosted service we can just put the static resources under a "/cdn" folder so that the CDN can cache them automatically; there is no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache parts of a web page by combining user controls and the CDN. For example, we can cache the log-on user control in the master page so that the log-on part loads super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
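The missing stylesheet at the end of the walkthrough illustrates the one rule to remember: a file is only reachable through the endpoint if it lives under the site's /cdn folder, and it must then be referenced by the CDN host name rather than the hosted service URL. A tiny helper like the hypothetical one below keeps that substitution in one place; the endpoint name is the az25311.vo.msecnd.net endpoint from this post, and the class itself is not part of the original code.

    // Hypothetical helper for building CDN URLs for files deployed under the /cdn folder.
    public static class CdnUrl
    {
        // CDN endpoint created in the portal for this hosted service (from the post).
        // A request for <Endpoint>/x/y is served from /cdn/x/y on the hosted service.
        private const string Endpoint = "http://az25311.vo.msecnd.net";

        public static string For(string relativePath)
        {
            return Endpoint + "/" + relativePath.TrimStart('/');
        }
    }

    // Usage in a view, assuming the stylesheet was deployed to /cdn/Content/Site.css:
    //   <link href="<%: CdnUrl.For("Content/Site.css") %>" rel="stylesheet" type="text/css" />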

    Read the article

  • CodePlex Daily Summary for Saturday, March 27, 2010

New Projects

Alter gear SQL index Management: SQL Index management displays a list of indexes available for the chosen database and allows you to select an individual / group of indexes to be r...
ASP League Ladder System: An ASP ladder / league system for online gaming leagues or real life leagues also.
Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator is a software suite to promote computer aided strategy planning. Sports teams can visualize their strategy usin...
Boo syntax highlighting for Visual Studio 2010: Simple syntax highlighting VSX add-in for the Boo language in Visual Studio 2010.
easySan: easySan for simple membership management in the BRK.
FsUnit: FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework.
Laughing Dog XNA Framework: Laughing Dog is a simple to use, component based 2D framework for XNA game development. At present it is very early in development and as such is f...
miniTodo: A TODO app thrown together in WPF as MVVM practice; not meant for real use.
My Common Library on .NET with CSharp: My Common Library on .NET with CSharp; it includes database access, string encryption, data caching, StringUtility. Thank you for your view.
Native code wrapping using c# : fsutil sparse commands: Ever thought about creating HUGE FILES for future use but felt bad for the wasted memory? Well, SPARSE FILES are the ANSWER! This FSUTIL SPARSE CO...
Open SOA Platform: A centralized system for administering applications through a SOA Enterprise Service Bus: Runtime environment (PROD, DEV, ...), application and s...
P-DBMS: Network and Database Project
PraiseSight: PraiseSight is supposed to become a practical tool for churches to catalog and present their songs, lyrics and presentations on a beamer. The soluti...
Pretty Good Frontend: Pretty Good Frontend is a sample frontend for ConfigMgr (SCCM) 2007 and MDT 2010 Zero Touch.
S3Appender (Appender for Log4Net that Uses Amazon S3 For Storing Log Files): The S3Appender is a log4net appender that stores log events in either a MemoryStream or FileStream and sends them to S3 based on time intervals and...
sEmit: sEmit (sms emitter) is an application written in C# which was built to send text messages. The project was founded in May 2009 by cansik. It works ...
Silverlight RIA Tools: A tool set that generates full RIA Solutions in Silverlight
thommo cannon: Cannon for shooting down Thommos
Tianjin Polytechnic University Online Judge: Online Judge System built on Microsoft technologies. Vision & Scope: a distributed OJ solution on Windows and Cloud. Technologies used or planned...
Tinare: Tinare is a byte encryption and decryption algorithm. The input key is a string password.
TinyPlug: Small Plugin Manager, written in C#. Allows a project to define supported interfaces, and at runtime add plugins which support (inherit) these in...
Utility niconv helps to convert text from one encoding to another: .NET implementation of the GNU iconv console converter utility. The niconv program converts text from one encoding to another encoding. In the future r...
WareFeed - Software Business Analytics: WareFeed is a simple but effective Software Business Analytics tool written in PHP and compatible with other languages such as .NET, Java or Python. It...
Y36API1: Semester project for Y36API

New Releases

Alter gear SQL index Management: Setup 1.0.0: setup for first alpha release
ASP League Ladder System: ASPLeagueRelease_0_4_1: Release v 0.41
Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator: Version 1.0 Installer
AutoAudit: AutoAudit 1.10e: Version 1.10e will be the final iteration of version 1 development. Version 2 will begin adding switches and options. Please email your suggestio...
Boo syntax highlighting for Visual Studio 2010: Boo syntax VS 2010 - alpha: First release. TODO: Multiline comments!
Chargify.NET: Chargify.NET 0.6: Updated library, using Metered Components and updated Product information.
Composer: V1.0.326.1000 Alpha: Initial Alpha release. Should be stable, with minor issues.
CoNatural Components: CoNatural Components 1.6: Code fixes: Created helper classes to generate source code for type mapper/materializer. Fixed issue in optimized type materializer when loading ...
CRM External View: 1.2: New features in the v1.2 release: password protected views (no more using the Web Data Access role from v1), filtering capabilities, caching for performan...
Designit Video Embed Package: Release 1.1.0 beta1: You can now either have the video embedded directly in the template or have a preview in the template that opens the video in a lightbox window.
FsUnit: FsUnit 0.9.0 for NUnit: This release is for F# 2.0 and NUnit 2.5+.
Laughing Dog XNA Framework: Laughing Dog 0.0.1: Laughing Dog - Alpha - v 0.0.1. First released version of the Laughing Dog framework.
LiveUpload to Facebook: LiveUpload to Facebook 3.2: Version 3.2. Become a fan on Facebook! Features: Quickly and easily upload your photos and videos to Facebook, including any people tags added in Win...
MapWindow6: MapWindow 6.0 msi March 26: This version adds the Join feature for creating a new "featureset" with attributes that are joined with attributes from an Excel data label named 'D...
Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.2: This edition supports: newer and older editions of Birdstep Technology's EasyConnect, HUAWEI Mobile Partner, MWConn, user defined location for s...
Multiplayer Quiz: Release 1_6_351_0: A beta release of the next version. Please leave any errors in discussions or comments.
Native code wrapping using c# : fsutil sparse commands: Fsutil sparse file native code - c sharp wrapper: Project Description: A C# code wrapping native code - sparse files. The code is about SPARSE files - the ability to create huge files (for future us...
Nice Libraries: 1.30 build 50325.01: Release 1.30 build 50325.01
Pretty Good Frontend: Pretty Good Frontend binaries v1.0: This is the first public release of the Pretty Good Frontend binaries
Pylor: Pylor 0.1 alpha: This is the very first published version. I hope I can put up a sample project soon.
Quick Performance Monitor: Version 1.1 refresh: There was a typo or two in the sample batch file. Corrected now.
Rapidshare Episode Downloader: RED v0.8.3: 0.8.1 introduced the ability to advance to the next episode. In 0.8.2 a bug was found that if the episode number is less than 10, then the preceding 0...
RapidWebDev - .NET Enterprise Software Development Infrastructure: RapidWebDev 1.52: RapidWebDev is an infrastructure that helps to develop enterprise software solutions in Microsoft .NET easily and productively. This is the release vers...
thommo cannon: game: game
thommo cannon: setup: setup
thommo cannon: test: test
Tinare: Tinare DLL: Tinare DLL is a dynamic-link library written in C# which provides the functions to encrypt and decrypt a byte stream with Tinare.
WeatherBar: WeatherBar 2.1 [No Installation]: Minor changes to release 2.0 (http://weatherbar.codeplex.com/releases/view/42490). Fixed the bug that caused an exception to be thrown if the user...

Most Popular Projects

MetaSharp
Rawr
WBFS Manager
ASP.NET Ajax Library
Silverlight Toolkit
Microsoft SQL Server Product Samples: Database
AJAX Control Toolkit
LiveUpload to Facebook
Windows Presentation Foundation (WPF)
ASP.NET

Most Active Projects

Rawr
jQuery Library for SharePoint Web Services
BlogEngine.NET
Microsoft Biology Foundation
Farseer Physics Engine
patterns & practices: Composite WPF and Silverlight
LINQ to Twitter
Table2Class
Fluent Ribbon Control Suite
NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Oracle Expands Sun Blade Portfolio for Cloud and Highly Virtualized Environments

    - by Ferhat Hatay
    Oracle announced the expansion of its Sun Blade portfolio for cloud and highly virtualized environments, delivering powerful performance and simplified management as tightly integrated systems. Along with the SPARC T3-1B blade server, the Oracle VM blade cluster reference configuration and Oracle's optimized solution for Oracle WebLogic Suite, Oracle introduced the dual-node Sun Blade X6275 M2 server module with some impressive benchmark results.

    Benchmarks on the Sun Blade X6275 M2 server module demonstrate the outstanding performance characteristics critical for running varied commercial applications used in cloud and highly virtualized environments. These include best-in-class SPEC CPU2006 results with the Intel Xeon processor 5600 series, six Fluent world records and 1.8 times the price-performance of the IBM Power 755 running NAMD, a prominent bio-informatics workload.

    Benchmarks for the Sun Blade X6275 M2 server module

    SPEC CPU2006
    The Sun Blade X6275 M2 server module demonstrated the best SPECint_rate2006 result of all published results using the Intel Xeon processor 5600 series, with a score of 679. This result is 97% better than the HP BL460c G7 blade, 80% better than the IBM HS22V blade, and 79% better than the Dell M710 blade. It demonstrates the density advantage of Oracle's new server module for space-constrained data centers.

    Sun Blade X6275 M2 (2 nodes, Intel Xeon X5670 2.93 GHz) - 679 SPECint_rate2006; HP ProLiant BL460c G7 (2.93 GHz, Intel Xeon X5670) - 347 SPECint_rate2006; IBM BladeCenter HS22V (Intel Xeon X5680) - 377 SPECint_rate2006; Dell PowerEdge M710 (Intel Xeon X5680, 3.33 GHz) - 380 SPECint_rate2006. SPEC, SPECint and SPECfp are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/24/2010 and this report. For more specifics about these results, please see http://blogs.sun.com/BestPerf

    Fluent
    The Sun Blade X6275 M2 server module produced world-record results on each of the six standard cases in the current "FLUENT 12" benchmark test suite at 8-, 12-, 24-, 32-, 64- and 96-core configurations. These results beat the most recent QLogic score with IBM DX 360 M series platforms and QLogic "Truescale" interconnects. Results on the sedan_4m test case on the Sun Blade X6275 M2 server module are 23% better than the HP C7000 system, and 20% better than the IBM DX 360 M2; Dell has not posted a result for this test case. Results can be found at the FLUENT website.

    ANSYS's FLUENT software solves fluid flow problems, and is based on a numerical technique called computational fluid dynamics (CFD), which is used in the automotive, aerospace, and consumer products industries. The FLUENT 12 benchmark test suite consists of seven models that are well suited for multi-node clustered environments and representative of modern engineering CFD clusters. Vendors benchmark their systems with the principal objective of providing comparative performance information for FLUENT software that, among other things, depends on compilers, optimization, interconnect, and the performance characteristics of the hardware. FLUENT application performance is representative of other commercial applications that require memory and CPU resources to be available in a scalable cluster-ready format. The FLUENT benchmark has six conventional test cases (eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_14m, truck_poly_14m) at various core counts.

    All information on the FLUENT website (http://www.fluent.com) is copyright 1995-2010 by ANSYS Inc. Results as of November 24, 2010. For more specifics about these results, please see http://blogs.sun.com/BestPerf

    NAMD
    Results on the Sun Blade X6275 M2 server module running NAMD (a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems) show up to 1.8x better price/performance than IBM's Power 7-based system. For space-constrained environments, the ultra-dense Sun Blade X6275 M2 server module provides 1.7x better price/performance per rack unit than IBM's system.

    IBM Power 755 4-way cluster (16U). Total price for cluster: $324,212. See IBM United States Hardware Announcement 110-008, dated February 9, 2010, pp. 4, 21 and 39-46. Sun Blade X6275 M2 8-blade cluster (10U). Total price for cluster: $193,939. Price/performance and performance/RU comparisons based on f1ATPase molecule test results. Sun Blade X6275 M2 cluster: $3,568/step/sec, 5.435 step/sec/RU. IBM Power 755 cluster: $6,355/step/sec, 3.189 step/sec/U. See http://www-03.ibm.com/systems/power/hardware/reports/system_perf.html. See http://www.ks.uiuc.edu/Research/namd/performance.html for more information, results as of 11/24/10. For more specifics about these results, please see http://blogs.sun.com/BestPerf

    Reverse Time Migration
    Reverse Time Migration is heavily used in geophysical imaging and modeling for oil and gas exploration. The Sun Blade X6275 M2 server module showed up to a 40% performance improvement over the previous-generation server module, with super-linear scalability to 16 nodes for the 9-point stencil used in this Reverse Time Migration computational kernel. The balanced combination of Oracle's Sun Storage 7410 system with the Sun Blade X6275 M2 server module cluster showed linear scalability for the total application throughput, including the I/O and MPI communication, to produce a final 3-D seismic depth imaged cube for interpretation. The final image write time from the Sun Blade X6275 M2 server module nodes to Oracle's Sun Storage 7410 system achieved the 10GbE line speed of 1.25 GBytes/second or better. Between subsequent runs, the effects of I/O buffer caching on the Sun Blade X6275 M2 server module nodes and write-optimized caching on the Sun Storage 7410 system gave up to 1.8 GBytes/second effective write performance. The performance results and characterization of this Reverse Time Migration benchmark could serve as a useful measure for many other I/O-intensive commercial applications. 3D VTI Reverse Time Migration Seismic Depth Imaging: see http://blogs.sun.com/BestPerf/entry/3d_vti_reverse_time_migration for more information, results as of 11/14/2010.

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async, event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service gets hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF call and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

    Key: /WcfSampleService/PersonService.svc/rest/fetch/3 -> Value: "28cd4796-76b8-451b-adfd-75cb50a50fa6"
    Key: "28cd4796-76b8-451b-adfd-75cb50a50fa6" -> Value: /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, an average response time of 975 milliseconds, and 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, a mean response time of 1853 milliseconds, and 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
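    To make the request flow concrete, here is a minimal sketch of that proxy logic in TypeScript running on Node. It is not the code from the NodeWcfAccelerator sample: the in-process Map stands in for the memcached tag cache the sample uses, and the cache directory, backend address and port are assumptions for illustration (the update-time reverse lookup and the .header files are omitted).

```typescript
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

// Stand-in for the shared ETag cache; the sample keeps this in memcached so
// .NET and Node see the same tags. Key = relative URL, value = current ETag.
const etagCache = new Map<string, string>();

const CACHE_DIR = "./caches";                        // assumed layout: <etag>.body per response
const BACKEND = { host: "localhost", port: 8080 };   // assumed address of the WCF REST service

fs.mkdirSync(CACHE_DIR, { recursive: true });

// WCF's default ETags are GUIDs wrapped in double quotes; strip them for file names.
const fileNameFor = (tag: string) => path.join(CACHE_DIR, `${tag.replace(/"/g, "")}.body`);

http.createServer((req, res) => {
  const key = req.url ?? "/";
  const currentTag = etagCache.get(key);
  const clientTag = req.headers["if-none-match"];

  // 1. Client already holds the current version: answer 304 without touching WCF.
  if (currentTag && clientTag === currentTag) {
    res.writeHead(304, { ETag: currentTag });
    res.end();
    return;
  }

  // 2. Proxy has a cached body for the current ETag: serve it straight from disk.
  if (currentTag && fs.existsSync(fileNameFor(currentTag))) {
    res.writeHead(200, { ETag: currentTag });
    fs.createReadStream(fileNameFor(currentTag)).pipe(res);
    return;
  }

  // 3. Otherwise call the WCF service once, cache its response, and relay it.
  http.get({ ...BACKEND, path: key }, (backendRes) => {
    const chunks: Buffer[] = [];
    backendRes.on("data", (c) => chunks.push(c));
    backendRes.on("end", () => {
      const body = Buffer.concat(chunks);
      const tag = backendRes.headers.etag;
      if (tag) {
        etagCache.set(key, tag);                     // remember the current tag for this URL
        fs.writeFileSync(fileNameFor(tag), body);    // and the response body named after it
      }
      res.writeHead(backendRes.statusCode ?? 200, { ETag: tag ?? "" });
      res.end(body);
    });
  });
}).listen(8010);                                     // the sample's proxy port
```

    The point to notice is that steps 1 and 2 never touch WCF or the database; only step 3 does, and only for the first stale request after an update.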

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is correct, but implementing it is not as easy as it seems. Similarly, most people assume the lookup component simply looks up additional information and pay little attention to it - and because of that, they often end up with very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to configure the lookup component's cache mode well.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache
    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues - memory pressure in particular - when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).
    There's a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, neither of these gotchas presents a reason to avoid full cache mode, but they should be weighed when deciding whether full cache mode is right for a given situation.

    Full cache mode is ideally useful when one or all of the following conditions exist:
    - The reference data set is small to moderately sized
    - The pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    - Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache
    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s).

    This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Using partial cache mode is ideally suited for the conditions below:
    - The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small
    - The size of the lookup data is too large to effectively store in cache
    - The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache
    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

    As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

    I would recommend considering the no cache mode only when all of the conditions below are true:
    - The reference data set is too large to reasonably be loaded into SSIS memory
    - The pipeline data set is small and is not expected to grow
    - There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion
    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
    Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
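    As a footnote to Tim's explanation, the three cache modes are easy to contrast in plain code. The sketch below is not SSIS - it is a hedged TypeScript illustration of the three lookup strategies, with a small in-memory referenceTable and a hypothetical queryReference() function standing in for the round trip to the lookup source.

```typescript
// A tiny stand-in for the lookup source; in SSIS this would be the lookup query's table.
const referenceTable = new Map<string, string>([
  ["A", "Apple"],
  ["B", "Banana"],
]);

// Hypothetical round trip to the lookup source (imagine SELECT value FROM ref WHERE key = ?).
async function queryReference(key: string): Promise<string | null> {
  return referenceTable.get(key) ?? null;
}

// FULL CACHE: read the whole reference set once, before any pipeline rows flow.
// Matches are then resolved in memory - and, as noted above, they are case-sensitive.
async function fullCacheLookup(pipeline: string[]): Promise<(string | null)[]> {
  const cache = new Map(referenceTable);               // one big up-front read, held in memory
  return pipeline.map((k) => cache.get(k) ?? null);
}

// PARTIAL CACHE: query row by row, but remember each distinct key after the first hit.
async function partialCacheLookup(pipeline: string[]): Promise<(string | null)[]> {
  const cache = new Map<string, string | null>();
  const out: (string | null)[] = [];
  for (const k of pipeline) {
    if (!cache.has(k)) cache.set(k, await queryReference(k));  // round trip per distinct key
    out.push(cache.get(k) ?? null);
  }
  return out;
}

// NO CACHE: every pipeline row pays for a round trip, even for repeated key values.
async function noCacheLookup(pipeline: string[]): Promise<(string | null)[]> {
  const out: (string | null)[] = [];
  for (const k of pipeline) out.push(await queryReference(k));
  return out;
}
```

    Run against a pipeline like ["A", "A", "B"], the partial-cache version issues two queries, the no-cache version three, and the full-cache version none beyond the initial load - which is exactly the trade-off described above.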

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems.

    Allow me to summarize the idea. It is:
    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology to interconnect it with the compute nodes
    - Tune the application to scale with the nodes to yet-unseen performance
    - Reduce the amount of data moving via compression
    - Provide all this in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:
    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single-threaded performance too - a humble 5x better than their ancestors.
    - Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one; the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other, within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important?
    - They provide DB backend storage for your Oracle Databases to run on the SPARC SuperCluster; that is what they are built and tuned for: DB performance.
    - These storage cells are SQL-aware.
    That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
    - They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    - Of course one can't simply rely on disks for performance; there is Flash storage included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members, with:
    - Real high speed: 40 Gbit/s, full duplex, of course. Oh, and a really low latency.
    - RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes communicate with each other.

    IV. It also includes general-purpose storage: the ZFSSA, a unified storage system providing NAS and SAN access, with the following features:
    - NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
    - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares
    - Storage heads in an HA-cluster configuration providing availability of the data
    - DTrace Live Analytics in a web-based administration UI
    - Serving as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
    Here are the key takeaways once again. The SPARC SuperCluster:
    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful and actually very time-consuming integration process is necessary to orchestrate all of this for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie
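    To make the storage-offload point above a bit more tangible, here is a hedged TypeScript sketch contrasting "ship every block to the DB node and filter there" with "let the SQL-aware cell filter and aggregate locally". The interfaces are invented purely for illustration and are not an Exadata or SuperCluster API.

```typescript
interface Row { region: string; amount: number; }

// Invented stand-in for one storage cell holding a slice of a table.
class StorageCell {
  constructor(private rows: Row[]) {}

  // "Dumb" storage: every row crosses the interconnect to the DB node.
  scanAll(): Row[] {
    return this.rows;
  }

  // SQL-aware cell: filter and aggregate locally, ship only the result.
  sumWhere(pred: (r: Row) => boolean): number {
    return this.rows.filter(pred).reduce((sum, r) => sum + r.amount, 0);
  }
}

// Without offload: the DB node pulls everything and throws most of it away.
function totalOnDbNode(cells: StorageCell[], region: string): number {
  return cells
    .flatMap((c) => c.scanAll())                 // all rows move across the network
    .filter((r) => r.region === region)
    .reduce((sum, r) => sum + r.amount, 0);
}

// With offload: each cell returns a single number; only results move across the network.
function totalWithOffload(cells: StorageCell[], region: string): number {
  return cells
    .map((c) => c.sumWhere((r) => r.region === region))
    .reduce((a, b) => a + b, 0);
}
```

    Both functions return the same total; the difference is purely in how much data crosses the interconnect, which is the traffic reduction the SQL-aware storage cells (and, in the same spirit, HCC) are after.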

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies. And in setting up a new business (I'll speak more about that in the future), I have also become a customer of Microsoft's BPOS (Business Productivity Online Services). In short, it has been a fortnight of Microsoft cloud computing.

    On the Azure side, I've looked, of course, at Web Roles and Worker Roles. But I've also looked at Azure Storage's REST API (including coding to it directly), I've looked at Azure Drive and the new VM Role; I've looked quite a bit at SQL Azure (including the project "Houston" Silverlight UI) and I've looked at SQL Azure Labs' OData service too. I've also looked at DataMarket and its integration with both PowerPivot and native Excel. Then there's AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials. And to round it out with some user stuff, on the BPOS side, I've been working with Exchange Online, SharePoint Online and LiveMeeting.

    I have to say I like a lot of what I've been seeing. Azure's not perfect, and BPOS certainly isn't either. But there's good stuff in all these products, and there's a lot of value.

    Azure Goes Deep
    Most people know that Web and Worker Roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now. The still-in-beta VM Role gives you the power to craft the machine (much as Amazon's EC2 does), though it takes away the platform's self-managing attributes. It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.

    Whether with Azure Storage or SQL Azure, Azure does data. And OData is everywhere. Azure Table Storage supports an OData interface. So does SQL Azure, and so does DataMarket (the former project "Dallas"). That means that Azure data repositories aren't just straightforward to provision and configure... they're also easy to program against, from just about any programming environment, in a RESTful manner. And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as "Velocity" and throws it up into the cloud, speeding data access even more.

    Snapping in Place
    Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand. I wasn't expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond's innovation right now. The products bear this out, and so do my observations of the product teams' motivation and high morale. It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so. With Azure, both requirements are in place.

    BPOS: Bad Acronym, Easy Setup
    BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server. As such, it's hard not to be underwhelmed by BPOS. Until you realize how easy it makes it to get all that stuff set up.
    I would say that from sign-up to productive use took me about 45 minutes... and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I Want from My Azure Christmas Next Year
    Not everything about Microsoft's cloud is good. I close this post with a list of things I'd like to see addressed:
    - BPOS offerings are still based on the 2007 wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones. Office 365 can't come fast enough.
    - Azure's Internet tooling and domain naming are scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure Storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there's appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
    - Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work. I'd like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
    - Beta programs, if they're open, should onboard me quickly. I know the process is difficult and everyone's going as fast as they can. But I don't know why it's so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.
    - Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now. But we need more. WebMatrix has none and that's just silly, now that the Extra Small Instance is being introduced.
    - The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better and I want them to evangelize it much more aggressively. There's a lot of good material on Azure development out there, but it's scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

    Should Old Acquaintance Be Forgot
    All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • compass-rails 1.03 - TypeError: can't convert nil into String

    - by Romiko
    I am running:
    ruby 1.9.3p392 (2013-02-22) [i386-mingw32]
    compass-rails 1.0.3
    I used the Windows RailsInstaller to install Ruby on Rails.

    Gemfile:
    group :assets do
      gem 'sass-rails', '~> 3.2.3'
      gem 'coffee-rails', '~> 3.2.1'
      gem 'compass-rails', '~> 1.0.2'
      # See https://github.com/sstephenson/execjs#readme for more supported runtimes
      # gem 'therubyracer', :platforms => :ruby
      gem 'uglifier', '>= 1.0.3'
    end

    I am currently experiencing issues importing sprites. My sprites are in assets/images/source. In my _shared.scss file I have:
    //Sprites
    @import "./source/*.png";
    $source-sprite-dimensions: true;

    In my application.scss I have:
    /*
     * This is a manifest file that'll be compiled into application.css, which will include all the files
     * listed below.
     *
     * Any CSS and SCSS file within this directory, lib/assets/stylesheets, vendor/assets/stylesheets,
     * or vendor/assets/stylesheets of plugins, if any, can be referenced here using a relative path.
     *
     * You're free to add application-wide styles to this file and they'll appear at the top of the
     * compiled file, but it's generally better to create a new file per style scope.
     *
     *= require_self
     */
    @import "_shared.scss";
    @import "baseline.scss";
    @import "global.scss";
    @import "normalize.scss";
    @import "print.scss";
    @import "desktop.scss";
    @import "tablet.scss";
    @import "home.css.scss";

    I am also using rails server and not the compass watcher. However, when I browse to the page at localhost:3000/assets/application.css, I get the following error:
    body:before { font-weight: bold; content: "\000a TypeError: can't convert nil into String\000a (in c:\002f RangerRomOnRails\002f RangerRom\002f app\002f assets\002f stylesheets\002f desktop.scss)"; }
    body:after { content: "\000a C:\002f RailsInstaller\002f Ruby1.9.3\002f lib\002f ruby\002f gems\002f 1.9.1\002f gems\002f compass-0.12.2\002f lib\002f compass\002f sass_extensions\002f functions\002f image_size.rb:17:in `extname'"; }

    Here is the full stack trace:
    compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:17:in `extname' compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:17:in `initialize' compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:50:in `new' compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:50:in `image_dimensions' compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:4:in `image_width' sass (3.2.9) lib/sass/script/funcall.rb:112:in `_perform' sass (3.2.9) lib/sass/script/node.rb:40:in `perform' sass (3.2.9) lib/sass/tree/visitors/perform.rb:298:in `visit_prop' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map' sass (3.2.9) 
lib/sass/tree/visitors/base.rb:53:in `visit_children' sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children' sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment' sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:320:in `visit_rule' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `visit_children' sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children' sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment' sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:362:in `visit_media' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map' sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `visit_children' sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children' sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment' sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:128:in `visit_root' sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit' sass (3.2.9) lib/sass/tree/visitors/perform.rb:7:in `visit' sass (3.2.9) lib/sass/tree/root_node.rb:20:in `render' sass (3.2.9) lib/sass/engine.rb:315:in `_render' sass (3.2.9) lib/sass/engine.rb:262:in `render' sass-rails (3.2.6) lib/sass/rails/template_handlers.rb:106:in `evaluate' tilt (1.4.1) lib/tilt/template.rb:103:in `render' sprockets (2.2.2) lib/sprockets/context.rb:193:in `block in evaluate' sprockets (2.2.2) lib/sprockets/context.rb:190:in `each' sprockets (2.2.2) lib/sprockets/context.rb:190:in `evaluate' sprockets (2.2.2) lib/sprockets/processed_asset.rb:12:in `initialize' sprockets (2.2.2) lib/sprockets/base.rb:249:in `new' sprockets (2.2.2) lib/sprockets/base.rb:249:in `block in build_asset' sprockets (2.2.2) lib/sprockets/base.rb:270:in `circular_call_protection' sprockets (2.2.2) lib/sprockets/base.rb:248:in `build_asset' sprockets (2.2.2) lib/sprockets/index.rb:93:in `block in build_asset' sprockets (2.2.2) lib/sprockets/caching.rb:19:in `cache_asset' sprockets (2.2.2) lib/sprockets/index.rb:92:in `build_asset' sprockets (2.2.2) lib/sprockets/base.rb:169:in `find_asset' sprockets (2.2.2) lib/sprockets/index.rb:60:in `find_asset' sprockets (2.2.2) lib/sprockets/processed_asset.rb:111:in `block in resolve_dependencies' sprockets (2.2.2) lib/sprockets/processed_asset.rb:105:in `each' sprockets (2.2.2) lib/sprockets/processed_asset.rb:105:in `resolve_dependencies' sprockets (2.2.2) lib/sprockets/processed_asset.rb:97:in `build_required_assets' sprockets (2.2.2) lib/sprockets/processed_asset.rb:16:in `initialize' sprockets (2.2.2) lib/sprockets/base.rb:249:in `new' sprockets (2.2.2) lib/sprockets/base.rb:249:in `block in 
build_asset' sprockets (2.2.2) lib/sprockets/base.rb:270:in `circular_call_protection' sprockets (2.2.2) lib/sprockets/base.rb:248:in `build_asset' sprockets (2.2.2) lib/sprockets/index.rb:93:in `block in build_asset' sprockets (2.2.2) lib/sprockets/caching.rb:19:in `cache_asset' sprockets (2.2.2) lib/sprockets/index.rb:92:in `build_asset' sprockets (2.2.2) lib/sprockets/base.rb:169:in `find_asset' sprockets (2.2.2) lib/sprockets/index.rb:60:in `find_asset' sprockets (2.2.2) lib/sprockets/bundled_asset.rb:38:in `init_with' sprockets (2.2.2) lib/sprockets/asset.rb:24:in `from_hash' sprockets (2.2.2) lib/sprockets/caching.rb:15:in `cache_asset' sprockets (2.2.2) lib/sprockets/index.rb:92:in `build_asset' sprockets (2.2.2) lib/sprockets/base.rb:169:in `find_asset' sprockets (2.2.2) lib/sprockets/index.rb:60:in `find_asset' sprockets (2.2.2) lib/sprockets/environment.rb:78:in `find_asset' sprockets (2.2.2) lib/sprockets/base.rb:177:in `[]' actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:126:in `asset_for' actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:44:in `block in stylesheet_link_tag' actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:43:in `collect' actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:43:in `stylesheet_link_tag' app/views/layouts/application.html.erb:16:in `_app_views_layouts_application_html_erb___824639613_33845076' actionpack (3.2.13) lib/action_view/template.rb:145:in `block in render' activesupport (3.2.13) lib/active_support/notifications.rb:125:in `instrument' actionpack (3.2.13) lib/action_view/template.rb:143:in `render' actionpack (3.2.13) lib/action_view/renderer/template_renderer.rb:59:in `render_with_layout' actionpack (3.2.13) lib/action_view/renderer/template_renderer.rb:45:in `render_template' actionpack (3.2.13) lib/action_view/renderer/template_renderer.rb:18:in `render' actionpack (3.2.13) lib/action_view/renderer/renderer.rb:36:in `render_template' actionpack (3.2.13) lib/action_view/renderer/renderer.rb:17:in `render' actionpack (3.2.13) lib/abstract_controller/rendering.rb:110:in `_render_template' actionpack (3.2.13) lib/action_controller/metal/streaming.rb:225:in `_render_template' actionpack (3.2.13) lib/abstract_controller/rendering.rb:103:in `render_to_body' actionpack (3.2.13) lib/action_controller/metal/renderers.rb:28:in `render_to_body' actionpack (3.2.13) lib/action_controller/metal/compatibility.rb:50:in `render_to_body' actionpack (3.2.13) lib/abstract_controller/rendering.rb:88:in `render' actionpack (3.2.13) lib/action_controller/metal/rendering.rb:16:in `render' actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:40:in `block (2 levels) in render' activesupport (3.2.13) lib/active_support/core_ext/benchmark.rb:5:in `block in ms' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/benchmark.rb:295:in `realtime' activesupport (3.2.13) lib/active_support/core_ext/benchmark.rb:5:in `ms' actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:40:in `block in render' actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:83:in `cleanup_view_runtime' activerecord (3.2.13) lib/active_record/railties/controller_runtime.rb:24:in `cleanup_view_runtime' actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:39:in `render' actionpack (3.2.13) lib/action_controller/metal/implicit_render.rb:10:in `default_render' actionpack (3.2.13) lib/action_controller/metal/implicit_render.rb:5:in `send_action' actionpack (3.2.13) lib/abstract_controller/base.rb:167:in `process_action' 
actionpack (3.2.13) lib/action_controller/metal/rendering.rb:10:in `process_action' actionpack (3.2.13) lib/abstract_controller/callbacks.rb:18:in `block in process_action' activesupport (3.2.13) lib/active_support/callbacks.rb:414:in `_run__956028316__process_action__416811168__callbacks' activesupport (3.2.13) lib/active_support/callbacks.rb:405:in `__run_callback' activesupport (3.2.13) lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks' activesupport (3.2.13) lib/active_support/callbacks.rb:81:in `run_callbacks' actionpack (3.2.13) lib/abstract_controller/callbacks.rb:17:in `process_action' actionpack (3.2.13) lib/action_controller/metal/rescue.rb:29:in `process_action' actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:30:in `block in process_action' activesupport (3.2.13) lib/active_support/notifications.rb:123:in `block in instrument' activesupport (3.2.13) lib/active_support/notifications/instrumenter.rb:20:in `instrument' activesupport (3.2.13) lib/active_support/notifications.rb:123:in `instrument' actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:29:in `process_action' actionpack (3.2.13) lib/action_controller/metal/params_wrapper.rb:207:in `process_action' activerecord (3.2.13) lib/active_record/railties/controller_runtime.rb:18:in `process_action' actionpack (3.2.13) lib/abstract_controller/base.rb:121:in `process' actionpack (3.2.13) lib/abstract_controller/rendering.rb:45:in `process' actionpack (3.2.13) lib/action_controller/metal.rb:203:in `dispatch' actionpack (3.2.13) lib/action_controller/metal/rack_delegation.rb:14:in `dispatch' actionpack (3.2.13) lib/action_controller/metal.rb:246:in `block in action' actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:73:in `call' actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:73:in `dispatch' actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:36:in `call' journey (1.0.4) lib/journey/router.rb:68:in `block in call' journey (1.0.4) lib/journey/router.rb:56:in `each' journey (1.0.4) lib/journey/router.rb:56:in `call' actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:612:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/best_standards_support.rb:17:in `call' rack (1.4.5) lib/rack/etag.rb:23:in `call' rack (1.4.5) lib/rack/conditionalget.rb:25:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/head.rb:14:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/params_parser.rb:21:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/flash.rb:242:in `call' rack (1.4.5) lib/rack/session/abstract/id.rb:210:in `context' rack (1.4.5) lib/rack/session/abstract/id.rb:205:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/cookies.rb:341:in `call' activerecord (3.2.13) lib/active_record/query_cache.rb:64:in `call' activerecord (3.2.13) lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/callbacks.rb:28:in `block in call' activesupport (3.2.13) lib/active_support/callbacks.rb:405:in `_run__360878605__call__248365880__callbacks' activesupport (3.2.13) lib/active_support/callbacks.rb:405:in `__run_callback' activesupport (3.2.13) lib/active_support/callbacks.rb:385:in `_run_call_callbacks' activesupport (3.2.13) lib/active_support/callbacks.rb:81:in `run_callbacks' actionpack (3.2.13) lib/action_dispatch/middleware/callbacks.rb:27:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/reloader.rb:65:in `call' actionpack (3.2.13) 
lib/action_dispatch/middleware/remote_ip.rb:31:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/show_exceptions.rb:56:in `call' railties (3.2.13) lib/rails/rack/logger.rb:32:in `call_app' railties (3.2.13) lib/rails/rack/logger.rb:16:in `block in call' activesupport (3.2.13) lib/active_support/tagged_logging.rb:22:in `tagged' railties (3.2.13) lib/rails/rack/logger.rb:16:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/request_id.rb:22:in `call' rack (1.4.5) lib/rack/methodoverride.rb:21:in `call' rack (1.4.5) lib/rack/runtime.rb:17:in `call' activesupport (3.2.13) lib/active_support/cache/strategy/local_cache.rb:72:in `call' rack (1.4.5) lib/rack/lock.rb:15:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/static.rb:63:in `call' railties (3.2.13) lib/rails/engine.rb:479:in `call' railties (3.2.13) lib/rails/application.rb:223:in `call' rack (1.4.5) lib/rack/content_length.rb:14:in `call' railties (3.2.13) lib/rails/rack/log_tailer.rb:17:in `call' rack (1.4.5) lib/rack/handler/webrick.rb:59:in `service' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/webrick/httpserver.rb:138:in `service' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/webrick/httpserver.rb:94:in `run' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread'

    Read the article

  • why does chkdsk always report errors on a bad shutdown

    - by rep_movsd
    Once in a while, Windows XP hangs on my laptop (usually when going into standby or hibernate, and occasionally on startup) and I have to forcefully power off. Usually chkdsk never runs automatically (I thought it should know that the partitions have not been unmounted cleanly and do that). I religiously run chkdsk without /F after bad shutdowns like this, and invariably it reports that the drive has unfixed errors and must be checked with /F. I do that, and more often than not the chkdsk that runs on startup does not report fixing anything. I have had times in the past (and not only on this system) when not running chkdsk led to some strange errors, like files not opening even though they exist and the inability to save certain files, so I make it a point to always run chkdsk after a bad shutdown. I never understood why this is: isn't the whole point of a journalling filesystem like NTFS to avoid file system corruption and endless chkdsks? I even tried disabling write caching once to see if it made any difference, but to no avail.

    Read the article

  • Samba Domain Controller corrupts Windows workstations profiles?

    - by MrZombie
    OK, so here's my problem. I have a Mac OS X Server 10.5 to which Windows XP workstations are bound. I happened upon some errors and warnings in my log from Userenv, namely errors 1504 and 1509. The warning complains about a setting on the share related to offline caching. I found some guides to correct this when the problem refers to a Windows server, but since these are Samba shares, those guides of course don't apply. Does anyone know what to do so that my profiles don't get corrupted, while still letting me use roaming profiles so that they're backed up by the server?

    Read the article

  • BIND authoritative name server: SERVFAIL?

    - by Luca Tettamanti
    I have a BIND 9.6 instance that acts as a caching NS for the whole building and is also authoritative for an internal zone ("example" below):

    zone "example" {
            type master;
            file "example";
            update-policy {
                    grant dhcp-update subdomain example. A TXT;
            };
    };

    Due to a rogue switch we lost connectivity with the rest of the world, and the NS started answering SERVFAIL; what surprised me was that the server was also unable to respond to queries for the example domain. What is the reason for this behavior? Shouldn't the NS be able to answer since it has authoritative data?

    edit: The rest of the configuration is the standard one shipped with Debian: hints for the root servers and the zones for localhost and broadcast.

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here. Thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe. I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted. I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2. I intend to physically remove the drives from the machine and replace them with larger drives. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration:
    [root@gort ~]# cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 5.7 (Tikanga)
    [root@gort ~]# uname -a
    Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
    [root@gort ~]# rpm -qa | egrep '^(hp|cpq)'
    cpqacuxe-9.30-15.0
    hp-health-9.25-1551.7.rhel5
    hpsmh-7.1.2-3
    hpdiags-9.3.0-466
    hponcfg-3.1.0-0
    hp-snmp-agents-9.25-2384.8.rhel5
    hpacucli-9.30-15.0
    [root@gort ~]# hpacucli
    HP Array Configuration Utility CLI 9.30.15.0
    Detecting Controllers...Done.
    Type "help" for a list of supported commands.
    Type "exit" to close the console.
    => ctrl all show config detail
    Smart Array P400 in Slot 0 (Embedded)
       Bus Interface: PCI
       Slot: 0
       Cache Serial Number: PA82C0J9SVW34U
       RAID 6 (ADG) Status: Enabled
       Controller Status: OK
       Hardware Revision: D
       Firmware Version: 7.22
       Rebuild Priority: Medium
       Expand Priority: Medium
       Surface Scan Delay: 15 secs
       Surface Scan Mode: Idle
       Wait for Cache Room: Disabled
       Surface Analysis Inconsistency Notification: Disabled
       Post Prompt Timeout: 0 secs
       Cache Board Present: True
       Cache Status: OK
       Cache Ratio: 25% Read / 75% Write
       Drive Write Cache: Disabled
       Total Cache Size: 256 MB
       Total Cache Memory Available: 208 MB
       No-Battery Write Cache: Disabled
       Cache Backup Power Source: Batteries
       Battery/Capacitor Count: 1
       Battery/Capacitor Status: OK
       SATA NCQ Supported: True
       Logical Drive: 1
          Size: 136.7 GB
          Fault Tolerance: RAID 1
          Heads: 255
          Sectors Per Track: 32
          Cylinders: 35132
          Strip Size: 128 KB
          Full Stripe Size: 128 KB
          Status: OK
          Caching: Enabled
          Unique Identifier: 600508B100184A395356573334550002
          Disk Name: /dev/cciss/c0d0
          Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB
          OS Status: LOCKED
          Logical Drive Label: A0027AA78DEE
          Mirror Group 0: physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
          Mirror Group 1: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
          Drive Type: Data
       Array: A
          Interface Type: SAS
          Unused Space: 0 MB
          Status: OK
          Array Type: Data
          physicaldrive 1I:1:1
             Port: 1I
             Box: 1
             Bay: 1
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM57RF40000983878FX
             Model: HP DG146BB976
             Current Temperature (C): 29
             Maximum Temperature (C): 35
             PHY Count: 2
             PHY Transfer Rate: Unknown, Unknown
          physicaldrive 1I:1:2
             Port: 1I
             Box: 1
             Bay: 2
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM55VQC000098388524
             Model: HP DG146BB976
             Current Temperature (C): 29
             Maximum Temperature (C): 36
             PHY Count: 2
             PHY Transfer Rate: Unknown, Unknown
       Logical Drive: 2
          Size: 546.8 GB
          Fault Tolerance: RAID 5
          Heads: 255
          Sectors Per Track: 32
          Cylinders: 65535
          Strip Size: 64 KB
          Full Stripe Size: 256 KB
          Status: OK
          Caching: Enabled
          Parity Initialization Status: Initialization Completed
          Unique Identifier: 600508B100184A395356573334550003
          Disk Name: /dev/cciss/c0d1
          Mount Points: None
          OS Status: LOCKED
          Logical Drive Label: A5C9C6F81504
          Drive Type: Data
       Array: B
          Interface Type: SAS
          Unused Space: 0 MB
          Status: OK
          Array Type: Data
          physicaldrive 1I:1:3
             Port: 1I
             Box: 1
             Bay: 3
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM2H5PE00009802NK19
             Model: HP DG146ABAB4
             Current Temperature (C): 30
             Maximum Temperature (C): 37
             PHY Count: 1
             PHY Transfer Rate: Unknown
          physicaldrive 1I:1:4
             Port: 1I
             Box: 1
             Bay: 4
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM28YY400009750MKPJ
             Model: HP DG146ABAB4
             Current Temperature (C): 31
             Maximum Temperature (C): 36
             PHY Count: 1
             PHY Transfer Rate: 3.0Gbps
          physicaldrive 2I:1:5
             Port: 2I
             Box: 1
             Bay: 5
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM2FGYV00009802N3GN
             Model: HP DG146ABAB4
             Current Temperature (C): 30
             Maximum Temperature (C): 38
             PHY Count: 1
             PHY Transfer Rate: Unknown
          physicaldrive 2I:1:6
             Port: 2I
             Box: 1
             Bay: 6
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM8AFAK00009920MMV1
             Model: HP DG146BB976
             Current Temperature (C): 31
             Maximum Temperature (C): 41
             PHY Count: 2
             PHY Transfer Rate: Unknown, Unknown
          physicaldrive 2I:1:7
             Port: 2I
             Box: 1
             Bay: 7
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM2FJQD00009801MSHQ
             Model: HP DG146ABAB4
             Current Temperature (C): 29
             Maximum Temperature (C): 39
             PHY Count: 1
             PHY Transfer Rate: Unknown
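    For reference, a minimal sketch of how this removal is usually attempted from the hpacucli console, assuming slot 0 and that nothing (partitions, LVM, multipath) still holds /dev/cciss/c0d1 open — the commands below are an illustration, not taken from the post:
        # flush and release any remaining block-layer references to the unmounted device
        [root@gort ~]# blockdev --flushbufs /dev/cciss/c0d1
        [root@gort ~]# hpacucli
        # deleting the only logical drive on array B should also remove the array;
        # the data on logical drive 2 is destroyed, which is acceptable here
        => ctrl slot=0 logicaldrive 2 delete
        => ctrl slot=0 show config
    If the controller still reports the volume as LOCKED after that, the kernel is holding the device open somewhere, and finding that holder without rebooting is exactly the question being asked.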

    Read the article

  • apache performance improvements and maxclients

    - by updog
    I know this has been asked a few (thousand) times around the internet, but I was hoping someone who's in the know might be able to comment on my particular setup. I have a web server hosting one site (php/codeigniter) with a WordPress blog in a subdirectory. The server has 2GB RAM and a 3GHz CPU, and I have offloaded the static assets to CloudFlare, which has reduced bandwidth for the actual server by almost 75%. The problem I have is that when an email campaign is sent out that links to the site or blog, it slows down. Below are my settings in apache2.conf. Average Apache process size is 80M and there is 1.5GB available for Apache.
        <IfModule mpm_prefork_module>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>
    I have already set up and installed APC and built some caching into the site, and used w3totalcache on the blog. The number of concurrent users is around 200-300 when there is a campaign; are there any further optimisations before
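    For what it's worth, the MaxClients arithmetic implied above works out roughly as follows — a sketch that simply reuses the poster's own figures (80 MB per child, ~1.5 GB left for Apache), not measured values:
        # 1536 MB / 80 MB per prefork child ≈ 19 children before the box starts
        # swapping, so MaxClients 20 is already at the ceiling; extra visitors
        # queue on the listen backlog rather than forking more children.
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          19
            MaxClients           19
            MaxRequestsPerChild 2000
        </IfModule>
    With 200-300 concurrent visitors and only ~19 workers, the usual next step is shrinking the per-process footprint (fewer loaded Apache/PHP modules) or serving PHP from a separate, smaller pool rather than raising MaxClients.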

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (a client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? And why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolic links, if that matters.
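    For context, the usual HostGator-style setup dispatches to Django over FastCGI from .htaccess, passing through only requests that do not match an existing file — a sketch of that pattern, with the script name mysite.fcgi invented for illustration:
        # .htaccess (sketch)
        AddHandler fcgid-script .fcgi
        RewriteEngine On
        RewriteBase /
        # hand the request to Django only if no real file matches it
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]
    Note that the !-f test follows symlinks: if a linked static file's target is missing or unreadable from Apache's point of view, the test fails and the request falls through to Django — one possible explanation for static files consuming MySQL connections.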

    Read the article

  • MySql Data Loss - post mortem analysis - RackSpace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of a RS cloud server, the MySQL databases on our server snapshot image proved to be days out of date relative to the backup date, and yet files that were uploaded through the impacted webapp had been written to the file system. The related metadata that was written to the database was lost, but the files themselves were backed up. Once I was able to manually access the MySQL data files before the MySQL server started (the server was configured to start MySQL on boot), I could see that the update times for ib_logfile1, ib_logfile0 and ibdata1 were days old. As with this poster, mysql data loss after server crash, it's as if some caching controller had told the OS / MySQL server that it had committed data that was actually still in cache, and that data was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not. I would have thought that any cache would have been flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?
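    A few checks that usually come up in this kind of post-mortem — a sketch to run on the rebuilt server, with the device name a placeholder (and hdparm may report nothing useful for a virtualised disk):
        mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit'"
        # 1 = fsync the InnoDB redo log at every commit; 0 or 2 can lose roughly the
        # last second of transactions when the host (not just mysqld) goes down
        grep barrier /proc/mounts      # were filesystem write barriers in effect?
        hdparm -W /dev/xvda            # is a volatile write cache enabled on the disk?
    None of these settings would explain days of lost writes on their own, though; modification times that old point more at the snapshot or the underlying storage than at mysqld's flushing behaviour.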

    Read the article

  • Does MS Forefront TMG cache authentication?

    - by SnOrfus
    I'm testing a client machine that makes requests to a BizTalk server using a Forefront machine as a web proxy. On the first test I put an invalid name/password into the receive port and received the correct error message (407). Then I set the correct name/password and everything worked correctly. From there, I kept the correct information in the receive port but put an invalid name/password into the send adapter, yet the process completed successfully (it should have failed with 407). I've ensured that both the receive and send ports are not bypassing the proxy for local addresses. So the only thing that seems to make sense is that TMG is caching the authentication request coming from the machine I'm working on. Is this thinking correct, and if so, does anyone know how to disable it in TMG?
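    One way to check, outside BizTalk, whether TMG really challenges every new connection — a sketch run from the client machine, with the proxy and BizTalk host names made up for illustration:
        # bad proxy credentials on a fresh connection should come back 407
        curl -v -x http://tmg-proxy:8080 --proxy-user baduser:badpass http://biztalk-host/
        # the same request with valid credentials should then succeed
        curl -v -x http://tmg-proxy:8080 --proxy-user gooduser:goodpass http://biztalk-host/
    If the bad-credential request still gets through after a successful one, TMG (or a rule keyed to the client's IP) is reusing the earlier authentication; if it comes back 407, the caching is more likely happening in the send adapter's own proxy handling.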

    Read the article

  • Squid site redirection

    - by AndyM
    I have an internal website that cannot be accessed from some machines on my network, due to physical location, VPN, network ranges, etc. I would like to install Squid on an "in between" network to forward requests from the clients that cannot reach the website. The issue is that the clients have no ability to connect to www.example.com, but they can reach a network with a Squid proxy, which in turn can reach www.example.com. What is the correct term I need to research in Squid: is it just caching www.example.com, or do I need to set the clients to use a URL that gets rewritten? i.e. www.squid-example.com -> www.example.com
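    What is being described is ordinary forward proxying (optionally with caching), not URL rewriting: the clients keep using www.example.com and simply point at the Squid box as their proxy. A minimal squid.conf sketch for the in-between host, with the client range and port invented for illustration:
        # clients that cannot reach the site directly
        acl stranded_clients src 10.1.0.0/16
        # the internal site they need to reach via this box
        acl internal_site dstdomain www.example.com
        # allow those clients to proxy (and cache) that site only
        http_access allow stranded_clients internal_site
        http_access deny all
        http_port 3128
    The term to research is simply a forward (caching) proxy; a redirector or url_rewrite_program is only needed if the clients must type a different host name such as www.squid-example.com.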

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >