Search Results

Search found 13451 results on 539 pages for 'physical environment'.


  • Revision Control For Windows CE

    - by Nathan Campos
    I have an HP Jornada 720 running Windows CE 3 (Handheld PC 2000), and as a good developer I want to turn it into a fully featured Scheme development environment. I already have Pocket Scheme on it, but now I need a revision control system for my pocket development environment. So my question is: where can I get one?

    Read the article

  • VMware ESXi - varying CPU time (CPU reservation)

    - by Tomo
    Hello! I'm running FreeBSD 7.2 under VMware ESXi 3.5. The host has 2 physical CPUs and the BSD box is currently the only running VM. Only one virtual CPU is assigned to the VM. When measuring the CPU time of a specific program, I get very different results from run to run; processor usage is reported differently by VMware depending on system load. Is it possible to assign a constant share of a physical CPU to a specific VM? I would like the measured CPU time to be more or less constant. I tried setting a CPU reservation when configuring the VM in the VMware Infrastructure Client, but the CPU time still varies a lot. Thanks in advance!

    Read the article

  • RegQueryValueEx function fails on Windows 7

    - by jitesh
    Hi all, I have a DLL written in C++ which tries to read and write some registry keys. This code runs perfectly fine on Windows XP (a 32-bit environment) but fails on Windows 7 (a 64-bit environment). The registry keys which this application accesses are shared registry keys; they are not part of the 32-bit registry view (WOW64) or the 64-bit registry view. Please provide your valuable inputs on this. Thanks in advance. Jits

    Read the article

  • Remote Control software for Mac

    - by MarqueIV
    One of the things I like about Microsoft's RDC client is that the resolution of the session is set by the client and not, say, by a physical monitor connected to the host, as is the case with VNC (the protocol used by the Mac). This means that even though I'm connecting to a notebook with a 1280x800 physical resolution, via RDC I can run it at 2560x1600 on my 30" monitor. However, that only seems to work for RDC. Does anyone know of something I can run on the Mac that will allow me to remotely control it at a different resolution than what is physically set? TIA, Mark

    Read the article

  • Open vSwitch and Xen Private Networks

    - by Joe
    I've read about the possibility of using Open vSwitch with Xen to route traffic between domUs on multiple physical hosts. I'd like to be able to group the many domUs I have spread across multiple physical hosts into a number of private networks. However, I've found no documentation on how to integrate Open vSwitch with Xen (rather than XenServer), and I'm unsure how I should go about doing that and then creating the private networks described. As you might have gathered, from my research I think Open vSwitch can do what I need; I just can't find anything giving me a push in the right direction on how to actually use it. This may well be because Open vSwitch is quite new (version 1.0 released on May 17). Any pointers in the right direction would be much appreciated!
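
    For reference, here is a rough sketch of the kind of Open vSwitch setup involved: a bridge per private network on each dom0, with a GRE tunnel between the physical hosts. The bridge, port and vif names and the remote IP are placeholders, and attaching the vif would normally be done by the Xen hotplug scripts rather than by hand:

        # on each dom0: one bridge per private network
        ovs-vsctl add-br xenbr-private
        # tunnel the private network to the other physical host (remote_ip is a placeholder)
        ovs-vsctl add-port xenbr-private gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.2
        # attach a domU's virtual interface to the bridge
        ovs-vsctl add-port xenbr-private vif1.0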

    Read the article

  • What are some good resources for learning C beyond K&R

    - by Roland
    Hi, I'm currently reaching the end of working through the K&R book "The C Programming Language", and I'm looking for things to read next. Any recommendations of blogs, more detailed or advanced books (I've already put Advanced Programming in the UNIX Environment on my Safari bookshelf), or open source code (beginner-friendly and with good C idioms or style) to read next would be gratefully appreciated. To be specific, I'm most interested in things relevant to a *nix environment rather than Windows. Cheers, Roland

    Read the article

  • ASP.NET Web Service Throws 401 (unauthorized) Error

    - by user268611
    Hi experts, I have a .NET application to be run in an intranet environment. It is configured so that it requires Windows Authentication before you can access the website (anonymous access is disabled). This website calls a web service (anonymous access enabled) and the web service calls the DB. We do have token-based authentication between the web application and the web service to secure the communication between them. The issue I'm facing is that when I deploy this to production, I get an intermittent failure in the communication between the web application and the web service: a 401 error is thrown. This is actually working fine in our QA environment. Is this an issue with Active Directory? Or could it be an issue with FQDN as mentioned here: http://support.microsoft.com/default.aspx?scid=kb;en-us;896861? The weirdest thing is that this happens intermittently when tested both on the server itself and on a remote workstation in my client's environment, yet it works perfectly in my environment. OS: Windows Server SP1, IIS 6, .NET 3.5 Framework. Any idea about the 401 (Unauthorized) issue? Thanks for the help... This is from the log... Event code: 3005 Event message: An unhandled exception has occurred. Event time: 4/5/2010 10:44:57 AM Event time (UTC): 4/5/2010 2:44:57 AM Event ID: 6c8ea2607b8d4e29a7f0b1c392b1cb21 Event sequence: 155112 Event occurrence: 2 Event detail code: 0 Application information: Application domain: xxx Trust level: Full Application Virtual Path: xxx Application Path: xxx Machine name: xxx Process information: Process ID: 4424 Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information: Exception type: WebException Exception message: The request failed with HTTP status 401: Unauthorized. Request information: Request URL: http://[ip]/[app_path] Request path: xxx User host address: [ip] User: xxx Is authenticated: True Authentication Type: Negotiate Thread account name: xxx Thread information: Thread ID: 6 Thread account name: xxx Is impersonating: False Stack trace: at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at wsVulnerabilityAdvisory.Service.test() at test.Page_Load(Object sender, EventArgs e) at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Read the article

  • H12 timeout error on Heroku

    - by snowangel
    Can anyone shed some light on what's causing this timeout error on Heroku (at 2012-07-08T08:58:33+00:00)? The docs say that it's because of some long running process. I've set config.assets.initialize_on_precompile = false in config/application.rb. EmBP-2:bc Emma$ heroku restart Restarting processes... done EmBP-2:bc Emma$ heroku logs --tail 2012-07-08T08:47:21+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.1" 200 311723 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:47:21+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.0" 200 1311615 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:51:32+00:00 heroku[slugc]: Slug compilation started 2012-07-08T08:54:05+00:00 heroku[api]: Release v145 created by [email protected] 2012-07-08T08:54:05+00:00 heroku[api]: Deploy 8814b2f by [email protected] 2012-07-08T08:54:05+00:00 heroku[web.1]: State changed from up to starting 2012-07-08T08:54:06+00:00 heroku[slugc]: Slug compilation finished 2012-07-08T08:54:09+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2012-07-08T08:54:09+00:00 heroku[worker.1]: Stopping all processes with SIGTERM 2012-07-08T08:54:09+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 22429 -c ./config/unicorn.rb` 2012-07-08T08:54:10+00:00 app[worker.1]: [Worker(host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1)] Exiting... 2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.320616 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1 2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376765 #1] INFO -- : master complete 2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376272 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011695 #1] INFO -- : worker=0 spawning... 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011386 #1] INFO -- : listening on addr=0.0.0.0:22429 fd=3 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.017917 #5] INFO -- : worker=0 spawned pid=5 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.019309 #1] INFO -- : master process ready 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.018250 #5] INFO -- : Refreshing Gem list 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.016768 #1] INFO -- : worker=1 spawning... 
2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020863 #8] INFO -- : Refreshing Gem list 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020617 #8] INFO -- : worker=1 spawned pid=8 2012-07-08T08:54:12+00:00 app[worker.1]: SQL (2.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1') 2012-07-08T08:54:12+00:00 heroku[web.1]: Process exited with status 0 2012-07-08T08:54:13+00:00 heroku[web.1]: State changed from starting to up 2012-07-08T08:54:14+00:00 heroku[worker.1]: Process exited with status 0 2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from up to down 2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from down to starting 2012-07-08T08:54:20+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work` 2012-07-08T08:54:20+00:00 heroku[worker.1]: State changed from starting to up 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 
2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. 
This will clash with attachment defined in Profitarchive class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:54:47+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.467693 #8] INFO -- : worker=1 ready 2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.823800 #5] INFO -- : worker=0 ready 2012-07-08T08:54:48+00:00 app[worker.1]: Starting the New Relic Agent. 2012-07-08T08:54:48+00:00 app[worker.1]: New Relic Agent not running. 2012-07-08T08:54:48+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1 2012-07-08T08:54:48+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:54:49+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Starting job worker 2012-07-08T08:57:54+00:00 heroku[web.1]: State changed from up to starting 2012-07-08T08:57:56+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047386 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0 2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047753 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1 2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047999 #1] INFO -- : master complete 2012-07-08T08:57:57+00:00 heroku[worker.1]: Stopping all processes with SIGTERM 2012-07-08T08:57:58+00:00 heroku[web.1]: Process exited with status 0 2012-07-08T08:57:58+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Exiting... 2012-07-08T08:57:59+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 29766 -c ./config/unicorn.rb` 2012-07-08T08:58:01+00:00 app[worker.1]: SQL (27.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1') 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070527 #1] INFO -- : listening on addr=0.0.0.0:29766 fd=3 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070782 #1] INFO -- : worker=0 spawning... 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.074498 #1] INFO -- : worker=1 spawning... 
2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.075702 #1] INFO -- : master process ready 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076732 #5] INFO -- : worker=0 spawned pid=5 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076957 #5] INFO -- : Refreshing Gem list 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089022 #8] INFO -- : worker=1 spawned pid=8 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089299 #8] INFO -- : Refreshing Gem list 2012-07-08T08:58:02+00:00 heroku[worker.1]: Process exited with status 0 2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from up to down 2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from down to starting 2012-07-08T08:58:02+00:00 heroku[web.1]: State changed from starting to up 2012-07-08T08:58:10+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work` 2012-07-08T08:58:11+00:00 heroku[worker.1]: State changed from starting to up 2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:58:33+00:00 heroku[router]: Error H12 (Request timeout) -> GET codicology.co.uk/ dyno=web.1 queue= wait= service=30000ms status=503 bytes=0 2012-07-08T08:58:33+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.0" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:58:33+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.1" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:58:42+00:00 app[worker.1]: New Relic Agent not running. 2012-07-08T08:58:42+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1 2012-07-08T08:58:42+00:00 app[worker.1]: Starting the New Relic Agent. 2012-07-08T08:58:42+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:58:43+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] Starting job worker 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. 
(called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. 
This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:59:24+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:59:25+00:00 app[web.1]: I, [2012-07-08T08:59:25.555052 #5] INFO -- : worker=0 ready 2012-07-08T08:59:25+00:00 app[web.1]: 2012-07-08T08:59:25+00:00 app[web.1]: 2012-07-08T08:59:25+00:00 app[web.1]: Started GET "/" for 82.69.50.215 at 2012-07-08 08:59:25 +0000 2012-07-08T08:59:26+00:00 app[web.1]: Processing by PagesController#home as HTML 2012-07-08T08:59:26+00:00 app[web.1]: I, [2012-07-08T08:59:26.043501 #8] INFO -- : worker=1 ready 2012-07-08T08:59:26+00:00 app[web.1]: Rendered pages/home.html.haml within layouts/application (5.7ms) 2012-07-08T08:59:26+00:00 app[web.1]: (1.1ms) SELECT COUNT(*) FROM "delayed_jobs" 2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_header.html.erb (4.2ms) 2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_footer.html.haml (1.4ms) 2012-07-08T08:59:26+00:00 app[web.1]: Completed 200 OK in 326ms (Views: 258.4ms | ActiveRecord: 65.2ms)

    Read the article

  • How to understand the LSI HBA connector specs?

    - by Sandra
    When reading the specifications for the LSI SAS 9206-16e HBA, it says Storage Connectivity; Data Transfer Rates * 16 ports; 6Gb/s SAS 2.1 compliant SAS Bandwidth * Half Duplex 2400MB/s, x4, 6Gb/s SAS lanes Port Configurations * 16 ea, x1 ports (individual drives) * 4 ea, x4 wide ports * 2 ea, x8 wide ports Connectors * Four (x4) mini-SAS HD external connectors (SFF8644) So there are 4 physical connectors. Question: What is the bandwidth of each of the connectors? I would be tempted to say 6Gb/s * 4, but then it mentions the "Port Configurations" of 2 ea, 4 ea and 16 ea, which I don't understand. Does this mean that the 4 physical connectors are not identical?
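
    For reference, the half-duplex bandwidth figure quoted in the spec appears to follow from simple per-lane arithmetic, assuming 6 Gb/s SAS 2.x lanes with 8b/10b encoding (600 MB/s of payload per lane):

        6 Gb/s per lane with 8b/10b encoding          = 600 MB/s per lane
        600 MB/s per lane x 4 lanes (one x4 wide port) = 2400 MB/s half duplex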

    Read the article

  • Does the Win XP/7 dual boot "missing restore points" problem apply to systems with separate hard disks for each O/S?

    - by Robert Oschler
    I'm in the process of installing Windows 7/64 on a system with Windows XP/32 already on it. During my research I read about a problem that occurs in the dual-boot scenario, where Windows XP deletes Windows 7's restore points when it accesses the Windows 7 volume: http://support.microsoft.com/kb/926185 I found a workaround, but it seems pretty painful since it appears to involve using the registry to make the Windows 7 volume appear invisible or "offline" to Windows XP, making sharing disk data between the two OSes annoying since you have to use something like an external storage device to get it done: http://www.vistax64.com/tutorials/127417-system-restore-points-stop-xp-dual-boot-delete.html Does this problem only occur with systems that have both OSes installed on the same physical hard drive (in different partitions)? In my case, I will have each OS on a completely separate physical hard drive. Any other tips would be appreciated. -- roschler

    Read the article

  • Visual Studio 2005 and Windows SDK 6.1 (Server 2008)

    - by bde
    I am trying to figure out how Visual Studio 2005 and the Windows SDK 6.1 integrate in a command line build environment (if at all). I am mostly interested in x64 development, but also just in how these two packages fit together. Is it possible/advisable to use the compiler and linker from Visual Studio 2005 and the headers/libraries from the newer Windows SDK 6.1? Also, is it possible to use the devenv command (part of VS2005) in the Windows SDK 6.1 build environment?

    Read the article

  • All Xen domU LVM volumes corrupt after reboot

    - by zcs
    I'm running a Debian Squeeze dom0, and after rebooting it all 7 of my domUs have data corruption. Each is setup as ext3 partition directly on a separate lvm2 volume. None of the lvm volumes will mount; all have bad superblocks. I've tried e2fsck with each superblock to no avail. What else can I try? Each domU has two LVM volumes connected to it, one for the disk and one for swap. The disk is mounted at root, formatted as a normal ext3 partition as a xen-blk device. The volumes are never mounted outside of the guest OS. I'm running Ubuntu 11.04 using the instructions here. I'm not sure that they didn't shutdown properly, all I know is they were corrupt after I issues a clean 'reboot' on the dom0. Here's a sample Xen config file; the rest are the same except for name, vcpus, memory, vif and disk. name = 'load1' vcpus = 2 memory = 512 vif = ['bridge=prbr0', 'bridge=eth0'] disk = ['phy:/dev/VolGroup00/load1-disk,xvda,w','phy:/dev/VolGroup00/load1-swap,xvdb,w'] #============================================================================ # Debian Installer specific variables def check_bool(name, value): value = str(value).lower() if value in ('t', 'tr', 'tru', 'true'): return True return False global var_check_with_default def var_check_with_default(default, var, val): if val: return val return default xm_vars.var('install', use='Install Debian, default: false', check=check_bool) xm_vars.var("install-method", use='Installation method to use "cdrom" or "network" (default: network)', check=lambda var, val: var_check_with_default('network', var, val)) # install-method == "network" xm_vars.var("install-mirror", use='Debian mirror to install from (default: http://archive.ubuntu.com/ubuntu)', check=lambda var, val: var_check_with_default('http://archive.ubuntu.com/ubuntu', var, val)) xm_vars.var("install-suite", use='Debian suite to install (default: natty)', check=lambda var, val: var_check_with_default('natty', var, val)) # install-method == "cdrom" xm_vars.var("install-media", use='Installation media to use (default: None)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-cdrom-device", use='Installation media to use (default: xvdd)', check=lambda var, val: var_check_with_default('xvdd', var, val)) # Common options xm_vars.var("install-arch", use='Debian mirror to install from (default: amd64)', check=lambda var, val: var_check_with_default('amd64', var, val)) xm_vars.var("install-extra", use='Extra command line options (default: None)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-installer", use='Debian installer to use (default: network uses install-mirror; cdrom uses /install.ARCH)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-kernel", use='Debian installer kernel to use (default: uses install-installer)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-ramdisk", use='Debian installer ramdisk to use (default: uses install-installer)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.check() if not xm_vars.env.get('install'): bootloader="/usr/sbin/pygrub" elif xm_vars.env['install-method'] == "network": import os.path print "Install Mirror: %s" % xm_vars.env['install-mirror'] print "Install Suite: %s" % xm_vars.env['install-suite'] if xm_vars.env['install-installer']: installer = xm_vars.env['install-installer'] else: installer = xm_vars.env['install-mirror']+"/dists/"+xm_vars.env['install-suite'] + \ 
"/main/installer-"+xm_vars.env['install-arch']+"/current/images" print "Installer: %s" % installer print print "WARNING: Installer kernel and ramdisk are not authenticated." print if xm_vars.env.get('install-kernel'): kernelurl = xm_vars.env['install-kernel'] else: kernelurl = installer + "/netboot/xen/vmlinuz" if xm_vars.env.get('install-ramdisk'): ramdiskurl = xm_vars.env['install-ramdisk'] else: ramdiskurl = installer + "/netboot/xen/initrd.gz" import urllib class MyUrlOpener(urllib.FancyURLopener): def http_error_default(self, req, fp, code, msg, hdrs): raise IOError("%s %s" % (code, msg)) urlopener = MyUrlOpener() try: print "Fetching %s" % kernelurl kernel, _ = urlopener.retrieve(kernelurl) print "Fetching %s" % ramdiskurl ramdisk, _ = urlopener.retrieve(ramdiskurl) except IOError, _: raise elif xm_vars.env['install-method'] == "cdrom": arch_path = { 'i386': "/install.386", 'amd64': "/install.amd" } if xm_vars.env['install-media']: print "Install Media: %s" % xm_vars.env['install-media'] else: raise OptionError("No installation media given.") if xm_vars.env['install-installer']: installer = xm_vars.env['install-installer'] else: installer = arch_path[xm_vars.env['install-arch']] print "Installer: %s" % installer if xm_vars.env.get('install-kernel'): kernelpath = xm_vars.env['install-kernel'] else: kernelpath = installer + "/xen/vmlinuz" if xm_vars.env.get('install-ramdisk'): ramdiskpath = xm_vars.env['install-ramdisk'] else: ramdiskpath = installer + "/xen/initrd.gz" disk.insert(0, 'file:%s,%s:cdrom,r' % (xm_vars.env['install-media'], xm_vars.env['install-cdrom-device'])) bootloader="/usr/sbin/pygrub" bootargs="--kernel=%s --ramdisk=%s" % (kernelpath, ramdiskpath) print "From CD" else: print "WARNING: Unknown install-method: %s." % xm_vars.env['install-method'] if xm_vars.env.get('install'): # Figure out command line if xm_vars.env['install-extra']: extras=[xm_vars.env['install-extra']] else: extras=[] # Reboot will just restart the installer since this file is not # reparsed, so halt and restart that way. extras.append("debian-installer/exit/always_halt=true") extras.append("--") extras.append("quiet") console="hvc0" try: if len(vfb) >= 1: console="tty0" except NameError, e: pass extras.append("console="+ console) extra = str.join(" ", extras) print "command line is \"%s\"" % extra root There are two LVM logical volumes connected to each VM. Here's the fdisk -l output for the disk volume: Disk /dev/VolGroup00/VMNAME-disk: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00029c01 Device Boot Start End Blocks Id System /dev/VolGroup00/VMNAME-disk1 1 1045 8386560 83 Linux And the swap volume: Disk /dev/VolGroup00/VMNAME-swap: 536 MB, 536870912 bytes 37 heads, 35 sectors/track, 809 cylinders Units = cylinders of 1295 * 512 = 663040 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0004faae Device Boot Start End Blocks Id System /dev/VolGroup00/VMNAME-swap1 2 809 522240 82 Linux swap / Solaris Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(1, 21, 19) Partition 1 has different physical/logical endings: phys=(65, 36, 35) logical=(808, 4, 28)

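    For what it's worth, here is a minimal, non-destructive sketch of one way to go about the superblock hunt, given that the fdisk output above shows a partition table inside each LV (so the ext3 filesystem starts at an offset into the volume). The device paths, the mapper name kpartx produces, and the superblock number are placeholders/assumptions; the -n switches keep both mke2fs and e2fsck from writing anything:

        # image the LV first so any repair attempt can be redone from a known state
        dd if=/dev/VolGroup00/load1-disk of=/backup/load1-disk.img bs=1M
        # expose the partition inside the LV as its own block device
        kpartx -av /dev/VolGroup00/load1-disk
        # print where mke2fs would have put backup superblocks (no formatting is done)
        mke2fs -n /dev/mapper/VolGroup00-load1--disk1
        # read-only fsck against one of the reported backup superblocks
        e2fsck -n -b 32768 /dev/mapper/VolGroup00-load1--disk1
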
    Read the article

  • Cross-submission robots.txt for multiple domains on single host

    - by sidd.darko
    We are running a site in multiple languages hosted in a single environment on IIS7. For example: oursite.com (English), oursite.de (German), oursite.es (Spanish). This is a single-host environment; all of these sites are in the same application space on the same physical machine. I need to do cross-submission of sitemaps via robots.txt. The sitemaps.org guidelines suggest this is possible, but their example involves different physical machines. Will the following entries in oursite.com/robots.txt work? http://www.oursite.com/sitemap-oursite-de.xml http://www.oursite.com/sitemap-oursite-es.xml
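
    For context, the cross-submission mechanism described at sitemaps.org works through Sitemap: directives placed in each domain's own robots.txt, which may point at a sitemap hosted on another host. A sketch using the file names from the question (whether every search engine honours this for a same-host setup is the open question here):

        # http://www.oursite.de/robots.txt
        Sitemap: http://www.oursite.com/sitemap-oursite-de.xml

        # http://www.oursite.es/robots.txt
        Sitemap: http://www.oursite.com/sitemap-oursite-es.xml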

    Read the article

  • How to find whether Perl is installed on a system

    - by abubacker
    I have written a Perl script that I just want to give to everyone. For that I plan to write a bash script which tests a user's environment and finds out whether that environment is capable of running the Perl script. I want to test things like: whether Perl is installed on the system; whether the Perl version is 5 or later; and whether the module JSON::Any is available. Any suggestion would be greatly appreciated :-)
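
    For reference, a minimal sketch of the kind of check described above (the module name is the one from the question; everything else is a plain POSIX shell assumption):

        #!/bin/sh
        # is perl on the PATH at all?
        if ! command -v perl >/dev/null 2>&1; then
            echo "perl is not installed" >&2; exit 1
        fi
        # $] holds the running perl version, e.g. 5.010001
        if ! perl -e 'exit($] >= 5 ? 0 : 1)'; then
            echo "perl 5 or newer is required" >&2; exit 1
        fi
        # -MJSON::Any loads the module; the command fails if it cannot be found
        if ! perl -MJSON::Any -e 1 >/dev/null 2>&1; then
            echo "the JSON::Any module is not available" >&2; exit 1
        fi
        echo "environment looks OK"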

    Read the article

  • Linux SW Raid: whole disk or per-partition?

    - by Steve Pomeroy
    I have inherited a machine which has 2 physical disks and uses Linux SW RAID(1). Both disks are partitioned and the partitions are all individual arrays (/dev/md0, /dev/md6, etc.). Those arrays are then mounted (/boot, /home, etc., even /tmp). As RAID is designed to mitigate physical failures, is there any reason why one would use this technique over whole-disk arrays that are then partitioned (perhaps using LVM)? This seems prone to more potential issues, but it may have some special properties that I haven't been able to glean. I'm planning on moving this setup to disks → SWRAID(1) → LVM, as I'll be making multiple VMs out of the one machine, but I wanted to make sure I knew what I was doing when I got rid of the old setup.
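
    For reference, a minimal sketch of the whole-disk-array-plus-LVM layout being considered (device, volume group and volume names are placeholders):

        # mirror the two whole disks
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        # put LVM on top of the array and carve out volumes for the VMs
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -L 20G -n vm1-disk vg0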

    Read the article

  • USB Thumb Drive not recognized in Hyper-V Manager

    - by Vazgen
    I have the free, standalone core Hyper-V Server 2012 running on my physical machine, and I have set up remote management from my Windows 8 client. When I create a virtual machine I would like to install the OS from a USB thumb drive, but with the drive plugged into the physical server it is not recognized in Hyper-V Manager on my client, nor is it recognized in Server Manager under File and Storage Services > Volumes. Is there a role needed to recognize external USB flash drives? I think this standalone version is just the core Hyper-V role and nothing else, but this is such basic functionality. Can anybody comment?

    Read the article

  • Scroll Wheel in Delphi 7 with CodeRush

    - by GM Mugford
    One casualty of my brief dalliance with Delphi 2010 and subsequent return to using Delphi 7 was my acceptance of the wheel-scrolling behaviour in Delphi 7 with CodeRush installed. The scroll wheel scrolls horizontally in that environment, which I've accepted for all these many years. But it sure was nice to have vertical scrolling while in D2010. Now, I wonder if there is any 'fix' to achieve the 'natural' scroll direction in my environment. Any old CodeRush/D7 users out there with a solution?

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this: Different data sets (filesystems or subtrees) can have different RAID levels so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a few number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0). If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be) Avoid out-of-kernel modules R/W or read-only COW snapshots. If it's a block-level snapshots, the filesystem should be synced and quiesced during a snapshot. Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution) Not required but nice-to-haves: Transparent compression, per-file or subtree. Even better if, like NetApps, analyzes the data first for compressibility and only compresses compressible data Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good) Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks) Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level. Space-efficient packing of small files I tried ZFS on Linux but the limitations were: Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager Write IOPS does not scale with number of devices in a raidz vdev. Cannot add disks to raidz vdevs Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks ext4 on LVM2 looks like an option except I can't tell whether I can shrink, extend, and redistribute onto new spindles RAID-type logical volumes (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves so I was wondering if there is something better out there. I did look at LVM dangers and caveats but then again, no system is perfect.

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share: [root@lnxutil1 ~]# parted -s /dev/sda unit s print Model: DELL PERC H700 (scsi) Disk /dev/sda: 134217599s Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 63s 465884s 465822s primary ext2 boot 2 465885s 134207009s 133741125s primary lvm [root@lnxutil1 ~]# parted -s /dev/sdb unit s print Model: DELL PERC H700 (scsi) Disk /dev/sdb: 5720768639s Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 5720768606s 5720768573s lvm Edit 1 Using the cfq IO scheduler (default for CentOS 5.6): # cat /sys/block/sd{a,b}/queue/scheduler noop anticipatory deadline [cfq] noop anticipatory deadline [cfq] Chunk size is the same as strip size, right? If so, then 64kB: # /opt/MegaCli -LDInfo -Lall -aALL -NoLog Adapter #0 Number of Virtual Disks: 2 Virtual Disk: 0 (target id: 0) Name:os RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:65535MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 ... physical disk info removed for brevity ... Virtual Disk: 1 (target id: 1) Name:share RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:2793344MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
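
    For what it's worth, a quick check of the starting sectors quoted above against the 64 kB stripe (65536 B / 512 B per sector = 128 sectors) is simple modular arithmetic; a non-zero remainder means the partition does not start on a stripe boundary:

        # /dev/sda2 (the LVM partition) starts at sector 465885
        echo $((465885 % 128))   # -> 93, not stripe-aligned
        # /dev/sdb1 (the NFS share) starts at sector 34
        echo $((34 % 128))       # -> 34, not stripe-aligned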

    Read the article

  • Cannot Install JDK

    - by Vince
    For the life of me, I can't install the JDK on Windows Vista. I keep getting the error, "This Software Has Already Been Installed on Your Computer. Would you like to reinstall it?" Problem is, it's evidently not on my computer, since I can't a) run Eclipse - I get "Could Not Find Java SE Runtime Environment" or b) Find any reference to Java from the command line when typing java -version - I get "Error opening registry key 'Registry/JavaSoft/Java Runtime Environment." Any ideas?

    Read the article

  • Hide non VHD partitions

    - by James
    Hey! I have two partitions on my HDD: C and D. On D: I have a VHD image (with Windows 7 Ultimate) that I use to boot from. When I'm running the OS from the VHD I can still access my physical partitions. Is it possible to dismount them, in order to see just the virtual disks in the VHD OS? I tried with the physical C: and it says that I cannot dismount a boot partition, and I think D: cannot be dismounted because the VHD is on it.

    Read the article

  • Where on disk is space allocated for new files inside an LVM LV with an ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as physical volume to that volume group and I expanded both the logical volume, and the ext4 file system therein onto that disk. This lv is used to store incremental backups using rsync and is only about 30% full, there have rarely been any files deleted from it, only incremental writes. Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate will have it, this WAS the "in an event of catastrophic failure on the primary server"-backup, the event happened, the boss is not happy, so this kinda has to work... According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new pv with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I have'n tried it yet, because it involves repairing (writing to) the file system which eliminates the possibility of trying other things if it fails. Now my question is: How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of PVs, in the order they were added to the vg? Is it striped somehow in order to increase performance/balance load? since this defective disk was added only later to an existing lvm2 vg and lv, containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to go spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair? Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (comparing addresses of used data blocks inside ext4 to address ranges that were on the missing pv, something like that, preferably easy to automate) I know bitwise-copying the entire lv into an image file before trying to repair the ext4 would probably be a good idea, but since this lv is very large and I just suffered major file system failure on several systems it is probably a luxury I don't have... Any suggestions?
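
    For context, LVM can report exactly which physical volumes and extent ranges back each logical volume, which is one way to check whether any extents were ever allocated on the failed disk before paying for recovery; the archived metadata under /etc/lvm/archive and /etc/lvm/backup records the same mapping as plain text even while the VG refuses to activate. The VG/LV and device names below are placeholders:

        # per-LV view: which PV(s) and physical extents back each segment
        lvdisplay --maps /dev/backupvg/backuplv
        # per-PV view: which logical volumes occupy which extent ranges
        pvdisplay --maps /dev/sdb1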

    Read the article

  • “Disk /dev/xvda1 doesn't contain a valid partition table”

    - by Simpanoz
    I am a newbie to EC2 and Ubuntu 11 (EC2 free tier Ubuntu). I have run the following commands: sudo mkfs -t ext4 /dev/xvdf6 sudo mkdir /db sudo vim /etc/fstab /dev/xvdf6 /db ext4 noatime,noexec,nodiratime 0 0 sudo mount /dev/xvdf6 /db fdisk -l I got the following output. Can someone tell me what I am doing wrong and how it can be rectified? Disk /dev/xvda1: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvda1 doesn't contain a valid partition table Disk /dev/xvdf6: 6442 MB, 6442450944 bytes 255 heads, 63 sectors/track, 783 cylinders, total 12582912 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvdf6 doesn't contain a valid partition table.
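
    For context, that fdisk warning is expected when a filesystem has been written straight onto a device with no partition table, which is what running mkfs on the whole of /dev/xvdf6 does (and the output above shows the root volume /dev/xvda1 in the same state). One way to see what is actually on each device is file -s; a sketch using the device names from the question:

        # reports e.g. "Linux rev 1.0 ext4 filesystem data, UUID=..." when the
        # device holds a bare filesystem rather than a partition table
        sudo file -s /dev/xvda1
        sudo file -s /dev/xvdf6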

    Read the article

  • Debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might be better asked here. We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time; however, pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine; only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs. I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize downtime. Next time it happens they will get a developer to take a look. The question is, next time it happens, what can be done to determine the cause? We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way, when the problem next occurs we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though. Should I enable remote JMX? That way the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling remote JMX in a production environment? Is there another way to see what threads are eating the CPU and to get a stack trace to see what they are doing? Any other ideas? Thanks!
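
    For reference, a minimal sketch of one common way to see which Java threads are burning CPU and what they are doing, without enabling remote JMX. It assumes the JDK's jstack tool is available on the box; <pid> and <tid> are placeholders for the JBoss java process id and the hottest thread id:

        # per-thread CPU usage inside the JBoss java process
        top -H -p <pid>
        # convert the busiest thread id to hex; thread dumps list ids as nid=0x...
        printf '%x\n' <tid>
        # dump stack traces for every thread, then match each nid against the hex tid
        jstack <pid> > /tmp/jboss-threads.txt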

    Read the article

  • Are there any FIPS-140-2 certified solutions for Linux?

    - by Mark Renouf
    I'm not even 100% certain what this involves, but my current understanding is this: use of only approved cryptographic algorithms for network traffic (easy, we use SSL and lock down the algorithms to only the really strong ones), and some form of physical data protection involving disk encryption and physical tamper-evident packaging. Obviously we're on our own if we need a tamper-proof product, but what about software for encryption? My guess is that just using LUKS (although secure) will not be certified because it's open source (the gov't seems a bit biased towards proprietary solutions here). Guardian Edge was mentioned by someone, but that appears to be completely Windows-based. So we need something like it that is certified FIPS-140 compliant and that we can use on Linux.

    Read the article
