Search Results

Search found 18424 results on 737 pages for 'load balance'.


  • Why is my new PC so slow at startup?

    - by rumtscho
    Bought a new PC this weekend, and it works really well. I only have one big problem: startup time. Its BIOS needs 62 sec to load, then from Grub start to the password entry screen it's another 26 sec. I think this is a lot, because my old PC needs 34 sec for the BIOS and another 8 sec to the password screen. After I enter the password, the desktop is usable with practically no delay on both.

    The new PC is a Core i7-930 running a 64-bit Lucid Lynx from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (I forget the clock speed) running a 32-bit Lucid Lynx from an ATA-100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD-ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD-ROM, 2. HDD, 3. floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I installed anything.

    As far as I know, the SSD alone should be enough to make a noticeable difference in boot time. I also thought that having a good mainboard in the new PC, as opposed to the basic office model in the old one, would mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I could post that too.
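
    The 26-second Grub-to-login span can at least be profiled. A minimal sketch for Lucid-era Ubuntu, assuming the bootchart package is still in the archive:

        # hedged sketch: profile one boot, then look at the chart it renders
        sudo apt-get install bootchart
        sudo reboot
        # after logging back in, the chart lands under /var/log/bootchart/
        ls /var/log/bootchart/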


  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling NGINX and the Rails application. The application itself uses about 185976 of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are few hits to the Rails app itself. My question is: how can I calculate optimal NGINX config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
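
    A back-of-envelope sizing sketch, assuming the RSS figure above is in kilobytes (~180 MB per instance) and that the Linode exposes 4 cores; every number here is an illustrative assumption, not a measurement:

        # memory budget: (2048 MB - ~250 MB for OS + nginx) / ~180 MB  =>  ~9 app instances
        passenger_max_pool_size 9;

        # nginx rule of thumb: one worker per core; rough concurrent-client
        # ceiling is worker_processes * worker_connections
        worker_processes 4;
        events {
            worker_connections 1024;   # 4 x 1024 =~ 4k concurrent connections
        }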


  • Optimizing Apache for large file serving

    - by D_Guy13
    I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2, 64-bit, 5 GB RAM, an SSD with 200 MB/s (read/write) and a dual-core CPU @ 2.1 GHz. A dump from mod_status:

        Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
        Server MPM: WinNT
        Apache Lounge VC11 Server Built: Nov 21 2013 20:13:01
        Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
        Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 1
        Server uptime: 3 hours 7 minutes 18 seconds
        Server load: -1.00 -1.00 -1.00
        Total accesses: 283025 - Total Traffic: 1172.2 GB
        25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
        62 requests currently being processed, 388 idle workers

    I am serving large .zip and .iso files using mod_xsendfile (file size range 500 MB - 1.5 GB). The setup works and is running fine, but CPU usage is very unstable: it jumps all the time between 10% and 90%, and the server goes down when it hits 100%. In that case I have to hard-restart the server. The server is outputting traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Can switching to Linux help me achieve better CPU usage?
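
    Two knobs worth checking for multi-GB downloads are whether Apache hands files to the kernel or memory-maps them into the worker. A hedged httpd.conf sketch; the values are illustrative, not tested on this box:

        # let the kernel stream static payloads instead of copying them in userspace
        EnableSendfile On
        # avoid mmap-ing multi-GB files into the worker process
        EnableMMAP Off

        <IfModule mpm_winnt_module>
            ThreadsPerChild        512
            MaxConnectionsPerChild 0
        </IfModule>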


  • Win7 Hangs During App Install/Upgrade/Uninstall

    - by JadeMason
    I have a custom-built PC that intermittently hangs when installing, uninstalling, or upgrading applications.

    Technical specs:

        ASUS P5E w/ WiFi motherboard
        Intel Core2 Quad Q6600 processor
        4x 2GB G.Skill DDR2 800 SDRAM
        ASUS EAH2900XT / Radeon HD 2900XT 512MB video card

    Under normal operation the machine runs reliably, even under heavy load such as video transcoding. The temperature never gets anywhere near where I would worry about it. However, the machine regularly hangs (complete lockup, no response to keyboard or mouse, no activity on-screen) when either installing a new application, uninstalling an existing application, or applying patches to existing applications or the OS. This is extremely frustrating, as this machine is primarily used as an HTPC. Several apps are configured for automatic updates, and these updates sometimes cause the machine to lock up while we are watching content on the PC.

    In previously investigating this issue, I found one likely culprit could be my Logitech webcam. The Logitech software has a bug that leaves an entry referencing the Temp directory in:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations

    My registry contained this error, so I uninstalled the webcam software and deleted this registry key value. Unfortunately, the machine still intermittently hangs.

    I've noticed that the hangs always happen when an install/upgrade/uninstall requires elevated privileges (presumably to modify the registry). I can typically get at least one install/upgrade/uninstall to complete after a reboot, but after that it is a game of Russian roulette to see if the operation will succeed or hang the machine. The event log is not helpful: log messages end at the time of the hang, with no record of a warning or error. My only recourse when the machine hangs in this way is a hard reset/power cycle. Any tips on how to further debug this issue are greatly appreciated.
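
    A quick way to re-check that key after each hang; a minimal sketch from an elevated command prompt (the value is simply absent when nothing is pending):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations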


  • Ubuntu inside VirtualBox is slow

    - by Kapsh
    I am running an Ubuntu instance on VirtualBox inside XP. Here are the details:

        Host: Windows XP Pro
        Guest: Ubuntu 8.10
        Total RAM: 3GB / RAM for VM: 1GB
        Total video memory: 128MB / Video memory for VM: 40MB
        Hard drive: 200GB / Hard drive for VM: 30GB
        Processor: 2.80GHz Core Duo

    The problem is that whenever I am inside the virtual machine, things seem much slower in general. For example, Firefox and Eclipse take longer to load, dragging windows shows a lag, etc. I have tried running Ubuntu before (not inside a VM) and it seemed fantastically fast, so I am disappointed to have to deal with this situation. But I need access to the XP partition without having to reboot, hence the attempt. I am surprised by the perceived slowness, since the whole world seems to be doing virtualization and I cannot imagine everyone knowingly works on slow systems. My question is: is there something I should be doing to boost performance? Am I doing something wrong? This is my home machine and I am not sure if this is the right forum to ask. Thanks.
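
    Two routine checks for a sluggish VirtualBox guest are hardware virtualization and the Guest Additions (which accelerate video and I/O inside the guest). A hedged sketch from the host command line; the VM name is a placeholder, and nested paging depends on your VirtualBox version supporting it:

        VBoxManage modifyvm "Ubuntu810" --hwvirtex on --nestedpaging on
        VBoxManage showvminfo "Ubuntu810" | findstr /i "VT-x paging"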


  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out:

    - It's not the CPU, because every time I check, the server load is less than 6 for all 3 numbers.
    - It's not the memory, because that's fairly low too, less than 50%.
    - It's not the I/O anymore, because the graphs Joyent sent back when I requested them show less than 3MB of I/O (mostly all reads).
    - It's not the SQL performance: I've profiled every last SQL command that runs, and they're all (99.9% of them, anyway) running in less than 30ms; most run in less than 5ms.

    Oh, and I've been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50ms or less (that's 1/20th of a second).

    Now, I do run a lot of ajax calls: 1 every 2 seconds per user, and I have 300+ DAU. But even if all 300 were playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky.

    The problem is highly intermittent. 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas?

    Note: this is on Joyent, VPS, intro package, 256MB of RAM with bursting. Here is the MySQL dump info:

        Traffic            Total      ø per hour
        Received           18 MiB     29 MiB
        Sent               134 MiB    221 MiB
        Total              151 MiB    251 MiB

        Connections        Total      ø per hour   %
        max. concurrent    5          ---          ---
        Failed attempts    0          0.00         0.00%
        Aborted            0          0.00         0.00%
        Total              9,418      15.59 k      100.00%
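
    With CPU, memory, I/O and SQL all ruled out, one remaining suspect is the TCP listen queue overflowing while something briefly stalls. A hedged check; the counter names differ by platform, and Joyent machines of this era were Solaris-based:

        # Linux: look for "listen queue of a socket overflowed"
        netstat -s | grep -i listen
        # Solaris/SmartOS: the equivalent counter is tcpListenDrop
        netstat -s | grep -i listendrop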


• How do I install Apache with virtual hosts on my Ubuntu 12.04?

    - by YumYumYum
    According to the docs at https://help.ubuntu.com/10.04/serverguide/httpd.html I have done the following, and this is almost exactly how I always do it in Fedora, but on Ubuntu it looks like it's not working.

    a) DNS to IP:

        $ echo "127.0.0.1 a" > /etc/hosts
        $ echo "127.0.0.1 b" > /etc/hosts

    b) Apache virtualhosts:

        $ ls
        1  2  default  default.backup  default-ssl

        $ cat 1
        <VirtualHost *:80>
          ServerName a
          ServerAlias a
          DocumentRoot /var/www/html/a/public
          <Directory /var/www/html/a/public>
            #AddDefaultCharset utf-8
            DirectoryIndex index.php
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

        $ cat 2
        <VirtualHost *:80>
          ServerName b
          ServerAlias b
          DocumentRoot /var/www/html/b/public
          <Directory /var/www/html/b/public>
            #AddDefaultCharset utf-8
            DirectoryIndex index.php
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    c) Load into Apache and restart the service:

        $ a2ensite 1
        $ a2ensite 2
        $ a2dissite default
        $ /etc/init.d/apache2 restart

    d) Browse the two new hosts:

        $ firefox http://a

    It does not work: both http://a and http://b always end up in /var/www/html. How do I fix it so that each goes to its own directory, e.g. http://a goes to /var/www/html/a/public and not /var/www/html?
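
    One detail worth flagging in step (a): ">" truncates the file, so the second echo replaces the first entry (and everything else that was in /etc/hosts). A minimal sketch that appends both names instead:

        echo "127.0.0.1 a" | sudo tee -a /etc/hosts
        echo "127.0.0.1 b" | sudo tee -a /etc/hosts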


  • Library conflict in Mac OS X

    - by Juan Medín
    I was trying to install the ImageMagick library on Mac OS X Snow Leopard. First I tried port and, after it failed, homebrew. It updated some dependencies and installed ImageMagick without problems. So far so good. The problem came when I ran Apache. I got the following error in the system log:

        07/04/11 12:55:15 org.apache.httpd[41841] httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf:
        Cannot load /opt/local/apache2/modules/libphp5.so into server:
        dlopen(/opt/local/apache2/modules/libphp5.so, 10): Library not loaded: /opt/local/lib/libpng12.0.dylib
          Referenced from: /opt/local/apache2/modules/libphp5.so
          Reason: image not found

    I checked /opt/local/lib and, surprise! I don't have libpng12.0 but libpng14.0. So, as far as I can tell, something went wrong installing the ImageMagick library. Now I can't find a way to roll back to the previous libraries, other than copying them from a backup. Do you know if there is a way to recover the previous state or reinstall Apache? Or is this just a corrupt state, meaning I must reinstall OS X?
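
    A hedged way to confirm the diagnosis before reinstalling anything: list exactly which libpng the PHP module was linked against, and what the ports tree currently provides:

        otool -L /opt/local/apache2/modules/libphp5.so | grep libpng
        ls /opt/local/lib/libpng*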


  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for the last while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

    - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... This is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side of things; items that are already on the optimizer would not be sent again. Kind of like rsync or proxy servers...

    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
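
    The simplest duct-tape version of the tunnelling piece is an SSH SOCKS proxy with compression on a small cloud VM near the backup site; a minimal sketch (the host name is a placeholder):

        # -C compresses, -D opens a local SOCKS5 proxy, -N skips a remote shell
        ssh -C -N -D 1080 user@cloud-vm.example.com
        # then point the backup client (or any proxy-aware app) at localhost:1080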


  • Apache using 100% CPU, once again

    - by CBenni
    Recently, apache2 started using 100% of CPU power (top screenshot omitted). From other, similar threads, I took the tip to use mod_status. Aside from HUGE amounts of NULL requests, it gives:

        CPU Usage: u2.16 s1.32 cu0 cs0 - .0835% CPU load
        1.2 requests/sec - 17.6 kB/second - 14.6 kB/request
        8 requests currently being processed, 42 idle workers

    The access and error logs do not show anything surprising or intriguing at all. Note the .08% CPU usage. Another tip was to use strace:

        root@server:~# strace -p 1956
        Process 1956 attached - interrupt to quit
        restart_syscall(<... resuming interrupted call ...>

    And it remains like this for at least half an hour, without producing any additional output. Restarting apache fixed the problem for less than a second. The server runs a few custom Python scripts as well as a django-powered website on apache2 (up to date), but even turning the scripts off (or not having them active in the first place) did not change anything. After I stopped apache and powered my server off, powered it on a few minutes afterwards and restarted all my services, the CPU usage remained low for several hours, just to pop up again randomly (?).

    The DigitalOcean CPU stats for my server (graph omitted) show how the CPU usage was super high for almost half a day until I restarted the bot, just to remain stable for several hours and then pop up again. I am completely at a loss and don't know what I could do to find out what piece of my code is giving me these problems, or if apache itself is the cause... Therefore I would greatly appreciate any hints on these questions: What else can I try? Which things might I not have checked? Is this definitely in my own code? How do you find what part of Python code hangs an app via an infinite loop or similar?
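
    Since the one worker traced above was simply asleep, a blunt but useful step is to sample every worker rather than a single PID; a hedged sketch:

        # 5-second syscall summary per apache2 process; a worker burning CPU
        # shows a huge syscall count, or none at all if it spins in userspace
        # (which would point back at the Python/django code)
        for pid in $(pgrep apache2); do
            echo "== PID $pid =="
            timeout -s INT 5 strace -c -p "$pid" 2>&1 | tail -n 5
        done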


• Using TrueCrypt to secure a MySQL database, any pitfalls?

    - by Saul
    The objective is to secure my database data against server theft; i.e., the server is at a business office location with normal premises locks and a burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen, the data would be unavailable as encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data in the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised - but as a sanity check, is there any good reason not to do this?

    @James: I'm thinking that in a theft scenario it's not going to be powered down nicely, so it is likely to crash any DB transactions running. But then if someone steals the server I'm going to need to rely on my off-site backup anyway.

    @tomjedrz: it's kind of all sensitive - individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, and means that almost everything in the database would need encryption... so I figured it's better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there has to be a key somewhere on the server, I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full, and then deltas only, hourly, using rdiff) into a directory that is also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect over FTPS and sync down the backup, again into a TrueCrypt-mounted partition.
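
    A hedged sketch of the dump step described above, assuming rdiff-backup and using placeholder paths and credentials; --single-transaction keeps the dump consistent without locking InnoDB tables:

        mysqldump --single-transaction -u backup -p'...' healthdb > /mnt/tcvolume/dumps/healthdb.sql
        rdiff-backup /mnt/tcvolume/dumps /mnt/tcvolume/rdiff-repo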


  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project dir on my Debian server to offer blame views.

    In my /etc/gitweb.conf:

        …
        # default logo, favicon, etc. settings
        $feature{'blame'}{'default'} = [1];
        $feature{'pickaxe'}{'default'} = [1];
        $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
        $feature{'highlight'}{'default'} = [1];
        $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

        [gitweb]
            blame = true
            snapshot = tgz, txz, zip
            patches = 256
            avatar = gravatar
        [instaweb]
            local = false
            httpd = apache2 -f
            port = 4321

    In my project's .git/config file:

        [gitweb]
            blame = true

    And yet, when I try to load a git blame view (via hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up), I get "Blame view not allowed". Doing a quick search for "Blame view not allowed" in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing.

    What am I doing wrong? Or, is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?
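
    One gitweb subtlety worth ruling out: per-repository settings like gitweb.blame are only consulted when the feature is marked overridable in gitweb.conf. A hedged sketch appending that line (Perl syntax, as in the config above):

        echo "\$feature{'blame'}{'override'} = 1;" | sudo tee -a /etc/gitweb.conf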


  • Windows 7 boot animation slows down startup by default?

    - by kngofwrld
    I just upgraded my HDD to an SSD drive. I am running a completely fresh install and enjoy the short boot time. I tweaked the startup to be as fast as I could by removing unneeded apps and such, and I am not running a solid desktop background (which causes a 30-sec startup delay). I have a 2.1GHz 64-bit laptop with 4 gigs of RAM, so it's not a liquid-cooled speed monster, but I checked some super-high-end PC boot vids on YouTube and noticed that they start up in almost the same time as my machine. I also noticed that the glowing Windows 7 animation plays all the way through no matter how fast the PC is. I turned off the animation, and the startup time is unchanged. I turned on verbose startup info and noticed that it runs until the very end, where it looks like it just sits there for no reason, waiting for something to happen for a few seconds.

    So now I think that the Windows 7 startup animation has a timer built into it that forces the computer to wait for no other reason than to play the full animation. Super-fast XP boot vids on YouTube seem to start much faster (and not just because they "have less to load"). Am I imagining things? My question is: how can I turn off not just the animation, but the timer for the animation? Here is a vid that tipped me off; I have no relation to the poster. (Warning: the soundtrack might be loud.) http://www.youtube.com/watch?v=T5LkX3xejJ4
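
    For reference, the usual switch for the Win7 boot graphics lives in the BCD; a hedged sketch from an elevated prompt (this disables the animation itself, though it may well leave the wait described above intact):

        bcdedit /set {current} bootux disabled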


  • WD Caviar Green Extremely Slow

    - by Steven
    I am encountering a really weird problem with my WD Caviar Green HDD. First of all, I have 2 HDDs in my desktop: one 160GB Seagate holding my Win7 Ultimate x64, and the problematic one, a 1.5TB WD Caviar Green for storage purposes. My problem is kind of weird: when I transfer files from my Seagate (C:) to my WD (D:), the speed is good (50-60MB/s). The problem arises when I transfer too "many" large files - the transfer speed goes straight down to kilobytes/s. After I cancel the transfer and access D:, even entering a folder requires loading for around 10 seconds.

    The problem not only arises when I am transferring files to D:; it seems like my WD can't handle much activity at all. For instance, last time I installed a game on D: and faced a lot of lag after playing for some time. When the same game is installed on C:, no problem arises. Does anyone know what the problem is?

    P.S.: There is one temporary workaround I have tried. After the "situation" occurs, I access as many folders on D: as I can and let them load; repeating this and giving it some time brings D: back to speedy transfers. However, large transfers cause the situation to happen again. Does it have something to do with cache whatsoever?
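
    One Caviar Green-specific thing worth checking: these drives park their heads very aggressively, which shows up as a rapidly climbing SMART attribute 193 (Load_Cycle_Count). A hedged check, assuming smartmontools is installed and with the device name as a placeholder:

        smartctl -A /dev/sdb | findstr /i Load_Cycle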


  • I want to dual boot Windows 8 on a Macbook Pro that doesn't already have Windows. Do I have to buy Windows twice?

    - by Cam Jackson
    My girlfriend just bought a MacBook Pro, and she wants to dual-boot OS X with Windows. Specifically, she would like to use Windows 8. What I already know is the following:

    - Windows 8 discs are only meant for upgrading from previous versions of Windows.
    - Windows 8 discs can be used to do a clean install, but (officially) only if there's already a legit version of Windows on the hard disk.
    - I've read somewhere of a disc being used to install Windows 8 on a fresh, out-of-the-box hard drive, and it all went well until the activation phase, where it said that the disc could only be used for upgrades.

    The logical conclusion would be that in my circumstance the only option is to buy a full (non-upgrade) retail copy of Windows 7, install that using Boot Camp, then load up Windows 7, insert the Windows 8 upgrade disc and do the 7-to-8 upgrade. However, I've read quite a few blog posts about people installing Windows 8 using Boot Camp (e.g. Ars Technica), which leads me to believe that it might be possible to do so without installing Win7 first. The problem is that I'm not sure if these people were using preview versions, which obviously won't have the license issues down the track. Can anyone provide a definitive answer as to how to put Win8 on a Mac?


• Terminal is not letting me enter commands unless I hit enter a bunch of times

    - by ninja08
    Whenever I open Terminal it normally allows me to immediately begin making commands. Only earlier today I did the setup for GitHub here: https://help.github.com/articles/set-up-git. And then all of a sudden Terminal won't accept commands unless I hit enter a few times. This is what it looks like:

        Last login: Fri Nov 9 11:43:28 on ttys001
        mysql.save: Permission denied
        mysql.save: Permission denied
        /Users/Nick/.zshrc:32: command not found:  .
        ~ git: ?
        ~ git: ?
        ~ git: ?

    See the big space? That's because it simply will never show the "~ git:" prompt unless I hit enter 3-4 times. Also, it never used to say "~ git:" before I did the git setup. I'm not sure what I changed. I've checked the .zshrc file and commented everything out to find the line causing the problem. I've done that, and it turns out it was:

        source $ZSH/oh-my-zsh.sh

    Within the oh-my-zsh.sh file I've commented out each block of code, starting at the top, and I've found that this block is causing it:

        # Load the theme
        if [ "$ZSH_THEME" = "random" ]
        then
          themes=($ZSH/themes/*zsh-theme)
          N=${#themes[@]}
          ((N=(RANDOM%N)+1))
          RANDOM_THEME=${themes[$N]}
          source "$RANDOM_THEME"
          echo "[oh-my-zsh] Random theme '$RANDOM_THEME' loaded..."
        else
          if [ ! "$ZSH_THEME" = "" ]
          then
            if [ -f "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme" ]
            then
              source "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme"
            else
              source "$ZSH/themes/$ZSH_THEME.zsh-theme"
            fi
          fi
        fi
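
    Instead of commenting blocks out by hand, a minimal sketch that traces a fresh interactive shell and shows exactly where startup stalls (the log path is illustrative):

        # log every command a new interactive zsh runs at startup, then exit
        zsh -i -x -c exit 2> /tmp/zsh-trace.log
        tail -n 40 /tmp/zsh-trace.log   # the hang point is where the log stops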


  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: A Firewall/Router Server infront of a Web Application Server and Database Server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean: We're now wondering about the specs of our two new machines (the Web App and Firewall servers) and whether we can get away with buying a couple of old servers. (Note: Both machines will be running Windows Server 2008 R2.) We're not too concerned about our Firewall/Router server as we're pretty sure it won't be taxed too heavily, but we are interested in our Web App server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc, etc., so I just want to focus on the general wisdom on buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U Rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on Processor Speed and Memory: Quad-Core Intel Xeon X3323 2.5Ghz (2x3M Cache) 1333Mhz FSB 16GB DDR2 667Mhz But when I was looking for a cheap second-hand machine for our Firewall/Router, I came across several machines that made our engineer ask a very reasonable question: If we stuck a boat load of RAM in this thing, wouldn't it do for the Web App Server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs: 2x Dual-Core AMD Opteron 2218 2.6Ghz (2MB Cache) 1000Mhz HT 16GB DDR2 667Mhz Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often because they want to reduce their power costs, and that a 2.6Ghz processor was still a 2.6Ghz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admin thought. Thanks for any advice.


• Win7 constant BSOD 0x7B on boot, not producing any dump files - where to go from here?

    - by prayingpantis
    So my one win 7 pc has been getting a BSOD on boot (roughly a sec after load screen starts) after a power failure. The complete stop code is 0x0000007B (0x80786B58, 0xC0000034,0x00000000,0x00000000) I've searched for quite a while now on the net and it seems like most people gave up after gettting 0x7B and no dump files. What I've tried so far: startup repair - reports it cannot repair computer automatically. BadPatch is reported somewhere in a problem signature contained in the problem details. startup repair with a WIN 7 CD - also fails, I can't recall what the error was, but it was not the same as the error produced with the start up tool shipped with the version of WIN 7 installed on my machine (I think the text had something ACL-ish contained in it) used a boot disk (Hiren's boot iso) - I used it to enable the CrashDump registry key and then after BSOD, read the HDD's dump locations but it was empty. Note, I'm quite sure the registry keys I edited are the correct ones, since the reboot on BSOD option was enabled by default and after I changed the regkey controlling this functionalitty to 0 the BSOD stayed after I booted again. check disk - works and returns no problems, also it seems I'm able to access all my files on the HDD. mem test - works and returns no errors So I'm not sure what else I can do to figure out what is the problem here. I read somewhere that you can use WINDBG to remote debug another PC, but I'm not sure if this is possible since the OS isn't even loaded yet? Also the last driver change I made on the system was installing a video driver, but I had no problems with it and were able to reboot several times until the power outage happened and the BSOD appeared. Any help or guidance for a way to DEBUG this problem would really be appreciated (I'm not really that keen to try a whole bunch of random fixes, I'd rather try and narrow down the problem first).


  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This is working fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml :
        File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but this would work only if the file has already been loaded. Can I list somewhere the loaded format data files?

    Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if(!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using:
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. The install script then force-imports the module, which fires the module's first statement again, and the second Update-FormatData call fails.
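
    On the two questions at the end, a hedged sketch that both lists what the session already has loaded and tolerates the duplicate-file error on a forced re-import:

        # Get-FormatData shows the format entries already present in the session
        Get-FormatData | Select-Object -First 5

        # swallow only the failure when force-reloading
        try {
            Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") -ErrorAction Stop
        } catch {
            Write-Verbose "Format data appears to be loaded already: $_"
        }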


  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load, and as a stop-gap we want to shift all the static content on the site to a cookieless domain, e.g. http://static.thedomain.com. The application is not well understood. So, to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com), I thought about proxying the site through nginx and rewriting the outgoing responses such that links to /images/... are rewritten as http://static.thedomain.com/images/....

    So, for example, in the response from Apache to nginx there is a blob of headers + HTML. In the HTML returned from Apache we have <img> tags that look like:

        <img src="/images/someimage.png" />

    I want to transform this to:

        <img src="http://static.thedomain.com/images/someimage.png" />

    so that the browser, upon receiving the HTML page, requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs, but nothing jumped out at me except rewriting inbound URLs.
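
    For what it's worth, nginx's stock sub module does exactly this kind of response-body rewriting. A hedged sketch, assuming nginx was built with ngx_http_sub_module; the upstream name is illustrative:

        location / {
            proxy_pass http://apache_backend;
            # ask Apache for uncompressed HTML so the filter can see it
            proxy_set_header Accept-Encoding "";
            sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
            sub_filter_once off;
        }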


  • Website hosted on my virtualbox web server not displaying images or applying css when viewed through phone

    - by WebweaverD
    I would really appreciate it if someone could help me. Please let me know if you need more info in the comments.

    My setup: I have a Windows 7 PC. On it I run a VirtualBox VM with an Ubuntu 12 guest OS and a LAMP setup. I share files between the two machines using Samba from Linux to Windows, and Windows file sharing (workgroup) the other way round. The VM is set up with a bridged network adapter and can happily serve web pages to my host machine. I use DHCP reservations on my home wireless router/modem to reserve an IP for the VM, and give it a sitename.dev entry in my Windows hosts file so I can access it at sitename.dev through the browser.

    The problem: so far so good, but I have a dev project which needs a lot of mobile template development. Now, obviously I can use a browser plugin to simulate a mobile device, but I would like to be able to see the real thing easily on my phone during development. So ideally I would like a similar setup on my iPhone to my Windows setup. I'm not great at networking and don't have much experience with web server setup. So when I typed the IP of my VirtualBox VM into my iPhone, I wasn't expecting to see anything. I was pleasantly surprised when my site loaded up. The JavaScript even seems to be running, but the images and CSS are not happening.

    My questions: 1) What is happening here - is it something to do with the bridged setup on the VM network? 2) How do I make the sites load properly through my phone?

    Notes: I've also tried another phone. The same sites viewed on live servers work fine.
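
    A hedged first check: if the page's stylesheet and image URLs are absolute URLs built on sitename.dev, the phone cannot resolve them (the hosts-file entry only exists on the Windows PC), while relative URLs and inline JavaScript would still work - which matches these symptoms. The VM's IP below is a placeholder:

        curl -s http://192.168.1.50/ | grep -Eo '(href|src)="[^"]*"' | sort -u | head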


• MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand it warns that:

        Increasing the query_cache size over 128M may reduce performance

    Here are the latest results:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51

        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I would appreciate your hints on interpreting the recommendation and optimizing the database config.
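
    A hedged reading of those numbers: a 79.9% hit rate combined with over a million prunes a day means the cache is churning, and growing it past the 128M knee mostly adds lock contention. One illustrative my.cnf compromise (values are assumptions, not measurements):

        query_cache_size  = 128M   # stay at the documented knee
        query_cache_limit = 1M     # keep single large results from evicting many small ones
        query_cache_type  = 1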


  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application, not from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy, but expensive.

    I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well-suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
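
    A minimal sketch of the startup script described above, with illustrative paths and size:

        # create the RAM-backed mount and seed it from the persistent copy
        mkdir -p /mnt/phpapp
        mount -t tmpfs -o size=512m tmpfs /mnt/phpapp
        cp -a /var/www/app/. /mnt/phpapp/
        # then point Apache's DocumentRoot at /mnt/phpapp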


• Linux networking: how to redirect incoming connections from an old server to a new server?

    - by aliz
    Hi. I'm in the process of moving from my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate internet connection with a static IP, and they are connected to each other through a local Ethernet connection. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The Shorewall firewall is installed on both servers.

    There are some outdoor devices which periodically send data to port 43597 at the old server's IP address. I can run multiple instances of the network service responsible for receiving data from the devices on one server, but only on different ports.

    Here's the question: how can I run the service on the new server and have connections coming to the old server redirected to it, while new devices can still connect to the new server's IP address, preferably on the same port and the same service - until all devices get updated to send to the new server?

    I've tried a Shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet connection, which breaks other things. I also found out about the redir utility, but haven't tried it yet. Is there any best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.
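
    The usual trick to avoid touching the new server's default route is to masquerade the forwarded traffic on the old server, so replies go back over the private link. A hedged raw-iptables sketch (Shorewall can express the same thing; the addresses are placeholders, and IP forwarding must be enabled on the old server):

        # on the old server: hand port 43597 to the new server over the LAN
        iptables -t nat -A PREROUTING  -p tcp --dport 43597 -j DNAT --to-destination 192.168.0.2:43597
        # make the old server the apparent source, so replies return via it
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 --dport 43597 -j MASQUERADE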


  • Updating ASUS BIOS on Windows 7 64bit

    - by joesavage
    I've recently decided that I want to try to utilize my two graphics cards so that I can have a dual-monitor setup. Unfortunately, Windows only seems to notice my most recent graphics card installation, and so I've been told that I should look into my BIOS and try to enable both graphics cards. I could not find this setting anywhere in my M2N68-AM Plus v0210 BIOS. After some further research I figured that I should perhaps upgrade my BIOS, so I searched and managed to download the latest version (v1804) as a ROM file. However, I am having difficulty figuring out how to install it.

    I've tried using the ASUS EZ Flash feature built into my BIOS, but when trying to load up a variety of different ROMs that are for my motherboard/BIOS I get the error:

        Boot block in file is not valid!

    I'm not totally sure what I should do to fix this, so I'm looking into other methods of upgrading my BIOS. However, I can't really find any solutions that seem to work: ASUS Update is 32-bit only, and AFUDOS doesn't appear to work on my Windows 7 64-bit system (I think it's supposed to run in DOS or something, but that just sounds confusing since I know nothing about DOS). Could anybody help me with this?

