Search Results

Search found 16174 results on 647 pages for 'disk space'.

  • COMPAQ Tower No Signal to monitor

    - by Lancelot
    I received a Compaq tower from a friend: a Compaq Presario SR1224NX, onboard VGA, Windows XP SP2. My plan was to turn this into an Ubuntu server. It booted up with no problems, even with the Ubuntu live disc. After a normal shutdown (not unplugging the power cord and not doing a hard shutdown with the power button), it would not restart even after SEVERAL attempts. I realized the light next to the power supply would flash very rapidly. I researched and found out it was likely one of two things: a dead power supply, or faulty cables to the motherboard and the disks. I checked the cables and they were fine. I purchased a new power supply (this one has 400 watts; the original had 250) and installed it. The tower was then able to boot into the live disc and everything. After a normal shutdown, it now restarts but sends no signal to my monitor. I have tried several monitors which I know work perfectly, but not with this tower (I recall that it did show a display right after replacing the power supply). The monitors are Acer. This is different from most "no signal" problems since I am not using an external video card; this is onboard VGA.

  • ExpressCache not working after Windows 8 reinstall on Samsung Series 7 Gamer

    - by Morven
    I have a Samsung Series 7 Gamer laptop which came with Windows 8. After doing a reinstall of Windows, the ExpressCache software is no longer caching. Running "eccmd -info" shows me that the software is present and it has the mSATA drive partition configured. However, it's not actually caching anything. These are the results after having the system booted for days:

        C:\windows\system32>eccmd -info
        ExpressCache Command Version 1.0.94.0
        Copyright © 2010-2012 Condusiv Technologies.
        Date Time: 11/3/2013 12:26:20:263 (JAMETHIEL #36)

        EC Cache Info
        ==================================================
        Mounted                : Yes
        Partition Size         : 7.46 GB
        Reserved Size          : 3.00 MB
        Volume Size            : 7.46 GB
        Total Used Size        : 86.50 MB
        Total Free Space       : 7.38 GB
        Used Data Size         : 16.63 MB
        Used Data Size on Disk : 84.38 MB

        Tiered Cache Stats
        ==================================================
        Memory in use : 32.00 MB
        Blocks in use : 136
        Read Percent  : 0.02%

        Cache Stats
        ==================================================
        Cache Volume Drive Number : 1
        Total Read Count          : 97242
        Total Read Size           : 4.13 GB
        Total Cache Read Count    : 0
        Total Cache Read Size     : 595.50 KB
        Total Write Count         : 161546
        Total Write Size          : 5.89 GB
        Total Cache Write Count   : 0
        Total Cache Write Size    : 0 Bytes
        Cache Read Percent        : 0.01%
        Cache Write Percent       : 0.00%

    As you can see on the last two lines, the cache read and write percentages are nigh on zero. Anyone know where to look next? The only guides I can find deal with ExpressCache not being present or not having a configured drive.

  • IP queue buffer

    - by summerbulb
    I seem to have an issue with ip_queue. I have a Linux machine that I am using to run some experiments. The machine is configured as a router, with two NICs, connecting two other computers and managing their network traffic. All incoming packets are captured using iptables and analyzed by a C application. The application analyzing the packets has a built-in delay, as part of the experiment. So I have one very fast computer sending packets through my Linux router, and a (relatively) slow Linux router that analyses and deals with the packets one by one. This means that when I fire up a sender application on one of the computers connected to the router, the IP queue on the router fills up (almost) instantaneously. The IP queue's max length is currently set to 1024, and if it overflows, the packets are dropped. This is expected and I'm OK with it. But (and this is where it gets interesting) every now and then I get the following error: "Failed to receive netlink message: No buffer space available". At first I thought this was due to the IP queue overflowing, but after some analysis I found that sometimes I get the error even if the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow. When I run cat /proc/net/ip_queue, I get the following table (which I also use to monitor the IP queue overflow):

        Peer PID          : 27389
        Copy mode         : 2
        Copy range        : 65535
        Queue length      : 0
        Queue max. length : 1024
        Queue dropped     : 1166875
        Netlink dropped   : 2916

    Looking at the last two values, Queue dropped seems to refer to packets that did not manage to get into the IP queue because the buffer was full; I can see this value rise as I bombard the router. Netlink dropped (as its name implies :) ) seems to have to do with the error I'm getting. I did my best to search for material on this error, but wasn't able to find anything that pointed me in the right direction. Bottom line: why am I getting this error and what can I do to avoid it?
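
    For what it's worth, "No buffer space available" on a netlink socket typically means the socket's receive buffer overflowed before the userspace application drained it, which appears to be exactly what the Netlink dropped counter is counting, and that can happen independently of the ip_queue length. Below is a minimal sketch (assuming root and a kernel exposing the usual /proc/sys/net/core knobs) of raising the default and maximum socket receive buffers; the 4 MB figure is only an assumed starting point, and the C application should ideally also request the larger buffer on its own netlink socket via setsockopt(SOL_SOCKET, SO_RCVBUF, ...).

        # Sketch: enlarge the kernel's socket receive buffer limits so the
        # netlink socket feeding ip_queue can absorb bursts (run as root).
        NEW_SIZE = 4 * 1024 * 1024   # assumed value; tune for your burst size

        for knob in ("/proc/sys/net/core/rmem_default",
                     "/proc/sys/net/core/rmem_max"):
            with open(knob) as f:
                current = int(f.read())
            if current < NEW_SIZE:
                with open(knob, "w") as f:
                    f.write(str(NEW_SIZE))

        # The C application can then ask for the bigger buffer itself, e.g.:
        #   int sz = 4 * 1024 * 1024;
        #   setsockopt(nl_fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz));

    Raising rmem_default matters because, as far as I know, the netlink socket is created with the default size unless the application overrides it.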

  • Windows 7 boots to black screen with blinking cursor

    - by murgatroid99
    I have an Alienware M17x that dual-boots into Ubuntu 11.04 and Windows 7 Home Premium. Currently, the computer starts at the GRUB loader and will boot into Ubuntu, but if I try to boot into Windows, I immediately get a black screen with a blinking cursor in the upper left corner. The output of fdisk -l is:

        Device        Boot    Start      End       Blocks   Id  System
        /dev/dm-0p1                1        5        40131   de  Dell Utility
        /dev/dm-0p2                6     1918     15360000    7  HPFS/NTFS
        /dev/dm-0p3     *       1918    64772    504878877+   7  HPFS/NTFS
        /dev/dm-0p4            64772    77827    104858625    5  Extended
        /dev/dm-0p5            64772    67204     19531008   83  Linux
        /dev/dm-0p6            67204    74498     58593536   83  Linux
        /dev/dm-0p7            74498    77577     24731648   83  Linux
        /dev/dm-0p8            77578    77827      2000128   82  Linux swap / Solaris

    (fdisk also warns that partitions 1 through 4 do not start on a physical sector boundary.) I have used the Windows rescue CD and run the automatic error fixer until it finds no errors. I have run chkdsk /R on both the main Windows 7 partition (/dev/dm-0p3) and the recovery partition (/dev/dm-0p2). I set the main Windows 7 partition to be active. I also tried running the following in the recovery console:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    None of these helped, and the last set of commands deletes GRUB, which I then have to reinstall from Ubuntu. I think the last thing I did in Windows before this started was install the newest ATI driver for my video card. This would suggest using System Restore, and I actually had a restore point earlier (after the problem started), but after whatever I did, that restore point no longer appears in the list on the recovery disk, so I cannot do a system restore. Is there anything else I can try to make Windows boot properly again? Edit: Running the suggested commands

        bootsect /nt60 c:
        bcdboot c:\windows /s c:

    was also ineffective.

  • Windows 7, going crazy with environment variables

    - by roymustang86
    So, I am trying to learn Java. I installed the JDK and proceeded to write a few programs. Each time, I have to give the full path to javac.exe to compile the .java file. So I decided to tweak the %PATH% variable, and no matter what I change it to, it doesn't work. When I do an echo %PATH%, I get: 'Program' is not recognized as an internal or external command, operable program or batch file. These are the contents of my PATH variable:

        C:\app\product\11.1.0\client_1\bin;%CommonProgramFiles%\Microsoft Shared\Windows Live;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;"C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\";"C:\Program Files\Broadcom\Broadcom 802.11";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\12.0\DLLShared\";"C:\Program Files (x86)\Roxio\OEM\AudioCore\";"C:\Program Files (x86)\Intel\Services\IPT\"

    How do I work around this? The double quotes were not there before; I added them thinking the spaces were the problem.
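
    The quotes are the likely culprit: entries inside the PATH variable do not need quoting (spaces are fine there), and a stray or unbalanced double quote commonly produces exactly this kind of "... is not recognized" error when the variable is expanded. The usual fix for javac is simply to append the JDK's bin directory to the end of PATH, unquoted (for example ;C:\Program Files\Java\jdk1.6.0_45\bin - the version path here is an assumption). A small diagnostic sketch, purely illustrative, that prints each PATH entry and flags quoted or missing directories:

        import os

        # Diagnostic sketch: list every PATH entry and flag likely problems.
        for entry in os.environ.get("PATH", "").split(os.pathsep):
            problems = []
            if '"' in entry:
                problems.append("contains quotes")      # unnecessary inside PATH
            if entry and not os.path.isdir(entry.strip('"')):
                problems.append("directory not found")  # dead or mistyped entry
            print(f"{entry}  ->  {', '.join(problems) or 'ok'}")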

  • Virtualization in Ubuntu 9.10

    - by Jeff Dege
    I have an existing CentOS 5 installation. I would like to upgrade to Ubuntu. Thing is, I don't want to be down for as long as it will take to get my entire environment moved over - software installed, connectivity configured, etc. I'd like to take it one step at a time. But I don't really want to keep rebooting back and forth from the new OS to the old OS; that's what I did the last time I moved to a new OS, and it got old really fast. So, since my new motherboard is virtualization-ready (AMD Phenom II 945 quad-core), I figured I could create a virtual machine, under the new OS installation, that runs the old OS installation. The problem is that the documentation I've been able to find has been pretty sparse. I've found a lot of possibilities, and little information on which of them would be capable of doing what I want. I have a new Ubuntu 9.10 installation, and a second disk containing the CentOS 5 installation. And I don't know where to go next. Any help would be appreciated.
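
    One approach worth sketching (untested on this exact setup): VirtualBox can wrap an existing physical disk in a raw-disk VMDK, so the CentOS 5 installation on the second disk can be booted as a guest without copying it. Assuming the CentOS disk is /dev/sdb (device name and paths below are assumptions), something like this, run on the Ubuntu 9.10 host, creates the wrapper:

        import subprocess

        DISK = "/dev/sdb"                   # assumed: the disk holding CentOS 5
        VMDK = "/home/jeff/centos5.vmdk"    # assumed output path for the wrapper

        # Wrap the physical disk in a raw-disk VMDK; the file is tiny because it
        # only describes the disk rather than copying it. Needs read/write access
        # to DISK, so run as root or fix the device permissions first.
        subprocess.check_call([
            "VBoxManage", "internalcommands", "createrawvmdk",
            "-filename", VMDK, "-rawdisk", DISK,
        ])

    The resulting centos5.vmdk can then be attached as the hard disk of a new VM in the VirtualBox GUI. KVM is the other obvious candidate on that CPU, since the Phenom II has AMD-V and can boot a physical disk directly as well.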

  • I have enabled hidden administrator in Win 7 Home, but programs still don't work

    - by Angela
    I have Windows 7 Home Premium, and would like to do some maintenance tasks such as running Disk Defragmenter. However, this and other programs and applications that I'm accustomed to using are now blocked. For these programs, there is a shield icon next to their icons and nothing happens when I click on them. I notice that the screen blinks slightly, but I do not get prompted for a password and the program still does not run. It seems these programs may only be accessible through an Administrator account. However, right-clicking and selecting "Run As Administrator" does not work. After some research, I found a way to enable the hidden built-in Administrator account. I booted the computer into safe mode. In the command prompt, I typed net user administrator /active:yes. I gave the account a password. I rebooted the system. There is now an Administrator account on the home screen. However, the locked programs behave no differently for me when I use this account. What could cause this problem? How can I fix it?

  • Win7 System folder contains infinitely looping SYSTEM(!) directory

    - by Matt
    My Windows 7 Enterprise computer has been crashing fairly frequently recently, so I decided to boot up in safe mode and run the TrendMicro client I have installed. It froze about 10 minutes into the full system scan, so in the spirit of http://whathaveyoutried.com, I started scanning each folder individually. When I got to ProgramData, the AV failed with an uncaught exception. I then went down a level and tried scanning Application Data, which failed as well. Imagine my surprise when I opened the folder only to see the same folder again! As far as I can tell, this folder loop continues indefinitely. (If you are trying to recreate this, keep in mind that ProgramData is a hidden folder.) I'm actually a bit concerned that these are system folders, as this is a brand-new computer with a clean installation. I guess I have three questions: (1) Has anyone else seen/experienced this before? I'm running Win7 SP1. (2) How do I fix this? I've run CHKDSK /F with no success (although it was incredibly slow). (3) What are the ramifications of an infinitely recursive directory? Theoretically speaking, each link takes up space on disk, so shouldn't I have no space available on my hard drive? (I've got about 180GB left.) I noticed that the tree view on the left only shows the "linked folder" icon on the deeper folders - does this mean anything special? (I've circled the icons, or lack thereof, in red.) How can the OS even resolve this aberration? And above all, what would happen if I were to select "Expand all folders"??? :P Matt
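
    This loop is almost certainly one of the hidden NTFS junctions Windows Vista/7 create for backward compatibility: C:\ProgramData\Application Data is a junction that points back at C:\ProgramData itself, so following it recurses forever while consuming no real disk space (a junction is only a pointer, not a copy, which is why the free space is unaffected). A short sketch (Python 3.5+ on Windows, run from an elevated prompt if needed) that lists the reparse points in a folder so they can be excluded from the AV scan:

        import os
        import stat

        def list_reparse_points(folder):
            """Print the entries of `folder` that are NTFS reparse points
            (junctions or symlinks) rather than real directories."""
            for entry in os.scandir(folder):
                attrs = entry.stat(follow_symlinks=False).st_file_attributes
                if attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT:
                    print("reparse point:", entry.path)

        list_reparse_points(r"C:\ProgramData")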

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and would like some DBA advice. We are starting to get performance problems with a MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (though not always). The main source of the issues is surely in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60 GB in one file. The main table (Order) has 2.1 million rows and 215 columns (but none of them is huge). We have an integer as the PK, so it should be OK to define a partition function. Will we win something with partitioning? Will partitioned indexes buy us something? Here are some more facts about the DB and the table:

        database_name   database_size   unallocated space
        My_base         57173.06 MB     79.74 MB

        reserved         data            index_size      unused
        29 444 808 KB    26 577 320 KB   2 845 232 KB    22 256 KB

        name    rows        reserved       data           index_size     unused
        Order   2 097 626   4 403 832 KB   2 756 064 KB   1 646 080 KB   1688 KB

    Thanks for any advice. Dom
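
    A quick back-of-the-envelope check, using only the figures quoted above, gives a feel for the table's size per row (a sketch, nothing more):

        # Rough per-row sizes for the Order table, from the posted figures.
        rows     = 2_097_626
        data_kb  = 2_756_064
        index_kb = 1_646_080

        print(f"data per row:       {data_kb * 1024 / rows:7.0f} bytes")
        print(f"data+index per row: {(data_kb + index_kb) * 1024 / rows:7.0f} bytes")

    That works out to roughly 1.3 KB of data per row, so the table itself is fairly modest; that is worth keeping in mind when weighing partitioning against query and index tuning.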

  • set up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's call it) a low-cost Ra*san to host the images for our social site. There are many millions of them, and we keep 5 sizes of every photo, at roughly 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24 consumer 240 GB SSDs in RAID 6, which would give me some 5 TB of disk space for the photo storage. To have HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As access is mostly read-only and we rarely delete photos, I am thinking of going with consumer MLC SSDs. I have read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea? I'm not sure between RAID 6 and RAID 10 (more IOPS vs. SSD cost). Is ext4 OK for the filesystem? Would you use 1 or 2 RAID controllers, with an expander backplane? If anyone has built something similar, I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives. They will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If anyone is wondering about benchmarks, let me know what you would like to see.
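
    On the RAID 6 vs RAID 10 question, a rough way to compare them is the classic write-penalty arithmetic. The sketch below is only an estimate; the per-drive IOPS figures are assumptions for a consumer SATA SSD, not measurements:

        # Back-of-the-envelope array IOPS for a mostly-read photo workload.
        drives          = 24
        read_iops_each  = 20_000   # assumed 4K random read IOPS per consumer SSD
        write_iops_each = 5_000    # assumed steady-state 4K random write IOPS

        def array_iops(read_fraction, write_penalty):
            reads  = drives * read_iops_each
            writes = drives * write_iops_each / write_penalty
            # weighted harmonic mean of the two rates for a mixed workload
            return 1 / (read_fraction / reads + (1 - read_fraction) / writes)

        for name, penalty in (("RAID 10", 2), ("RAID 6", 6)):
            print(f"{name}: ~{array_iops(0.95, penalty):,.0f} IOPS at 95% reads")

    Under these assumptions both layouts clear the 150,000 IOPS target for a read-heavy mix, so controller throughput and rebuild times may matter more than the RAID level; the gap only widens in RAID 10's favour as the write share grows.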

  • How to increase the speed between two external hard drives on my laptop?

    - by Roman
    Hello, I own a Sony Vaio Z laptop with two external USB ports. It's quite new and has USB 2.0 support. I'm using Vista x64 on it. I also have two external USB hard drives, an Iomega 500 GB and a WD 1 TB; each has USB 2.0 support. I connect both devices to my laptop and try to copy data from one hard drive to the other. But it takes a lot of time! The speed is about 15 megabytes per second, and I have to wait far too long to copy all the information from one drive to the other. When I copy information from my internal (SSD) hard drive, it works fine for both external drives: the speed is very high, something like 100 megabytes per second. That makes me think USB 2.0 is OK on both drives. But when I try to copy from one external drive to the other, I still get very low speed. I checked Device Manager and here are the settings I have (sorry, I can't upload an image because of my rating; check this URL: http://picbite.com/image/122073daljo/ ). I think it's because my two external drives use the same USB 2.0 controller. Is there any way to make it work faster? Is it possible to move one of my USB ports to another USB 2.0 controller? Or is there any software which can help me automate copying all the files through my internal drive? I have only about 3 gigabytes of free space on the internal drive, and it's quite tedious to manually move every file from one external drive to the internal one and then on to the other external drive.
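
    On the automation part of the question, here is a rough sketch (the paths are placeholders) that copies each file from one external drive to a staging folder on the internal SSD and then moves it on to the other external drive, so no more than one file at a time has to fit in the ~3 GB of free internal space. Whether this is actually faster depends on whether the two externals really share one USB controller, but it at least automates the manual shuffling:

        import os
        import shutil

        SRC   = r"F:\data"           # assumed: folder on the source external drive
        DST   = r"G:\data"           # assumed: folder on the destination external
        STAGE = r"C:\copy_stage"     # staging folder on the internal SSD

        os.makedirs(STAGE, exist_ok=True)
        for dirpath, dirnames, filenames in os.walk(SRC):
            rel = os.path.relpath(dirpath, SRC)
            os.makedirs(os.path.join(DST, rel), exist_ok=True)
            for name in filenames:
                staged = os.path.join(STAGE, name)
                shutil.copy2(os.path.join(dirpath, name), staged)  # external -> SSD
                shutil.move(staged, os.path.join(DST, rel, name))  # SSD -> external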

  • Varnish, Nginx, Apache, APC, Meteor, Cpanel & Wordpress On A Single Server, Any Good?

    - by Aahan
    Yes, I have read many similar questions, but I need a specific answer, hence this question. First, these are my new server specifications: Linux server (CentOS), Intel Xeon 3470 quad-core (2.93 GHz x 4) processor, 4 GB DDR3 memory, 1 TB hard disk space, 10 TB bandwidth and 9 dedicated IPs. AIM: to speed up my WordPress blog and increase the server's capacity to handle heavy load. PLAN: this is how I am planning to set up my server:

        VARNISH (in front, to cache server responses)
        NGINX (to effectively handle static content and overcome the C10k problem)
        APACHE (behind Nginx, to effectively deliver dynamic content)
        APC (PHP page, database and object caching)
        CPANEL (which requires Apache, and I require it)
        WORDPRESS
        W3 TOTAL CACHE (caching plugin for WordPress)

    So, will the setup work? Has anyone tried it? Please share your thoughts and knowledge. NOTE: I can't do without Apache because I am used to the .htaccess and cPanel stuff, so dropping it is not an option. Everything else is an option. Please try to help. I hope I am clear in what I wanted to ask.

  • Extract a section of a tgz file

    - by TRiG
    I have a 28.5 GB .tgz file which was created on the command line of a Linux computer, compressing one folder and all its many many subfolders. I now want to extract a single sub-sub folder from that .tgz file, using 7zip on Windows Vista. I can't see a way to do it. Opening the .tgz file in 7zip just shows the .tar file inside it. There doesn't seem to be any way to browse that .tar file and extract the section I want. I assume there is a way to do this, but I can't see it. Simply double-clicking on the .tar file brings up a progress bar which runs slowly till my computer complains it's running out of space; I imagine it's trying to extract the whole thing. Searching for "extract section of tgz" and "extract tgz subfolder" and similar found me a way to do it on the Linux command line, but no obvious way to do it on Windows. (Most results found were about extracting into a subfolder, not extracting a subfolder out of the archive.)
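
    If 7-Zip will not cooperate, a plain Python install on the Vista machine can do the extraction in one streaming pass without unpacking the whole archive first; the sketch below assumes the wanted folder's path inside the archive is known (the names here are placeholders):

        import tarfile

        ARCHIVE   = "backup.tgz"              # assumed archive name
        SUBFOLDER = "topfolder/sub/wanted/"   # assumed path inside the archive

        # Iterating streams through the archive once; only matching members are
        # written out, so no temp space is needed for the other ~28 GB.
        with tarfile.open(ARCHIVE, "r:gz") as tar:
            for member in tar:
                if member.name.startswith(SUBFOLDER):
                    tar.extract(member, path="extracted")

    Reading through 28.5 GB of gzip data still takes a while, but it costs only time, not disk space.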

  • Can I connect a Playstation 3's HDMI output to my monitor's DVI-D input? [migrated]

    - by HankJDoomstorm
    I'm attempting to connect my Playstation 3 to my computer monitor. The monitor has a DVI-D (dual link) input, so before distinguishing between the different DVI varieties, I bought a DVI-I (dual link) to HDMI converter that won't fit into the port on the monitor (not only that, there isn't enough physical space in the back of the monitor to fit that much stuff before it hits the bottom of it). So I grabbed a DVI-D (single link) cable and got a female-to-female DVI-I coupler, and plugged the DVI-D cable into the monitor and the whole mess of converters. The end result was HDMI to DVI-D single link, but my monitor isn't receiving a signal on its digital channel. (For clarity's sake: DVI-D DL input on Monitor, DVI-D SL cable, DVI-I DL female-to-female coupler, DVI-I DL to HDMI converter, HDMI output on PS3) I don't know much about this stuff (obviously), but my educated guess is that the bandwidth of the PS3 is too high for the DVI-D Single Link cable, so nothing's getting through. Will replacing the single link cable with dual link resolve this? If not, is it possible at all? Oh, I should mention I'm aware I won't get audio through the monitor. I have an RCA to 3.5mm converter for that.

  • compile ntp without ssl

    - by Zulakis
    I need to deploy ntp to a very space-critical PXE imaging system. (Yes, each KB matters.) The footprint needs to be as small as possible, so I want to compile ntp without linking OpenSSL. According to the manual this should be possible:

        If available, the OpenSSL library from http://www.openssl.org is used to
        support public key cryptography. The library must be built and installed
        prior to building NTP. The procedures for doing that are included in the
        OpenSSL documentation. The library is found during the normal NTP
        configure phase and the interface routines compiled automatically. Only
        the libcrypto.a library file and openssl header files are needed. If the
        library is not available or disabled, this step is not required.

    I have already tried ./configure --without-openssl; however, this didn't help. This is my ldd output:

        ldd ntpd/ntpd
            linux-gate.so.1 =>  (0xb7706000)
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
            libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
            librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
            /lib/ld-linux.so.2 (0xb7707000)
            libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
            libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
            libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)

    The system I am compiling on is 32-bit Debian Lenny using OpenSSL 0.9.8g-15+lenny16. What is the correct configure option to compile ntp without OpenSSL?

  • Loads of memory in "standby" on Windows Server 2008 R2

    - by Jaap
    In our SharePoint farm, our web front-end servers all have loads of memory in "standby" mode, meaning very little is available for our IIS worker process. We have 32 GB of RAM in each of the boxes, and standby memory will creep up to about 28 GB, whereas the IIS worker process only seems to be using about 2 GB. Also, we've seen the machine use the swap file extensively while this memory was in standby, so I am starting to think that this memory in standby mode is stopping IIS from using it, forcing it to swap to disk and causing more performance problems. I used Sysinternals RamMap to identify what is being kept in memory, and it was able to tell me that almost everything in standby memory is of type "Mapped File". When I sort the files listed under the File Summary tab in RamMap by file size, the largest files (around a few hundred MB each) are IIS log files and SharePoint log files. I would like to understand which process is loading these files into standby memory and why they are not being released. When I do an iisreset, it does not release the memory. Any ideas? Thanks!

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows restore points that save my personal files and partitions, too. It sounds like disk imaging might be the ticket, but creating images is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the cut-and-dried restore point approach. Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect; however, a quick skim through the user forums shows more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives, would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-level RAID arrays. I would be using DRBD with a quorum disk on a third machine, and would dedicate two 1 Gbit NICs on each server to the DRBD communication. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources: one for the virtual machines that normally run on server A and one for the virtual machines that normally run on server B. This is because of the Primary/Primary state in which this configuration runs: if both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that it would allow more flexibility in moving machines back and forth.

  • effective back-up using Raid / Win7 back-up

    - by Job
    I have a stand-alone PC with two 2 TB hard discs, configured as RAID 1 (i.e. mirroring). The operational drive is partitioned. I use an external 1 TB hard disc for backups with the Windows 7 backup facility; the external disc is swapped weekly and stored at other premises. I back up all partitions AND allow a system backup. All application software is on the C: partition. Questions: (1) How can I see whether RAID 1 is working, i.e. doing its job? All I see now is a status message during start-up that says its status is normal. (2) How can I see used or available space on the RAID 1 volume? (3) The Windows 7 backup allows only one schedule as far as I can see. I want daily backups of my data, but due to the single schedule I am forced to do the time-consuming system backup and C: backup as well. Is there a way to set up two schedules, allowing a frequent (daily) data backup plus a system backup with C: drive backup on, say, a weekly basis? Of course it can be done by hand, but I am likely to forget that. I am not the programming type of person, so I am looking for simple and controllable solutions. Thank you - any help is appreciated.

  • What USB key would you recommend using for running a Windows 7 VM off of?

    - by Darryl Hein
    Because I can't find a good PHP editor for OS X, I develop in Windows with PhpEd. At the moment, my development time is split between a desktop and a laptop. To partially solve the problem of having two different environments, I have installed a virtual machine (through VirtualBox) and put the hard drive file on an external hard drive. At the moment I've been connecting it through FireWire 800. I have two problems with this setup: (1) the hard drive is fairly large, so to carry the laptop and the hard drive I pretty much require a backpack; (2) the hard drive requires quite a bit of power and therefore reduces the battery life (by about 40%). My thought is to move the VM hard drive onto a USB key. I realize it will be slower, but as I'm just using it for PHP development, there isn't a lot of disk activity in the VM. The only really intense time is boot-up; otherwise it just about sits idle. Does anyone have any suggestions on a USB key to use for the VM? It would need to be a minimum of 32 GB.

  • ESXi 5 Guests will not boot

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server. Note: investigation eventually determined that the problem was with the VMware client on a Lenovo Edge laptop, the only Windows box available in a Linux IT shop. vSphere Client v4 and v5 duplicated the behavior on the Lenovo Edge. As indicated in the comment to the accepted answer, replacing the workstation with one using different video was the "fix" for this particular issue. The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560 MB and drops down to 40 MB after a few seconds. Initial CPU usage is one full CPU (3000 MHz per the chart) and it immediately drops down to 29 MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. There are no errors in the Events tab. Switching a guest from BIOS to EFI makes no difference. The VMs are listed as Version 7 and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. The logs do not contain anything particularly suspicious. Edit: No firewalls, routers, or VLANs in between the client and server. Edit 2: We have tried the option to force the guest into the BIOS setup screen at the next boot in the guest settings; it was not successful. Edit 3: 500 GB datastore with one 40 GB VM on it. Plenty of space. Edit 4: Guests copied from my old ESXi 4 server DO NOT boot on the ESXi 5 system. Initially it complains about too little video RAM being configured for the default 2500x1600, but it still doesn't work properly even after I bump the video RAM settings or switch it to Auto-Detect.

  • Windows Server 2008 - inexplicable system time jumps/glitches/inaccuracies

    - by Nathan Ridley
    I'm running a production web server on Windows Server 2008. On this server I have a database which logs certain user actions, but every now and again I inexplicably get database entries which, according to the record ID and the records immediately before and after, have the wrong time logged against them (7+ days too old). For example, record ID 1001 will be for Dec 7, 11:00pm, 1002 will be for Dec 7, 11:01pm, then 1003 will be for Nov 28, 1:38am, and then the next will be back on track again. The problem seems to occur in random records (or 2-3 records in a row) and crops up once every few days. This is absolutely baffling, because there is only one place in the application that assigns this date/time value, and it's simply the system UTC date. I have been synchronizing the system time to time-a.nist.gov (which I read in another article is a bit more reliable than the default time.windows.com) and it still occasionally drifts out of sync anyway (by 3-4 minutes), but I'm speculating that occasionally the time server has a temporary glitch where the date changes to a drastically wrong value for a short space of time, then changes back. Either that, or the motherboard clock battery is failing and the reason the time momentarily changes is that the motherboard loses the time and the time synchronization then puts it back again. Could either of my suspicions be right? Should I turn off time synchronization for a production server? Assigning dates to an event log where the dates are up to two weeks prior to the actual date is a severe problem I can't have when the next version of my application is released. Any suggestions or advice would be appreciated.
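
    One way to narrow it down is to log the box's offset from the time source right alongside the application's own records, so a bad timestamp can be correlated with (or ruled out against) a real clock jump. A minimal sketch of querying an NTP server directly, SNTP-style with no external libraries (the server name is simply the one already in use above):

        import socket
        import struct
        import time

        NTP_SERVER = "time-a.nist.gov"
        NTP_DELTA  = 2208988800          # seconds between the 1900 and 1970 epochs

        def ntp_offset():
            """Return (server time - local time) in seconds."""
            packet = b"\x1b" + 47 * b"\0"          # SNTP v3 client request
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(5)
                s.sendto(packet, (NTP_SERVER, 123))
                data, _ = s.recvfrom(512)
            transmit = struct.unpack("!12I", data[:48])[10] - NTP_DELTA
            return transmit - time.time()

        print(f"offset vs {NTP_SERVER}: {ntp_offset():+.3f} s")

    If the logged offsets stay within a few seconds while the bad rows keep appearing, the clock (and the CMOS battery) is probably not the culprit.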

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately: Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :) Finally, several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSDs based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSDs cost around $10-$20 per gigabyte, and they are relatively small (64 GB). So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSDs (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSDs and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
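
    To put rough numbers on the "few thousand dollars" claim, here is a quick sizing sketch using only the figures from the post (64 GB drives at $10-$20/GB, RAID 5 losing one drive's worth of capacity to parity); everything here is an assumption-level estimate, not a quote:

        # Usable capacity and cost of a RAID 5 set built from 64 GB SLC SSDs.
        DRIVE_GB     = 64
        PRICE_PER_GB = 15      # midpoint of the $10-$20/GB range quoted above

        for n_drives in (6, 8, 12, 16):
            usable = (n_drives - 1) * DRIVE_GB      # one drive's worth of parity
            cost   = n_drives * DRIVE_GB * PRICE_PER_GB
            print(f"{n_drives:2d} drives: {usable:4d} GB usable, ~${cost:,.0f}")

    At those prices a $10K ceiling buys roughly 450 GB of usable space, before a controller and spares are counted.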

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to be from the file: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale: My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions. Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.
