Search Results

Search found 7671 results on 307 pages for 'slow browsing'.

  • PowerPoint '10 avoid animation completion on click & advance slide or start new one

    - by ScottS
    Scenario: I have PowerPoint 2010. On the "Transitions" tab the "Advance Slide On Mouse Click" check box is checked. I have a long, slow, timed, non-repeating animation working in the background of the slide. I click to advance the slide before the animation is finished, but ... instead of advancing the slide, the animation moves to the completed state ... forcing a second click to actually advance the slide.
    Additionally: If I have other animations on the slide that are initiated by a click, the long animation also advances to a finished state before starting the new animation.
    Desired behavior: On click, I want the slide to advance or the next on-click animation to start whether the long animation is done or not, and without having that long animation first "complete" itself. In the case of another animation, I simply want the long animation to continue, while also doing the new animation.
    Ultimate question: Is there a way to either:
      - set an option somewhere to not have that animation complete on click and simply "continue" to animate with the start of a new animation or to advance the slide (as the case may be)? or
      - create a VBA script that will produce the desired behavior for the long animation?

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up on a 1U Ubuntu server running iSCSI-Target with two 300GB drives in RAID-0. We are then using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We have only a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start out ok, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, sure enough the load is 7+, but the CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures. What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load: 1) output of iotop on the SAN; 2) output of top on the SAN.
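
    A load average far above the CPU usage usually means processes are stuck in uninterruptible I/O wait rather than burning CPU. As a rough diagnostic sketch (mine, not from the question, and assuming the sysstat package is installed on the SAN), something like this run during the compile would show whether the RAID-0 array behind the iSCSI target is the bottleneck:

        #!/bin/bash
        # Run on the SAN while the VM is compiling.
        # Per-device utilisation, await and queue sizes, sampled every 2 seconds.
        iostat -x 2 10
        # Processes blocked on I/O ('b' column) and CPU iowait ('wa' column).
        vmstat 2 10
        # Load average for reference.
        cat /proc/loadavg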

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core i7-860 system, i.e., a socket 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to socket 1366 boards where 6 banks / triple channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also, availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the e-shop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance the compiler's performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time:
      - Visual Studio or Eclipse (both should be fine with ~2GB RAM; beyond that I feel their performance is I/O bound)
      - Firefox (it can never have enough RAM, can it? :)
      - Office (~500MB RAM should be enough)
      - ... and then some smaller apps like Skype, other browsers, some background services etc.

  • Virtual Machines Renaming/Backing up/Sharing

    - by evan
    I've started using VMware virtual machines for all of my software development projects and have a few questions for others doing the same thing. First, how can you rename the virtual machine and the name of the virtual hard drive? I have a base development machine that I clone for different projects. I'd like to name the machine and its hard drive according to the project (right now when I copy them via cut and paste, the file names remain the same and I can only organize them by putting them in a specific directory). Second, what is the best way to back up a virtual machine? Is it possible (by breaking the virtual hard drive up into chunks instead of one big file) to get incremental backups working? It seems Time Machine always tries to make a copy of the whole thing, which is time consuming because each virtual machine is around 30GB. Finally, how slow would it be to have a virtual machine shared on an NFS mount on a wireless-N network and used from multiple computers (but with only one person using it at a time)? Would it be more reasonable on a gigabit LAN connection? Thanks for your input! And please feel free to share any advice or wisdom about using virtual machines for software development and the best ways to speed them up!
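
    On the incremental-backup question: if the virtual disk is split into chunks (e.g. 2GB pieces) rather than kept as one monolithic file, hard-link snapshots let each backup re-copy only the chunks that changed. A rough sketch of that idea using rsync (my own illustration, not from the question; the paths are made up and the VM should be shut down first):

        #!/bin/bash
        # Hard-link-based snapshots of a VM directory; unchanged files are
        # linked against the previous snapshot instead of being copied again.
        set -eu
        SRC="$HOME/VMs/project-x"      # hypothetical VM directory
        DEST="/backups/project-x"      # hypothetical backup root
        STAMP=$(date +%Y-%m-%d_%H%M)

        mkdir -p "$DEST"
        LATEST=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1 || true)
        if [ -n "$LATEST" ]; then
            rsync -a --link-dest="$LATEST" "$SRC/" "$DEST/$STAMP/"
        else
            rsync -a "$SRC/" "$DEST/$STAMP/"
        fi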

  • big speed difference on a network link with and without VPN tunnel

    - by xirtyllo
    Scenario: We have a network link between two offices. The link is provided by a third-party company through a VLAN on their network, but to us it is totally transparent - as if we had a simple ethernet cable going from one location to the other. We have one router at each side of the link, with 3 VPN tunnels between the two.
    The test: When I test the speed of the network link with the routers in place, with one laptop directly connected to the router on each side, I consistently get ~30/35Mbps. But if I take out the routers and test the link by connecting the laptops directly to the ethernet cable at each side, I consistently get ~85/88Mbps. It's quite a big performance hit, and I would tend to think that the VPN tunnels are responsible for the slowdown. Is it normal that this configuration (two routers with three VPN tunnels between them) takes away so much bandwidth?
    More info: The encryption algorithm used for the VPN tunnels is AES128. The routers are a Zyxel USG200 and a Zyxel USG1000, and their CPU, memory, and storage use is well within normal limits. The nominal bandwidth of the network link is 100Mbps. The link is supplied by a third-party company (the building in between our two offices); it passes through their network as a VLAN, but the VLAN is completely transparent to us (e.g. no configuration required on our side, just like one single cable from end to end). Unfortunately (or maybe fortunately) I cannot directly test different router configurations, as I'm not the person in charge of them.
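
    Two things worth measuring before blaming the routers' AES throughput are per-flow bandwidth through the tunnel and the tunnel's effective MTU (fragmentation can easily cost this much). A rough sketch, not from the question, run from a Linux laptop on one side and assuming an iperf server on a host reached through the tunnel:

        #!/bin/bash
        REMOTE="192.168.20.10"   # hypothetical host on the far LAN, reached via the tunnel

        # Largest ICMP payload that crosses the tunnel without fragmenting
        # (-M do = don't fragment, -s = payload size).
        for size in 1472 1440 1400 1360 1300; do
            if ping -c 2 -M do -s "$size" "$REMOTE" > /dev/null 2>&1; then
                echo "payload of $size bytes passes unfragmented"
                break
            fi
        done

        # Throughput through the tunnel: single stream vs. four parallel streams.
        iperf -c "$REMOTE" -t 30
        iperf -c "$REMOTE" -t 30 -P 4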

  • Backup program for Windows using non-proprietary format?

    - by Cristi Diaconescu
    I'm looking at the various local backup programs for Windows, and I was wondering which of them use a non-proprietary backup format. By non-proprietary, I mean I want to be able to access at least the latest version of the backed-up files either directly, or by using an open-standard format like zip/7z/rdiff... The other thing I'm looking for in a backup program is the ability to create incremental backups. What I have found so far:
      - SyncBack copies files as-is, using separate directories for versioning
      - pretty much the same for all the 'roll your own' task scheduler + rsync/xcopy32/robocopy/MS SyncToy/etc. solutions
      - GFI Backup appears to be using Zip files, at least in their 'Business' version; not sure about the free 'Home' version. Didn't try it yet, but it's next on my list.
      - Mozy (!) supports local backup starting with v2.0 and basically provides a 2nd local copy on a separate partition. Subjectively, it feels slow and resource intensive (I think it took more than a week to finish the first local backup of ~300 GB), and it does not appear to offer file versioning (arguably, you can get older file versions online). On the positive side, it looks like the local backup is integrated into the restore process, which has traditionally been a masochistic experience (and this goes for any online backup provider).
    Other suggestions? I favor ease of use over tons of options (e.g. SyncBack is very flexible but it offers sooo many ways to shoot yourself in the foot...)

  • How to access programs in one PC using another PC

    - by darkstar13
    Hi, I was recently given an old PC for my remote access at work. The machine has Windows XP installed, 400+ MB of RAM, and all USB devices disabled. I access my work applications using VPN / Citrix. Basically, it's sooooo slow. Plus it's bulky and it will just occupy space, so I am now hoping to find a way to integrate this work PC with my home PC. I tried putting its hard drive into my home PC's tower and setting the drive as slave. However, when I boot my PC from this hard drive, I get stuck at the screen where Windows prompts me to select how to boot (e.g. Safe Mode, Safe Mode with Command Prompt, Last Known Good Configuration, etc.); whatever option I select, I am back at this screen after the reboot. I am thinking maybe I can clone the drive, mount the cloned drive, and access the system as a virtual machine, but I don't know if that will work. I would like to know if there's something I can do so I can work at home using my home PC, where I can access my work programs to connect to VPN / Citrix. My home PC's OS is Windows 7 Ultimate x64.

  • Running Mathematica-5 remotely

    - by oxinabox.ucc.asn.au
    I have Mathematica 5 - a powerful CAS. I have a cheap netbook (running Windows XP), which not only is too slow to run Mathematica on, I doubt it has the hard drive space. I do however have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008, though I'd rather not use this one*). Mostly over SSH, but other protocols can be arranged for some, I'm sure. So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about using some MathLink program, which links the front end installed on my computer to a remote kernel. Anyone have any experience with this? I'm not sure if this belongs here or on SuperUser. *At the moment it's being tinkered with, and when the tinkering stops it'll likely be used to run multiple thin terminals. As compared to the Linux machines: I have access to a dual 2.4 Xeon with 3GB RAM, which the rest of the world seems to have completely forgotten about (runs FreeBSD!).
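
    Independent of the MathLink remote-kernel route, the simplest thing to try over SSH is running the kernel (or the X front end) entirely on the remote machine. A rough sketch of what that could look like, assuming a Linux install whose launchers are named math (text kernel) and mathematica (notebook front end) - those names are an assumption on my part, not something stated in the question:

        #!/bin/bash
        REMOTE="user@bignode.example.edu"   # hypothetical remote host

        # Text-only kernel in the terminal; works over a plain PuTTY/SSH session.
        ssh -t "$REMOTE" math

        # Notebook front end forwarded over X11 (needs a local X server,
        # e.g. Xming on Windows XP).
        ssh -X "$REMOTE" mathematica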

  • Low CPU performance with low usage and clock - Windows 8.1

    - by Daniele
    I recently deleted everything from my PC and reinstalled Windows 8.1 from scratch. When I first booted into Windows everything was extremely slow, though the CPU usage was very low (about 1%). After installing some drivers the problem seemed to be solved and I was able to use my PC normally. Today I installed a game and noticed a strange behavior: the game was playable, but the performance worsened more and more over time. This is the situation BEFORE opening the game (normal) and AFTER some minutes inside the game (low CPU usage and clock) [screenshots in the original post]. Some information about my system:
      - PC: Sony Vaio S13 (SVS13A1C5E)
      - OS: Windows 8.1
      - CPU: Intel Core i7-3520M 2.90GHz
      - GPU(1): Intel HD Graphics 4000
      - GPU(2): NVIDIA GeForce GT 640M LE
    I tried searching for new drivers and other solutions but nothing worked, and I don't know what the cause is. I did not check the temperatures at first, but the fans are not running fast and the PC does not look overheated. Update: max CPU temp 66°C, max GPU temp 61°C. The strange thing is that the GPU load is 99% (GPU-Z) and the fan is almost silent. Update 2: I am having trouble with the Sony Vaio software - I can't get the Fn keys or the STAMINA/SPEED switch to work (it is a physical switch to enable/disable the Nvidia card and change the power profile). I mention this because I remember that before reinstalling Windows there was an option in the Vaio Control Center (it is not there anymore) that let me choose between something like "priority to performance (ventilation)" and "priority to silence". The current behavior looks like "priority to silence", but since the STAMINA/SPEED switch doesn't work I don't see similar options in the Vaio Control Center. I don't know if the problem is related to this.

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop with GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250MB worth of URLs is rather slow. To speed things up I want to break up the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line by line. Assume that my system's default setup is capable of running 500-ish jobs per instance of parallel. To get round this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run 5000-ish jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
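
    For comparison, one way to keep a single script usable both standalone (reading stdin) and under parallel ./linkcheck.sh {} (receiving arguments) is to prefer command-line arguments and only fall back to the while read loop when none are given. A rough sketch of that idea - mine, not from the mailing-list thread:

        #!/bin/bash
        # linkcheck-both.sh (hypothetical variant)
        if [ "$#" -gt 0 ]; then
            # Invoked with arguments, e.g. by parallel ./linkcheck-both.sh {}
            for domain in "$@"; do
                host "$domain"
            done
        else
            # Invoked with data on stdin, e.g. cat urllist | ./linkcheck-both.sh
            while read -r domain; do
                host "$domain"
            done
        fi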

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load testing benchmarks (using Apache Benchmark and Siege) on a small Java EE 1.7.0 / Tomcat 7.0.26 application running on a Debian Squeeze 6.0.4 x64 guest virtualized with VirtualBox 4.1.8. The host computer is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="200000"
                   redirectPort="8443" acceptCount="2000" maxThreads="150"
                   minSpareThreads="50" />

    The application executed on the server takes around 300ms. This app runs well until a certain number of concurrent connections, like these:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    A lot of socket connection timeouts happen and this completely overloads the host processor (but the CPU load inside the VM is normal). Doing an htop on the host, I can see that the VirtualBox process is running at over 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, the CPU load stays under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM. I've tried to launch those ab/siege commands locally on the VM and everything goes well. I first thought it was related to a Linux network limit, as explained in "Running some benchmarks using ab, and tomcat starts to really slow down", so I modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems to be better, but it continues to overload the host CPU and produce socket connection timeouts at a certain number of concurrent connections. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.
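
    Those echo commands are lost at reboot; the same tuning can be applied with sysctl (and persisted via /etc/sysctl.conf), and counting sockets stuck in TIME_WAIT during a run gives a quick check of whether connection churn is really the limit. A small sketch along those lines, not from the question:

        #!/bin/bash
        # Same TCP tuning, applied via sysctl.
        sysctl -w net.ipv4.tcp_fin_timeout=15
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_tw_recycle=1
        sysctl -w net.ipv4.tcp_tw_reuse=1

        # While ab/siege is running: how many sockets sit in TIME_WAIT,
        # plus an overall socket-state summary.
        netstat -ant | grep -c TIME_WAIT
        ss -s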

  • Weird routing issue

    - by Joel Coel
    I'm having some weird internet problems on campus. I know it's something simple, but it's a case where I need another set of eyes. I think I can explain the problem best by posting a tracert:

        Tracing route to google.com [74.125.45.147] over a maximum of 30 hops:
          1     3 ms     3 ms     3 ms  192.168.8.1
          2     1 ms     1 ms     1 ms  elissaemily-pc.york.edu [192.168.10.5]
          3     2 ms     2 ms     2 ms  rrcs-76-79-19-33.west.biz.rr.com [76.79.19.33]
          4    31 ms     3 ms     2 ms  ge-1-1-0.lnclne00-mx41.neb.rr.com [76.85.220.109]
          5    20 ms    17 ms    17 ms  ge-7-3-0.chcgill3-rtr1.kc.rr.com [76.85.220.137]
          6    20 ms    20 ms    19 ms  ae-5-0.cr0.chi30.tbone.rr.com [66.109.6.112]
          7    19 ms    19 ms    24 ms  ae-1-0.pr0.chi10.tbone.rr.com [66.109.6.155]
          8    26 ms    24 ms    24 ms  74.125.48.109
          9    23 ms    24 ms    21 ms  216.239.46.246
         10    39 ms    39 ms    55 ms  209.85.242.215
         11    39 ms    39 ms    39 ms  209.85.254.243
         12    39 ms    40 ms    96 ms  209.85.253.145
         13    39 ms    39 ms    39 ms  yx-in-f147.1e100.net [74.125.45.147]
        Trace complete.

    Note the second entry in there. Not only is the host name a student's computer, but the IP address doesn't exist. DHCP shows that host as having a different address and you can't ping any 192.168.10.5. Yet somehow it's routing packets for us (and not very well, either - things are slow right now). The basic network routing table looks like this:

        Destination     Subnet Mask      Gateway
        ---------------------------------------------
        Default Route   --               10.1.1.5 (our firewall)
        10.0.0.0        255.0.0.0        --
        192.168.8.0     255.255.252.0    --
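
    Since the phantom hop answers on an address DHCP doesn't know about, one quick check is to see which MAC address actually responds for the default gateway and for 192.168.10.5 (both fall inside 192.168.8.0/22, so ARP applies). If the same MAC answers for both, some device is acting as a rogue gateway. A sketch of that check from a Linux host on the LAN - my suggestion, not part of the original question:

        #!/bin/bash
        # Populate the ARP cache, then compare the MACs behind the two addresses.
        ping -c 1 192.168.8.1  > /dev/null 2>&1
        ping -c 1 192.168.10.5 > /dev/null 2>&1
        arp -n | grep -E '192\.168\.8\.1|192\.168\.10\.5'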

  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy, just enabling forwarding in the kernel, turning on masquerading, and setting up iptables to poke a few holes in the firewall. Recently a friend of mine pointed out a performance problem. Single TCP connections seem to experience very poor performance. You have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections) I can get it to max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (which uses only a single TCP connection) it starts fast and then the speed tanks, settling somewhere between 100 KB/s and 350 KB/s. I've checked the internal network and it doesn't seem to have any problems. Everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems. It tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbit speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built-in Intel NIC and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card. I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?
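
    For reference, the "nothing too fancy" router configuration described above usually boils down to a handful of commands; a minimal sketch (the interface names are my assumptions, not taken from the question):

        #!/bin/bash
        WAN=eth1   # hypothetical: the Netgear FA511 facing the cable modem
        LAN=eth0   # hypothetical: the built-in Intel NIC facing the switch

        # Enable forwarding in the kernel.
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Masquerade LAN traffic leaving the WAN interface.
        iptables -t nat -A POSTROUTING -o "$WAN" -j MASQUERADE

        # Allow replies back in, and everything outbound; "poke holes" as needed.
        iptables -A FORWARD -i "$WAN" -o "$LAN" -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i "$LAN" -o "$WAN" -j ACCEPT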

  • Which apache/mysql/php package is best for windows?

    - by crosenblum
    I have tried appservnetwork's package, which was the best so far, but I haven't seen them do an update in ages, and EasyPHP is always slow to load. WAMP and XAMPP both put in their descriptions that they are not for production. I do not plan to publicly host this site or the sites I am working on, but I do want a fast-loading Apache/MySQL/PHP server for development purposes. I used to really like WLMP, which is Lighttpd for Windows, but that project seems unmaintained or abandoned. I refuse to use IIS, but I have no desire to get into any wars over it. I run Windows XP SP3 on my home PC. I will need to have a web server set up for professional work, as well as for some fun websites I am working on. I just want it fast enough so I can run it via localhost and not have it take forever to load in the browser. Thank you... I plan to do mostly PHP programming, and perhaps ColdFusion, with this.

  • Is there any way to abstract IP address during ssh?

    - by Vivek V K
    I have a server which is in the middle of a forest. It is connected to the Internet via a microwave link and an ADSL link, hence it has two different static IP addresses. If there is heavy rain, the microwave link breaks and I have to use the much slower ADSL link, and I ping the microwave IP from time to time to check if it is up again. But at times I end up using the very slow ADSL link even when the microwave link is back up. Hence I need a way to automate this, in the following way:
    1. I need to abstract the IP address of the machine behind some other name which, when I use ssh or sftp, will poll both IPs and connect me to the best one. So for example, if I say ssh -Y name@server, it should first try to connect via the microwave link and, if it can't, connect via ADSL.
    2. Suppose the first time I connect the microwave link is down, so it connects via ADSL; I need it to dynamically switch to the microwave link once it is working again.
    Is this even possible?
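
    Point 2 (switching a live preference back to the microwave link) is the harder part, but point 1 can be approximated with a small wrapper that probes the fast link first and falls back to ADSL. A rough sketch of the idea - the script and both addresses are illustrative, not from the question:

        #!/bin/bash
        # ssh-forest.sh: prefer the microwave IP, fall back to the ADSL IP.
        MICROWAVE="198.51.100.10"   # hypothetical microwave-link address
        ADSL="203.0.113.20"         # hypothetical ADSL address
        REMOTE_USER="name"

        # Is the SSH port reachable over the microwave link within 3 seconds?
        if nc -z -w 3 "$MICROWAVE" 22 2>/dev/null; then
            exec ssh -Y "$REMOTE_USER@$MICROWAVE" "$@"
        else
            exec ssh -Y "$REMOTE_USER@$ADSL" "$@"
        fi

    A similar probe could be wired into ~/.ssh/config as a ProxyCommand instead, but the wrapper keeps the selection logic easy to see.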

  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments, I have used Nginx to serve static assets such as JS/CSS and to reverse proxy to Node (as I've heard Nginx does a much better job of this / Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSocket traffic directly to Node, bypassing Nginx. This is my current setup:
      - Varnish: port 80 - routing HTTP requests to Nginx and WebSockets directly to Node
      - Nginx: port 8080 - serving static assets like CSS/JS
      - Node.js Express: port 3000 - serving the app, over HTTP + WebSockets
    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires Stunnel or some other solution, and it's also not load balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?

  • What open source ecommerce webshops offer #1: usability, #2: PayPal integration, and #3: ease of administration and use

    - by Jonathan Hayward
    I've spent several days trying to deploy Satchmo, asking several questions about deployment in the process (http://stackoverflow.com/questions/11277407/can-anyone-explain-this-error-message-deploying-a-satchmo-project-under-gunicorn, http://stackoverflow.com/questions/11277685/is-there-a-howto-to-fcgi-for-deploying-satchmo, and http://stackoverflow.com/questions/11278295/what-is-the-most-stable-release-of-satchmo). Django's tagline is "The web framework for perfectionists with deadlines," and Satchmo's tagline is equally forceful: "The webshop for perfectionists with deadlines." I'm looking more to set up, configure, and design rather than code for this one, and I'm taking the hint that, for me at least, the "with deadlines" bit is something I cannot manage: deployment has been a time sink. So, taking a step back, I don't specifically need to edit and extend the source code. What I want is, first, good usability and a clean experience for the end user; then something easy to deploy/install/manage/maintain - easy enough that even on a slow day it should be at most one day's work to install, one day's work to get running, and one day's work to rebrand as white label (for simple branding). What ecommerce webshops should I be looking at?

  • Outlook 2010, 2007 Sync problems after migration from SMTP to Exchange

    - by kirgy
    Our organization recently switched from an SMTP server to an Exchange server; since then, several users' Outlooks are not synchronizing their email as expected with the Exchange server. Our move from the SMTP server to the Exchange server consisted of adding the new Exchange account alongside the existing SMTP account, then drag-dropping/copy-pasting folders client-side, in the Outlook folder pane, from the SMTP account to the newly created Exchange account. The problem happens when a user moves an email from their inbox or another folder into a folder. At this point the email disappears client-side. Re-syncing the folder, send/receive, closing/opening Outlook and even system reboots do not make the email reappear. The Outlook web interface (OWA) reports the email is in fact in the folder they placed it in, and is not deleted. Doing a "search all mail items" for the email shows that it is still there, neither deleted nor removed. To add to the confusion, when new folders are created and the email is placed in these folders, the synchronization happens without any issue both client-side and server-side. As the emails appear server-side, we are confident this is a client-side issue. We have tried adding/removing accounts on one system, which resulted in the same issue; this was a very long and slow process due to the sheer volume of email (20GB+ for most users). We have tried reinstalling Outlook and restoring accounts from backups, which has not resolved the issue. We also tried upgrading one system from Outlook 2007 to Outlook 2010 which, again, did not resolve the issue. We experienced a lot of emails disappearing during the copy-over process, so I'm not convinced it was the best migration route, but nonetheless we are where we are. Can anyone suggest potential avenues for resolving this issue? Thank you.
    Systems: Windows 7 (10 systems), Windows XP (2 systems), Outlook 2007 (2 systems), Outlook 2010 (7 systems).
    Problem Outlook systems: Windows XP + Outlook 2007 x 1; Windows 7 + Outlook 2007 x 1; Windows 7 + Outlook 2010 x 2.

  • Can I have a single solid state drive and a RAID array on the same machine?

    - by jaminto
    Hi - To summarize, I'm looking to use a single solid-state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details: I built a desktop that has been running 64-bit Vista on two 500GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80GB SATA solid-state drive, and was planning on using this as my primary drive and keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize. After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard which has an ATI SB600 RAID controller on board. I switched the "On board SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives are showing up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

  • Computer randomly freezes when playing games

    - by TutorialPoint
    My computer just randomly freezes when playing certain games. It has happened to me in Battlefield: Bad Company 2, Call of Duty 4, and Blacklight: Retribution. It has not happened to me with other games like Tribes: Ascend yet, which leads me to believe it is a software-side issue, related maybe to DirectX or PhysX? Also, temperatures seem stable. I used RivaTuner combined with MSI Afterburner, and at the time of freezing with BF:BC2 it showed 62C, 67% GPU usage and 78.8 FPS. During the session the max I have seen was 65C and 97% GPU usage. On Blacklight: Retribution, I've heard other people complain about the problem too. This is why it is such a mystery to me: is this actually a driver problem, or more a game problem? I was able to play these games for a long time until I re-installed Windows 7 (because it was growing too full and slow). Before I had the 32-bit Ultimate version, and now 64-bit. Specs:
      - O/S: Windows 7 64bit Ultimate
      - CPU: Intel i5-750 @ default 2.66 GHz
      - GPU: ASUS EAH5770 1GB
      - PSU: CoolerMaster Real Power M520 (520W)
      - MB: Gigabyte P55M-UD2
      - Catalyst Control Center version (in "About"): 2012.0214.2218.39913

  • Exporting Client Data from Groupwise 6.5 to Outlook 2010 without Crashing

    - by Adam Doherty
    My employer has recently moved from Novell GroupWise 6.5 to Exchange 2010. We've imposed mailbox limits on staff, but we still need to move their old messages, contacts, calendars, etc. over to Outlook 2010. Our problem, however, is this: using the Novell MAPI client within Outlook 2010 is slow, and exporting messages to a PST file (for later re-attachment and offline backup purposes) crashes the GroupWise server. Connecting to the server in Outlook via IMAP to export messages to PST is faster and apparently more stable, but it also crashes the server. We'll be keeping our GroupWise server online internally until the end of the year, but I have staff with mailboxes approaching 12 gigabytes, which is fine if we're going to move the data to offline storage (a DVD set); but if I keep crashing the server every time I try to get the data, I'll just be spinning my wheels. In my first attempts, I tried to move mail for a staff member with 3GB of data. The transfer lasted roughly 8 hours before crashing. I'm wondering if there is an open source solution to my problem. Paid solutions exist, but we're a not-for-profit organization and have too many staff to justify the cost of per-seat licenses just to migrate mail.

  • How to speed up a HP M9517C

    - by Jen
    I bought a system with 8GB RAM, a 1TB HD, a quad-core AMD Phenom 9550, an Nvidia GeForce 9300GE, and 64-bit Windows Vista. I bought it primarily because it was cheap and came with a 25.5-inch screen. Problem: it's slow - if you can believe it. My Dell 1525 laptop is faster and more stable! I tried installing and dual-booting Linux Mint and ran into video and audio troubles. I need fast and stable, and I'm going for awesome. Anyone have some suggestions on making this thing smoking hot? Vista is fine, but it slows over time - I suspect viruses/spyware/etc. But I need to use Photoshop, Fireworks, Dreamweaver, Illustrator. I've tried the alternatives and I just don't like them. When you've got deadlines looming you want to work with what you know. I also use Skype (and I had audio problems with it in Linux), GoToMeeting, and GoToWebinar. I don't need MS Office. I've tried VMware and VirtualBox and, again, I keep getting audio/video problems. I'd love someone's input on THEIR setup and how they got there. I'm sure I need to upgrade my video card, but what should I go to?

  • Windows Remote Desktop (RDP/MSTSC) fails with Error Code: 5

    - by BryCoBat
    I have 2 Windows XP boxen: A (running XP SP3) and B (running XP SP2). I'm using Remote Desktop to connect from A to B. When I connect, I get the login screen (which is slow to respond to keyboard/mouse input), and after logging in, I get the following:

        Fatal Error (Error Code: 5)
        Your Remote Desktop session is about to end. This computer might be low on
        virtual memory. Close your other programs, and then try connecting to the
        remote computer again. If the problem continues, contact your network
        administrator or technical support.

    I've seen one way to (sometimes) get in by opening a second RDP session to the same box [1], and if I wait long enough sometimes it will go ahead and log in anyway. Is there something broken/missing on the PC I'm trying to remote in to? Edited in reply to djangofan: There's nobody listed under "Lock pages in memory". When the double login trick works, a glance at Task Manager shows plenty of free memory, 800MB available out of 1.5 GB (Performance tab, Physical memory). For what it's worth, this happens consistently after a reboot. What sort of exact info would be useful? There's very little remaining installed on that machine that's not Windows + Office...
    [1] found at http://www.fdcservers.net/vbulletin/archive/index.php/t-1580.html

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and the lack of indexing are atrocious. So yes, I know, AWS or EC2 is not a silver bullet in the face of spending time to maybe rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:
    1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud / AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know (Ubuntu/MySQL, after more optimization) with the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer?
    2. What are the key metrics I need to gather from Apache and MySQL, other than the likes of disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
    3. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer; echoing my previous question, do you sometimes find it worth it (initially a gamble, aside from basic homework) to dive into a whole new environment and end up finding a way of producing your data/site product more efficiently?
    4. Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else whose company is very niche and very, very technical (scientifically - or anybody for that matter)?
    Thanks very much for your input - I think this thread could be valuable to others as well.
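
    On the metrics question, a baseline captured on the current production box makes later EC2 instance sizing much less of a guess. A rough sketch of a collection pass (my own illustration; it assumes the sysstat package, MySQL credentials, and an Apache log path, none of which come from the question):

        #!/bin/bash
        # Capture a coarse performance baseline before migrating.
        OUT="baseline-$(date +%Y%m%d)"
        mkdir -p "$OUT"

        # CPU, memory and per-disk I/O sampled every 5 seconds for ~10 minutes.
        vmstat 5 120      > "$OUT/vmstat.txt" &
        iostat -dxk 5 120 > "$OUT/iostat.txt" &

        # MySQL counters: queries, slow queries, buffer pool activity, etc.
        mysqladmin -u root -p extended-status > "$OUT/mysql-status.txt"

        # Rough Apache request volume from the current access log.
        wc -l /var/log/apache2/access.log > "$OUT/apache-requests.txt"

        wait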

  • win8: access denied to external USB disk; update access rights fails

    - by Gerard
    I used to work with 2 laptops (Vista and Win7), my work being files on an external USB disk. My older laptop broke down, so I bought a new one. I had no option other than to take Win8.
    1. I suspect something changed with access rights, as my external disk started giving "access denied" errors on Win8. I was prompted (by Win8) to fix the access rights, which I tried to do by going to Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, the USB disk was somehow not recognized anymore.
    2. Back on Win7, I was warned that my disk needed to be checked, which I did. In this process some files were lost (most of them I could recover from the found00x folders, but I have a backup anyway). Also, I don't know why, but under Win7 all the folders now show with a lock.
    3. Then back again to Win8. Same problem: access denied to my disk, and no way to change access rights, as it gets stuck at "disk is not ready".
    Now I am pretty sure there is some kind of bug or inconsistency between Win8 and Win7. I did steps 2 and 3 a few times. At some point I also got an access denied error on Win7. I could restore access rights to the disk for "System" (Properties - Security - Edit, full control for the "System" group ...), but then I still get the same access-rights problem on Win8, and get stuck in the process of restoring full control to the "System" and "Administrators" groups. Now, after trying for more than 3 days, I am losing my patience with that bloody Win8, which I did not want to buy but had no choice about. I upgraded Win8 with the available Windows updates; that does not help. Can anybody help me?
