Search Results

Search found 17852 results on 715 pages for 'load balancer'.

Page 553 of 715

  • How can I get (g)Vim to display the character count of the current file?

    - by OwenP
    I like to write tutorials and articles for a programming forum I frequent. This forum has a character limit per post. I've used Notepad++ in the past to write posts, and it keeps a live character count in the status bar. I'm starting to use gVim more and I really don't want to go back to Notepad++ at this point, but that character count is very useful. If I go over the limit, I usually end up pasting the post into Notepad++ so I can see when I've trimmed enough to get under it. I've seen suggestions that :set ruler would help, but that only gives a character count via the current column index on the current line. This would be great if I didn't use paragraph breaks, but I'm sure you'd agree that reading several thousand characters in one paragraph is not comfortable. I read the help and thought that rulerformat would work, but after looking over the statusline format it uses, I didn't see anything that gives a character count for the current buffer. I've seen that there are plugins that add this, but I'm still dipping my toes into gVim and I don't want to load random plugins before I understand what they do. I'd prefer something built into Vim, but if it doesn't exist, it doesn't exist. What should I do to accomplish my goal? If it involves a plugin, do you use it, and how well does it work?
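
    For illustration, a minimal sketch of the built-in options, assuming a reasonably recent Vim (the wordcount() function needs 7.4.1042 or later):

        " one-off count: press g CTRL-G in Normal mode to print byte,
        " character and word totals for the buffer (or a visual selection)

        " live count in the ruler, via a statusline-style expression:
        set ruler
        set rulerformat=%15(%c%V\ %{wordcount().chars}\ chars%)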

    Read the article

  • Ubuntu stops loading?

    - by joxnas
    I don't know why or how, but my Ubuntu stops loading/booting right after the Ubuntu logo appears. An underscore appears in the top right of the screen: _ and then it disappears, leaving the whole screen black. The version is 9.10 and the kernel is .20. I have tried recovery mode and selected the "repair damaged packages" option, but it didn't do any good. I have very important files in Ubuntu that I need to copy to a pen drive or something, today. The terminal mode is working, so I think I can do this there. My questions: How can I get my Ubuntu to load? Is it possible to copy the files I need via terminal mode? I don't know if it would work with one of the previous kernels, but I had configured GRUB (inside Ubuntu's GUI) to show only the latest kernel, and now I can't select any kernel other than .20 because I don't know a way to configure GRUB except via Ubuntu's GUI. My hardware: ATI Mobility Radeon HD 4650, P7450 2.13GHz Core 2 Duo, 4GB DDR2.
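
    For the copying part, a rough sketch from a text console; the device names are examples and must be checked against your own layout first:

        sudo fdisk -l                      # identify the pen drive, e.g. /dev/sdb1
        sudo mkdir -p /mnt/usb
        sudo mount /dev/sdb1 /mnt/usb
        cp -a /home/joxnas/important-files /mnt/usb/
        sudo umount /mnt/usb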

    Read the article

  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL ImageMagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd upgrade to test out the newer filters. So, first port of call: get the source from PECL for 3.2.0RC1. It installed with no problems. But, ah-ha: although it says it only requires IM 6.2.4, the filters I'm after don't work unless you have 6.6.6+. So I went to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that's puzzled me. It all appears to have installed fine; the tests pass, 40 out of 40. If I run identify -version on the command line, it outputs 6.8.9. But if I echo Imagick::getVersion() in PHP, it shows 6.5.4, even after restarting php-fpm. rpm -qa | grep ImageMagick shows that I still have 6.5.4 installed, and locate ImageMagick also only seems to show the 6.5.4 one. I feel the missing link here is ImageMagick-devel: do I need to install that too? How do I go about doing that? Or do I just need to reinstall pecl-imagemagick 3.2.0RC1 now that I have the latest IM installed?
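
    For what it's worth, a sketch of the rebuild route, assuming the source install landed under the default /usr/local prefix. Building from source already installs the headers, so the ImageMagick-devel RPM (which would pull in 6.5.4's headers anyway) shouldn't be needed:

        # confirm what PHP's extension links against vs. the new CLI binary:
        php -r 'print_r(Imagick::getVersion());'
        identify -version

        # rebuild the PECL extension against the 6.8.9 libraries; point it
        # at the new MagickWand-config when the installer asks for a prefix:
        pecl uninstall imagick
        pecl install imagick-3.2.0RC1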

    Read the article

  • SharePoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment: two web servers and a DB server, load balanced between the two web servers. I deployed a web part recently, which worked fine for a while but failed on web server 2 after a day. When it fails, it gets the error message: 'A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe.' Once it has failed, it stays that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail to recreate the issue and have been unable to do it; I placed it under heavy HTTP traffic and it handled it fine. I put it back in the pool and it failed again after about 7 hours. Also, if I remove the .dll for the web part from the affected web server, the web part doesn't stop working. Is that normal behavior? I checked the bin directory for the site and the global assembly cache, and there is no other copy of the .dll anywhere else on the server. And when checking the web part gallery: if the web part has failed, it still appears in the gallery, but when trying to add a new web part, the .dll isn't listed. I have no idea how to continue troubleshooting from here or how to fix it. Any ideas?
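
    One thing worth diffing is the SafeControl entry for the web part in each server's web.config; on a load-balanced farm, an entry that is present on one box but missing or mismatched on the other produces exactly this error. A hypothetical entry (all names and the token are placeholders):

        <SafeControl Assembly="MyWebPart, Version=1.0.0.0, Culture=neutral,
                     PublicKeyToken=0123456789abcdef"
                     Namespace="MyCompany.WebParts" TypeName="*" Safe="True" />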

    Read the article

  • Getting Windows virtio drivers mounted/installed for KVM

    - by Swifty
    There might be an easy answer to this; I have exhausted my search on Google for a solution. Here's my problem: I need to get Windows working on a KVM VPS with the Virtualizor control panel. When I get into the Windows installation in VNC, there's the mandatory driver installation step, because the hard disk is presented as virtio. There seem to be two solutions: 1. Mount the virtio ISO (http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/) in the CD drive by unmounting the Windows ISO, then proceed with the driver installation. 2. Create a secondary CD drive and mount the virtio ISO there. Well, the first never seems to work: if I unload the Windows ISO and load the virtio ISO, the change is never reflected in the VNC session. With the second I have yet to be successful: I try to create a second IDE CD-ROM drive via virt-manager, but the virtio ISO (virtio-win-0.1-30.iso) is never listed there, even though I specifically placed it in the /var/lib/libvirt/images folder. Any suggestions on where I screwed up?
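
    If virt-manager refuses to list the ISO, two hedged workarounds: refresh the storage pool so the file shows up, or attach the ISO from the shell with virsh (the domain name here is an example):

        # rescan /var/lib/libvirt/images so the ISO appears in virt-manager:
        virsh pool-refresh default

        # or attach the driver ISO directly as a second CD-ROM:
        virsh attach-disk MyWindowsVM \
              /var/lib/libvirt/images/virtio-win-0.1-30.iso hdb \
              --type cdrom --mode readonly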

    Read the article

  • I really need help resolving a Windows Vista BSOD (blue screen crash) on my desktop

    - by anonymous
    Hi, thanks for taking the time to read this. I'll get straight to the details. My desktop is on the fritz; it keeps going to a blue screen with stop code 0x0000007E immediately after the Vista loading bar, right before transitioning to the account selection screen. My desktop runs a dual-core 32-bit processor with Windows Vista Home(?) installed. I have 3GB of RAM as two separate modules, a 1GB Acer module and a 2GB GeIL module. I have an ATI video card; unfortunately I cannot recall the exact name, but the chipset is ATI, the manufacturer is Sapphire, and the card is on the lower end. My hard drive is 320GB (I think), partitioned in two. The C:\ partition is redlined, while the D:\ partition is still pretty empty. On the advice of my friend, I tried restarting the system with the graphics card removed. When that failed, I repeated the process removing the RAM modules one at a time, but the system still failed to load. Vista would attempt to repair the system and initially report that it was fixed, but it never really fixed the problem, and after I removed the memory modules Vista started to report its inability to fix the problem. I tried running in safe mode, and the driver listing always stops at crcdisk.sys. I ran memory diagnostics using the Windows Memory Diagnostic tool, found on the screen after Vista's failed repair attempt, with no luck. The problem details are as follows:

        Problem Event Name:    StartupRepairV2
        Problem Signature 01:  AutoFailover
        Problem Signature 02:  (Vista's version number?)
        Problem Signature 03:  6
        Problem Signature 04:  720907
        Problem Signature 05:  0x7e
        Problem Signature 06:  0x7e
        Problem Signature 07:  0
        Problem Signature 08:  2
        Problem Signature 09:  WrpRepair
        Problem Signature 10:  0
        OS Version:  6.0.6000.2.0.0.256.1
        Locale ID:   1033

    Any correct advice would be appreciated, as I really need my PC to work so I can work on my projects. Kinda sad, but I'm in a college of computer science and I have no idea what to do :P

    Read the article

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information below as possible; please let me know if you need anything further in order to assist. The server runs one single website, an online pizza-ordering platform built on Zend Framework (ver 1). Traffic stats from the last month show approx. 6,000 page loads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2Mbit ADSL line to 100/100Mbit fiber and we still have performance issues at dinnertime; we had assumed the 2Mbit line was the issue. The website is pretty snappy in low-load periods.

    Hardware:

        CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)

        Filesystem   Type   Size  Used  Avail  Capacity  Mounted on
        /dev/ad7s1a  ufs    4.8G  768M  3.7G   17%       /
        devfs        devfs  1.0K  1.0K  0B     100%      /dev
        /dev/ad7s1g  ufs    176G  5.2G  157G   3%        /home
        /dev/ad7s1e  ufs    4.8G  2.8M  4.5G   0%        /tmp
        /dev/ad7s1f  ufs    19G   3.5G  14G    19%       /usr
        /dev/ad7s1d  ufs    4.8G  550M  3.9G   12%       /var

    Server OS: FreeBSD 8.2-RELEASE

    Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5

    Apache footprint (example, taken from top):

        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00% httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00% httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00% httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00% httpd

    Apache uses the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (it doesn't actually work, so it isn't used). There is a file containing settings for the MPM modules, but as far as I can see it's not included from httpd.conf; the Include line is commented out. So I would guess the prefork MPM is running on default values too. Here are some other Apache conf values I found, which are included in httpd.conf:

        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off
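
    Since prefork appears to be running on defaults, capping and right-sizing it is the usual first move. A starting-point sketch only; MaxClients has to be derived from your own numbers (roughly: memory you can spare for Apache divided by the ~40MB resident size per httpd shown above):

        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       10
            MaxSpareServers       30
            ServerLimit          120
            MaxClients           120
            MaxRequestsPerChild 2000
        </IfModule>
        # shorter keep-alives free workers faster under dinnertime load:
        KeepAliveTimeout 2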

    Read the article

  • Does cloud computing offer this? [closed]

    - by TheBlackBenzKid
    I have some newbie questions I want answered, please, about cloud hosting; we are currently looking at Rackspace and getting a Windows box. This is the situation: we have 15 computers in our office, 3 printers, some machines on wifi and some plugged into the network. We have a standard router, and the office shares things via Dropbox. The computers are not on Windows SBS or anything similar. We want a cloud hosting solution that will offer:

      - Users can log in on any machine in the office and see the machine's software
      - Users can log in on any machine in the office, open Outlook, and have their email and signature on Exchange automatically
      - A shared company folder on the network
      - All printers automatically installed on the network
      - Users can log in remotely to access email via the web

    At the moment we have a network company saying we need an in-house Xeon server with backup and PSU, Windows SBS with a license for each machine, plus cabinets and cabling, load balancers, and modification of our DNS for email. My question is this: can the cloud offer this? Can we have a server in the cloud that does all of it? Is it possible? I mean, the computers would connect to this cloud, and you turn a machine on and it's hosted?

    Read the article

  • Different behaviour when running an exe from the command prompt than from Windows

    - by valoukh
    Forgive me if I'm not using the correct terminology here, but I'll try to explain the issue I'm having as best I can! We use a program that lets you view video footage from several cameras; when it is opened, it automatically loads several video files within its interface. These video files are stored in a subfolder, e.g. "videos\video1.asf", etc. I didn't create the program, so I can't say what method it uses to open the files. The files are stored on a network server and are accessed via a share/UNC path. When the exe is run from Windows Explorer (by navigating to the network share and double-clicking it), it works perfectly. When it is run via the (elevated) command prompt (e.g. by typing \\server\path\to\file.exe), it opens, but the corresponding video files do not load. I am trying to create a script that launches the program via the command prompt, so the first step is finding out why the two actions above have different consequences. Any advice on how running executables from the command prompt could produce a different result would be greatly appreciated. Thanks
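
    The usual suspect for this symptom is the working directory: double-clicking in Explorer sets it to the exe's folder, while the command prompt keeps its own, so relative paths like videos\video1.asf no longer resolve. A sketch for the script (pushd maps a temporary drive letter for UNC paths):

        pushd \\server\path\to
        file.exe
        popd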

    Read the article

  • PowerShell script to help archive company email

    - by sec_goat
    I am attempting to use PowerShell to gather emails that pertain to a certain subject, so that these correspondences might be turned over to a legal department. I am having a couple of issues here that I would like some assistance getting past. I run the following command:

        Get-Mailbox -Database "Mailbox Database" | Export-Mailbox -ContentKeywords "Keywords To Search" -TargetMailbox "sec_goat" -TargetFolder EmailSearch -StartDate "01/13/2011 12:01:00"

    This has pretty much done what I want and returned a boatload of emails, but it has also flooded my inbox with hundreds of blank calendars and contact lists. I realize now I should have used the exclusion switch on those folders, as well as a test environment (which we don't have). 1. How can I clean up this script so it does not include all the blank folders, contacts and calendars that DO NOT match the keyword search? 2. How do I clean up hundreds of blank contact lists and calendars in my mailbox without right-clicking and deleting each one? EDIT: I edited the post to change the scope of the question. I think my focus is less on the legal perspective and more on "How can I clean up my mess and make future archives less messy and painful?"
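
    A hedged rework of the command: Export-Mailbox accepts -ExcludeFolders, so the calendar and contact folders can be skipped up front (folder paths may need adjusting for localized names):

        Get-Mailbox -Database "Mailbox Database" | Export-Mailbox `
            -ContentKeywords "Keywords To Search" `
            -ExcludeFolders "\Calendar","\Contacts" `
            -TargetMailbox "sec_goat" -TargetFolder EmailSearch `
            -StartDate "01/13/2011 12:01:00"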

    Read the article

  • How to keep multiple servers in sync file-wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, and they tend to get out of sync with each other. The application running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only modifies it on a single instance, one of the various machines in the cluster. Is there an open-source application for Linux that would let me keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to sync. In theory, the perfect application would have small clients running on each machine to be synced, talking to a master server which decides how and what to sync between machines. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes that unfeasible. And since there are multiple app servers (some of which are created dynamically depending on site load), simply setting up an rsync cron job is not efficient: the job would have to be modified on each machine to send files to every other machine in the cluster, which is a whole bunch of unnecessary data transfers and SSH connections.
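
    lsyncd is one open-source fit for this model: it watches a master tree with inotify and pushes deltas over rsync+ssh to each peer. A minimal sketch, with hostnames and paths as placeholders:

        -- /etc/lsyncd.lua
        settings { logfile = "/var/log/lsyncd.log" }
        sync { default.rsyncssh, source = "/var/www/magento",
               host = "app2.example.com", targetdir = "/var/www/magento" }
        sync { default.rsyncssh, source = "/var/www/magento",
               host = "app3.example.com", targetdir = "/var/www/magento" }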

    Read the article

  • NGINX + PHP-FPM - Strange issue when trying to display images via php-gd / readfile - Connection won't terminate

    - by anonymous-one
    OK, to get the details out of the way, the PHP script can be as simple as:

        <?php
        header('Content-Type: image/jpeg');
        readfile('/local/image.jpg');
        ?>

    When I execute this via nginx + php-fpm, the image shows up in the browser, but: in IE, the page stays blank for a long period of time and eventually the image is shown; in Chrome, the image shows, but the loading spinner spins and spins for a long time, and eventually the debugger marks the request in red as an error even though the image displays fine. Everything else on the server works great; it's pushing out a steady 100Mbit serving static content, so this is definitely a php-fpm-related issue. I THINK this may have something to do with the chunked encoding being sent back wrong. I also threw in a pause before the image was read and got the PID of the FPM process, and it looks as though it terminates correctly (from strace):

        shutdown(3, 1 /* send */)                         = 0
        recvfrom(3, "\1\5\0\1\0\0\0\0", 8, 0, NULL, NULL) = 8
        recvfrom(3, "", 8, 0, NULL, NULL)                 = 0
        close(3)                                          = 0

    The above was dumped long before IE/Chrome decided to give up loading the image (even though the image was shown). Displaying HTML/text content is fine; big bodies load nice and fast and terminate right away, as they should. Even something like "THIS IS THE IMAGE ---BINARY DUMP OF IMAGE---" works fine too. Any ideas?
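
    One low-risk thing to rule out is a missing Content-Length: without it, the client may have to wait for the connection to close to know the body has ended, which matches the endless-spinner symptom. A sketch:

        <?php
        $path = '/local/image.jpg';
        header('Content-Type: image/jpeg');
        // an explicit length lets the browser stop reading at the right byte:
        header('Content-Length: ' . filesize($path));
        readfile($path);
        exit;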

    Read the article

  • Install Windows 8 on SSD and Programs & Users on HDD

    - by Foe
    I have been dealing with a few problems while installing Windows 8 on my computer. In my old configuration, I had Windows 7 installed on my 60GB SSD, with my programs and user data on my 1TB HDD, thanks to relative links. Yet while installing Windows 8 on my SSD, it created a small system-related partition on my HDD. Besides, I'm afraid using only links is a bit cheap, and I saw lots of people messing with their registry when trying to put user data on another drive. I read a lot about optimizing Windows 8 for an SSD and putting Users on another drive, and about very similar situations that didn't quite correspond to what I was trying to achieve. Here's what I tried (http://www.eightforums.com/tutorials/4275-user-profiles-relocate-another-partition-disk.html): booting in audit mode and using an XML file to relocate Users didn't work, as the version specified in the file is from a test build and I don't know what to enter for the final release; and booting from the install DVD in repair mode to copy the Users folder and create a relative link resulted in an error on the logon screen when entering my password, saying "The profile can't be loaded" (rough translation of my error from French to English). Does anyone know how to do a clean separated install of Windows 8, with the OS on one drive and the other data on a second one? Thanks.
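
    For the audit-mode XML route, a hedged sketch of a relocation answer file; no version number is needed when the component is addressed by name and architecture (amd64 assumed here, swap for x86 if needed). It would be applied from audit mode with sysprep /oobe /reboot /unattend:relocate.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <unattend xmlns="urn:schemas-microsoft-com:unattend">
          <settings pass="oobeSystem">
            <component name="Microsoft-Windows-Shell-Setup"
                       processorArchitecture="amd64"
                       publicKeyToken="31bf3856ad364e35"
                       language="neutral" versionScope="nonSxS">
              <FolderLocations>
                <!-- relocate all user profiles to the HDD -->
                <ProfilesDirectory>D:\Users</ProfilesDirectory>
              </FolderLocations>
            </component>
          </settings>
        </unattend>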

    Read the article

  • (squid): failed to find or read error text file.

    - by adam
    There is something in our ERR_NO_RELAY that is causing this error to be logged and the squid process to fail on startup. I can't show you the exact contents of the file, but I can tell you it has several lines of JavaScript. When we remove the JavaScript, the problem goes away. This same file does not cause any issues on the other 3 instances of squid that we have running internally. All instances of squid came from the same VM image, so they should be the same. We are unable to reproduce this issue except on the one box, and we are unable to debug more on that box right now because it is running in production. I know these files are interpreted so squid can substitute certain values from the session, so it may be that a syntax error caused this issue; but that does not explain why we cannot reproduce it on the other (virtually identical) images. One difference is that the instance of squid that has the issue was under load when the issue occurred. Any suggestions/insight would be appreciated. Thanks!
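
    One hedged theory to test offline: squid error templates treat % as the start of a substitution code, and JavaScript tends to be full of literal % signs; the documented escape is to double them. For example, inside the embedded script:

        el.style.width = pct + '%%';   /* squid emits this as: pct + '%' */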

    Read the article

  • Java memory allocation under Linux

    - by pstanton
    I'm running 4 Java processes with the following command:

        java -Xmx256m -jar ...

    The system has 8GB of memory under Fedora 12, yet it is apparently going into swap. How can that be if 4 x 256m = 1GB? EDIT: also, how can all 8GB of memory be used with so little memory allocated to basically the only thing running? Is Java not garbage collecting because the OS tells it it doesn't need to, or what? Output of top:

        top - 20:13:57 up 3:55, 6 users, load average: 1.99, 2.54, 2.67
        Tasks: 251 total, 6 running, 245 sleeping, 0 stopped, 0 zombie
        Cpu(s): 50.1%us, 2.9%sy, 0.0%ni, 45.1%id, 1.1%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem:  8252304k total, 8195552k used, 56752k free, 34356k buffers
        Swap: 10354680k total, 74044k used, 10280636k free, 6624148k cached

          PID USER     PR  NI  VIRT   RES  SHR S %CPU %MEM   TIME+  COMMAND
         1948 xxxxxxxx 20   0 1624m  240m 4020 S 96.8  3.0 164:33.75 java
         1927 xxxxxxxx 20   0  139m   31m  27m R 91.8  0.4  38:34.55 postgres
         1929 xxxxxxxx 20   0 1624m  200m 3984 S 86.2  2.5 183:24.88 java
         1969 xxxxxxxx 20   0 1624m  292m 3984 S 65.6  3.6 154:06.76 java
         1987 xxxxxxxx 20   0  137m   29m  27m R 28.5  0.4  75:49.82 postgres
         1581 root     20   0  159m   18m 4712 S 22.5  0.2  52:42.54 Xorg
         2411 xxxxxxxx 20   0  309m  9748 4544 S 20.9  0.1  45:05.08 gnome-system-mo
         1947 xxxxxxxx 20   0  137m   28m  27m S 13.3  0.4  44:46.04 postgres
         1772 xxxxxxxx 20   0  135m   25m  25m S  4.0  0.3   1:09.14 postgres
         1966 xxxxxxxx 20   0  137m   29m  27m S  3.0  0.4  64:27.09 postgres
         1773 xxxxxxxx 20   0  135m   732  624 S  1.0  0.0   0:24.86 postgres
         2464 xxxxxxxx 20   0 15028  1156  744 R  0.7  0.0   0:49.14 top
          344 root     15  -5     0     0    0 S  0.3  0.0   0:02.26 kdmflush
            1 root     20   0  4124   620  524 S  0.0  0.0   0:00.88 init
            2 root     15  -5     0     0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root     RT  -5     0     0    0 S  0.0  0.0   0:00.00 migration/0
            4 root     15  -5     0     0    0 S  0.0  0.0   0:00.04 ksoftirqd/0
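
    Two readings of that top output help here. First, -Xmx only caps the heap; per-thread stacks, PermGen and the JIT code cache sit on top of it, which is why VIRT shows 1624m against a 256m heap, though RES (240-292m) is the figure that matters. Second, "used" includes the page cache, which the kernel reclaims on demand; a sketch of the arithmetic:

        # 8195552k used - 6624148k cached - 34356k buffers ~= 1.5GB actually
        # occupied by processes; free -m shows the same split directly:
        free -m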

    Read the article

  • Should I completely turn off swap for a Linux web server?

    - by Poma
    Recently a friend told me that it is a good idea to turn off swap on Linux web servers with enough memory. My server has 12GB and currently uses 4GB under peak load (not counting cache and buffers). His argument was that in a normal situation the server will never use all of its RAM, so the only way it can hit an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash; but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, under a DDoS the system may end up completely unusable due to huge lags, and I probably will not be able to log in to kill the web server process or deny all incoming traffic (all but SSH). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
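
    A middle ground worth considering before removing swap outright: keep a small swap but tell the kernel to leave it alone until memory pressure is real. A sketch:

        sysctl -w vm.swappiness=1                        # apply now
        echo 'vm.swappiness = 1' >> /etc/sysctl.conf     # persist across reboots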

    Read the article

  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with AJAX. A few days ago, without any changes being made, the script stopped working properly. It sometimes works but is very slow; other times it doesn't load completely; yet other times it loads an empty page or shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log. We contacted iPage.com hosting support and sent screenshots, and all we managed to get from them was an incomplete screenshot and the message "We tested. The page loads." So they won't admit the fault; it looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support; we have had all kinds of problems with iPage hosting. My questions are: 1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, getting it to appear more often. 2. Is there a way to test the server's responses, to see if it drops connections?
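
    On question 2, the failures can be made countable from any shell: loop a request and log status codes and timings, which also produces evidence to send the host. A sketch (the URL is a placeholder):

        for i in $(seq 1 50); do
          curl -o /dev/null -sS -w '%{http_code} %{time_total}s\n' \
               --max-time 30 http://example.com/page.php
        done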

    Read the article

  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read here in Booting Process: "The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed." My question: my hard disk holds two operating system images, Windows and Ubuntu, so shouldn't both of the partitions they reside in be active? Why is there always only one active partition? (I know the active partition is one of the primary partitions, but why do we give special status to just one primary partition?) I am a bit confused. Please help me resolve this. Thank you so much.
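
    To make it concrete: the boot flag lives in the partition table, and tools mark it with an asterisk. Dual boot works because the single active partition (or the MBR itself) holds a boot loader such as GRUB, which then chain-loads either OS, so the second OS never needs an active partition of its own. Illustrative output (values are made up):

        $ fdisk -l /dev/sda
        Device     Boot   Start     End   Blocks  Id  System
        /dev/sda1  *       2048  409599   203776  83  Linux       <- the one active partition
        /dev/sda2        409600     ...      ...   7  HPFS/NTFS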

    Read the article

  • MySQL InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. All of the updates for this table are stored in another table; they have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of the updates into memory in a Ruby script and group them into a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

        { 150 => [1,2,3,4,5,6],
          160 => [7,8,9,10] }

    Updates are then issued in the form:

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure this is correct in the sense that issuing the updates on sorted batches of product_ids should be optimal for MySQL/InnoDB. I'm hitting a weird issue, though: when I tested updating ~13 million records, it took around 45 minutes, but with ~17 million records the updates take closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with the larger record set? As far as server specs go, they're pretty good: heaps of memory/CPU, and the whole DB should fit into memory with plenty of room to grow.
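
    One hedged restructuring to test on a copy: since the updates are already materialized in a table, a single join update lets InnoDB make one pass instead of parsing thousands of IN lists (the table and column names below are assumed from the description):

        UPDATE product p
        JOIN product_updates u ON u.product_id = p.id
        SET p.update_value = u.update_value;

    If it must stay batched, wrapping every few hundred statements in START TRANSACTION ... COMMIT also cuts the per-statement flush cost.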

    Read the article

  • How many iptables block rules is too many?

    - by mhost
    We have a server with a quad-core AMD Opteron 2378. It acts as our firewall for several servers, and I've been asked to block all IPs from China. On a separate network, we have some small VPS machines (256MB and 512MB), and I've been asked to block China on those VPSes as well. I've looked online and found lists that require around 4,500 block rules. My question is: will putting in all 4,500 rules be a problem? I know iptables can handle far more rules than that; what concerns me is that since these blocks must not get access to any port, the rules need to sit before any allow. That means all legitimate traffic gets compared against all those rules before getting through. Will traffic be noticeably slower after implementing this? Will the small VPSes be able to handle processing that many rules for every new packet (I'll put an established allow before the blocks)? My question is not "How many rules can iptables support?"; it's about the effect these rules will have on load and speed. Thanks.
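
    The usual answer to exactly this worry is ipset: the 4,500 networks go into a hashed set and iptables matches the whole lot with one rule, so the per-packet cost stays roughly constant instead of linear. A sketch (the list file is a placeholder):

        ipset create china hash:net
        while read net; do ipset add china "$net"; done < china_cidrs.txt
        # one rule, placed after the established/related accept:
        iptables -A INPUT -m set --match-set china src -j DROP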

    Read the article

  • Boot order of SATA disks?

    - by kire
    I have a computer with 4 SATA disks connected: 2 to the motherboard and 2 to a PCI card. I've moved all the disks from other computers, so 2 of them already have Windows installed; the other 2 have just held data. I know one of the Windows installations works, and that's the install I want to use; if only that disk is connected, the computer boots fine. The problem is that when I boot with all 4 disks connected, I get an error message about not being able to boot. The message is in Swedish (the wrong Windows installation is Swedish), so it has to come from that installation. OK, that means it is trying to boot from the wrong disk. If I pull out that disk, I get the same error in English; if I pull out the fourth disk instead, I get another error, about NTLDR not being able to load. If I disconnect all drives except the one with the correct Windows installation, Windows boots fine, and I can also connect the other drives while Windows is running and browse around in them without any problems. I have no clue what to do. In my BIOS setup I can only select SATA as a boot option, not the order of the disks. I also tried to remove what's left of Windows on the other disk by simply deleting everything in Explorer (with hidden and system files visible). Both Windows installations are XP, btw. Edit: I switched the cables around and now it magically works. :)

    Read the article

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24GB of RAM running Server 2012. We recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on the new 12-core machine. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer on the domain, over a 1Gbps connection, takes 10-15 seconds to load. Also running slow are the public calendars that people here need to access: again 10-15 seconds, and they can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to Exchange so people can get work email when they are out of the office. Guess what: that is also running slow. I have searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I should also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC. Update: it would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50MB of email (Outlook Anywhere).

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware; in particular, distinguishing between instances where just some data is lost vs. the whole filesystem (or whether there even is such a distinction in ZFS). For example: say a vdev is lost due to a failure like an external drive enclosure losing power or a controller card failing. From what I've read, the pool should go into a faulted mode, but if the vdev is returned, the pool should recover? Or not? Or if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: we're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so) and use the rest of the storage for "archived" data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.

    Read the article

  • Python 2.7 and Ubuntu 10.10: X11 fails

    - by c.a.p.
    Against every recommendation, I ran apt-get remove python on Ubuntu 10.10 (build 2.6.24-29-server x86_64) and then apt-get install python2.7. Built-in software with python2.6 dependencies (Firefox, other minor stuff) was fixed just by apt-get reinstalling or reinstalling from source, and the system was stable for one day, until I rebooted. Upon booting I got the message "Ubuntu is running in low-graphics mode". I have an NVIDIA Quadro NVS 295 (256MB) in an HP Z600 workstation, so I sudoed:

        apt-get --purge remove nvidia*
        apt-get --purge remove xserver*
        apt-get install linux-headers-generic
        apt-get install nvidia-*
        apt-get install xserver-xorg
        dpkg-reconfigure xserver-xorg
        nvidia-xconfig

    Upon rebooting I get the same "Ubuntu is running in low-graphics mode". If I tell the boot menu to restart X, the Ubuntu 10.10 load screen shows up and does nothing for hours. If I run X, the screen remains black for hours; Ctrl-C shows that /etc/X11/xorg.conf did not produce any errors, just a warning ('Type "ONE_LEVEL"...'). /var/log/Xorg.0.log shows the following warnings (no errors):

        (WW) AllowEmptyInput is on
        (WW) NVIDIA(0): UBB is incompatible

    If I run startx, I get:

        /usr/bin/python: can't find '__main__.py' in '/usr/share/command-not-found'

    Before reinstalling xserver-xorg, however, startx did run, if only by changing the resolution of the tty1 console. Any hints?
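
    For the Python side of it, a hedged repair sketch: Ubuntu 10.10's own tooling (command-not-found among it) expects /usr/bin/python to be 2.6, which matches the startx error above:

        sudo apt-get install --reinstall python python-minimal python2.6
        sudo ln -sf python2.6 /usr/bin/python    # restore the expected default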

    Read the article

  • Profile not loaded correctly (Cannot access registry)

    - by xaav
    Every so often I log on and get the following message: "User profile was not loaded correctly. You have been logged on with a temporary profile. Changes you make to this profile will be lost when you log off. Please see the event log for details or contact your administrator." This almost always happens after somebody else has been on the computer for a while and then I log on. It never used to happen, but now it happens pretty often. My profile is not permanently corrupted; all I have to do is restart the computer. But it annoys me and I would like to fix it. I was curious about the cause, so I looked in the event log and found the root of the problem: the ntuser.dat file in the profile I was logging on to was locked at logon time. This meant the current user's registry hive could not be loaded, resulting in the failure to load the profile. I just found a Microsoft article that mentions this exact issue: http://support.microsoft.com/kb/960464/ The problem is that I do not want to delete this profile, and the issue only comes up when somebody else has been on for a long time before me. What could be locking this file? Is there any way to get a process list without logging on, so that I can identify which process has the file locked? Any other suggestions?
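
    On identifying the locker: Sysinternals Handle can search open handles by name from an elevated prompt, which answers the "which process has the file locked" part; it has to be run while the problem is live, e.g. from another admin account. A sketch:

        rem download handle.exe from Sysinternals, then:
        handle.exe -a -u ntuser.dat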

    Read the article
