Search Results

Search found 16560 results on 663 pages for 'high tech resources'.


  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have two cPanel web servers that are small 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, never exceed more than 10% CPU usage, and have plenty of RAM. The problem is that they both have RAID 1 160GB SAS hard disk drives that are 75% full and growing by the day. I didn't think disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:
      - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? :)
      - Any recommendations on networking? Is gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE card?
      - Has anyone benchmarked or had any performance issues with SAN versus NAS?
    Any help greatly appreciated. Scott
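
    Not from the question - a minimal sketch of what an NFS-based /home mount might look like on each cPanel server; the hostname "storage01" and export path are hypothetical, and an iSCSI/SAN setup would look different:

        # /etc/fstab entry on each web server (assumes the storage server exports /home over NFS)
        storage01:/export/home  /home  nfs  rw,hard,intr,rsize=32768,wsize=32768  0  0

        # mount without rebooting and verify
        mount /home
        df -h /home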


  • Linking Linux MIT Kerberos with a Windows 2003 Active Directory

    - by Beerdude26
    Greetings, I was wondering how one might link a Linux MIT Kerberos with a Windows 2003 Active Directory to achieve the following:
      1. A user, [email protected], attempts to log in at an Apache website, which runs on the same server as the Linux MIT Kerberos.
      2. The Apache module first asks the local Linux MIT Kerberos if it knows a user by that name or realm.
      3. The MIT Kerberos finds out it isn't responsible for that realm and forwards the request to the Windows 2003 Active Directory.
      4. The Windows 2003 Active Directory replies positively and gives this information to the Linux MIT Kerberos, which in turn tells this to the Apache module, which grants the user access to its files.
    Here is an image of the situation: http://img179.imageshack.us/img179/5092/linux2k3.png (I'm not allowed to embed images just yet.) The documentation I have read concerning this issue often differs from my problem: some discuss linking an MIT Kerberos with an Active Directory to gain access to resources on the Active Directory server, while others use the link to authenticate Windows users to the MIT Kerberos through the Windows 2003 Active Directory. (My problem is the other way around.) So what my question boils down to is this: is it possible to have a Linux MIT Kerberos server pass through requests for an Active Directory realm, and then have it receive the reply and give it to the requesting service? (Although it's not a problem if the requesting service and the Windows 2003 Active Directory communicate directly.) Suggestions and constructive criticism are greatly appreciated. :)


  • ssl_error_rx_record_too_long connection error with SSL site in Firefox

    - by Thomas
    I am trying to set up SSL in Apache, but when I go to the server in Firefox I get the following error message:

        An error occurred during a connection to sludge.home.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My virtual host config file looks like this:

        <IfDefine SSL>
        <VirtualHost *:443>
            ServerName sludge.home
            SSLEngine on
            SSLCertificateFile /usr/local/apache/cert/server.crt
            SSLCertificateKeyFile /usr/local/apache/cert/server.pem
            SSLProtocol -SSLv2
            SSLCipherSuite HIGH:!ADH:!EXP:!MD5:!NULL
            DocumentRoot "/usr/local/apache/htdocs"
            ServerAdmin [email protected]
            <Location />
                AuthType Digest
                AuthName "private area"
                AuthDigestDomain /
                AuthDigestProvider file
                AuthUserFile /usr/local/apache/passwd/digest_pw
                Require valid-user
            </Location>
            <Directory /usr/local/apache/htdocs/bugz>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory /usr/local/apache/htdocs/bugzilla>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
            <Directory "/usr/local/apache/htdocs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule dir_module>
                DirectoryIndex index.html
            </IfModule>
            <FilesMatch "^\.ht">
                Order allow,deny
                Deny from all
                Satisfy All
            </FilesMatch>
        </VirtualHost>
        </IfDefine>

    A telnet to this server reveals that the server is sending plain HTML back. Furthermore, it seems as though mod_ssl is not loaded/working, even though when I run httpd -l it shows up as being statically compiled in. I have exhausted most avenues I can think of.
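
    Not part of the question - a quick check of whether anything is actually speaking TLS on port 443 (this error typically appears when the server answers with plain HTTP on the HTTPS port); paths assume the /usr/local/apache prefix used in the config above:

        # if mod_ssl is answering, this prints a certificate chain;
        # plain HTML or an immediate error means port 443 is serving unencrypted HTTP
        openssl s_client -connect sludge.home:443

        # confirm the config parses with the SSL define active, since the vhost is
        # wrapped in <IfDefine SSL> and only exists when httpd is started with -DSSL
        /usr/local/apache/bin/httpd -DSSL -t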


  • Setting up SQL Server 2005 to use all available memory in 32bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line - but they either sometimes contradict each other, or don't show how to properly verify that everything is actually working - hopefully this can be comprehensive... I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8GB of memory installed, and the system is almost entirely used as a database server - there are some services running on it, but the OS + services can run within 1GB of RAM. What I've done (please tell me if I'm doing something wrong):
      - /3GB in the boot.ini (to increase the amount of user-space memory available - info).
      - /PAE in the boot.ini (Windows claimed to be doing PAE even without this switch, somehow).
      - Enabled AWE in SQL Server.
      - Enabled the Lock Pages in Memory option for the SYSTEM and Local Service users (info). SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server (info).
      - Set min/max memory to 1024MB/5112MB.
    After doing all the above, we definitely saw a level of improvement - but I'd now like to verify my settings and make sure I'm making full use of the memory available. (There appeared to be a slowdown when max = 7GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following counters in PerfMon:
      - Process(sqlserv): Working Set: 76386304
      - SQL Server (Memory Manager): Total Server Memory: 3538944 (I saw a doc noting that this isn't the full memory used by SQL Server, so I'm not sure whether to trust it)
    So -- my questions...
      - Should my max be around 7GB? If not, what should it be?
      - Why is total server memory at 3.5GB when it's been allocated 5GB?
      - What is the proper metric for the amount of memory allocated to SQL Server? The working set seems a bit large...
      - Am I possibly missing any steps in the setup?
      - Any recommended resources on starting to tune the caching system now?
    Thanks
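
    For reference only (not from the question), the AWE and memory-cap settings described above are usually applied and re-checked with sp_configure; a minimal T-SQL sketch, assuming the values are the ones listed and that the instance is restarted after enabling AWE:

        -- enable advanced options, AWE, and the min/max memory caps (values in MB)
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;              -- takes effect after a service restart
        EXEC sp_configure 'min server memory (MB)', 1024;
        EXEC sp_configure 'max server memory (MB)', 5112;
        RECONFIGURE;

        -- compare what SQL Server is targeting versus actually using
        SELECT counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');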


  • Windows Audio Issue

    - by Nikki
    This one is driving me nuts. Hoping someone can shed some light. I'm running Windows 7 using onboard audio. It's been fine for over two years, but lately there's a problem every time I play audio: I hear a small, soft burst of static and the volume turns itself down from 50% to 23%. Once at 23%, it plays fine. No related events are logged in Event Viewer. No reported problems with the device. Different headphones, same problem. I played around with audio settings for hours but the problem persists.
    EDIT: OK, more info. Motherboard: ECS G31T-M LGA775. System info displays this:

        Name: High Definition Audio Device
        Manufacturer: Microsoft
        Status: OK
        PNP Device ID: HDAUDIO\FUNC_01&VEN_1106&DEV_E721&SUBSYS_10192683&REV_1001\4&3D4E739&0&0001
        Driver: c:\windows\system32\drivers\hdaudio.sys (6.1.7600.16385, 297.00 KB (304,128 bytes), 14/07/2009 9:51 AM)

    I'll keep adding info as I find it. The question I want resolved is: is it faulty hardware? If so, I can buy a sound card. I can't imagine software is responsible, since I haven't installed anything new for weeks. Virus scans are clear as well. The static burst is irritating, to say the least. Tried two different headphones and separate speakers; same problem. I know it's not an easy problem, but I was hoping someone had encountered the same thing.


  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a Powershell script which spawns various sub-processes to do its work. When I run the same process interactively from a Powershell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the Powershell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.
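
    For context (not from the question), resetting the priority from inside the script is typically a one-liner like the sketch below; processes spawned afterwards inherit the new priority class, but anything already running keeps the old one. Task Scheduler 2.0 also allows setting the task's own priority in its XML definition.

        # run early in the scheduled PowerShell script: lift the host back to Normal priority
        (Get-Process -Id $PID).PriorityClass = [System.Diagnostics.ProcessPriorityClass]::Normal

        # spot-check what the build's child processes actually inherited
        Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, PriorityClass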


  • SSL on nginx + unicorn got "Error 102 (net::ERR_CONNECTION_REFUSED)"

    - by panggi
    I tried to deploy my app on EC2 (opened ports: 22, 80, 443).
      App: Rails 3.2.2
      Server: nginx 1.2.1, unicorn gem (latest), Ubuntu 12.04
      Deployer: Capistrano
    I tried to follow the instructions in Railscasts: http://railscasts.com/episodes/335-deploying-to-a-vps (sorry, it's a Pro episode). Everything is fine over plain HTTP on port 80, but I get Error 102 after trying to use SSL. Here is the nginx.conf content:

        upstream unicorn {
          server unix:/tmp/unicorn.frontend.sock fail_timeout=0;
        }

        server {
          server_name beta.sukeru.com;
          listen 443 default;
          root /home/deployer/apps/appname/current/public;

          ssl on;
          ssl_certificate server.crt;
          ssl_certificate_key server.key;
          ssl_session_timeout 5m;
          ssl_protocols SSLv2 SSLv3 TLSv1;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;

          location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            proxy_pass http://unicorn;
          }

          error_page 500 502 503 504 /500.html;
        }

    In production.rb I set config.force_ssl = true. Can anyone give a solution for this? :)
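
    Not from the question - a few quick checks that usually narrow down ERR_CONNECTION_REFUSED, since it generally means nothing reachable is listening on 443 rather than a failed SSL handshake (the EC2 security group must also allow 443):

        # is nginx actually listening on 443?
        sudo netstat -tlnp | grep ':443'

        # does the config parse, and was nginx reloaded after the SSL server block was added?
        sudo nginx -t && sudo service nginx reload

        # test from outside the instance; -k skips certificate verification
        curl -vk https://beta.sukeru.com/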


  • Apache not Forwarding Client x509 Certificate to Tomcat via mod_proxy

    - by hooknc
    Hi everyone, I am having difficulties getting a client x509 certificate to be forwarded to Tomcat from Apache using mod_proxy. From observations and reading a few logs, it does seem as though the client x509 certificate is being accepted by Apache. But when Apache makes an SSL request to Tomcat (which has clientAuth="want"), it doesn't look like the client x509 certificate is passed during the SSL handshake. Is there a reasonable way to see what Apache is doing with the client x509 certificate during its handshake with Tomcat? Here is the environment I'm working with:

        Apache/2.2.3
        Tomcat/6.0.29
        Java/6.0_23
        OpenSSL 0.9.8e

    Here is my Apache VirtualHost SSL config:

        <VirtualHost xxx.xxx.xxx.xxx:443>
            ServerName xxx
            ServerAlias xxx
            SSLEngine On
            SSLProxyEngine on
            ProxyRequests Off
            ProxyPreserveHost On
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
            SSLCertificateFile /usr/local/certificates/xxx.crt
            SSLCertificateKeyFile /usr/local/certificates/xxx.key
            SSLCertificateChainFile /usr/local/certificates/xxx.crt
            SSLVerifyClient optional_no_ca
            SSLOptions +ExportCertData
            CustomLog logs/ssl_request_log \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://xxx.xxx.xxx.xxx:8443/
            ProxyPassReverse / https://xxx.xxx.xxx.xxx:8443/
        </VirtualHost>

    Then here is my Tomcat SSL Connector:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
            address="xxx.xxx.xxx.xxx" maxThreads="150" scheme="https" secure="true"
            keystoreFile="/usr/local/certificates/xxx.jks" keypass="xxx_pwd"
            clientAuth="want" sslProtocol="TLSv1"
            proxyName="xxx.xxx.xxx.xxx" proxyPort="443" />

    Could there possibly be issues with SSL renegotiation? Could there be problems with the truststore in our Tomcat instance? (We are using a non-standard truststore that has partner organization CAs.) Is there better logging for what is happening internally with Apache for SSL - like what is happening to the client cert, or why it isn't forwarding the certificate when Tomcat asks for one? Any reasonable assistance would be greatly appreciated. Thank you for your time.
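
    Not part of the question - one way to observe the Apache-to-Tomcat side of the handshake from the Apache host is to impersonate it with openssl s_client; with clientAuth="want", Tomcat sends a CertificateRequest listing the CAs in its truststore, which shows up as "Acceptable client certificate CA names" in the output (the IP/port below are the placeholders from the config, and client.crt/client.key are hypothetical test credentials):

        # -state prints each handshake step; look for the CertificateRequest and
        # the "Acceptable client certificate CA names" list from Tomcat's truststore
        openssl s_client -connect xxx.xxx.xxx.xxx:8443 -state

        # optionally present a client certificate yourself to confirm Tomcat accepts one
        openssl s_client -connect xxx.xxx.xxx.xxx:8443 -cert client.crt -key client.key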


  • Exchange 2007 restore - Backup Exec Unable to Attach to a resource

    - by Andy
    I have been struggling with this one for months! Grateful for any advice. The setup is a Windows 2003 server network with four servers on the domain, two of them Exchange 2007 servers (only one still has mailboxes on it). Backup Exec (12.5) runs on a non-Exchange server with agents on the others. Backup Exec runs a full backup of Exchange across the network well, at pretty reasonable speeds. However, when you try any kind of restore (individual emails, mailboxes or whole-system restore - all to the same location or to an alternate server, RSG, etc.) the following message is received within about 10-15 seconds of starting the job:

        Job ended: 24 December 2010 at 13:28:32
        Completed status: Failed
        Final error: 0xe000848c - Unable to attach to a resource. Make sure that all selected
        resources exist and are online, and then try again. If the server or resource no longer
        exists, remove it from the selection list. Edit the selection list properties, click the
        View Selection Details tab, and then remove the resource.
        Final error category: Resource Errors
        For additional information regarding this error refer to link V-79-57344-33932

    Things I have already tried:
      - Changed the account to the main administrator account (with all permissions)
      - Checked versions of ese.dll on both servers - both the same
      - Checked that all VSS writers on both servers are stable/normal
      - Restoring to different locations
    Any advice anyone could give would be much appreciated. Many thanks, Andy


  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN setup in a 1U Ubuntu server running iSCSI-Target with two 300GB drives in RAID-0. We are using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We only have a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start OK, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, sure enough the load is 7+, but CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures. What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load:
      1. Output of iotop on the SAN (screenshot)
      2. Output of top on the SAN (screenshot)
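
    Not from the question - a load average that climbs while the CPU stays idle usually means processes stuck in uninterruptible I/O wait; a hedged way to confirm that on the SAN while the compile runs (iostat comes from the sysstat package):

        # %util near 100 and a high await on the RAID-0 device points at disk saturation
        iostat -x 5

        # the 'b' column counts processes blocked on I/O; 'wa' is CPU time spent waiting on I/O
        vmstat 5

        # list processes in uninterruptible sleep (state D), typically the iSCSI target threads
        ps -eo state,pid,cmd | grep '^D'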


  • Throttling Postfix memory

    - by teddybeard
    I have a VPS on 1and1 similar to this configuration (512MB, burst up to 2GB). I run a web service where I crawl the web and notify my users through email and SMS when a certain online data feed changes. When I send the emails out, I just have PHP loop through the recipient list and send the emails using the mail() function. Whenever I try to send a large volume of these messages, my server starts acting funny. I can't even run an 'ls' sometimes because the shell tells me it 'cannot allocate memory'. The shell is unusable, and yet my website is being served up fine. mail.err contains:

        Nov 14 17:30:09 s15351477 postfix/smtp[26000]: fatal: inet_addr_local[getifaddrs]: getifaddrs: Cannot allocate memory
        Nov 14 17:30:09 s15351477 postfix/sendmail[25999]: fatal: username(1000): unable to execute /usr/sbin/postdrop -r: Success
        Nov 14 18:29:14 s15351477 postfix/smtp[9911]: fatal: inet_addr_local[getifaddrs]: getifaddrs: Cannot allocate memory
        Nov 14 18:29:14 s15351477 postfix/sendmail[9910]: fatal: username(1000): unable to execute /usr/sbin/postdrop -r: Success

    Also, if relevant, my bean counters are:

        Version: 2.5
               uid  resource      held      maxheld    barrier      limit    failcnt
          53907331: kmemsize      20779422  21041560   31457280     34603008   2989403
                    lockedpages   0         0          512          512        0
                    privvmpages   81488     82498      524288       576716     94640
                    shmpages      2831      2831       32768        32768      0
                    dummy         0         0          9223372036854775807  9223372036854775807  0
                    numproc       90        91         128          128        6603
                    physpages     32692     33531      2147483647   2147483647 0
                    vmguarpages   0         0          131072       2147483647 0
                    oomguarpages  32942     33781      9223372036854775807  2147483647  0
                    numtcpsock    22        23         720          720        0
                    numflock      27        28         376          413        0
                    numpty        1         1          32           32         0
                    numsiginfo    0         1          512          512        0
                    tcpsndbuf     425888    441064     3440640      5406720    0
                    tcprcvbuf     369200    376832     3440640      5406720    0
                    othersockbuf  268000    268464     2252160      4194304    0
                    dgramrcvbuf   0         8472       524288       576716     0
                    numothersock  180       182        720          720        0
                    dcachesize    952146    966231     5242880      5767168    0
                    numfile       3609      3683       8192         8192       0
                    dummy         0         0          0            0          0
                    dummy         0         0          0            0          0
                    dummy         0         0          0            0          0
                    numiptent     25        25         200          205        0

    Is there some way I can throttle Postfix to keep it from swamping the system like this? Also wondering: why does email use so many resources? These emails are just short text.
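
    Not part of the question - on an OpenVZ/Virtuozzo VPS like this, the getifaddrs "Cannot allocate memory" failures line up with container limits being hit (the privvmpages, kmemsize and numproc rows above already show non-zero failcnt); a simple way to watch which limit is being exhausted while a mailout runs, plus one throttling knob worth looking at (the limit value is a suggestion, not from the question):

        # failcnt incrementing on privvmpages/kmemsize/numproc means the container hit its limits
        watch -n 5 "egrep 'privvmpages|kmemsize|numproc' /proc/user_beancounters"

        # Postfix can be told to run fewer parallel delivery processes via main.cf, e.g.
        #   default_process_limit = 10
        postconf default_process_limit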


  • X11 performance problem after upgrading from CentOS 3 to CentOS 5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from CentOS 3 to CentOS 5, an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU, and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver, as there is no proprietary ATI driver for this board on Linux. The xorg.conf:

        Section "Device"
            Identifier "Videocard0"
            Driver     "ati"
        EndSection

        Section "Screen"
            Identifier   "Screen0"
            Device       "Videocard0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
                Modes    "1024x768" "800x600" "640x480"
            EndSubSection
        EndSection

        Section "DRI"
            Group 0
            Mode  0666
        EndSection

    A similar machine that still has CentOS 3 installed is able to start DRI on the X server, while this one is not. This is the Xorg.0.log for the CentOS 5 machine:

        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        [drm] failed to load kernel module "mach64"
        (II) ATI(0): [drm] drmOpen failed
        (EE) ATI(0): [dri] DRIScreenInit Failed
        (II) ATI(0): Largest offscreen areas (with overlaps):
        (II) ATI(0): 1024 x 1279 rectangle at 0,768
        (II) ATI(0): 768 x 1280 rectangle at 0,768
        (II) ATI(0): Using XFree86 Acceleration Architecture (XAA)
            Screen to screen bit blits
            Solid filled rectangles
            8x8 mono pattern filled rectangles
            Indirect CPU to Screen color expansion
            Solid Lines
            Offscreen Pixmaps
            Setting up tile and stipple cache:
            32 128x128 slots
            10 256x256 slots
        (==) ATI(0): Backing store disabled
        (==) ATI(0): Silken mouse enabled
        (II) ATI(0): Direct rendering disabled
        (==) RandR enabled

    I also tried using EXA instead of XAA and setting:

        Option "AccelMethod" "XAA"
        Option "XAANoOffscreenPixmaps" "true"

    uname -a:

        Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux

    rpm -qa | grep xorg-x11-server:

        xorg-x11-server-utils-7.1-4.fc6
        xorg-x11-server-sdk-1.1.1-48.52.el5
        xorg-x11-server-Xvfb-1.1.1-48.52.el5
        xorg-x11-server-Xnest-1.1.1-48.52.el5
        xorg-x11-server-Xorg-1.1.1-48.52.el5

    The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
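
    Not from the question - the '[drm] failed to load kernel module "mach64"' line suggests first checking whether the CentOS 5 kernel ships a mach64 DRM module at all; a quick sketch (if modinfo finds nothing, the stock kernel simply doesn't provide it and DRI cannot start):

        # does the running kernel provide a mach64 DRM module?
        modinfo mach64
        find /lib/modules/$(uname -r) -name 'mach64*'

        # confirm whether the DRI device node was ever created
        ls -l /dev/dri/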


  • Remove Kernel Lock from Unmounted Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right-click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed, and the output of gphoto2 is then:

        # gphoto2 --list-config
        /Camera Configuration/Picture Settings/resolution
        /Camera Configuration/Picture Settings/shutter
        /Camera Configuration/Picture Settings/aperture
        /Camera Configuration/Picture Settings/color
        /Camera Configuration/Picture Settings/flash
        /Camera Configuration/Picture Settings/whitebalance
        /Camera Configuration/Picture Settings/focus-mode
        /Camera Configuration/Picture Settings/focus-pos
        /Camera Configuration/Picture Settings/exp
        /Camera Configuration/Picture Settings/exp-meter
        /Camera Configuration/Picture Settings/zoom
        /Camera Configuration/Picture Settings/dzoom
        /Camera Configuration/Picture Settings/iso
        /Camera Configuration/Camera Settings/date-time
        /Camera Configuration/Camera Settings/lcd-mode
        /Camera Configuration/Camera Settings/lcd-brightness
        /Camera Configuration/Camera Settings/lcd-auto-shutoff
        /Camera Configuration/Camera Settings/camera-power-save
        /Camera Configuration/Camera Settings/host-power-save
        /Camera Configuration/Camera Settings/timefmt

    Some things I've tried already are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command.

    Update 1:

        # mount | grep sdg
        # mount | grep sg7
        # umount /dev/sg7
        umount: /dev/sg7: not mounted
        # umount /dev/sdg
        umount: /dev/sdg: not mounted
        # gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***
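
    Not from the question - what "Safely Remove Device" does can be approximated from the shell by telling the kernel to drop the SCSI device the camera presented; a sketch using the sdg device name from the dmesg output above (the device comes back after unplugging and replugging the camera):

        # detach the SCSI disk; /dev/sdg and the matching /dev/sg7 node disappear
        echo 1 | sudo tee /sys/block/sdg/device/delete

        # afterwards gphoto2 should be able to claim the camera directly
        gphoto2 --list-config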


  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed. What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing resources on the 10.1.20.x network, need to be mangled to appear to be coming from the 10.1.20.0/24 network (with return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets destined for those IPs to their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125). I need to insert whatever rules I come up with into the IPCop ruleset through /etc/rc.local; I'm just not sure how I should structure this. There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting them to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have rules which do the IP matching and mangling. Am I approaching this right? If so, I'm not very familiar with mangle, and would appreciate seeing examples of how to write that source-IP rewrite. TIA!
    edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets; I guess I could alternatively put the rules in there.
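
    Not from the question - since the mapping is a fixed 1:1 translation of the last octet, this is really source NAT rather than mangle; if the NETMAP target is available in IPCop's kernel, a hedged sketch of what could go in /etc/rc.local (the orange interface name eth2 is an assumption, and conntrack handles the reply translation automatically):

        # 1:1 map 192.168.20.x -> 10.1.20.x for traffic leaving the orange interface
        iptables -t nat -A CUSTOMPOSTROUTING -s 192.168.20.0/24 -d 10.1.20.0/24 \
            -o eth2 -j NETMAP --to 10.1.20.0/24

    If only the 150+ specific hosts should ever be translated, per-host SNAT rules in the same chain would work instead, at the cost of one rule per address.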


  • Auto load balancing a two-node Hyper-V 2008 R2 Enterprise cluster?

    - by Kristofer O
    My setup is a two-node cluster with 72GB RAM each and a ~10TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on either server. I do a live migration to the other server if I need to run updates or whatever... Either one of the servers is capable of running all the VMs if needed, but the CPU usage gets pretty high. Here are my issues:
      - I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up to move one at a time? If I multi-select, I don't get the option to live migrate them one at a time, and if I'm already in the process of migrating one it gives me an error that it's currently migrating a VM... Is there a button I missed that will tell a node that it needs to migrate all its VMs elsewhere?
      - Another question: does anyone know the best way to load balance VMs based on CPU and/or network utilization? I have some VMs that don't do much and some that trash the CPU or network. I'd like to balance it out on both servers if at all possible. And is there any way to automate it?
      - Last question: if I overcommit my cluster, is there a way to tell the cluster that I want certain VMs to be running and to save-state other VMs based on available system resources? Say my one node blue-screens and the other node begins starting the VMs up - I want the unimportant ones to shut down or save state so the important ones can stay running or come back online.
    Thanks just for reading all that. Any help would be great.


  • Deleted, then added user w/ same name, now logs on w/ temp profile

    - by labyrinth
    I am a new admin at a high school lab and am trying to spearhead separation of normal IT accounts from IT admin accounts. I made my normal account (e.g. ITuser) and an admin account (e.g. ITuser-adm) on the server (Windows Server 2008 R2). I used both accounts on my main desktop for about a day, but decided I hadn't set up the admin account correctly. I deleted my admin account, then made a new one with the same name. The problem is that on my main desktop (Windows 7 Pro), whenever I log in with my admin account, it gives the following errors:

        Windows has backed up this user profile. Windows will automatically try to use the
        backup profile the next time this user logs on. (Error 1515)
        Windows cannot find the local profile and is logging you on with a temporary profile.
        Changes you make to this profile will be lost when you log off. (Error 1511)

    This is more of a nuisance than anything for me; I just thought I could use the same name for a user account I'd just deleted, since the accounts would have separate SIDs anyway. If it's less trouble, I could just make a new admin account, or I could keep using it as is, since I don't need to save anything locally anyway and the typical folder redirects work fine. I'm just curious and want to understand what's going on. There are no errors listed regarding the registry.


  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that sometimes complete in different time frames:
      - POST
      - OS installation
      - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
      - Software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.)
    I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is really large enough to be humanly noticeable? Maybe there are other factors that are as influential or even more so? So, in short - why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.


  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy, let's run this idea by the group here. I am thinking about using VMware Server in production to host, for a medium business of 50-100 desktops:
      - a 2008 Domain Controller with DHCP and DNS,
      - a 2008 member server with WSUS, some virus software, and other "management" utilities,
      - a second 2008 member server with SQL, IIS, and file shares.
    The reason I am leaning toward Server vs ESXi is for backup purposes. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the vmdks. I am wondering if putting this virtual environment on top of a basic 2008 Server install will allow for easier backups, both to tape and/or to offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host remove too many resources from the actual production machines? This would be a new Dell 410 server with 12GB RAM and (6) 600GB 15K drives in a RAID 6, with dual Intel Xeon 2.26GHz procs.


  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded to the OCZ-Agility3 120GB from a 60GB OCZ Vertex 2 SSD. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well and I have not had any problems. Recently, in the past month, I have started getting the error in the title. I downloaded the OCZToolboxMP and ran the SMART utility and don't see anything wrong:

        SMART READ DATA
        Model Number:  OCZ-AGILITY3
        Serial Number: OCZ-Y1945X77438P4NU6
        WWN:           5-e8-3a-97 ebea5ba76
        Revision:      10

        Attributes List
          1: SSD Raw Read Error Rate - Normalized Rate: 70 total ECC and RAISE errors
          5: SSD Retired Block Count - Reserve blocks remaining: 100%
          9: SSD Power-On Hours - Total hours power on: 968
         12: SSD Power Cycle Count - Count of power on/off cycles: 28
        171: SSD Program Fail Count - Total number of Flash program operation failures: 0
        172: SSD Erase Fail Count - Total number of Flash erase operation failures: 0
        174: SSD Unexpected Power Loss Count - Total number of unexpected power losses: 11
        177: SSD Wear Range Delta - Delta between most-worn and least-worn Flash blocks: 0
        181: SSD Program Fail Count - Total number of Flash program operation failures: 0
        182: SSD Erase Fail Count - Total number of Flash erase operation failures: 0
        187: SSD Reported Uncorrectable Errors - Uncorrectable RAISE errors reported to the host for all data access: 4145
        194: SSD Temperature Monitoring - Current: 30, High: 30, Low: 30
        195: SSD ECC On-the-fly Count - Normalized Rate: 120
        196: SSD Reallocation Event Count - Total number of reallocated Flash blocks: 100
        201: SSD Uncorrectable Soft Read Error Rate - Normalized Rate: 120
        204: SSD Soft ECC Correction Rate (RAISE) - Normalized Rate: 120
        230: SSD Life Curve Status - Current state of drive operation based upon the Life Curve: 100
        231: SSD Life Left - Approximate SSD life remaining: 100%
        241: SSD Lifetime Writes From Host - Lifetime writes: 893 GB
        242: SSD Lifetime Reads From Host - Lifetime reads: 968 GB

    Does anyone have any ideas of what might be wrong and/or how I can go about fixing this? Please let me know if there is other information I can provide. Thanks for your help.
    Windows 7 x64 SP1, AMD Phenom II X4 940, 8GB RAM


  • Finding all IP ranges belonging to a specific ISP

    - by Jim Jim
    I'm having an issue with a certain individual who keeps scraping my site in an aggressive manner; wasting bandwidth and CPU resources. I've already implemented a system which tails my web server access logs, adds each new IP to a database, keeps track of the number of requests made from that IP, and then, if the same IP goes over a certain threshold of requests within a certain time period, it's blocked via iptables. It may sound elaborate, but as far as I know, there exists no pre-made solution designed to limit a certain IP to a certain amount of bandwidth/requests. This works fine for most crawlers, but an extremely persistent individual is getting a new IP from his/her ISP pool each time they're blocked. I would like to block the ISP entirely, but don't know how to go about it. Doing a whois on a few sample IPs, I can see that they all share the same "netname", "mnt-by", and "origin/AS". Is there a way I can query the ARIN/RIPE database for all subnets using the same mnt-by/AS/netname? If not, how else could I go about getting every IP belonging to this ISP? Thanks.
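
    Not from the question - one commonly used approach is to query a routing registry for every route object originated by the ISP's AS number, which comes straight from the "origin/AS" field already visible in the whois output; a sketch (AS64496 is a documentation placeholder, substitute the real origin AS):

        # list every prefix registered with that origin AS in the RADB mirror
        whois -h whois.radb.net -- '-i origin AS64496'

        # keep only the route: lines (the CIDR blocks to feed into iptables/ipset)
        whois -h whois.radb.net -- '-i origin AS64496' | awk '/^route:/ {print $2}'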


  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host, which means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:

    Config 1

    This works pretty well, but it does force double authentication if you visit http://site.com - once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2

    If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it redirects to https://site.com/. So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts.

    Regarding the first config, Insyte mentioned: "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no such luck finding a force-SSL config option with the Redirect directive, and so have been unable to test this theory.


  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP:
      Apache 2.2.11
      MySQL 5.1.36 (InnoDB engine)
      PHP 5.3.0
    I observe that my WAMP server crashes in the following scenarios:
      - If I use a low-end PC (low processor speed and low RAM).
      - After making some changes to the httpd.conf file, e.g. changing the Allow from IP address. But here it crashes only once and then starts to work fine.
      - Random crashes.
    CRASH LOG

        szAppName : httpd.exe
        szAppVer : 2.2.11.0
        szModName : php5ts.dll
        szModVer : 5.3.0.0
        offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions:
      - Can high CPU utilization or low RAM also cause the HTTP server to crash?
      - Excessive file reading, as in every 10 seconds?
      - Unlimited script execution time? I have set the maximum execution time in my PHP script to 0, as the script sometimes has to execute for 2-3 days. Is there any way to avoid this?
      - Access to the database? Should we use a lock before reading and writing?
    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me.
    Regards, Mithun


  • Why datacenter water cooling is not widespread?

    - by MainMa
    From what I read and hear about datacenters, there are not too many server rooms which use water cooling, and none of the largest datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components using water cooling, while water-cooled rack servers are nearly nonexistent. On the other hand, using water can possibly (IMO):
      - Reduce the power consumption of large datacenters, especially if it is possible to create directly cooled facilities (i.e. the facility is located near a river or the sea).
      - Reduce noise, making it less painful for humans to work in datacenters.
      - Reduce the space needed for the servers:
          - On the server level, I imagine that in both rack and blade servers it's easier to route water cooling tubes than to waste space letting air pass through the chassis,
          - On the datacenter level, if the alleys between servers are still required for maintenance access, the empty space under the floor and at ceiling level used for the air can be removed.
    So why are water cooling systems not widespread, neither at the datacenter level nor at the rack/blade server level? Is it because:
      - Water cooling is hardly redundant at the server level?
      - The direct cost of a water-cooled facility is too high compared to an ordinary datacenter?
      - It is difficult to maintain such a system (regularly cleaning a water cooling system which uses water from a river is of course much more complicated and expensive than just vacuum-cleaning fans)?


  • Hyper-V virtual machine can't be migrated to a specific host in the cluster

    - by Massimo
    I have a three-node Hyper-V cluster running on Windows Server 2008 R2 which is working quite flawlessly: there are no errors, live migration works, all hosts can and will happily run all virtual machines, and so on. But one specific virtual machine is trying to make me go mad: it works on two nodes of the cluster, but not on the third one. Whenever I try to move the VM to that node, be it in a live migration or with the VM powered off, it always fails. In the event log of the host these events are logged:

        Source: Hyper-V-VMMS
        Event ID: 16300
        Cannot load a virtual machine configuration: General access denied error (0x80070005) (Virtual machine ID <GUID>)

        Source: Hyper-V-VMMS
        Event ID: 20100
        The Virtual Machine Management Service failed to register the configuration for the virtual machine '<GUID>' at 'C:\ClusterStorage\<PATH>\<VM>': General access denied error (0x80070005)

        Source: Hyper-V-High-Availability
        Event ID: 21102
        'Virtual Machine Configuration <VM>' failed to register the virtual machine with the virtual machine management service.

    All other VMs can be moved to/from the offending host, and the offending VM can be moved between the other two hosts. Also, this is not a storage problem, because there are other VMs in the same cluster volume, and the host has no trouble running them. What's going on here?


  • Improving Performance of RDP Over LAN

    - by Jared Brown
    Architecture: A deployment of 6 new HP thin clients (Windows XP Embedded) with TCP/IP access to several new HP servers (Windows 2003 Server). Each thin client is connected over fiber optic to a Gigabit Cisco switch, which the servers are connected to. There are 10/100 Ethernet-to-fiber converter boxes on each end of the fiber cables.

    Problem: Noticeable lag over RDP while using the Unigraphics CAD package. 3D models take 0.5 to 1 second to respond to mouse actions.

    Other details: Network throughput on each thin client's RDP session is 7288 kbps. RDP connection settings - color setting: 15k, all themes, etc. turned off. Local and remote system performance stats are well within norms (CPU, memory, and network).

    Questions: Are there newer versions of Terminal Services or RDP I can use on my existing OSes? Are there compression algorithms, etc. that are well suited for a high-bandwidth LAN? Are there valid alternatives that will yield higher performance (i.e. UltraVNC with drivers installed)? Are there TCP/IP tuning options I can exploit?

