Search Results

Search found 4701 results on 189 pages for 'ram moj'.

Page 59/189 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using a WAMP server on Windows XP: Apache 2.2.11, MySQL 5.1.36 (InnoDB engine), PHP 5.3.0. I observe that my WAMP server crashes in the following scenarios: if I use a low-end PC (low processor speed and low RAM); after making some changes to the httpd.conf file, for example changing the Allow from IP address (but here it crashes only once and then starts to work fine); and at random. Crash log: szAppName: httpd.exe, szAppVer: 2.2.11.0, szModName: php5ts.dll, szModVer: 5.3.0.0, offset: 0000c309, C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp, C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt. My questions: Can high CPU utilization or low RAM also cause the HTTP server to crash? Can excessive file reading, as in every 10 seconds? Can unlimited script execution time? (I have set the maximum execution time in my PHP script to 0, as the script sometimes has to run for 2-3 days - is there any way to avoid this?) Could it be access to the database - should we use a lock before reading and writing? Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the bottleneck is the disk (top shows the cp using a lot of RAM but not much CPU, and mostly in the uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
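    (A side note for context: the cp -al step is pure hard-link creation, so it is bound by per-inode metadata I/O rather than CPU. The Python sketch below is only an illustration of that mechanism - the paths are placeholders, and it is not a replacement for the existing scripts; rsync's --link-dest option can fold this linking step into the rsync run itself.)

      # Illustration only: a hard-link "shadow copy" of a tree, equivalent in
      # spirit to `cp -al previous/ current/`. Every file becomes a new
      # directory entry pointing at the same inode, so the cost is almost
      # entirely metadata writes. SRC and DST are placeholder paths.
      import os

      SRC = "/backup/daily.1"   # previous snapshot (placeholder)
      DST = "/backup/daily.0"   # new snapshot to create (placeholder)

      for dirpath, dirnames, filenames in os.walk(SRC):
          rel = os.path.relpath(dirpath, SRC)
          os.makedirs(os.path.join(DST, rel), exist_ok=True)
          for name in filenames:
              src_file = os.path.join(dirpath, name)
              dst_file = os.path.join(DST, rel, name)
              if os.path.islink(src_file):
                  os.symlink(os.readlink(src_file), dst_file)  # preserve symlinks
              else:
                  os.link(src_file, dst_file)                  # hard link, no data copied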

    Read the article

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine: sure, CPU goes up and free RAM goes down, but the result is about a 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server status), and only after the TimeOut setting (in my case set to 45) do the processes start to die and the server start responding again. My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It seems kind of strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.
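    (As an aside, the MaxClients arithmetic in the question generalises to a small sizing sketch. The 512 MB and 23 MB figures below come from the question, while the reserve for MySQL and the OS is purely an assumption:)

      # Sizing sketch: how many prefork workers fit in RAM without swapping.
      ram_mb = 512        # guaranteed RAM on the VPS, no swap (from the question)
      per_proc_mb = 23    # observed maximum Apache process size (from the question)
      reserve_mb = 80     # assumption: memory held back for MySQL, OS, everything else

      max_clients = (ram_mb - reserve_mb) // per_proc_mb
      print("MaxClients that fits in RAM:", max_clients)  # 18 with these numbers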

    Read the article

  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server that reports its 8 GB of RAM as 99% used. When I restart SQL Server, usage drops down to about 5%, but gradually builds back up to 99% over about 2 hours. When I look at the sqlserver process, it's reported as using only about 100k of RAM, and that number generally never moves up or down by very much. In fact, if I add up all the processes in Task Manager, the total barely scratches the surface of my total available memory (yet Task Manager still shows 99% memory usage with "All processes shown"). It appears that SQL Server has a huge memory leak going on but isn't reporting it. The server has run fine for nearly two years, and this only started to manifest itself in the last 3-4 weeks. Has anyone seen this or have any insight into the problem? EDIT: When the server hits 99%, performance goes downhill. All queries to the server, apps, etc. slow to a crawl. Restarting the service makes things zippy again, until 2 hours have passed and the server hits 99% once again.

    Read the article

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers etc., so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one - both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410). The old server spec is: 2 CPUs, 3.4GHz Pentiums, 8G of RAM, Fedora 11 currently installed. The new server spec is: 16 CPUs, 3.2GHz Xeon, 16G of RAM, CentOS 6.2 installed. The new server also has RAID10 - there is no RAID on the old one. Both servers currently have the same MySQL database with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18000 rows) and updates a value in that row. Every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced an interesting result: the old server was twice as fast as the new one at completing the run. Looking at the databases, both are configured exactly the same (the new one being a dump of the old one). Does anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction. Many thanks in advance.
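    (One thing worth ruling out before blaming the hardware is the commit pattern of the script itself: 18,000 single-row updates, each committed and flushed separately, are bound by how fast the disks acknowledge writes, which is exactly where RAID and write-cache settings differ between boxes. The Python/sqlite3 sketch below is only a stand-in for the Perl/MySQL script, showing the shape of the two workloads; the first loop is deliberately slow because it forces one flush per row:)

      # Stand-in for the row-by-row update script: 18,000 updates committed one
      # at a time, then the same updates in a single transaction.
      import sqlite3, time

      conn = sqlite3.connect("demo.db")
      conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val INTEGER)")
      conn.execute("DELETE FROM t")
      conn.executemany("INSERT INTO t (id, val) VALUES (?, 0)", [(i,) for i in range(18000)])
      conn.commit()

      start = time.time()
      for i in range(18000):
          conn.execute("UPDATE t SET val = val + 1 WHERE id = ?", (i,))
          conn.commit()              # one flush per row - expect this loop to crawl
      print("per-row commits: %.1fs" % (time.time() - start))

      start = time.time()
      for i in range(18000):
          conn.execute("UPDATE t SET val = val + 1 WHERE id = ?", (i,))
      conn.commit()                  # one flush for the whole batch
      print("single commit:   %.1fs" % (time.time() - start))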

    Read the article

  • Process and memory issue on a Linux server

    - by zapping
    Need some assistance in analyzing Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB RAM. When the website on it is running, top displays this:
    PID   USER      PR  NI  VIRT  RES   SHR   S  %CPU  %MEM  TIME+     COMMAND
    23459 username1 16  0   151m  27m   8388  S  11.3  0.7   0:11.71   php5
    23730 username1 16  0   151m  28m   8388  S  11.3  0.7   0:03.87   php5
    23458 username1 16  0   151m  28m   8388  S  3.0   0.7   0:19.20   php5
    16202 mysql     15  0   459m  38m   4624  S  0.7   1.0   62:33.81  mysqld
    24141 nobody    15  0   311m  5832  2304  S  0.3   0.1   0:00.03   httpd
    Why does the command show php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and database on a different server, but on it the process always shows as httpd, never php5. The site uses a MySQL database. The problem is that the server load seems to climb to about 5.x when the website is accessed by about 16 users. When I run free -m, the output shows:
                         total   used   free  shared  buffers  cached
    Mem:                  3941   3727    213       0      236    2734
    -/+ buffers/cache:             756   3184
    Swap:                 4095      0   4095
    A lot of memory seems to be in cache and free memory is low. Even when the website is not accessed - leaving it pretty much idle for about 2 days - free memory showed just 190. When the site is accessed, free memory seems to drop to about 90MB and then climb back to about 150MB; it always seems to stay around 200MB. Is this somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?
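    (On the free -m reading: the "cached" column is page cache that the kernel hands back on demand, so the "-/+ buffers/cache" line is the number that matters. A small Linux-only Python sketch of that same calculation, reading /proc/meminfo directly:)

      # Compute "effectively used/free" memory the way free's -/+ buffers/cache
      # line does, treating buffers and page cache as reclaimable. Linux only;
      # /proc/meminfo values are in kB.
      def meminfo():
          info = {}
          with open("/proc/meminfo") as f:
              for line in f:
                  key, value = line.split(":", 1)
                  info[key] = int(value.split()[0])
          return info

      m = meminfo()
      used = m["MemTotal"] - m["MemFree"]
      reclaimable = m.get("Buffers", 0) + m.get("Cached", 0)
      print("used minus buffers/cache: %d MB" % ((used - reclaimable) // 1024))
      print("free plus buffers/cache:  %d MB" % ((m["MemFree"] + reclaimable) // 1024))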

    Read the article

  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy, just enabling forwarding in the kernel, turning on masquerading, and setting up iptables to poke a few holes in the firewall. Recently a friend of mine pointed out a performance problem. Single TCP connections seem to experience very poor performance. You have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections) I can get it to max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (which uses only a single TCP connection) it starts fast and then the speed tanks, settling somewhere around 100 KB/s to 350 KB/s. I've checked the internal network and it doesn't seem to have any problems. Everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems; it tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbit speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built-in Intel NIC and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card. I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?

    Read the article

  • Swap space maxing out - JVM dying

    - by travega
    I have a server running 3 WordPress instances, MySQL, Apache and the Play framework 2.0 on a 64m initial and max heap. If I increase the max heap of the JVM that Play is running in by even 16m, I see the 128m of swap space steadily fill up until the JVM dies. I notice that it is only when I am plugging away at the WordPress sites that the JVM will die; I assume this is because the JVM is not asking for memory at the time, so its pages get swapped out. I notice that when I restart Apache I reclaim about half of my swap and RAM. So is there some way I can configure Apache to consume less memory? Also, what could be causing the swap space to get so heavily thrashed with just 16m added to the max heap size of the JVM? Server running: Ubuntu 12.04. RAM: 408m. Swap: 128m. Apache mods: alias.conf alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load authz_host.load authz_user.load autoindex.conf autoindex.load cgi.load deflate.conf deflate.load dir.conf dir.load env.load mime.conf mime.load negotiation.conf negotiation.load php5.conf php5.load proxy_ajp.load proxy_balancer.conf proxy_balancer.load proxy.conf proxy_connect.load proxy_ftp.conf proxy_ftp.load proxy_http.load proxy.load reqtimeout.conf reqtimeout.load rewrite.load setenvif.conf setenvif.load status.conf status.load
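    (To see what Apache itself is actually holding before tuning it, one rough approach is to sum the resident size of its workers. The sketch below is Linux-only, assumes the workers are named "apache2", and overstates the total because shared pages are counted once per process:)

      # Sum VmRSS over all processes named "apache2" by reading /proc/<pid>/status.
      import os

      PROC_NAME = "apache2"   # adjust for httpd or php-cgi style setups
      total_kb = 0
      for pid in filter(str.isdigit, os.listdir("/proc")):
          try:
              with open("/proc/%s/status" % pid) as f:
                  fields = dict(line.split(":", 1) for line in f if ":" in line)
              if fields.get("Name", "").strip() == PROC_NAME:
                  total_kb += int(fields["VmRSS"].split()[0])
          except (OSError, KeyError):
              continue  # process exited, or a kernel thread with no VmRSS
      print("Apache resident memory: %.1f MB" % (total_kb / 1024.0))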

    Read the article

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, my company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow. Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following: in particular, because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance, a few things you can try are:
    - Increasing the memory available to Fisheye (see the document above).
    - Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database
    - Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process)
    (Sorry for not hyperlinking; I don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built-in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks! Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.

    Read the article

  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    My server spec: dual-core Atom 330 CPU, 2GB RAM, nginx with PHP via FastCGI, eAccelerator. CPU is 74.3% idle; RAM used is 350MB of 2GB. I have lots of sites on my server, with cron jobs running every minute, all the time - in some minutes two or three cron jobs run at the same time. All my sites' cron jobs are heavy; a run usually takes more than one minute. My nginx.conf became so big that nginx refused to start because of the number of sites in it; that was solved by increasing server_names_hash_max_size. I'm planning to add more sites to this server. Now, opening my websites always shows a 504 Gateway Time-out. I have tried many eAccelerator and PHP settings, but the 504 Gateway Time-out still happens. The 504 Gateway Time-out disappears when cron is disabled. I have no idea: is this because there isn't enough processor power? What should I do - upgrade my processor? Added: this is the top output for my CPU just now: Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
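    (Given that the 504s vanish when cron is off, one common safeguard is to stop overlapping runs of the same heavy job from stacking up. Below is a minimal Python sketch using an flock-style lock file - the path is just an example, and the same effect can be had by wrapping the cron command with the flock utility:)

      # Skip this run if the previous run of the same cron job is still going.
      import fcntl, sys

      LOCK_PATH = "/tmp/site-cron.lock"   # example path, one lock file per job

      lock_file = open(LOCK_PATH, "w")
      try:
          fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
      except OSError:
          sys.exit("previous run still in progress, skipping")

      # ... the actual heavy cron work goes here ...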

    Read the article

  • Ripping CD Audio simultaneously from 2 drives on one PC via USB or PATA - rip accuracy preserved?

    - by Rob
    I'm considering ripping (reading) audio from CDs using 2 drives simultaneously to speed up the process - i.e. 2 at a time rather than 1. Are there any issues with achieving maximum rip accuracy? In general I wondered if people have tried this, and whether the simultaneous streams from both rip activities would overload the host machine and cause packet loss or read retries, resulting in a sub-standard CD-DA Audio CD rip. If it just means each rip is slightly slower (but still faster than doing one rip followed by another sequentially) yet still of maximum accuracy, then that is OK for me. I will be using dbPowerAmp to rip the CDs and will convert to the lossless FLAC format. Specific examples: there are 2 machines I intend to do it on. One is a Toshiba NB100 1.6GHz Atom netbook with 2GB RAM, running Windows XP Home, with one external LG DVD/CD burner and one external LG Blu-ray burner attached via USB 2.0, ripping to the machine's 5400rpm internal hard drive. This rips from one CD drive very well, more than adequately; it is a nippy, fast little machine for its specification. The other is a desktop PC running Windows 7 Home Premium with an MSI P4M900M2-L / MS-7255 v2.0 motherboard and a 1.86GHz Intel Core 2 Duo E6320, a 7200rpm hard drive and 2GB RAM, with an internal LG PATA DVD/CD burner (master) and a Philips DVD/CD burner (slave) on the same PATA bus (perhaps separate buses would be another option to consider here). Thoughts?

    Read the article

  • How to use Windows mini-dump files?

    - by ekaj
    I have a Mini-ITX Intel DH61AG mobo with an Intel i3 processor and 8GB of 1600MHz DDR3 RAM. Anyway, this computer has been crashing fairly frequently. It is not an OS problem, as I have used Ubuntu (and had kernel panics), Windows 7, and Windows 8 (BSODs aren't going to keep me from tinkering =p). Each of these OSes has had problems, so I ran an HDD check, and I know it is not a heat issue because I tested the processor for a few days when I first put the computer together. When I ran memtest86+, however, I got an error - so I tested the sticks individually and both chips came back good, then ran a really intense test with both of them again (it took half a day) and got no errors. So I still think the problem could be RAM, but I am not sure - I have tested it pretty extensively (and might let it run all night again tonight)... which brings me to my point. Could someone explain to me (in simple terms if possible) how to read the minidump files of Windows computers? I've tried before with a guide I found online, but failed miserably (I can't remember the guide, either =/). I'm fine with installing software; I will probably need it sometime in the future as well. I have seen a few other posts on SU that just ask people to post minidump logs, but I feel as if that is too localized. Would someone be able to explain this? Note: If someone knows how to do this, but doesn't want to explain and is still willing to help me, this is the link for the minidump file =p Make sure to click

    Read the article

  • How to improve Windows Server 2008 R2 to handle many connections?

    - by invisal
    I have been trying to figure out how to solve this problem for a few days now. First of all, I am running a website with an average of 350,000 page views daily. Previously, all ads management (tracking the clicks and impressions each ad has served) and all content were served from a single server with the following spec: Server 1 - OS: Windows 2008 R2 64-bit; CPU: Intel® Core™ i5, 4 cores; RAM: 8 GB; Storage: 2 x 1 TB hard drives; Bandwidth: 10 TB per month. To improve our website speed, I decided to move the ads management script to another dedicated server, because we have 15 to 30 advertisers on each page: Server 2 - OS: Windows 2008 R2 64-bit; CPU: Intel® Core™ i5, 4 cores; RAM: 4 GB; Storage: 2 x 300 GB hard drives; Bandwidth: 10 TB per month. The problem: Server 1 could handle both the content and the ads system, but now that I have taken the ads system away and put it on Server 2, Server 2 can barely serve the ads system alone. Test: first I moved 75% of the ads to Server 2 and then ran a ping to the server (ping -t xxxxx) for 10 minutes; it followed a pattern similar to this:
    Reply from xxxxx bytes=32 time=290ms TTL=116
    Reply from xxxxx bytes=32 time=289ms TTL=116
    Reply from xxxxx bytes=32 time=320ms TTL=116
    Reply from xxxxx bytes=32 time=286ms TTL=116
    Reply from xxxxx bytes=32 time=286ms TTL=116
    Reply from xxxxx bytes=32 time=348ms TTL=116
    Reply from xxxxx bytes=32 time=284ms TTL=116
    Then I moved 100% of the ads to Server 2 and ran a ping to the server again, also for 10 minutes, with a pattern similar to this:
    Reply from xxxxx bytes=32 time=290ms TTL=116
    Request timed out
    Reply from xxxxx bytes=32 time=320ms TTL=116
    Reply from xxxxx bytes=32 time=286ms TTL=116
    Request timed out
    Request timed out
    Reply from xxxxx bytes=32 time=284ms TTL=116
    Things I have attempted: increased MaxUserPort and TcpNumConnection; restarted the server; increased IIS Max Instances and Instance MaxRequests. Server resources: only 10%-15% of the network connection is used, only 10%-15% of the CPU is used, and only 25% of the memory is used.

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DLG 380 servers with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious about: how many servers are needed and how powerful they should be (number of processors/cores, size of RAM); what network equipment should be used (what kind of switches, network cards); and any other hardware, like particular disk storage solutions, that is needed. Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stackoverflow).
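    (For what it's worth, the per-worker memory figure in the question already gives a rough sizing formula. The sketch below only illustrates that arithmetic: the 30-50 MB per worker comes from the question, while the RAM per box, the headroom for the OS and database, and the worst-case idea that every concurrent user equals an in-flight request are assumptions:)

      # Back-of-the-envelope web-tier sizing for a prefork Apache + mod_php stack.
      concurrent_users = 5000        # target, from the question (worst case: all in flight)
      mb_per_worker = 40             # midpoint of the observed 30-50 MB per Apache thread
      ram_per_box_mb = 32 * 1024     # assumption: 32 GB per web node
      headroom_mb = 4 * 1024         # assumption: reserve for OS, buffers, monitoring

      workers_per_box = (ram_per_box_mb - headroom_mb) // mb_per_worker
      boxes_needed = -(-concurrent_users // workers_per_box)   # ceiling division
      print("workers per box:", workers_per_box)
      print("web nodes for %d concurrent users: %d" % (concurrent_users, boxes_needed))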

    Read the article

  • Real-time local backup with versioning on Windows 7/8

    - by Borda
    I'm looking for a reliable backup solution on Windows, something with a feature set similar to Yadis. I've been using CrashPlan for 2 months, but their software lost more than 1TB of my data, which is why I'm looking for alternatives. Requirements:
    - Real-time folder-to-folder backup: I don't need online features, I want to use this to duplicate my files between my local disks.
    - Versioning support: should be able to choose how many versions to keep of the files.
    - Plain backup: I'd like to be able to open the backup without special software. No proprietary file format.
    - External disk support: shouldn't have any problem using external disks, at least on the source side.
    - Backup of every file, even locked/system files. (Yadis fails this one)
    - Should start automatically, and have a comfortable GUI.
    - Should use actively maintained and/or popular software. I don't want to use discontinued products.
    Optional requirements:
    - Low RAM usage: I'm not really comfortable with something that eats 1GB of my RAM.
    - Compression support: preferably ZIP, but I'm not picky about this. Any ordinary everyday format is acceptable.
    - Freeware or open source: preferred, but not necessary. I can do a one-time payment within reasonable bounds (preferably under $100).
    I only need Windows support, but if it works on Linux, that's a plus. I've already searched and tried lots of software; most of them failed at the plain backup, the versioning or the locked-file backup requirements.

    Read the article

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) - actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have a request scope (not a lot). The IIS servers are virtual machines with 4 CPUs and 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%. Now, my questions: is a load of 60-70% acceptable, and how could we possibly bring it down (maybe to the point of using only one IIS server)? And is 2 GB RAM enough? Assets can be up to 20MB, but on average they are about 30KB (the 60-70% load is reached with assets around 30KB). The data that gets saved with WebORB is very small (2KB) and is just one object.
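    (A rough reading of those numbers, using only figures from the question - the model below ignores I/O wait and treats the reported 60-70% as pure request processing:)

      # Back-of-the-envelope arithmetic from the post's figures.
      calls_per_sec = 30         # per IIS server
      asset_kb, save_kb = 30, 2  # average asset response / save payload sizes
      cores = 4
      observed_cpu = 0.65        # midpoint of the reported 60-70%

      # Half the calls fetch an asset, half save data.
      kbytes_per_sec = calls_per_sec * (0.5 * asset_kb + 0.5 * save_kb)
      print("network throughput per server: ~%.1f Mbit/s" % (kbytes_per_sec * 8 / 1000.0))

      # Average CPU time each call must be costing to produce that utilisation.
      cpu_ms_per_call = observed_cpu * cores / calls_per_sec * 1000
      print("implied CPU cost per call: ~%.0f ms" % cpu_ms_per_call)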

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D of GNOME 3 to work in my VirtualBox guest - I have been poring over Internet and forum posts but to no avail. Here's what I've done so far: VirtualBox 4.1.4r74921 on Windows 7; installed Ubuntu Desktop 11.10 (32-bit); enabled 3D acceleration; allocated 1.5GB of RAM; allocated 50MB video memory (I hope this is not the culprit); installed Guest Additions 4.1.4; did apt-get update and apt-get upgrade; booted back into Ubuntu - it falls back to Unity 2D. Shared folders and mouse integration all work, so the Guest Additions are properly installed. I tried the command /usr/lib/nux/unity_support_test -p and below is the output:
    OpenGL vendor string: Mesa Project
    OpenGL renderer string: Software Rasterizer
    OpenGL version string: 2.1 Mesa 7.11
    Not software rendered: no
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: no
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: no
    I am trying to find out what the "no" means but cannot find any good answers. The host has an Intel Core i5 processor, 4GB of RAM and an NVIDIA GeForce 8400GS display adapter. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?

    Read the article

  • SQL Server 2008 cluster hangs when a heavy load is run

    - by Billy OT
    We have a SQL Server 2008 active/active cluster running on a Windows 2008 R2 OS, with 14GB RAM and 4 CPUs, and we have set a ceiling of 12GB for SQL Server. We're running an agent job which loads 3 million records into a database. During this load the job fails and the cluster seems to attempt to fail over to the other node, but unsuccessfully - i.e., the cluster address is no longer accessible - and we have to manually fail the cluster node back. During the load, Task Manager shows memory usage hitting a maximum of 12.5GB and CPU at times hitting 100% on all 4 CPUs, though for the most part it fluctuates around an average of about 60%. I suppose my question is: will a cluster try to fail over if memory or CPU are taking a heavy hit, or am I barking up the wrong tree? Also, any ideas why it wouldn't fully fail over? We've crawled through the logs, of which there are a lot, and can't find anything useful. We've also tried recreating the issue, but it ran successfully at a later time. Also, 3 million rows doesn't seem like a lot, but in terms of resources, should 14GB RAM and 4 CPUs not be sufficient? Further information on this: we ran the load again today and corrupted the database! We received the error message "LogWriter: Operating system error 170". It looks like, under the heavy load, the SQL cluster attempted to fail over and in doing so migrated a LUN (or drive), which meant the disk was no longer reachable (this is just our theory). The database is now 'suspect' and requires restoration. The 170 error above also indicates that, on failing over to the other node, the SQL service could not start because it was already in use, so it couldn't fail over fully? But I'm wondering why it would need to fail over in the first place. My assumptions could be completely wrong on this, so any ideas would be appreciated.

    Read the article

  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950 with dual quad-core Xeons, 8GB of RAM and two 2TB Seagate SATA drives in what is supposed to be RAID 1, using a Perc 5i RAID card. The drives are hot-swappable with a backplane. I can build the RAID fine, but after a little while an install of Server 2008 R2 will blue screen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config", and the OS will boot up fine, until it blue screens again after a little while. The issue is OS-independent. I have tried swapping RAID cards, swapping the RAM module on the RAID card and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, but it sees the disks fine when it boots back up. The RAID card uses a SCSI SAS cable to connect to the backplane, so I guess the next step is to replace that, but... then I might as well replace the backplane with a SCSI SAS to SATA breakout cable, but... then I need a way to power the disks. Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with Perc RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help and feel free to ask questions!

    Read the article

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:
    - 1 x dedicated SQL 2008 R2 server for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 server for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management application)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application
    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we have the Windows firewall disabled already). That is the initial plan we have for 3000-4000 client workstations (Windows). Now my questions :-)
    a) If we had these users distributed amongst two sites, with an AD DC/GC in each site, how would I restrict SEPM and the desktop management solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers - 16GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers - 16GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management servers)?
    f) How would you recommend storing backups for these desktops? We do have a SAN and a NAS in our environment, and we do have one spare DAS (Dell MD3000).
    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections on the above - many thanks! Rihatum

    Read the article

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around and give the virtual machine a very modest amount of RAM and disk, then choose and configure the OS appropriately. The question is: how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even though I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It is quite an open-ended and quite academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be saved to and restored from disk very quickly. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, that's the wrong CPU architecture for modern virtualization software running on an x86-based PC, but it is still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • Mainboard shuts itself off after half a second or so

    - by heishe
    Here's the problem: when I start the PC, the mainboard powers up, stays on for maybe 0.2-0.5 seconds, and then shuts off again. I say mainboard, and not PC, because I removed all the parts from the system and disconnected everything but the mainboard power supply (the broad 12-pin connector). When I have the other parts (CPU, graphics card, RAM, etc.) installed and connected, the basic behaviour stays the same, except that the mainboard then runs for about 6 or 7 seconds (this is a guess) before shutting off. This all started when my monitor wouldn't receive a video signal today, without any POST, so I took the graphics card and the RAM out to see if that changed anything. It didn't, except that from that point on the mainboard started showing this behaviour where it stays on only for a very short time and then shuts off again. I already tested it with a backup PSU - same behaviour. What could this be? I'm thinking it can't be at a physical level (transistors burned through or something like that), since then the mainboard either shouldn't start at all or should detect hardware failures in non-essential parts of the system and start beeping. Sorry, I forgot to mention: it's an MSI P67A-C43. I already checked the capacitors to see whether one had popped, but I can't find anything. I also tried resetting the CMOS, but that didn't change anything.

    Read the article

  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack-mounted server, but am having major difficulties, and I am wondering if anyone has some insight or items they could recommend I look at to get this resolved. Installation is being attempted via the Cisco remote console (using CIMC's virtual DVD-ROM). Following the first phase of Setup, where the installation files are copied to the target hard drive, a reboot occurs to load Setup from the HDD; midway through the "Completing installation" phase the system then reboots unexpectedly. System configuration: Cisco UCS C201 M2 (2RU rack-mounted server); 16GB RAM; 2x 73GB 15K SAS and 4x 300GB 10K SAS drives; add-on cards - Intel quad-port GigE card (no fibre channel cards); storage - LSI MegaRAID SAS 9261-8i, with onboard SATA disabled (no SATA drives connected); KVM - Belkin; no physical DVD-ROM. So far I have: run memtest86+, with no RAM faults; disabled/enabled SATA support in the BIOS; attempted the install from a USB DVD-ROM, with no effect; attempted an unattended install scripted via the Cisco Configuration Manager DVD provided; removed the Belkin KVM in case that was causing drama; discovered that the Cisco website is "awesome" for searching for PDFs/drivers (cough) and reverted back to Google; downloaded the latest LSI drivers from LSI's site and used them during the Server 2008 install; checked the Windows ISO against the checksums from the MS site; and checked the Windows ISO by using it for an install in a VM. I am running out of ways to troubleshoot this, as I am not sure how to enable any sort of 'verbose' mode during the setup process. The next step I have planned is to remove the Intel NIC and try the installation again. Edit: The problem was the "Cisco INTEL QUAD PT GBE" (1000/PT) card. I will have to see if this card is faulty or if it's just drivers. Thanks for the help.

    Read the article

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi 4. The OS installation goes fine: VMware Tools, adding to the domain, updates - all the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media: the server that's working was installed using a trial ISO and license, while the one I'm having problems with is a full MAK installation. When I go to add a Role (in the last case it was Application Server), it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you go to look at the added roles after bringing it back up, it shows that the role is installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with thin disk provisioning; the second attempt was with regular disk provisioning. In both cases: 4GB of RAM, 2 vCPUs. The VMware host is an HP ProLiant DL380 G6 with a RAID-1 OS volume, a RAID-5 data volume and 12 GB RAM. Has anyone else had this problem, or does anyone know where I should start poking around?

    Read the article
