Search Results

Search found 8190 results on 328 pages for 'switch'.


  • Server 2008, 2 NICs, 2 fixed IPs - big delays using internet

    - by user46055
    Hi geniuses I have an all in one Windows 2008 server, configured with AD/DHCP/DNS/RRAS - all set up with wizards and no specific tweaking. The server has 2 network adapters: one of which ("MyWAN") is plugged into our office's internet connection, the other ("MyLAN") is plugged into a local switch, which is also where all our desktops are connected. So this one server is doing everything. When first set up, MyLAN had a fixed IP of 192.168.2.1 and served the desktops with DHCP scope 192.168.2.50-99. It also told them to use 192.168.2.1 as DNS and gateway. MyWAN was setup to take its IP etc from DHCP, being handled by the building's router and ADSL modem etc. All desktops were setup to use DHCP.

    This all worked perfectly fine, until I recently changed MyWAN to have a static IP (I wanted to access it from home, and needed to give it a static IP to port map in the building's router). Things still work, but there is now a long delay when accessing the internet. The actual speed is as before when downloading, but there is a pause of 3-6 secs when connecting to new hosts (for example if I browse to slashdot from either a desktop or the server itself, it'll hang on connecting to slashdot.org, hang again on connecting to *.fsdn, *.google-analytics.com and all the other hosts referenced from the main page).

    If I ping slashdot.org from the server, I get the following:

        Pinging slashdot.org [216.34.181.45] with 32 bytes of data:
        Reply from 192.168.2.1: Destination host unreachable.
        Reply from 216.34.181.45: bytes=32 time=99ms TTL=239
        Reply from 216.34.181.45: bytes=32 time=100ms TTL=239
        Reply from 216.34.181.45: bytes=32 time=101ms TTL=239

    Pinging anywhere external always seems to hit 192.168.2.1 first, which doesn't seem right. Trying tracert from the server gives the following:

        Tracing route to slashdot.org [216.34.181.45] over a maximum of 30 hops:
        1  MYSERVER01.intranet [192.168.2.1]  reports: Destination host unreachable

    Trying tracert from a desktop gives the following:

        Tracing route to slashdot.org [216.34.181.45] over a maximum of 30 hops:
        1   <1 ms     *    <1 ms  MYSERVER [192.168.2.1]
        2    *        *     *     Request timed out.
        3    6 ms    6 ms   6 ms  dsl-gw1.ge.mer.uk.webtapestry.net [217.151.111.17]
        4   38 ms  239 ms 251 ms  gw-router.ge.mer.uk.webtapestry.net [217.151.111.13]

    ...and then all is fine after that. I think that DNS is working fine because the domain names are getting translated to correct IPs immediately. DHCP seems to be okay? So perhaps it's something up with my RRAS setup - although I can't see any option during the setup wizard which I would have filled in differently. I've also tried changing the binding order of the two network connections, to prioritise MyWAN, but that doesn't seem to have done anything. Any idea what's up? Many thanks - Rob
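
    A hedged troubleshooting sketch (added here, not from the original post): external pings being answered first by the server's own LAN address usually points at the LAN adapter holding a default route or a better interface metric than the WAN adapter, so the first packet goes out the wrong way until the route settles. The interface names are the ones from the question; the metric value is only an example.

        route print
        netsh interface ipv4 show interfaces
        rem re-apply MyLAN's address with no default gateway, in case it has picked one up
        netsh interface ipv4 set address name="MyLAN" static 192.168.2.1 255.255.255.0
        rem prefer MyWAN for 0.0.0.0/0 by giving MyLAN a worse metric
        netsh interface ipv4 set interface "MyLAN" metric=50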

    Read the article

  • Why do some games randomly turn my screen a random solid color?

    - by Emlena.PhD
    When playing some games my computer will randomly have an error that I cannot fix without turning it off and back on again. The screen changes to one solid color, which varies (off the top of my head I can remember seeing solid green, magenta, etc.) and the sound blares a single tone. The sound sometimes briefly restores and I can still hear the game sounds and even hear and still be heard by people in my Mumble channel, but the screen doesn't right itself so I'm still blind. What's more is this happens in some games but not in others. While the game is actually running, not while I'm still in the menu. However, it does happen if I'm afk or idle but the game world is still rendering.

    Games where the error occurs:
    - League of Legends
    - World of Warcraft
    - Trine
    - The Sims 2
    - Dungeon Defenders

    Safe games: games where it has never occurred:
    - Tribes: Ascend
    - Star Wars: the Old Republic
    - Battlefield 3

    So relatively older games cause the problem while newer games do not? I cannot predict when it will happen, it just seems random. However, if it happens and I try playing the same game further after restart it does appear to occur more frequently after the first time. But if I switch to a safe game it doesn't continue happening. Both of my RAM sticks appear fine, flipped position or either one on their own and games still run, computer still boots. I would think over-heating, but then why not all games? Also, sometimes it happens immediately after I start playing, within seconds of the 3D world booting up. I'm looking to upgrade very soon so I want to figure out what component or software is fubar and replace/repair it. Any suggestions or recommendations of tools would be helpful. Below is some system information. Dxdiag does not detect any problems.

        Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120305-1505)
        System Manufacturer: Gigabyte Technology Co., Ltd.
        System Model: EP45-UD3R
        BIOS: Award Modular BIOS v6.00PG
        Processor: Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz (2 CPUs), ~3.2GHz
        Memory: 4096MB RAM
        DirectX Version: DirectX 11
        DxDiag Version: 6.01.7601.17514 64bit Unicode
        Graphics card name: NVIDIA GeForce GTX 285
        Driver Version: 8.17.12.9610 (error has occurred w/several driver versions)
        Sound: I do not have a sound card, been using the motherboard's built-in sound

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold ASP code as well. Bonus points for a solution which works more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk. The live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: developing on the dev's box - commit into the trunk - auto-deploy on staging system - testing on the staging system - merging into /branches/live/ - manual deployment on live system.

    This works very well for one-way changes, however we have some trouble with every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. Finally, we have done the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following:
    - The htdocs of each website is an svn export
    - Each website has an svn working copy beside the htdocs directory
    - A script which "replays" the changes in the wc from htdocs after an update in WP (rsync'ing the changed files to the working copy, rsync'ing new files and svn add-ing them, and finally svn delete-ing the deleted files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.). (Sketch below.)

    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
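
    A rough, hedged sketch of that "replay" script idea - paths, exclude list and commit message are assumptions, not taken from an existing setup:

        #!/bin/sh
        # Mirror the updated htdocs export into the working copy, then let svn
        # pick up additions and deletions before committing to the trunk.
        HTDOCS=/var/www/site/htdocs     # svn export that WordPress updates in place
        WC=/var/www/site/wc             # working copy of the trunk

        rsync -a --delete \
              --exclude 'wp-config.php' \
              --exclude 'wp-content/uploads/' \
              "$HTDOCS/" "$WC/"

        cd "$WC" || exit 1
        # schedule unversioned files for addition, missing files for deletion
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update from htdocs"

    The same idea should translate to the live branch, with the commit step replaced by the merge from the staging system described above.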

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. Its running 32bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens the console becomes unresponsive with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive with blank wallpaper at the console. If you do manage to get onto another server the taskbar and run commands are unresponsive. This also transcends to the client computers sometimes with explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server with free up and everything will be back to normal.

    The server is configured as follows:

        PERC 4/DC
        DATA        2 - 12 SCSI HDD - RAID5
        SHADOWCOPY  2 SCSI HDD - RAID1
        CERC SATA
        DATA 11     4 SATA HDD - RAID5
        OS          4 SATA HDD - RAID5

    All the drivers and firmware is up to date. I've been through all the diagnostics with Dell and the hardware has come up clean including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens when it is uninstalled. There are no errors in the event log when this happens and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD from what I can tell is running happily. There are no DFS or NFS shares setup either. All the shares are standard Windows.

    - I've unchecked the allow the computer to turn off this device to save power box under Power Management on the NIC.
    - "Set Link Speed and Duplex to Auto-negotiate 1000"
    - Increased Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data)

    I've run network traces using network monitor and have found the following:

        417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling; NICs and switch ports all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run netsh interface tcp set global autotuning=disabled with no result. Could it be that the server has a faulty drive or that the I/O is too much for it to handle? Any ideas why would the hang cause issues with the other servers on the LAN? Many Thanks.

    Read the article

  • Is DHCP lease expiring years from now okay?

    - by sharptooth
    I'm reviewing Azure web role logs and there's output from ipconfig /all:

        IPv4 Address. . . . . . . . . . . : 10.61.145.37(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.254.0
        Lease Obtained. . . . . . . . . . : Monday, September 24, 2012 12:26:00 PM
        Lease Expires . . . . . . . . . . : Thursday, October 31, 2148 6:55:12 PM

    you see, the lease expires in year 2148 but my VM will likely not run for more than one month - when I deploy the new version of my code I first deploy it to new VMs, then switch traffic, then release the new VMs. In general such usage pattern is normal - VMs typically live from several dozen minutes to several weeks on Azure. I suspect the lease that long will cause problems on the internal Azure network sooner or later. Is such long DHCP lease okay or is it likely a misconfiguration?

    Read the article

  • Chrome Problems on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta - but all this happened before the update also) on Windows 8. Problems and explanations are listed below:

    - Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse; it only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select all (or scroll). One thing to note is that the hover and select happen on the real page and not on the retained image-like thing of the older webpage.
    - Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy on Chrome (Win 8). When I was using Windows 7, I never experienced a lag or anything. Also when using HTML5 websites, the transition (the -webkit-transition in CSS) goes extremely slow at times.
    - Plugins crash: Plugins like Flash Player, Shockwave Player, and more that are built into Chrome crash a lot, even when doing simple tasks like playing YouTube videos or displaying ads.
    - Chrome crashes: Chrome has crashed over 100 times in the past month. Google Chrome just crashes randomly, or I don't know the reason.
    - Random page crashes: Chrome shows chrome://crash/ (copy-paste this in the address bar) on random pages even when the page has just loaded. I understand that this can happen on heavy HTML5 or JS websites, but what about HTML-only websites!

    Most of the things above happen on Super User also; Super User never had any problem when using Chrome on Windows 7.

    UPDATE 1: @magicandre1981 commented suggesting I try disabling Hardware Acceleration. I tried it; it somewhat helped but didn't fix it. I am still experiencing all the above issues, but less frequently (maybe because Chrome restarted completely).

    UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and only a little (negligible) lag in Chrome 22 (maybe because it's a new copy of Chrome, I haven't used it much).

    Is anybody else experiencing such issues? Does anybody have a solution to any of these? Any help is appreciated! Thank you!

    Read the article

  • Is it possible to "stealth" dual boot a machine?

    - by BrianH
    I have a loaner laptop that has MS Windows with locked down permissions. It works okay for what I need to do, but I started wondering if there was a way to install a separate Windows OS on a separate hard drive to do what I want to do on it. Virtual I wish I could use VirtualBox or VMWare, but that is not an option (I even tried VBox portable). External Drive My next trial was see if it was possible to install Windows on an external drive, and then plug that drive in and boot from it whenever I wanted my own OS. After a few Google searches, I see that is not really a possibility. Swap Primary Drive Another option, would be to get a second internal hard drive, take the existing HD out, and install a new Windows OS on the secondary HD. This would mean swapping the internal hard drive each time I want to switch OSs - doable, but not very convenient. Dual Boot The laptop has an expansion slot where a second hard drive can be plugged in quickly. I thought about Dual booting, but I don't want to mess with the MBR on the primary hard drive. When I have to give the laptop back, I don't want a dual-boot screen to popup. Summary Is there a way to have 2 hard-drives on a machine, each with it's own OS, and maybe use BIOS settings to have only 1 hard drive active at a time? That way both hard drives could be physically connected, but only one would actually be active at a time. I basically want a second OS that does not (can not) affect the existing OS in any way, and can be removed at any time without affecting the existing OS. The secondary OS does not need any of the files on the main hard drive - it's basically like having 2 separate computers using the same hard ware... Is this possible, or would it be easier just to go out and buy a different laptop? Thanks in advance! EDIT I just discovered that my BIOS allows me to pick (at startup) which hard drive I want to boot from. I poked around in the BIOS and there is not a place to disable certain devices, like the primary hard drive. My only concern about plugging in a second hard drive and installing Windows to the second hard drive is that it will mess with the primary hard drive, or add a bootloader screen to pick which windows install to use. My thought would be to physically unplug the primary, plug in the secondary and install windows to the secondary. After the install is working properly, I can plug the primary back in and use the BIOS feature to determine which drive to boot to. Is there any way after I have 2 separate installs on 2 separate hard drives that one of the installs could mess with the MBR on the other drive?

    Read the article

  • How is DNS used by individual processes?

    - by atroon
    When resolving FQDNs or machine names to IP addresses on my local network (mycompany.internal) I can use dig on the command line (linux/mac) or nslookup (windows) to query the configured server and get a response. But trying to enter the FQDN or even just the machine name in a ping command or in a web browser results in 'Unknown Host' or DNS errors. Here's a sample, this one from the Mac:

        mac:~ atroon$ dig server.mycompany.internal

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> server.mycompany.internal
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5219
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;server.mycompany.internal.        IN    A

        ;; ANSWER SECTION:
        server.mycompany.internal.  1200   IN    A    172.16.254.36

        ;; Query time: 0 msec
        ;; SERVER: 172.16.254.8#53(172.16.254.8)
        ;; WHEN: Wed Dec 16 11:39:15 2009
        ;; MSG SIZE  rcvd: 55

        mac:~ atroon$ ping server.mycompany.internal
        ping: cannot resolve server.mycompany.internal: Unknown host

    I cannot for the life of me figure this one out. The DNS server is a SBS 2003 box which handles AD, some file/print, etc for a small company network. This issue happens to me about three times a week, and when I'm connected to the local network directly, the same switch as the server even. I can make any connection I want with IP addresses, I just can't make DNS work. Additionally, at the same time I'm experiencing this, other users are fine, which makes me think it's a problem on my Mac. But what sort of problem? How can dig send a query and get a reply, and ping say 'unknown host'? I'm posting here vs. serverfault because I think this is a local problem not a server problem...but if anyone can point me at the server, I guess we'll head down the street a domain or two.
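
    An added, hedged note: dig and nslookup talk to the DNS server directly, while ping and browsers go through the OS resolver and its cache, so the two paths can disagree. A way to compare them on the Mac side, using the hostname from the question:

        # what the system resolver is actually configured to use
        scutil --dns

        # resolve through the system resolver, i.e. the same path ping uses
        dscacheutil -q host -a name server.mycompany.internal

        # flush the resolver cache if the entry looks stale
        dscacheutil -flushcache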

    Read the article

  • Old CRT screen's high resolution doesn't work anymore on windows 7

    - by Mixxiphoid
    One year ago I decided to switch from Windows XP to Windows 7. I had a 17" CRT monitor with a screen resolution of 1600x1200 which worked fine on Windows XP. While installing Windows 7 everything went well until Windows 7 was going to install the video card its driver. Windows 7 puts the screen to its recommended resolution and my screen became black. I waited a few minutes to be sure the installation was finished. I turned off the computer by hand and restarted the computer on a resolution of 800x640. When windows 7 was done installing I went to screen resolutions and the resolution of 1600x1200 was on the top of the list with '(recommended)' next to it. I tried putting it on 1600x1200 but again my screen went black. I installed all windows 7 updates including the video card driver from the NVidia site (NOT from Windows 7). I tried about everything to make it work on 1600x1200 but with no succes. The highest resolution I got with the crt monitor was 1280x1024. I had a TFT screen which had 1280x1024 as max resolution and had better colors, so I used that one till today. My video card is 9600GT and my power supply is beyond sufficient. I even tried to install the driver I had on XP to see if it worked, but no results. I tried classic mode on windows 7, changed the dpi, the frequentie and the monitor settings, but nothing worked. I really like a vertical resolution of 1200, but it seems today I'm bound to all those standard monitors with a resolution of 1980x1024... Can anybody explain to me what the cause is that it worked on Windows XP but not on Windows 7? And maybe a solution to the problem (I actually gave up on getting it fixed...) Thanks a lot in advance. SOLUTION I downloaded the according monitor driver and installed it. Next I rebooted my computer on low resolution (800x640) and connected the CRT monitor. When Windows 7 booted successfully I went to computer management and 'update the driver' of my monitor. I manually selected 'Generic PnP monitor' and made that one active. I want to advanced settings at 'Screen resolution' and selected the mode '1600x1200 (32-bit) 80 Hertz (95 Hertz did not work). Now I had my resolution on 1600x1200. I repeated the earlier step to select the original monitor again instead of the Generic monitor. Quite a way to solve this problem, but it worked! Thanks a lot you all.

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I was getting some wild ideas the last days, like putting some operative systems into SD cards rather than on my hard drive. I'll go further into details now and explain what lead me to consider this probably abominable decision. I am on a laptop (that means I have a native SD-card reader) which is currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions into a huge extended one) regulated by an unique GRUB. Since today, my laptop haven't even seen any Windows system with binoculars. I was thinking about placing all the os part of my setup into a Secure Digital to save all my 500 Gb Hard Drive for documents, music, videos and so on, and being able to just remove the SD and boot my system into another computer too, as well as having the possibility of booting other systems into mine by just plugging in another SD, without having to keep it constantly placed in my PC. Also, in the remote case in the near future I just wanted to boot Windows 8 in it, I read it causes major boot incompatibility issues with other systems by needing a digital signature in order for them to start. By having it in a removable drive, I could just get rid of it when I'm needing him and switch its card with Linux one, and so not having any obstacles to their boot. Now, my questions are: I know unlikely traditional rotating disk drives, integrated circuits ones have a limited lifespan in terms of cluster rewriting. Is it an obstacle to that kind of usage? I mean, some Ultrabooks are using SSD now, is it the same issue, or there are some differences between Solid State Drives and Secure Digitals in that sense? Maybe having them to store system files which are in fixed positions (making the even-usage of cluster technology useless) constantly being re-read and updated and similar things just gets them soon unserviceable, do it? Second question: are all motherboards and BIOSes able to boot from SDs just like they are from USB pen drives (I mean, provided card reader is USB-connected, isn't it)? Or can't bootloaders like GRUB be installed on SDs working? If they can't, is it a solution installing GRUB to MBR and making boot option pointing to SD? Will it work? Are there any other problems to installing OSs on a Secure Digital?

    Read the article

  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation, I run a small cluster, Dedicated box for MySQL, and a dedicated PHP-FPM/NGINX box, Nginx talks to php-fpm via socket, As far as i can tell the problem does not lie in php-fpm, it lies somewhere in my configuration. What happens, is the site loads instant for a few moments after starting and slowly starts to degrade to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load, PHP is configured to close a child after 175 requests, and spawn 20 at start and have a max of 60. Not really sure where the bottle neck is, most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, And I really dont want to do that, NGINX.conf configuration below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
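
    One added observation about the posted config (hedged, and not necessarily the cause of the slowdown): client_header_buffer_size 75M and client_max_body_size 1k look swapped relative to their usual roles - the header buffer is normally a few kilobytes, while the body limit is the large one, and a 1k body limit will reject almost any POST. Illustrative values only:

        ## illustrative values - tune for the actual site
        client_header_buffer_size 1k;
        large_client_header_buffers 4 8k;
        client_max_body_size 75m;
        client_body_buffer_size 128k;

    After any edit, nginx -t validates the configuration and nginx -s reload applies it without a full restart.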

    Read the article

  • What is the best way to recover when your RAID H/W incorrectly thinks a disk is missing?

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the Mobo as RAID-1. After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It's not really failed; via recovery tools I can verify that if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS - the only option seems be to delete the array and recreate it, but I've done that once before and it blows away the disk. It's done this once before, however on a subsequent reboot after double-checking the drive cabling (but not changing anything) and it boot up fine. So I think the mobo RAID is a little bit flaky.

    At this point I would like to remove the RAID drivers, change to AHCI mode and switch over to using a Windows 7 dynamic mirror disk. But the RAID drivers seem somehow deeply bound into the Windows startup - I can't find anything like the good ol' safe-mode in Windows 7. If I boot from the Win 7 install disk in ACHI mode I can use recovery tools to log in to the Windows 7 installation, so the boot drive it seems fine with ACHI mode. Additionally, I can see all my other disks, run chkdsk on them and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI.

    So:
    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the 2 disks in a RAID array, then delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to RAID single-disks like my other 2 and then maybe I can leave windows with RAID drivers by operate the disks as singles with 2 of them in a Windows dynamic disk mirrored setup?
    4. Does Windows 7 have anything like the Windows XP Repair Install, where it will reinstall the O/S binaries from CD, but leave apps and setup alone?

    I am really hoping I don't have to do a complete reinstall of Windows 7 - the last one, when I upgraded from XP, took me two days to get everything set up and installed.

    Read the article

  • What is the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7 and are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor however their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site but this is:
    - A bad solution
    - Getting the software vendor hot and bothered whenever we have support issues as it has a tendency to interfere with the payment application which is very DB intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives.
    - RPO: Zero loss - This is transactional data with financial impact so we can't lose anything.
    - RTO: Tending to zero - The business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly:
    - SQL Server 2008 failover strategy - Log shipping or replication?
    - How to achieve the following RTO & RPO with logshipping only using SQL Server?
    - What is the best of two approaches to achieve DB Replication?

    My current thinking is that we should use mirroring but I am concerned that for RPO:0 we will need to do delayed commits and this could impact the performance of the primary DB which is not an option. Our current DR process is to:
    1. Stop incoming traffic to the primary site and allow all in-flight transaction to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to DR site.
    4. Start all applications and services on the secondary site (Ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words the DR database needs to, as quickly as possible, catch up with primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too) and can anyone suggest other considerations that we should keep in mind?
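
    An added, hedged note: synchronous database mirroring with SAFETY FULL is the SQL 2008 R2 feature normally used for an RPO of zero; it hardens each commit on the mirror before acknowledging it, which is exactly the commit-latency trade-off raised above. A minimal sketch of the core commands per database, assuming the mirroring endpoints and the initial full/log restores WITH NORECOVERY are already in place - server names, database name and port are placeholders:

        rem on the mirror server first
        sqlcmd -S DRSQL01 -Q "ALTER DATABASE Payments SET PARTNER = 'TCP://prodsql01.example.local:5022'"

        rem then on the principal
        sqlcmd -S PRODSQL01 -Q "ALTER DATABASE Payments SET PARTNER = 'TCP://drsql01.example.local:5022'"
        sqlcmd -S PRODSQL01 -Q "ALTER DATABASE Payments SET PARTNER SAFETY FULL"

    A witness instance can be added on top of this if automatic failover is wanted; whether the synchronous commit overhead over the WAN is acceptable is the open question.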

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, and I consider the following example. Let's start a repository in ~/tmp/repo:

        $ git init

    Then add a file foo

        $ echo "hello world" > foo

    which is then added and committed:

        $ git add foo
        $ git commit -m "Added foo"

    Next, I started a remote repository. In ~/tmp/bare.git I ran

        $ git init --bare

    In order to link repo to bare.git I ran

        $ git remote add origin ../bare.git/
        $ git push --set-upstream origin master

    Next, lets branch, add a file and set an upstream for the new branch b1:

        $ git checkout -b b1
        $ echo "bar" > foo2
        $ git add foo2
        $ git commit -m "add foo2 in b1"
        $ git push --set-upstream origin b1

    Now it is time to switch back to master and change something there:

        $ echo "change foo" > foo
        $ git commit -a -m "changed foo in master"
        $ git push

    At this point in master the file foo contain changed foo, while in b1 it is still hello world. Finally, I want to sync b1 with the progress made in master.

        $ git checkout b1
        $ git fetch origin
        $ git rebase origin/master

    At this point git st returns:

        # On branch b1
        # Your branch and 'origin/b1' have diverged,
        # and have 2 and 1 different commit each, respectively.
        #   (use "git pull" to merge the remote branch into yours)
        #
        # nothing to commit, working directory clean

    At this point the content of foo in the branch b1 is change foo as well. So what does this warning mean? I expected I should do a git push, git suggests to do git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly say that I don't need to do a pull. What's going on here? What is the danger, how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation:

        Do not rebase commits that you have pushed to a public repository.

    taken from pro git book. I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post.
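
    An added, hedged note: after the rebase, b1's history really has diverged from origin/b1, so the usual follow-up is a forced push rather than the pull that git suggests - pulling at that point would merge the old, pre-rebase commits back in. For example:

        $ git push --force-with-lease origin b1
        # or, on git versions older than 1.8.5 that lack --force-with-lease:
        $ git push --force origin b1

    This is also why the Pro Git warning applies: anyone else who had fetched b1 now has to recover from the rewritten history.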

    Read the article

  • Why did Intel drop the Itanium?

    - by Cole Johnson
    I was reading up on the history of the computer and I came across the IA-64 (Itanium) processors. They sounded really interesting and I was confused as to why Intel would decide to drop them. The ability to choose explicitly which 2 instructions you wanted to run in that cycle is a great idea, especially when writing your program in assembly, for example, a faster bootloader. The hundreds of registers should be convincing for any assembly programmer. You could essentially store all the function's variables in the registers if it doesn't call any other ones.

    The ability to do instructions like this:

        (qp) xor r1 = r2, r3        ; r1 = r2 XOR r3
        (qp) xor r1 = (imm8), r3    ; r1 = (imm8) XOR r3

    versus having to do:

        ; eax = r1
        ; ebx = r2
        ; ecx = r3
        mov eax, ebx        ; first put r2 into r1
        xor eax, ecx        ; then set r1 equivalent to r2 XOR r3

    or:

        ; SAME
        mov eax, (imm32)    ; first put (imm32) into r1
        xor eax, ecx        ; then set r1 equivalent to (imm32) XOR r3

    I heard it was because of no backwards x86 compatibility, but couldn't that have been fixed by just adding the Pentium circuitry to it and adding a processor flag that would switch it to Itanium mode (like switching to Protected or Long mode)? All the great things about it would have surely put them a giant leap ahead of AMD. Any ideas?

    Sadly this means you will need a very advanced compiler to do this. Or even one per specific model of the CPU. (E.g. a newer version of the Itanium with an extra feature would require a different compiler.)

    When I was working on a WinForms (target only had .NET 2.0) project in Visual Studio 2010, I had a compile target of IA-64. That means that there is a .NET runtime that was able to be compiled for IA-64, and a .NET runtime means Windows. Plus, Hamilton's answer mentions Windows NT. Having a full blown OS like Windows NT means that there is a compiler capable of generating IA-64 machine code.

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1Tb hard drive and it filled the drive; there is only 4Kb free. The MDF file is 323Gb and the LDF is 653Gb. The hard disk this DB is on has no other files on it other than the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records because the DB is in a failed mode (due to no disk space) and it doesn't respond to most commands.

    The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated.

    I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:
    - DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails
    - Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode so you can't run detach.

    I'm at a loss about what else to try. Thanks.
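
    An added, hedged sketch of what is usually run once the database is reachable again (for example after the MDF/LDF have been moved to a drive with free space, or a temporary disk has been attached) - the instance, database name and logical log file name are placeholders, and this assumes losing the log contents is acceptable as stated above:

        rem stop full logging going forward
        sqlcmd -S . -Q "ALTER DATABASE DBNAME SET RECOVERY SIMPLE"
        rem look up the logical name of the log file
        sqlcmd -S . -Q "SELECT name, type_desc, size FROM sys.master_files WHERE database_id = DB_ID('DBNAME')"
        rem shrink the now largely empty log to roughly 1 GB
        sqlcmd -S . -d DBNAME -Q "DBCC SHRINKFILE (DBNAME_log, 1024)"

    None of this works while the database is still in the failed state shown above, so some free space (or moving the files) has to come first.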

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains NFS4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: Each user has a home, where he sets a top-level, inherited ACL. He can later on refine permissions for the contained files/folders iteratively. Over time, sometimes permissions need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared like in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This usecase simply does not seem to exist. In fact, the solaris chmod manpage clearly states:

        A-    Removes all ACEs for current ACL on file and replaces current
              ACL with new ACL that represents only the current mode of the file.

    I.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get

        chmod A0- <file>
        chmod: ERROR: Can't remove all ACL entries from a file

    Which by the way makes me think: and why not? In fact, I really want the whole file-specific ACL gone. The same holds for linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently

        nfs4_setacl -x 1 <file>
        Failed setxattr operation: Unknown error 524

    So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a windows client perfectly well, but am I supposed to tell the linux users they have to switch OS just to consolidate permissions?
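
    A hedged sketch of the "reset and re-inherit" approach implied by the manpage excerpt quoted above, assuming it is acceptable for files to fall back to their plain mode bits first (the path is a placeholder):

        # on the OpenIndiana side: strip per-file/per-directory ACEs back to the trivial ACL
        find /tank/home/username -exec chmod A- {} +

    After that, the desired inheritable ACL can be set once on the top-level home directory; note that inherited ACEs are only stamped onto children at creation time, so existing files keep whatever the trivial ACL gives them until permissions are applied again explicitly.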

    Read the article

  • Partition/install issues

    - by jalal ahmad
    I am new to Ubuntu and tried to install 10.1 as dual boot option from a USB. At first I encountered the error when in partition dialogue of installation process that cannot find root directory. I did a search on Ubuntu forums and did this as in one of the posts. Make sure that the partition file system you wish to install Linux, Ubuntu or Backtrack on it is ext4, ext3 or ext2, and not FAT32 or NTFS. Then mount / on it: During the installation process press "change" on the partition you wish to use Make sure "do not use this partition" scroll is not chosen, scroll to ext4, ext3 or ext2 On the "mount" field write / Click ok, then next a message will appear saying something like "swap area was not defined, do you wish to continue or choose a swap area?", click "ok" and continue or click "go back" and choose another partition and click change, on the file system scroll choose "swap" and click "ok" and next All good but when I rebooted I could not find Windows vista as in dual boot option. Plus I could not see wireless networks and in the process of trying to find out what went wrong the soft switch somehow turned off and as I cannot boot in Windows I have no idea what to do. Again searching internet I found a post which said the dual boot problem can be overcome by installing gparted but when I tried I got the message Reading package lists... Done Building dependency tree Reading state information.. Done E: Couldn't find package gparted I thought I am going to copy my stuff from my hard disk and try to install Windows but I found out that I have two partitions which are different from what I had before installing Ubuntu. I now have filesystem partition1 119 GB ext4, swap partition 5 1.1 GB swap and extended partition 2 1.1 GB. And I cannot mount 119 GB where all my personal videos, photos are if still there. Now I cannot boot from Windows even. Need help on what to do? Best case scenario would be to be able to copy my stuff before I mess up the system further. Else a dual boot system and if not then how do I install vista again. I have Windows CD. Cheers guys and thanks in advance.
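
    An added, hedged suggestion for the "best case scenario" (copying the files off first): the 119 GB ext4 partition can usually be mounted read-only from the Ubuntu live USB session; the device name below is a guess that fdisk would confirm:

        sudo fdisk -l                            # identify the 119 GB ext4 partition, e.g. /dev/sda1
        sudo mkdir -p /mnt/data
        sudo mount -o ro /dev/sda1 /mnt/data     # adjust the device to match the fdisk output
        ls /mnt/data                             # personal files, if the filesystem is intact

    The "Couldn't find package gparted" error usually just means the package lists were never downloaded in the live session; running sudo apt-get update before sudo apt-get install gparted (with network access) is the usual fix.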

    Read the article

  • Triple (3) Monitors under Linux

    - by widgisoft
    I have a 3 monitor setup (each 1680x1050) via an Nvidia NVS440 (2 GPUs, 2 outputs per GPU totalling 4 outputs); this works fine under Windows XP/7 but caused considerable headaches under Linux (Ubuntu 9.04). I had previously used an XFX 9600GT and the onboard XFX 9300GS to produce the same result, but the card was noisy and power hungry and I was hoping that there was some magical switch in the NVS440 that got rid of this annoying problem - turns out the NVS440 is just 2 cards on one physical PCB :-p (I searched the net high and low for people using this card under Linux but found nothing; if anything the card uses less power and is fanless, so I was to benefit from it either way.)

    Anyway, using either setup there were 5 solutions available:
    1. Have 3 separate X instances, all unjoined
    2. Have 3 separate X instances, adjoined by Xinerama
    3. Have 2 separate X instances - one using twin-view, both adjoined by Xinerama
    4. Have 2 separate X instances - one using twin-view but no Xinerama
    5. Have a single Twin-view setup and leave the 3rd screen unplugged :-p

    The 4th option, using 2 separate X instances and twinview (but no xinerama), was the best balance in terms of performance and usability but caused 2 really annoying issues:
    - You couldn't control (without altering the shortcuts) which screen an application opened onto - and once it was opened you couldn't move it to another screen without opening up a terminal and forcing it to move.
    - Nvidia's overriding or falsifying of Xinerama breaks, and the 2 screens joined by Twin view behave like a single huge screen, causing popups to open in the middle of both screens and maximising of windows to stretch to the width of the first 2 screens.
    On top of that, Firefox can only run one instance as the same user, so having multiple firefox windows requires at least 2 users.

    The second option "feels" like the right option, but OpenGL is basically disabled and playing any sort of game or even running anything graphical causes a huge performance drop and instability - even trying to run a basic emulator for gba or gens just causes the system to fall over. It works just enough to stare at your desktop and do nothing, but as soon as you start doing some work - opening windows, dragging things around, running multiple copies of firefox - it just really feels slow.

    The last option, only going dual screen, works perfectly and everything performs as required: full GPU acceleration, two logical screen spaces - perfect, just make it work across GPUs like windows! :-p

    Anyway, I know RandR was supposed to pick up the slack when it introduced GPU objects of sorts to allow multiple GPUs to be stitched together to create one huge desktop at a much deeper layer than Xinerama. I was wondering if this has now been fixed (I noticed X server 1.7 is out) and whether anyone has got it running successfully?

    Again, my requirements are:
    - One huge desktop to drag any window across
    - Maximising of windows to each screen (as XP does)
    - Running fullscreen apps on the primary screen and disabling the mouse from moving onto the others, or on all 3 stretched

    Finally, as a side note: I am aware of the Matrox triple (and dual) head splitter, but even the price they go for on eBay is more than I can afford atm. My argument: I shouldn't have to buy extra hardware to get something to work on Linux when it's something that's existed in the windows world for a long time (can you tell I don't get on with X :-p). If I had the cash I'd have bought the latest version of this box already (the new version finally supports large resolutions, as the displays I have are 1680x1050 each).

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a gentoo box which acts as our LAMP, DNS, DHCP server. This is assigned a static IP on the network. This server is connected directly to the internet via a BT BusinessHub Router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PC's) to the server. Everything has been plain sailing until the other day when the server was restarted. For some reason now only portions of network accessibility is available depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc but stops all networked PC's from accessing the internet. Then restarting net.eth1 restores all internet to the network but stops the server from curling, pinging, etc again. However, even when the server can't ping, curl, etc, I can still remote SSH and remote MySQL connect from the server command line to other external servers that we own.

    Here's my route map (router is 192.168.1.254):

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
        0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth1

    Here's my /etc/conf.d/net:

        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        iface_eth1="dhcp"

    None of the above have ever been changed however. Things have just ceased to operate correctly, which makes me think it's a freshly added Iptables rule. Here's the Iptables Filter table:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        DROP       tcp  --  ##.##.##.##          anywhere             tcp dpt:ssh
        ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
        ACCEPT     all  --  anywhere             anywhere
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:2199
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:3199
        ACCEPT     tcp  --  ##.###.###.##        anywhere             tcp dpt:http
        ACCEPT     tcp  --  ###.###.##.##        anywhere             tcp dpt:2199
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:http
        ACCEPT     tcp  --  ##.###.##.##         anywhere             tcp dpt:http
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:3128
        ACCEPT     udp  --  ##.###.###.###       anywhere             udp dpt:3128
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:http
        ACCEPT     tcp  --  ##.###.###.###       anywhere             tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             ##.###.###.##
        DROP       all  --  anywhere             ##.###.###.##
        ACCEPT     all  --  anywhere             anywhere             state NEW,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     udp  --  anywhere             anywhere             udp spt:2199
        ACCEPT     udp  --  anywhere             anywhere             udp spt:4817
        ACCEPT     udp  --  anywhere             anywhere             udp spt:4819
        ACCEPT     udp  --  anywhere             anywhere             udp spt:3199

    Help gratefully appreciated.
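
    One added, hedged observation: in the routing table above both eth0 and eth1 hold addresses in 192.168.1.0/24, so the kernel has two routes for the same subnet and traffic can leave on whichever interface was brought up most recently - which matches the "whichever device was restarted last wins" symptom. Some ways to confirm, and what a single-homed /etc/conf.d/net might look like (example values only):

        # which interface does the kernel actually pick?
        ip route show
        ip route get 192.168.1.254
        ip route get 8.8.8.8

        # example only: avoid two NICs on the same subnet, e.g. move the LAN leg
        # to its own range (and adjust the DHCP/DNS config accordingly)
        #   iface_eth0="192.168.0.1 broadcast 192.168.0.255 netmask 255.255.255.0"
        #   iface_eth1="dhcp"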

    Read the article

  • Server 2012 intermittently fails to respond to pings from single host, even with firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved: DC01 - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine CB01 - 10.1.3.81, Ubuntu 12.04 LTS, couchbase server, virtual machine So, I noticed something was wrong while configuring this new Nagios VM. I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening, and attempted to ping DC01, both by FQDN and its IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings still failing from nagiosv at this time. DC01 is also an internal DNS server, so I ran dig google.com from nagiosv, and was able to run a query against DC01 just fine: ;; Query time: 1 msec ;; SERVER: 10.1.2.42#53(10.1.2.42) ;; WHEN: Fri Nov 1 07:53:51 2013 ;; MSG SIZE rcvd: 204 Pings still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping from other VMs on the same physical NIC into DC01, and that works. I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.) I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period. A few more bits of info: There are no IP conflicts on the network. MAC addresses on the incoming pings match the MAC on the VM. There are no duplicate MACs on the network, as far as I can see. I have absolutely no idea why DC01 is failing to respond, here. Any ideas?
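
    A hedged extra check, given that the echo requests reach DC01 but no replies appear in the capture: compare the ARP/neighbour entries on both ends with the MACs shown in the capture, since a stale or mismatched entry can produce exactly this single-host, intermittent pattern. Interface names below are examples:

        # on nagiosv (CentOS 6)
        arp -an | grep 10.1.2.42
        ip neigh show
        ip neigh flush dev eth0      # clear and retry the ping

        # on DC01 (Windows, elevated prompt)
        arp -a | findstr 10.1.2.35
        arp -d 10.1.2.35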

    Read the article

  • Routing between 2 different subnets on 2 different interfaces in SonicOS

    - by Chris1499
    I'm having a bit of a problem allowing traffic between two of my subnets. Here's the structure I've built. The X0 interface has our windows server on it and it handles DHCP/DNS, etc. X1 has the WAN connection. The Sonicwall is handling DHCP on X2. The X3 interface is connected to a different vlan on the 48 port switch. The Sonicwall is handling DHCP on this network as well. So here's what i want to do. The network on X2 is for our guest wireless; i don't want it to be able to access any of the other networks, just the internet, so i that all blocked in the firewall. No issues there. The X3 network is going to be for programmable controllers, and needs to be able to access the X0 network where our computers are. This is where my problem is. I'm not able to get between the 192.168.2.xxx and the 192.168.1.xxx on interfaces X0 and X3 respectively. I have these rules set up in the firewall. The Lan Primary Subnet is the 192.168.2.0 on X0. So if i'm not mistaken, this will allow traffic between the two through the firewall. Now this is where I'm a little confused. Do i need to use NAT to get the traffic from X0 to go to X3 (and vice versa), or a static route, or both? Currently i have both, though i doubt they're done correctly (also in screenshot). I've tried to ping between the two without luck. Any advice, or if you see what's wrong with my setup, is much appreciated. If you need some more information, let me know. Thanks all! EDIT: So i found that i don't neither either NAT or a static route, that the setting in the firewall is enough. I can now ping from the 192.168.1.xxx network, however i can't access the server on the 192.168.2.xxx network. When i try to access i get "An error occured while reconnecting to Z: to server Microsoft Windows Network: The local device name is already in use. This connection has not been restored. What am i missing?
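
    A hedged note on the error quoted in the EDIT: "The local device name is already in use" normally comes from a stale mapped drive letter on the client rather than from the routing between X0 and X3, so one thing worth trying from the 192.168.1.x machine is to drop and recreate the mapping (drive letter, server address and share name below are placeholders):

        net use Z: /delete
        net use Z: \\192.168.2.10\share /persistent:yes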

    Read the article

  • New PC doesn't work with existing router, but works fine when connected directly to the cable modem

    - by user34786
    I bought a new desktop PC (eMachine ET1331G-03W from WalMart) with windows 7 installed, but I can not access internet by connecting to my existing wireless router (LinkSys BEFW11S4) with wired cable. Though all other existing desktops and laptops have no problem connecting to the same router. However, the new desktop PC works fine and able to connect to internet if I bypass the router and directly hook up with the cable modem.

    At the new PC when connecting to the router, I got the below information by typing ipconfig; the IP address looks wrong to me:

        autoconfiguration IPv4 Address: 169.254.71.140
        subnet mask: 255.255.0.0
        default gateway: (empty)
        NetBIOS over Tcpip: Enabled

    Typing ipconfig at all other desktops and laptops gives values like below, which are good to me:

        Connection-specific DNS Suffix . :
        IP Address. . . . . . . . . . . . : 192.168.1.140
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.1

    The wireless router was on 192.168.1.1, I do not know why the new desktop got 169.254.71.140 IP? It should have something like 192.168.1.xxx, and it was configured to automatically get IP by DHCP. I have tried to switch cables, power off cable modem, router and reboot new pc many times and got no luck. So I believe this is only an issue related to router or new pc configuration. Can someone help me figure out the issue?
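
    For context (an added note): a 169.254.x.x address is Windows APIPA, which the machine assigns itself when its DHCP requests get no reply, so the new PC and the Linksys are not completing a DHCP exchange at all. A first hedged check from the new PC while it is plugged into the router:

        ipconfig /release
        ipconfig /renew
        ipconfig /all

    If the renew still times out, temporarily assigning a static address in the router's subnet (for example 192.168.1.200, mask 255.255.255.0, gateway 192.168.1.1) at least shows whether the link itself works, which would point at the router's DHCP service or a port/auto-negotiation issue rather than the new PC.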

    Read the article

  • PC will POST whenever it feels like it

    - by kyrpas
    I'm really sick of my PC and I'd love to throw it off the 5th floor but unfortunately I don't have this luxury right now. The issues started when I moved to a new house about 2 months ago. I didn't have this problem before. Case: Arctic Cooling Silentium T1 with embedded Fusion 550 Eco 80 PSU. M/B: ASRock A790GMH/128M Gfx: ATI Radeon HD 5770 Here's what's happening almost on a daily basis: I wake up in the morning, switch on the PC and all the fans start spinning. 9/10 the graphics fan stays on 100% and I know it won't post. If I'm lucky, ATI's fan stays on full power for a second, then goes back to normal and I get a normal post but that doesn't happen often. No, instead it's just drives me crazy. When I get no POST I'm trying a lot of different things and what bothers me the most is that they all work. But not always. No... That way I could find out what the hell is going on and we don't want that.. right? So, sometimes it manages to POST if I: remove the keyboard remove the power cable for a few minutes remove the graphics card remove the HDD cables do nothing, just turn it on and off a few times Sometimes it doesn't POST even if I do all of the above. And I end up removing all power cables from the M/B, and connecting all the stuff one by one. Sometimes it works, sometimes it doesn't and I just have to pray and wait. What the hell is that? I'm getting pissed of again just thinking about it. The only solution is to leave it on 24/7 but I don't want to do that. It should be able to turn on and off when I press the power button. I'm not asking much. I'm starting to think there's some weird electricity/power issue but I really don't understand what it is. There's no logical explanation about it. At least I can't find one. Any ideas?

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for this preamble :) I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I got:

                     total       used       free     shared    buffers     cached
        Mem:          3952       3929         22          0          1         18
        -/+ buffers/cache:        3909         42
        Swap:        16383         46      16337

    So, I actually have only 42 Mb to use! As far as I understand, -/+ buffers/cache actually doesn't count the disk cache, so I indeed only have 42 Mb, right? I thought I might be wrong, so I tried to switch off the disk caching and it had no effect - the picture remained the same.

    So, I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only process in my top is MySQL, but it is using 0.1% of RAM and 400Mb of swap. Same picture when I try to run other services or applications - all go to swap, and top shows that MEM is not used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers
        Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached

          PID USER     PR  NI  VIRT   RES   SHR S %CPU %MEM   TIME+   SWAP COMMAND
         3214 ntp      15   0  23412  5044  3916 S 0.0  0.1  0:00.00   17m ntpd
         2319 root      5 -10  12648  4460  3184 S 0.0  0.1  0:00.00  8188 iscsid
         2168 root     RT   0  22120  3692  2848 S 0.0  0.1  0:00.00   17m multipathd
         5113 mysql    18   0   474m  2356   856 S 0.0  0.1  0:00.11  472m mysqld
         4106 root     34  19   251m  1944  1360 S 0.0  0.0  0:00.11  249m yum-updatesd
         4109 root     15   0  90152  1904  1772 S 0.0  0.0  0:00.18   86m sshd
         5175 root     15   0  90156  1896  1772 S 0.0  0.0  0:00.02   86m sshd

    Restart doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4Gb RAM, RAID1). So, with that, I'm pretty sure that it is not the disk cache that is using the RAM, because normally it would have been reduced to let other processes use RAM rather than go to swap.

    So, finally, the question is: does anyone have any ideas how to find out what process is actually using the memory so heavily?
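
    A hedged sketch of the usual next steps when free shows the memory gone but top's per-process numbers don't account for it - everything below is a standard CentOS 5 tool, nothing is specific to this particular box:

        # largest resident processes, independent of top's default sorting
        ps aux --sort=-rss | head -n 15

        # memory held by the kernel itself rather than by processes
        grep -i -e slab -e pagetables -e hugepages /proc/meminfo

        # per-cache breakdown if Slab turns out to be large (needs root)
        slabtop -o | head -n 20

    If the per-process RSS figures and the kernel counters still don't add up to roughly 4 GB, the remaining usual suspects sit outside normal process accounting, such as huge pages reserved at boot or a balloon/driver allocation on a virtual machine.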

    Read the article
