Search Results

Search found 5942 results on 238 pages for 'total starnger'.

Page 115/238 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Which SATA controller is better to use with Windows Server 2008 R2 64 bit?

    - by at8eqeq3
    We're performing a hardware upgrade on our server machines. As part of this process, we have to install some additional disk drives. Unfortunately, the total number of drives will become 7 or 8, while all of our motherboards have only 6 SATA ports, so obviously we need additional disk controllers. So far we've tested Silicon Image (actually no-name) and Adaptec controllers, both based on the SiI3132 chip, and ran into driver problems with both: the Silicon Image drivers won't install at all, and the Adaptec ones install, but the system reports they're not signed and becomes unbootable. We've tried both the drivers from the controllers' CDs and the latest versions from the manufacturers' websites, with no luck. So, can you recommend some SATA controllers that will just work on the mentioned OS without any fuss? We only need 2 SATA ports and PCI-E; RAID and any other features don't matter. Thanks for any help.

    Read the article

  • Compilation of Etherpad fails in an OpenVZ VE

    - by ulf
    Hi everyone. I'm almost giving up; this will be my last try. I'm trying to compile Etherpad on my OpenVZ server. It's running Debian 5.0 as the host system, and in the VE I've got Ubuntu 10.04. I installed Etherpad in this VE following the instructions from the official Ubuntu wiki: https://wiki.ubuntu.com/Etherpad. Everything runs fine until it comes to compilation. After calling bin/build.sh as described in the wiki, the first steps run fine, but then I hit a memory error:

        java.io.IOException: Cannot run program "cp": java.io.IOException: error=12, Cannot allocate memory

    Well, I understand the error message but don't see the cause. The command free tells me that there's plenty of memory left in this VE:

                     total       used       free     shared    buffers     cached
        Mem:       2415236    1140872    1274364          0          0          0
        -/+ buffers/cache:    1140872    1274364
        Swap:            0          0          0

    Beautiful. But even repeating the compilation process doesn't get me any further. Any help would be appreciated.
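
    In an OpenVZ VE, allocation failures are often governed by the container's beancounter limits rather than by what free(1) reports, so those counters may be worth checking. A minimal diagnostic sketch, assuming a standard OpenVZ kernel that exposes /proc/user_beancounters inside the VE:

        # A non-zero failcnt on privvmpages (or the guarantee counters) usually means
        # the host is denying allocations even though free(1) looks healthy.
        grep -E 'failcnt|privvmpages|vmguarpages|oomguarpages' /proc/user_beancounters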

    Read the article

  • Hyper-V CPU Utilization, Good Tools?

    - by yzorg
    I just learned a ton from this post: Host CPU% doesn't include child VM CPU%. Specifically, I learned that both the 'host OS' and the 'child VMs' are siblings within the hypervisor layer. Are there good utilities for 'watching' the total CPU and other resource counters at the hypervisor (hardware) layer? I know perfmon (watching the special Hyper-V CPU counters) is the standard answer, but I've stayed away from perfmon for ad-hoc monitoring. Are there any good OSS or free tools to 'watch' resource utilization as I create multiple new VMs on the server? I'm a developer, so if there aren't any good UI tools to surface this data I'd consider creating one, but only if needed. P.S. My specific scenario: I'm creating new web, SQL and back-end server VMs for a new Windows 8 Server and SQL 2012 application stack. I need to monitor their utilization and know when I need to grow beyond 1 host (I'll need to split the VMs onto separate hosts as I hit the hardware limits of the 1st host, and diagnose problems).

    Read the article

  • Time tracking similar to Paymo Plus on Debian

    - by aditya menon
    Paymo Plus is free (closed source, but no fees). It sits in my system tray all day and records every window/tab I open. I would like to know if a similar app exists for Debian. Paymo for Windows/Mac has the additional sweet feature of letting you drag and drop working windows/tabs and the time spent in them onto tasks, but one can live without this. At minimum I need to know which tasks got how much time as a 'sum total' calculation, so I can enter that time into my Paymo reporting. Any ideas? Paymo does have a desktop widget for Linux, but it is a dumb manual time-entry tool; it does not automatically record everything being done the way Paymo Plus does.

    Read the article

  • Default Webcam Driver Issues

    - by Omegaclawe
    I'm having trouble getting my monitor-attached webcam (ASUS VK248H) to install on my new computer. On the old computer it was just a matter of not using a USB 3.0 port, but I can't get anything to work on the new one. I have tried all manner of uninstalling/reinstalling the driver and resetting the computer, as well as literally every USB port on the computer (14 in total). It's not that Windows isn't recognizing the device; it most certainly is. However, comparing it to the old computer's driver details, the new computer is not using the ksthunk.sys driver in addition to the usbvideo.sys driver, like the old (working) computer does. Naturally, I figured the way ahead was to get this other driver to work with the hardware, but I haven't found a way to do that. Does anyone know of a way I can force it to use ksthunk.sys? It seems rather difficult to get Windows to install anything when it thinks everything is fine already.

    Read the article

  • Need help upgrading MacBookPro3,1 RAM to 4GB.

    - by Fantomas
    My questions are:
    1) Where to buy it and what to buy? I have heard that this RAM is generic enough and it does not have to come from Apple.
    2) Can I reuse my existing stick(s)? Would I have a single 2GB module, or 2 x 1GB modules?
    3) If I have 2GB already, is it a good idea to have one old stick and one new one? Which one is better placed at the top and which one at the bottom?
    Let me know what questions you have. My computer's info:

        Hardware Overview:
          Model Name: MacBook Pro
          Model Identifier: MacBookPro3,1
          Processor Name: Intel Core 2 Duo
          Processor Speed: 2.4 GHz
          Number Of Processors: 1
          Total Number Of Cores: 2
          L2 Cache: 4 MB
          Memory: 2 GB
          Bus Speed: 800 MHz
          Boot ROM Version: MBP31.0070.B07
          SMC Version (system): 1.16f11

    Read the article

  • vi visual mode doesn't work

    - by BobMarley
    I'm running vim (7.0.237) after sshing to a remote CentOS box, and it just won't enter visual mode. When I press 'v', it just beeps and does nothing. I'm running Ubuntu with GNOME Terminal, and the local copy of vi works fine, so I don't see how this could be a problem with the terminal. I have the same .vimrc file on the local and remote machines, and the only settings are: set nocompatible; set tabstop=4. I'm at a total loss here, any ideas?
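
    One thing worth ruling out is that the remote 'vi' is a different or minimal build that simply lacks the feature, rather than a terminal problem. A quick diagnostic sketch (the package name is the usual CentOS one):

        which vi                                      # is it really vim, or a stripped vi?
        vi --version | tr ' ' '\n' | grep -i visual   # look for +visual vs -visual in the feature list
        # If it turns out to be a minimal build, the full client on CentOS is vim-enhanced:
        sudo yum install vim-enhanced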

    Read the article

  • Ubuntu 14.04 reports insufficient space in my /boot partition while updating

    - by Aravind Dollar
    I am new to the Linux platform. I just installed Ubuntu alongside Windows, but allocated only 200 MB to the /boot partition, which is not recommended. Now the Ubuntu software updater keeps insisting there is not enough space. What should I do? Is there any way to enlarge my /boot partition without removing the whole OS, or should I completely uninstall Ubuntu and reinstall it? Please suggest how to handle this; I have no idea what to do. And please be detailed, as I am new to the Linux environment.
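
    One common way to free space in a small /boot without reinstalling is to purge kernel versions that are no longer booted. A hedged sketch (the kernel version below is only an example; never remove the kernel reported by uname -r):

        df -h /boot                           # how full the partition really is
        uname -r                              # the kernel currently running
        dpkg -l 'linux-image-*' | grep ^ii    # kernels currently installed
        sudo apt-get purge linux-image-3.13.0-24-generic   # remove ONE old version at a time
        sudo apt-get autoremove --purge       # clean up orphaned headers/modules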

    Read the article

  • Painless way of consolidating files across multiple machines/OSes

    - by 5arx
    Just bought a NAS, so I thought I'd get all our photos, media files and PDFs consolidated, de-duplicated, de-junked and virus-checked and stick them all on it. We have 3 laptops, one running Windows, the others OS X. We have a file server running Windows - it was the result of an earlier attempt at a networked file server - and a Mac Pro that is also kind of a server (previous attempts at this job have resulted in most of our stuff ending up on it). There are also memory cards/sticks, CD backups and so on. I would be grateful if anyone could suggest a strategy or, ideally, tool(s) I could use to solve this problem. It is probably no more than one or two terabytes of data in total, but I can imagine that going through it all manually, file by file, may well drive me insane.
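
    For the de-duplication step specifically, one command-line option once everything has been staged in a single place is fdupes. A sketch (package name is the Debian/Ubuntu one, the path is a placeholder, and the list should be reviewed before deleting anything):

        sudo apt-get install fdupes
        fdupes -r -S /path/to/staging_area    # recursively list duplicate files with their sizes
        # fdupes -r -d /path/to/staging_area  # interactive delete mode - use with care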

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the root partition they live on is only 300 GB. That essentially means around 96% of / is full and there's no space left for storing a dump/backup. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently with minimum downtime. I'm thinking along these lines:
    1. Request an extra drive to be attached to the server and take a dump onto that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When the migration is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't accept any writes, and tell the app developers to update their config with the new IP address for the DB.
    What are your suggestions to improve this, or is there a better alternative approach for this task?
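
    If an extra drive can't be attached, one common variation on steps 1-2 is to stream the dump straight to the new server over SSH so nothing has to be written locally. A sketch with placeholder hostnames (--master-data=2 records the binlog position for the later replication setup and requires binary logging to be enabled on the source):

        mysqldump --single-transaction --quick --master-data=2 --all-databases \
            | gzip -c \
            | ssh user@new-db-server 'gunzip -c | mysql'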

    Read the article

  • Backup of images

    - by Sam Kong
    I've just installed Ubuntu on a file server. It will share a folder (Samba) and employees of my company will save photos in it. Currently the total amount of photos is about 100 GB, and roughly 20 MB is added every day. My question is about a backup plan. I want to back up the photos to a remote server using a cron job. I can think of two options: rsync, or git. The image files won't be changed once written, so rsync would do, but some people say I should keep all my data in git. What would you do? Thanks. Sam
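
    A minimal sketch of the rsync variant, assuming key-based SSH access to the remote server and hypothetical paths:

        # /etc/cron.d/photo-backup - run at 02:30 every night
        # -a preserves permissions/timestamps; drop --delete if the remote copy should
        # keep files that were accidentally removed from the share
        30 2 * * * root rsync -a --delete /srv/photos/ backupuser@remote-server:/backups/photos/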

    Read the article

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following:

        ab -n 100000 -c 1000 http://www.mysite.com/

    the CPU is 100% used by the apache2 processes during the testing. When the test concludes - usually with the following error just before the last requests are made:

        apr_poll: The timeout specified has expired (70007)
        Total of 99960 requests completed

    the CPU usage remains at 100%, and it's all being consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens or what can be done to stop it would be appreciated.
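
    A diagnostic sketch (not a fix): identify which Apache/FastCGI processes are still burning CPU after the benchmark ends, then attach to one of them to see what it is actually doing. The PID below is a placeholder.

        ps -eo pid,pcpu,etime,comm --sort=-pcpu | head -n 15   # busiest processes first
        sudo strace -c -f -p 12345    # summarise the syscalls of one spinning worker for a few seconds, then Ctrl-C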

    Read the article

  • What's a fast way to copy a lot of files from an internal hard-drive to external (USB) storage?

    - by jonathanconway
    I have a large amount of data - about 500 GB - on the internal hard drive of a desktop PC. This includes music, videos, PDFs... you name it. I want to copy everything to an external USB hard drive (1.5 TB capacity). The desktop PC runs Ubuntu. To begin with, I simply plugged in and mounted the drive and dragged the top-level folder onto it. It started copying, but it seems to be proceeding very slowly: about 10 minutes later it has only done about 500 MB. I'm sure this is slower than what I could achieve with a smaller amount of data. So I'm wondering if there's a quicker way of doing this. Would it be better to copy it in portions of 500 MB or so, rather than all at once?
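
    One alternative worth trying before splitting the copy into chunks is running it from a terminal with rsync, which shows progress and can be resumed if interrupted. A sketch with guessed paths (the actual mount point under /media will differ):

        # -a preserves attributes, -h prints human-readable sizes, --progress shows per-file progress;
        # the trailing slash on the source copies its contents rather than the folder itself
        rsync -ah --progress /home/you/Data/ /media/usb-drive/Data/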

    Read the article

  • Amazon EC2: how to find out detailed CPU usage?

    - by j0nes
    I am running several EC2 instances, and I want to know exactly what work my CPU is doing. On "normal" machines I do this with Munin and its CPU plugin, which looks at the statistics provided by /proc/stat. On my EC2 machines, however, I get incorrect graphs: the machine has two cores, so the maximum CPU usage should be 200%, yet the graph gets as high as 400%. I know that I should use Amazon CloudWatch to see the total CPU usage (and this is the official way recommended by Amazon), but I am specifically looking at how the CPU time is spent (e.g. system, user, iowait). Is there a way to get detailed CPU usage statistics on EC2 instances?
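
    One option for a per-state breakdown from inside the instance is the sysstat tool set; on EC2 the "steal" column is also worth watching, since it shows time taken away by the hypervisor. A sketch assuming a Debian/Ubuntu-style package name:

        sudo apt-get install sysstat   # provides mpstat, sar, iostat
        mpstat -P ALL 5                # per-core usr/sys/iowait/steal, sampled every 5 seconds
        sar -u 5                       # overall CPU breakdown at the same interval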

    Read the article

  • "Cannot allocate memory" while no process seems to be using up memory

    - by omat
    I am not competent on server issues; any help is much appreciated. When I try to start a Python/Django shell on a Linux box, I am getting OSError: [Errno 12] Cannot allocate memory. free -m seems to confirm I am out of memory:

                     total       used       free     shared    buffers     cached
        Mem:           590        560         29          0          3         37
        -/+ buffers/cache:        518         71
        Swap:            0          0          0

    But I cannot see what is eating up the memory with top or ps aux:

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
          1 root      20   0 24336  908    0 S  0.0  0.2   0:00.68 init
          2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
          3 root      20   0     0    0    0 S  0.0  0.0   0:04.85 ksoftirqd/0

    How can I identify the leak? Thanks. BTW, I am not sure if it is relevant, but the machine I am talking about is an AWS EC2 instance running Ubuntu 12.
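
    A couple of quick checks, as a sketch: rank processes by resident memory rather than relying on top's default ordering, and compare the kernel's commit accounting, since on a swapless box fork() can fail even when free shows some memory nominally available. The swap-file lines are a common workaround on small instances; sizes and paths are only illustrative.

        ps aux --sort=-rss | head -n 10       # biggest resident processes first
        grep -E '^Commit' /proc/meminfo       # CommitLimit vs Committed_AS
        # Optional workaround: add a small swap file
        sudo dd if=/dev/zero of=/swapfile bs=1M count=512
        sudo mkswap /swapfile && sudo swapon /swapfile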

    Read the article

  • Windows DFS Limitations

    - by Phil
    So far I have seen an article on performance and scalability mainly focusing on how long it takes to add new links. But is there any information about limitations regarding number of files, number of folders, total size, etc? Right now I have a single file server with millions of JPGs (approx 45 TB worth) that are shared on the network through several standard file shares. I plan to create a DFS namespace and replicate all these images to another server for high availability purposes. Will I encounter extra problems with DFS that I'm otherwise not experiencing with plain-jane file shares? Is there a more recommended way to replicate these millions of files and make them available on the network? EDIT: I would experiment on my own and write a blog post about it, but I don't have the hardware for the second server yet. I'd like to collect information before buying 45 TB of hard drive space...

    Read the article

  • How do I recover a RAID 1 volume on Mac OS X (10.7)?

    - by Avry
    I have a Synology NAS that I've set up with RAID 1. The device is set up with two drives, both the same size (i.e. 500 GB each), formatted in ext3, as a RAID 1 volume (i.e. even though the total capacity is 1TB, I effectively only get 500 GB). In the case of a device failure where I can only access one of the drives, how can I recover my data? The solution I'm looking for is something like: 'Put the working drive in an enclosure, and use <some software> to recover your data.'
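
    If a Linux machine or live USB is available, one recovery path is to treat the surviving disk as an ordinary Linux md member, since Synology volumes are normally standard md arrays with ext filesystems (OS X 10.7 cannot mount ext3 on its own). A rough sketch with placeholder device names - verify them with fdisk/lsblk before running anything:

        sudo fdisk -l /dev/sdb                             # find the data partition on the surviving disk
        sudo mdadm --assemble --run /dev/md0 /dev/sdb3     # start the mirror degraded from its single member
        sudo mkdir -p /mnt/nas && sudo mount -o ro /dev/md0 /mnt/nas   # mount read-only and copy the data off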

    Read the article

  • Why is my process not being displayed by top?

    - by drN
    I am running a Mathematica script (this question probably doesn't fit on Mathematica.SE, however), and I know that it generally takes up a lot of RAM and loads up my cores. However, although pgrep MathKernel shows a PID, I find that top doesn't list it among the top processes, even though I notice it is taking up about 2.25 GB of the 8 GB available to me.

        pmap -x my_process_id (last line):
        total kB  2243132  1907404  1892108

        ps aux | grep MathKernel:
        dnaneet 20837 12.6 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mathematica/8.0.4/SystemFiles/Kernel/Binaries/Linux-x86-64/MathKernel -runfirst $TopDirectory="/share/apps/mathematica/8.0.4" -script ./dcm_10micrometer_2x -- ./dcm_10micrometer_2x

    ps aux shows that the process is taking about 12% CPU (the MathKernel row is marked with asterisks):

        dnaneet   20601  0.0  0.0   68264    1660 pts/1 Ss 09:15 0:00 -bash
        **dnaneet  20837 12.2 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mat**
        dnaneet   21922  0.0  0.0   65604     948 pts/1 R+ 10:29 0:00 ps -aux

    Did this process fail, and is the MathKernel just lingering?
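
    Two quick ways to check whether the kernel still considers the process alive and how much memory it really holds, without relying on top's default CPU-based ordering (a sketch that assumes a single MathKernel process):

        top -b -n 1 -p "$(pgrep MathKernel)"                           # the S/D/Z state column shows sleeping / I/O wait / zombie
        grep -E '^(State|VmRSS)' /proc/"$(pgrep MathKernel)"/status    # same information straight from /proc
        # Inside an interactive top session, pressing M re-sorts by memory instead of CPU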

    Read the article

  • How do I make a backup of a live server?

    - by Jurily
    At my new job, I have a production server with the following qualities:
    - Windows (XP I think), ancient hardware
    - Absolutely vital database
    - No backups whatsoever
    - Everyone in the company has full admin rights; the passwords are stored in a .txt on the global share
    - No installers, except for the OS
    - The machine itself is sitting on a wooden shelf 5 feet above the ground, against an external wall with frequent truck traffic on the other side; the shelf is already bent from the constant load
    - Hasn't been rebooted in $DEITY knows how long; my predecessor wasn't even sure it would survive one
    - A UPS is installed, but since everything is hooked up to it, it would last 10 minutes tops
    - No spare parts or hardware budget
    How do I make a full backup with minimal impact on the server? I'm not sure how close it is to a total meltdown. For all I know, plugging in a USB stick could kill the company, and of course it will be all my fault, since "it was running fine before you touched it". The ideal solution would be a VM, so I have a test environment as well (separate, of course).

    Read the article

  • Raid 5 with 4 disks on Debian automatically creates a spare drive

    - by Razer
    I'm trying to create a RAID 5 with 4x 2 TB disks on Debian 6. I followed the instructions from http://zackreed.me/articles/38-software-raid-5-in-debian-with-mdadm and created the array with the following command:

        sudo mdadm --create --verbose /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    After creating the RAID, mdadm --detail /dev/md0 shows me:

        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 11 18:14:26 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent
            Update Time : Mon Jun 11 18:14:26 2012
                  State : clean, degraded
         Active Devices : 3
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 1
                 Layout : left-symmetric
             Chunk Size : 512K
                   Name : rsserver:0  (local to host rsserver)
                   UUID : a68c3c99:1ef865e9:5a8a7bdc:64710ed8
                 Events : 0

            Number   Major   Minor   RaidDevice   State
               0       8       17        0        active sync   /dev/sdb1
               1       8       33        1        active sync   /dev/sdc1
               2       8       49        2        active sync   /dev/sdd1
               3       0       0         3        removed
               4       8       65        -        spare         /dev/sde1

    Why is there a spare drive? I didn't create one, and I don't want to use a spare drive.
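
    Before recreating anything, it may be worth checking whether the array is simply still doing its initial build: as far as I know, mdadm commonly creates a RAID 5 in degraded mode and rebuilds onto the last member, which shows up as a spare until that first sync completes. A quick sketch to watch it:

        cat /proc/mdstat                 # shows any resync/recovery progress
        watch -n 5 cat /proc/mdstat      # live view while it rebuilds
        sudo mdadm --detail /dev/md0     # re-check once any resync has finished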

    Read the article

  • What is the pixel clock setting on my monitor actually doing?

    - by codecowboy
    I am experiencing display interference on a Dell 24" flat panel monitor. I find that if I adjust the pixel clock setting up or down in the monitor's on-screen menus, the interference goes away for a while. The monitor is attached to a MacBook Pro using a Mini DisplayPort to VGA adapter. I have found that in a different house I get the interference less often, so it might be related to the electricity supply or possibly even Ethernet-over-powerline (a total guess). What does the pixel clock setting actually do, and does this behaviour point to a likely cause of the interference?

    Read the article

  • Ubuntu Linux: Process swap memory and memory usage

    - by David Halter
    My Ubuntu box eats more memory than the task manager is showing:

        sudo ps -e --format rss | awk 'BEGIN{c=0} {c+=$1} END{print c/1024}'
        1043.84

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3860       1878       1982          0         20        679
        -/+ buffers/cache:       1178       2681
        Swap:         2729       1035       1693

    That's strange. Can someone explain this difference? But what is more important: I'd like to know how much memory a process is really using. I don't want the virtual memory size, but rather the resident memory plus the swap used by a process. I have also tried the 'sz' format param of ps, but the sum of that is too high (5450 MB), and the 'size' param gives 8323.45 MB. Are there any other options? I really want to use this to determine which programs/processes are eating too much memory (and swap) so I can kill them, because hibernation might not work if the swap partition is too small.
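
    Per-process swap usage is exposed as VmSwap in /proc/<pid>/status on reasonably recent kernels, so one way to rank processes by "resident plus swapped" memory is a small loop over /proc. A sketch, assuming the kernel reports VmSwap:

        for f in /proc/[0-9]*/status; do
            awk '$1=="Name:"{n=$2} $1=="VmRSS:"{r=$2} $1=="VmSwap:"{s=$2}
                 END{if (r+s > 0) printf "%10d kB  %s\n", r+s, n}' "$f" 2>/dev/null
        done | sort -rn | head -n 15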

    Read the article

  • How do I take some RAM and use it towards Dedicated video memory for my Nvidia graphics card?

    - by Noah Rainey
    I have an Nvidia GeForce 6150SE nForce 430 graphics card (so it's quite old), and it only gets 64 MB of dedicated memory by default. I went into the BIOS to see if I could increase it, but it wouldn't let me. However, the Nvidia control panel says I have up to 1071 MB of total available graphics memory. I'm not sure what that means, and I'm not sure how I can harness this memory and use some RAM for my graphics card. Can someone explain if this is possible and, if so, how?

    Read the article

  • Web Farm Application deployment best practices

    - by rauts
    Hi all, we have a web farm that hosts multiple ASP.NET applications. We typically have 4 servers in the farm. The dilemma I'm facing is about the capacity of the farm. Let's say I currently have 200 apps in total. Should I deploy all 200 apps on all 4 servers (i.e. keep all the servers in the farm identical), or should I split the applications between 2 sets of servers and create 2 smaller farms, so that I can then manage each application based on its criticality, usage, etc.? Any best practices in this area would be highly appreciated. Thanks, Rauts

    Read the article

  • Exchange 2007 mailbox reassignment

    - by John Virgolino
    I am trying to move a mailbox from one user account to another within a single AD domain. We are using Exchange 2007. I have followed these steps:
    1. Disable the account.
    2. From the "Disconnected Mailbox" container, connect the mailbox to the new account (I get a success message).
    When I try to log in to OWA using the new user account, I get this message:

        Outlook Web Access could not connect to Microsoft Exchange. If the problem continues, contact technical support for your organization.
        Request Url: https://mail.somedomain.com:443/owa/default.aspx
        User host address: 1.2.3.4
        Exception
        Exception type: Microsoft.Exchange.Data.Storage.ConnectionFailedTransientException
        Exception message: Cannot open mailbox /o=cgsexchangeorganization/ou=exchange administrative group (fydibohf23spdlt)/cn=recipients/cn=someuser.

    NOTE: I changed some identifying information for security purposes. I have tried multiple times and end up in the same place. When I log in to OWA with the old account, I get an error that the mailbox cannot be found, which makes total sense. Does anybody have any ideas on this? Thanks!

    Read the article
