Search Results

Search found 7072 results on 283 pages for 'peopletools 8 50'.

Page 125/283 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • Putting servers inside a refrigerator? [closed]

    - by Muhammad Jamal Shaikh
    It may be a silly question, but I decided to go for it. I will be buying 3 servers in the next few weeks to set up a small web farm at my home. People who work in server rooms tell me that I should keep my servers in an air-conditioned room, which is really expensive, because the temperature here in South Asia ranges from 10 to 50 degrees C. Here comes the funny part: I have a spare fridge at home, so why shouldn't I put the servers inside it? Benefits: I don't have to buy an air conditioner, I don't have to buy a rack mount for the servers, and the fridge consumes much less electricity than AC. Give me your suggestions!

    Read the article

  • Inbound connections using Internet Connection Sharing in Apple/Mac/Leopard

    - by tlianza
    I have a Mac mini which I'm using to give some other devices wireless access, by sharing its AirPort connection with the local Ethernet, which is plugged into a switch. All devices can get online, no problem. (See how: http://www.macosxhints.com/article.php?story=20041112101646643 and http://www.macosxhints.com/article.php?story=20071223001432304 ) The issue is that I need to be able to connect in to these machines as well (at least for the Slingbox to work). All the devices have 192.168.2.* addresses, and the rest of my local network is on 192.168.1.*. I tried setting a static route so that the 192.168.2.* addresses would use a gateway of 192.168.1.50 (my Mac mini's address), but that didn't seem to help. Does anyone know if what I'm trying to do is possible? I admit I'm not certain what Internet Connection Sharing is really doing under the hood... perhaps it just does basic NAT and doesn't do the kind of routing I'm looking for. If so, does anyone know if this is possible?
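
    A minimal sketch of the static-route approach described above, assuming the Mac mini's LAN address really is 192.168.1.50 and the shared subnet is 192.168.2.0/24; this would be run on one of the 192.168.1.* machines, not on the mini itself:

        sudo route -n add -net 192.168.2.0/24 192.168.1.50

    Note that Internet Connection Sharing performs NAT on the shared interface, so even with the route in place, inbound connections may additionally need port forwarding on the mini; treat this as a starting point rather than a confirmed fix.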

    Read the article

  • Commit charge peak higher than system limit

    - by Grubsnik
    We are seeing some very strange behaviour on our servers, and Google didn't turn up anything useful, so I'm tossing it out here. A standard server is configured with 4 GB RAM and two 4 GB pagefiles, and runs Windows Server 2003. The servers run 50-120 VB6/.NET applications which normally consume no more than 100 MB of memory, but will occasionally run up to 300 MB. The issue of a single process using far too much memory is being traced down elsewhere; the thing that is baffling us is that the reported peak commit charge is vastly higher than what we have available. As the image above shows, we are getting reported peaks that are way higher than what the system is actually capable of delivering. This number has been seen as high as 29 GB, which makes no sense at all for a system with a limit of 12 GB. Does anyone have an idea what is going on?

    Read the article

  • Distributed Server Monitoring Solution

    - by MaterialEdge
    I belong to an independent IT firm that manages and maintains about 50 business clients' networks, ranging from small 5-system networks to 200+ systems. Because we are unable to directly monitor each server at these locations (distributed over a very large area) on a regular basis, I am looking for a method to monitor and alert us to any problems that may arise so that we can respond quickly with, hopefully, preventative measures. I'm not sure what solutions are available for this type of situation, but something that uses a central server at our office, with all client servers sending alerts or logs to it for daily monitoring, might work best. All these servers are running a Windows Server OS. In your opinion, what would be the best course of action to accomplish this?

    Read the article

  • How to determine non-movable files in Windows 7?

    - by David
    Is there a way to determine which unmovable files are preventing Shrink Volume from releasing the full potential free space? Background: I have a 90 GB partition with Windows 7 on it and 60 GB free space. I want to shrink it down to about 40 GB and use the reclaimed 50 GB for a separate data partition. The Shrink Volume tool in Disk Management is only willing to give me 8 GB back. My understanding is that this is because of immovable files. I've followed the instructions found here, which involved disabling hibernation, the pagefile, System Restore and the kernel dump, making sure all related files were deleted, and defragmenting. I have successfully followed those same instructions before on this same drive, when I partitioned the original 150 GB of space into 90 GB and 60 GB, but I'm not so lucky this time.
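
    One way to find out which file is blocking the shrink (an assumption worth verifying, not something from the question itself): the defragmenter writes an Application-log event, ID 259, that names the last unmovable file it encountered, and it can be pulled from an elevated prompt with something like:

        wevtutil qe Application /q:"*[System[(EventID=259)]]" /f:text /rd:true /c:5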

    Read the article

  • Managing self-updating Windows software in GPO-deployed packages

    - by Paul
    Being very new to Windows software distribution for a small network (<50 clients), I was wondering how software packages like Adobe Reader or Java are handled. I can deploy them as MSIs via Group Policy just fine. But what happens when the client software detects updates? What are common ways to handle this? Disable the software's auto-update feature? Redeploy when the admin notices a new version? Just fishing for knowledge; thanks for any hint.

    Read the article

  • rsync doesn't work

    - by jspooner
    I set up a new EC2 Ubuntu server and I'm unable to use rsync to push a file up. I can ssh to the machine with my keypair. I'm not sure why this looks like it works but finishes in half a second, and there is nothing in /home/ubuntu on the server. ? ~ rsync -av -i ~/.ec2/my-keypair ~/Downloads/pushcom.2012-06-26T01-10-04.gz [email protected] building file list ... done >f..t.... gsg-keypair >f..t.... pushwoodcom.2012-06-26T01-10-04.gz sent 3624 bytes received 64 bytes 7376.00 bytes/sec total size is 3392 speedup is 0.92 I've tried 50 different variations of the rsync command but I can't get anything to work. Please help! Thanks
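
    For reference, a hedged sketch of what the intended command usually looks like: the key file belongs to ssh via -e rather than being listed as a source, and the remote side needs an explicit host:path destination (the hostname and path below are placeholders, not values from the question):

        rsync -av -e "ssh -i ~/.ec2/my-keypair" ~/Downloads/pushcom.2012-06-26T01-10-04.gz ubuntu@ec2-host:/home/ubuntu/

    One common cause of a fast, apparently successful run is a destination without a host: prefix, which makes rsync copy locally instead of over ssh.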

    Read the article

  • Excel Pivot table: Calculated field based on only the first row of a group

    - by Meysam
    I've got the following data and pivot table: The Total column in the pivot table is the sum of the following calculated field: =start-TIME(7, 30, 0) I know that this calculation is wrong for what I want to achieve. I need to know how much delay I had on each day in starting work, e.g. on 1-Oct-12, assuming I should have started my work at 7:30, 8:00 - 7:30 yields a 30-minute delay, then a 1-hour delay for 2-Oct-12 and 50 minutes for 3-Oct-12. So my question is: how can I have a calculated field based on only the first row of each group in a pivot table?

    Read the article

  • Transparently cache files from a network drive in Linux

    - by Vadim
    We have a Linux server that reads files from a network drive and processes them. In a common scenario, a user will log in and access the same files over and over again. The size of the files varies, but the larger ones can be around 50+ MB. The files seldom change. I was wondering if it's somehow possible to transparently cache the files. I don't want to (and can't) change the program that reads the files, nor do I control the protocol by which the files are accessed. I just want something to detect that I access a certain path, copy the file locally (if needed) and then read the file from the local drive. I've read about bcache but can't figure out if it's what I need. Do you have any suggestions? Thanks, Vadim.
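
    If the share happens to be NFS (an assumption; the question doesn't say which protocol is in use), the kernel's FS-Cache layer plus the cachefilesd daemon can do this transparently, along the lines of:

        # install the cache daemon, enable it (RUN=yes in /etc/default/cachefilesd), then remount with the fsc option
        sudo apt-get install cachefilesd
        sudo service cachefilesd start
        sudo mount -t nfs -o fsc fileserver:/export /mnt/share

    The server name and export path are placeholders; this is a sketch of the mechanism rather than a drop-in fix.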

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench: Benchmarking texteli.com (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: Server Hostname: texteli.com Server Port: 80 Document Path: /4f84b59c557eb79321000dfa Document Length: 13400 bytes Concurrency Level: 200 Time taken for tests: 37.030 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 13524000 bytes HTML transferred: 13400000 bytes Requests per second: 27.01 [#/sec] (mean) Time per request: 7406.024 [ms] (mean) Time per request: 37.030 [ms] (mean, across all concurrent requests) Transfer rate: 356.66 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 27 37 19.5 34 319 Processing: 80 6273 1673.7 6907 8987 Waiting: 47 3436 2085.2 3345 8856 Total: 115 6310 1675.8 6940 9022 Percentage of the requests served within a certain time (ms) 50% 6940 66% 6968 75% 6988 80% 7007 90% 7025 95% 7078 98% 8410 99% 8876 100% 9022 (longest request) What can these results tell me? Isn't 27 rps too slow?
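
    As a rough sanity check, the throughput and latency figures in that output are consistent with each other: requests per second ≈ concurrency / mean time per request = 200 / 7.406 s ≈ 27 req/s. The 27 rps therefore reflects a roughly 7-second per-request latency at 200 concurrent connections, not a measurement error.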

    Read the article

  • Why has my Mac been running fsck_hfs for two days now?

    - by Nate
    I first noticed that fsck_hfs was running, taking up 50-75% of a CPU, yesterday. It continues to run today. ps shows that it is doing /sbin/fsck_hfs -f -n -x -E /dev/disk3. Only problem: I don’t think I have a /dev/disk3. Why is it running? Will it ever finish? Can I kill it? What is /dev/disk3? Could it be my Time Machine volume, which is not mounted at the moment? System Info: MacBook Pro (2008). It has two disks installed—the internal disk (/dev/disk1) and a PC Card SSD (/dev/disk0, surprisingly). It connects to a remote Time Machine volume attached to an Airport Extreme base station.
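
    A quick way to see what /dev/disk3 is while the check is running (a sketch; it only works if the device is attached at that moment):

        diskutil list disk3
        diskutil info disk3

    If it turns out to be the disk image backing the network Time Machine volume, that would fit the symptoms described above.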

    Read the article

  • 550 operation not permitted using FTP

    - by monkey_boys
    I'm using FTP to manage some files on a site I run but keep seeing this (truncated) error log: Command: DELE calendarpermission.php Response: 550 calendarpermission.php: Operation not permitted [...] Command: DELE button_down.gif Response: 550 button_down.gif: Operation not permitted Command: CWD /domains/example.com/public_html/admincp Response: 250 CWD command successful Command: PWD Response: 257 "/domains/example.com/public_html/admincp" is the current directory Command: RMD control_examples Response: 550 control_examples: Operation not permitted Command: CWD /domains/example.com/public_html Response: 250 CWD command successful Command: PWD Response: 257 "/domains/example.com/public_html" is the current directory Command: RMD admincp Response: 550 admincp: Operation not permitted Status: Retrieving directory listing... Command: PASV Response: 227 Entering Passive Mode (122,155,5,50,138,244). Command: MLSD Response: 150 Opening ASCII mode data connection for MLSD Response: 226 Transfer complete Status: Directory listing successful Status: Set permissions of '/domains/example.com/public_html/admincp' to '777' Command: SITE CHMOD 777 admincp Response: 550 CHMOD 777 admincp: Operation not permitted What do I do to solve this?

    Read the article

  • How to find spyware dll launched using svchost.exe

    - by Sheen
    This weekend I found my PC was possibly infected by some virus or spyware. There is one "svchost.exe -k netsvcs" process in my Task Manager, and it is running under my user name rather than a SYSTEM account. There is already another process with the same command-line options running under the SYSTEM account. This user-account svchost.exe consistently consumes 50% CPU (1 of the 2 cores of my CPU). In Process Explorer, I can see it was started by explorer.exe instead of services.exe. However, I failed to find the actual service DLL it loads, either in the registry or on disk. Does anyone know how to find this malicious program?
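
    A couple of standard checks, sketched with placeholder values (the PID and service name below are not from the question): tasklist can show which services, if any, that PID claims to host, and the ServiceDll for a given service lives under its Parameters key:

        tasklist /svc /fi "PID eq 1234"
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\SomeService\Parameters" /v ServiceDll

    Since Process Explorer shows this instance was launched by explorer.exe rather than services.exe, it may not be hosting a real service at all, which is itself suspicious; checking it with Autoruns or scanning from a clean boot environment would be a reasonable next step.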

    Read the article

  • LPR command won't recognize CUPS printer

    - by Datapimp23
    I have a CUPS server with one shared printer configured on it. It prints test pages without problems. printername (Idle, Accepting Jobs, Shared) Description: desc Location: Driver: Zebra ZPL Label Printer (grayscale, 2-sided printing) Connection: socket://172.20.50.26 Defaults: job-sheets=none, none media=oe_w288h432_4x6in sides=one-sided This is the output from lpstat -t; it shows that the printer is idle and accepting requests: admin@SERVER:~$ lpstat -t scheduler is running no system default destination device for printername: socket://172.20.50.26 printername accepting requests since Thu 26 Jan 2012 01:29:35 PM CET printer printername is idle. enabled since Thu 26 Jan 2012 01:29:35 PM CET Now when I want to send a print job to it via an lpr command, it won't recognize the printer: /usr/bin/lpr -P printername test.pdf Result: lpr: ttn_seg_zebra1: unknown printer What am I missing here?
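
    Assuming the lpr being invoked is the CUPS lpr and it simply isn't talking to this CUPS instance (the different printer name in the error hints at another default destination or a non-CUPS lpr on the PATH, which running "which lpr" would confirm), one sketch is to name the server explicitly, or set ServerName in /etc/cups/client.conf:

        /usr/bin/lpr -H localhost:631 -P printername test.pdf

    Here localhost:631 assumes the command runs on the CUPS server itself; from another machine, use the server's hostname instead.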

    Read the article

  • apache - subdomains are slower?

    - by matthewsteiner
    Using Apache Bench, I ran the exact same PHP application at my root domain and at a subdomain. Even with multiple tests and high request numbers, the requests-per-second figures differ enormously. I mean something like this: example.com - 1200 requests per second; bench.example.com - 50 requests per second. What could be affecting this? These aren't using databases or anything; they're mainly just displaying a simple page. But it's the same app for both of them, and I'm wondering why they perform so differently. Ideas?

    Read the article

  • Difficulty racking Proliant G8's

    - by Systemspoet
    We're an all-ProLiant shop with around 50 servers, mostly DL360s and DL380s, from G5 through G7. We just got our first two G8s in and went to rack them. We were stunned to find that the new cable management arms protrude almost an inch deeper into the rack than previous iterations of the ProLiant line. Unfortunately, that causes them to occupy the same space as the PDUs in our APC racks. In a non-densely populated section of rack that's no biggie, but in a densely populated section it's impossible to get the cable arm into place without dislodging another machine's power. Has anyone else run into this? Obviously, racking machines without cable management arms is not an option. I suppose we could reconfigure our racks, but that's a nightmare.

    Read the article

  • Roaming Profiles: Best Practices

    - by Noah Clark
    I want to set up roaming profiles for about 50 users. What is the best way to go about doing this? What are the best practices? I've read about desktops/My Documents getting too big. How big is too big? We have a few users who keep a lot of media on their machines to listen to throughout the day. I would imagine they have a few gigabytes of MP3s in their My Documents folder. How do you deal with this? Thanks!

    Read the article

  • How to clean this Dell Precision M6400

    - by Daniel Pratt
    I have (well, OK, my employer has and I use) a Dell Precision M6400 notebook. It's a decent piece of hardware, but I have at least one major gripe: it's a dust and...uh...crumb (I repent! I repent!) magnet! And I cannot seem to exorcise the dust and crumbs from it! There is a strip of metal above the keyboard that is punched full of tiny holes. Well, maybe it's better to describe them as 'pits'. If a sufficiently small particle finds its way into one of those pits, there is only about a 50% chance that I will manage to get it out. Consequently, there is now a chorus of tiny particles silently chiding me about eating cookies and crackers whilst I browse the intarwebs. Does anyone have any suggestions about how I could remove these particles from this machine...while still preserving the function of the machine?

    Read the article

  • VMware + SQL Server - sqlserver.exe not using both CPU cores

    - by fistameeny
    Hi, I am working on a virtual machine that runs SQL Server Express (as part of Sage Line 50 Manufacturing). The details are as follows: Physical server (host machine) - Intel Xeon quad-core 2.1 GHz - 4 GB RAM - VMDK image stored on RAID-5 500 GB SATA drives (7200 RPM) - running Ubuntu 10.04 Server 64-bit - VMware Server 2. Virtual machine - Windows Small Business Server 2003 - allocated 2 vCPUs and 2 GB RAM - using a 100 GB pre-allocated flat VMDK file. The problem I have is that there is a process that runs in SQL Server that is CPU intensive. On the old physical server that we migrated to the virtual machine from, this would utilise both CPU cores, so the sqlserver.exe process would be running at 100% on each of the CPU cores. On the virtual machine, it only seems to use one of the two CPU cores, meaning the process is much slower to run. Question: is there a way to force SQL Server (the sqlserver.exe process) to use both CPU cores and distribute its load between them? Is this a VMware setting that needs changing to allow processes to use both cores?
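
    Two things worth checking from inside the guest, sketched with an assumed instance name (adjust -S to whatever Sage actually installed): how many schedulers SQL Server sees, and whether an affinity mask is set. Note also that SQL Server Express is limited to a single processor, which by itself could explain one busy core regardless of any VMware settings.

        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT cpu_count, scheduler_count FROM sys.dm_os_sys_info"
        sqlcmd -S .\SQLEXPRESS -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'affinity mask'"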

    Read the article

  • Hourly cron task running more frequently than once an hour

    - by Justin
    I have a cron task that calls a special PHP script via wget. Here is the crontab entry: 0 * * * * wget http://www.... It will work perfectly for several days, running on the hour. However, after a few days the cron job will start to be called several times an hour. I have never seen cron drift like this, so I imagine it can't really be a cron issue. However, the logs of the script that is called clearly show it running several times an hour. Server details: Ubuntu Lucid, Apache, MySQL, PHP 5. The time shown at the command line is correct, and the server is set up to sync with an NTP server. In order for the script to run, it must be passed a unique 50-character hash key in the URL, so this script isn't being called from any other source accidentally. What might cause cron to drift like this?
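
    Whatever the root cause turns out to be, a defensive sketch that prevents duplicate or overlapping runs is to wrap the job in flock from util-linux (the lock file path is arbitrary; keep the real URL as-is):

        0 * * * * flock -n /var/lock/hourly-wget.lock wget -q -O /dev/null http://www....

    With -n, a second invocation that finds the lock already held simply exits instead of firing the script again.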

    Read the article

  • Text template or tool for documentation of computer configurations

    - by mjustin
    I regularly write and update technical documentation which is used to set up a new virtual machine, or to look up system dependencies in networks with around 20-50 (server-side) computers. At the moment I use OpenOffice Writer with text tables, and create one document per intranet domain. To improve this documentation, I would like to collect some examples to identify areas where my documents can be improved, regarding general structure and content, to make them easy to read and use not only for me but also for technical staff, helpdesk etc. Are there simple text templates (for example for OpenOffice Writer) or tools (maybe database-driven) for structured documentation of a computer configuration? Such a template or tool should provide required and optional configuration sections, like 'operating system', 'installed services', 'mapped network drives', 'scheduled tasks', 'remote servers', 'logon user account', 'firewall settings', 'hard disk size' ... These documents are not so much about low-level hardware details as about infrastructure and integration information (no BIOS settings, MAC addresses).

    Read the article

  • shut down FTP from IIS 6 after <X> failed login attempts

    - by Justin C
    Is there a setting in IIS 6 to turn an FTP site off after a specified number of failed login attempts? It has already been documented on this site that a Windows server sitting on a static IP address can record tens of thousands of failed login attempts a month. One server I maintain has had tens of thousands of attempts made against the FTP port. I have solid passwords in place, so I am not overly concerned. I rarely have to use the FTP, so for the most part I turn it on and off as I need it. Sometimes though I forget to turn it off when I am done, only to find the next day that my EventLog is full of audit failures. I would want to set a high number, in case I just messed up the password. Something like if 50 failed login attempts happen, just turn off the FTP site. Then if I need it later I can just start it again.

    Read the article

  • How to check if redis master is OK?

    - by e-satis
    In the documentation, they advise the MONITOR command. But it has a 50% performance penalty for the whole system, and how would I use it anyway? Watching the output over SSH until I stop seeing anything? Let's say I have 3 servers: one with a Redis master, one with a Redis slave, and one with my website querying the Redis master. How can I, from my web server, cleanly make the decision to fall back to the slave by sending the SLAVEOF NO ONE command? My first step would be to put some kind of timeout check in place with a simple ping, just to be sure the server is online. But for Redis specifically, I have no clue.
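
    A lighter-weight sketch than MONITOR, assuming redis-cli is available on the web server (hostnames below are placeholders): poll the master with PING, inspect the replication state, and only promote the slave after several consecutive failures:

        redis-cli -h master-host -p 6379 ping
        redis-cli -h master-host -p 6379 info replication
        redis-cli -h slave-host -p 6379 slaveof no one

    PING should return PONG; INFO replication reports the role and the state of the master-slave link; SLAVEOF NO ONE is the promotion step mentioned in the question.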

    Read the article

  • Windows XP failing to set theme correctly on auto-login

    - by Alois Mahdal
    On several of our testing machines, when automatic login (as Administrator) is activated, Windows fails to set the theme (Display Properties - Themes) correctly. In particular, even if the theme is set to "Windows Classic", it is visually obvious that "Windows XP" is applied (the one with blue title bars and red "X" buttons). I have only seen this happen when auto-login is set -- we always use Administrator on XP. If I log out and back in manually, the theme is set correctly. Apart from logging out and in, it's also possible to reset the theme in "Display Properties". It does not happen in 100% of cases, but it's well over 50%, definitely often enough to be annoying. I believe this is a bug in Windows XP; I have never encountered it on other Windows versions. Does anybody know how to avoid this issue once and for all? (Or can anybody provide an explanation, relevant links, etc.?)

    Read the article
