Search Results

Search found 4060 results on 163 pages for '400 the cat'.

Page 48/163

  • Does lshw list the "factory" speed of a memory module or the effective speed and how to find the former?

    - by Panayiotis Karabassis
    I hope I phrased this correctly. lshw gives:

        description: DIMM Synchronous 400 MHz (2.5 ns)
        product: M378B5773CH0-CH9
        vendor: Samsung
        physical id: 0
        slot: DIMM0
        size: 2GiB
        width: 64 bits
        clock: 400MHz (2.5ns)

    And indeed the memory speed is set to 800 MHz in the BIOS, which I think makes sense since it is a double data rate. On the other hand, Googling strongly suggests that this product number corresponds to the PC3-10600 type, which is 1333 MHz, not 800 MHz. And this seems to be confirmed in the BIOS, where if I select Auto for the memory bus speed, 1333 MHz is selected "based on SPD settings". However, in the latter case the computer does not boot: the kernel panics, complaining that something attempted to kill the Idle process. So I am beginning to suspect that I was given defective memory, that the technician who installed it saw this, and that he lowered the bus speed. Is this a possibility?
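
    One way to read the module's SPD-reported speeds directly, for what it's worth (a sketch; dmidecode is standard on Ubuntu, but the exact field names vary by BIOS):

        # Dump memory-device info from the SMBIOS tables (run as root).
        # "Speed" is usually the configured clock; some BIOSes expose the
        # module's rated maximum in a separate field.
        sudo dmidecode --type memory | grep -iE 'speed|part number'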

    Read the article

  • Can't open Paypal.com with Google Chrome

    - by grunwald2.0
    For a week now I have been getting an error message every time I try to open the PayPal website with Google Chrome, and I don't know why. FlashBlocker and AdBlockPlus are deactivated. Chrome version: 20.0.1132.11 dev. The error message:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Size of a request header field exceeds server limit.<br />
        <pre>
        Cookie: Apache=10.190.8.170.1302997118916547; (cookie body removed due to privacy reasons)
        </pre>
        </p>
        </body></html>

    Read the article

  • Why is Firefox so slow and heavy?

    - by Tony
    For some reason, when I follow links, pages seem slow and heavy. There are also a lot of lag spikes between page loads: it seems to freeze, then load everything at once. I'm currently using Firefox 25, but when I use Chrome on the same machine, page loading is very fast and smooth. On average Firefox uses about 400,000 K. Extensions: iMacros, Leethax, Ad Block Plus 2.4, Ad Block Plus Pop-up Addon 0.9.1. Computer stats: 6 GB RAM, Windows 7, Acer Aspire laptop, 500 GB HDD, Intel Core i3-2370M. How do I make Firefox load pages like Google Chrome, without so much freezing?

    Read the article

  • nginx connection reset

    - by Steve
    When first visiting my site after not visiting it for a few minutes, the connection is "reset" 100% of the time. With debug turned on, I get this message, along with a 400 Bad Request status:

        client prematurely closed connection while reading client request line

    I've read that this could be caused by the large_client_header_buffers setting. I have Google Analytics on my site. Using Live HTTP Headers, I get this as the request:

        GET /__utm.gif?utmwv=5.3.7&utms=35&utmn=745612186&utmhn=domain.com&utmcs=UTF-8&utmsr=1920x1080&utmvp=1841x903&utmsc=24-bit&utmul=en-us&utmje=1&utmfl=11.4%20r402&utmdt=2006Scape%20Forums%20-%20General&utmhid=2004697163&utmr=0&utmp=%2Fservices%2Fforums%2Fboard.ws%3F3%2C4&utmac=UA-25674897-2&utmcc=__utma%3D68455186.1647889527.1351640625.1352446442.1352451659.100%3B%2B__utmz%3D68455186.1352097329.64.2.utmcsr%3Ddomain.com%7Cutmccn%3D(referral)%7Cutmcmd%3Dreferral%7Cutmcct%3D%2Fservices%2Fforums%2Fboard.ws%3B&utmu=q~ HTTP/1.1

    My large_client_header_buffers in nginx is set to 4 8k, so I don't know if this is the problem. Immediate requests after the first "reset" request are all successful.
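
    One quick way to rule the buffer theory in or out is to raise the limit and watch whether the 400s stop; a minimal sketch (the values are illustrative, not a recommendation):

        # In the http{} or server{} block of nginx.conf; default is commonly 4 8k:
        large_client_header_buffers 4 16k;
        # Then test and reload:
        # nginx -t && nginx -s reload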

    Read the article

  • User account automatically filling up with dead.letter file

    - by jeroen
    I have one user account, on a server with about 400 accounts, that is filling up automatically. The dead.letter file in the user's home directory grows on its own until the account is full (about 10-40 MB per day). The user is using Microsoft Outlook to send and receive mail. What can be causing this, and how can I keep it from happening? Right now I have an emergency cron job to delete the file, but I would like a "real" solution. Edit: The server version is Red Hat Enterprise Linux ES release 4 (Nahant Update 4). Edit 2: It seems to be mainly spam; I see different mailer headings (from PHP to Outlook Express), and a frequently appearing header is [email protected]. Update: I have asked the hosting provider from whom I lease that dedicated server to look into the problem as well, as it's their control panel that could be a cause of the problem.
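
    The emergency cron job mentioned above might look something like this (a sketch; the username and the hourly schedule are assumptions):

        # /etc/crontab entry: truncate dead.letter at the top of every hour
        0 * * * * root cat /dev/null > /home/username/dead.letter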

    Read the article

  • sudo suddenly stopped working on debian

    - by chovy
    I've been using sudo since I set up my server about a week ago. It suddenly stopped working with no explanation. I am in the 'sudo' group, so no change to /etc/sudoers should be required:

        $ sudo apt-get install tsocks
        [sudo] password for me:
        me is not in the sudoers file.

        root@host:/etc# groups me
        me : me sudo

    The only thing it could possibly be related to is that I added the following line to sshd_config:

        PermitRootLogin without-password

    But I have since changed that back to PermitRootLogin yes. Permissions on the file are 400:

        $ ls -l /etc/sudoers
        -r--r----- 1 root root 491 Sep 28 21:52 /etc/sudoers

    No idea why it stopped working, or how to fix it.
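
    As a sanity check (from the root shell that still works), Debian normally grants the sudo group access via a line like the one below; a sketch of verifying it:

        # Validate sudoers syntax, then look for the group rule:
        visudo -c
        grep '^%sudo' /etc/sudoers
        # Expected on Debian (exact wording varies by release):
        # %sudo   ALL=(ALL:ALL) ALL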

    Read the article

  • Extracting httpdocs from Plesk Panel 9.5.4 Webserver backup file

    - by Paddington
    Good day, I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is here:

        zcat DUMP_FILE.gz > DUMP_FILE

    My backup file CP_1204131759.tar is a tar archive, and zcat does not work with it. So I proceed to run the command:

        cat CP_1204131759.tar > CP_1204131759

    But when I try:

        # cat CP_1204131759 | munpack

    I get an error that munpack did not find anything to read from standard input. I went on to extract the tar backup file using the xvf flags and got a lot of files (20) similar to these:

        CP_sapp-distrib.7686-0_1204131759.tgz
        CP_sapp-distrib.7686-35_1204131759.tgz
        CP_sapp-distrib.7686-6_1204131759.tgz

    How best can I extract the httpdocs of a domain from this server-wide Plesk 9.5.4 backup?
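
    Assuming the .tar is a plain archive whose .tgz members each hold part of the content (an assumption, not something the KB article confirms), a sketch for locating which member contains an httpdocs tree:

        # Unpack the top-level archive, then scan each member for httpdocs:
        tar -xvf CP_1204131759.tar
        for f in CP_*.tgz; do
            tar -tzf "$f" | grep -q 'httpdocs' && echo "httpdocs found in: $f"
        done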

    Read the article

  • Finding Locked Out Users

    - by Bart Silverstrim
    Active Directory network up to 2008 (our servers are a mix of 2008, 2003...). I'm looking for a quick way to query AD to find out which users are locked out, preferably from a batch or script file, to monitor for possible issues with user accounts being attacked by an automated attack, or just anomalies in the network. I've Googled and my Google-fu has failed; I found a query off Microsoft's own knowledge base that cites a string to use on Server 2003 with the management snap-in's saved queries (http://support.microsoft.com/kb/555131), but when I entered it, the query returned 400 users that a spot check showed did NOT have a checkmark in the "Account is locked out" box under "account". In fact, I don't see anything wrong with their accounts. Is there a simple utility (WiseSoft Bulk AD Users apparently uses this method behind the scenes, since its results were also wrong) that will give a count of users and possibly their user object names? Script? Something?

    Read the article

  • slow disk writes between host and guest

    - by Jure1873
    I've got Ubuntu (server kernel) on an AMD X4, 4 GB RAM, and 2x Seagate SATA 1 TB disks for testing virtual machines, and the write performance is very slow. The two disks are in a software RAID 1 array, with one small ext3 boot partition, a 10 GB system partition, and the rest an XFS partition (about 980 GB) for data (virtual machines). If I'm copying files from the virtual machine to the host with rsync or scp, the copy frequently stalls or goes at about 1 MB/s. What's wrong? I've tried disabling barriers on XFS and increasing logbufs and allocsize, but it seems nothing helps. The strange thing is that await (for example during copying) for sda is usually under 100, while for sdb it is around 400. Any ideas on what could be wrong, or what I could do to improve this setup?
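
    To put live numbers on the sda/sdb asymmetry, per-device latency can be watched with iostat from the sysstat package; a minimal sketch:

        # Extended device stats every 5 seconds; compare await for sda vs. sdb
        # while a copy from the guest is running.
        iostat -x 5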

    Read the article

  • Locked out by changing file permissions

    - by Valeriy
    I just locked my root account (and all other accounts, if it matters) completely out of RHEL 5.4 by changing permissions on every file to 400. Now I get "Permission denied" on any command that I try to run, including chmod itself. Any idea on how to recover? The only access I have to the server is via terminal or SSH. (If anyone cares how it happened: I was running a hardening script, and one of the lines was supposed to change permissions on some config files in the /etc directory. It had a couple of variables that had not been set, so the command essentially evaluated to:

        chmod -R 0400 /*

    Ouch! This is sure a great lesson on checking scripts even more carefully in the future, but what can I do now?)
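
    One possible recovery path, sketched here on the assumption of a 32-bit RHEL 5 box with console/GRUB access (untested against this exact situation): boot with init=/bin/bash, remount root read-write, and bootstrap chmod through the dynamic loader, since chmod itself is now mode 0400 and not executable:

        # At the root shell obtained by appending init=/bin/bash in GRUB:
        mount -o remount,rw /
        # chmod cannot execute itself at mode 0400, so run it via the loader:
        /lib/ld-linux.so.2 /bin/chmod 755 /bin/chmod
        # Then restore the permissions recorded in the RPM database:
        rpm --setperms -a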

    Read the article

  • Routing between two subnets. (Need Solution)

    - by rehanplus
    Need help with the scenario given below.

    Client-end PCs: 400+

    Network:
        Server 1 (Linux): 192.168.2.0/24 (for the application; Internet not working)
            GW: 192.168.2.1
            Clients: 192.168.2.1 - 254
        Server 2 (Linux): 192.168.3.0/24 (for Internet users)
            GW: 192.168.3.1
            Clients: 192.168.3.2 - 254

    Server 2 is connected to DSL broadband. Server 1 and Server 2 are both on the same physical network, i.e. the same switches. Current issue: I have to deploy a file and print server, but this server will be accessed by both subnets (192.168.2.x and 192.168.3.x) in one and the same workgroup, as both subnets are on the same switched network. Limitations: currently there are no hardware routers or firewalls; the task needs to be completed with Linux / Windows / AD. Tested / worked so far: I configured one PC with two NICs, with these IPs:

        NIC 1: 192.168.3.2, GW: 192.168.3.1, subnet: 255.255.255.0
        NIC 2: 192.168.2.2, GW: empty,       subnet: 255.255.255.0

    Kindly provide any solution: what should I do to get sharing enabled on both subnets? Thank you all.
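
    For the dual-NIC box to actually route between the two subnets, IPv4 forwarding has to be enabled on it; a sketch (it assumes the box at 192.168.2.2 / 192.168.3.2 acts as the router and hosts on each subnet use it as their gateway to the other):

        # Enable forwarding now, and persist it across reboots:
        sysctl -w net.ipv4.ip_forward=1
        echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf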

    Read the article

  • Cannot copy anything onto WD Elements 1TB External USB HDD

    - by Aashish Vaghela
    I have a Western Digital 1023 Elements 1TB external USB HDD. Recently, an unusual problem has started: I cannot copy any file of any size onto that 1 TB hard drive, even though it has more than 400 GB free (out of 931 GB actual size). I tried copying movies from a friend's laptop, which did not work. I also tried another desktop to copy some study-material e-books (in PDF), which also did not work. I get the same CRC error whenever I try to copy anything from a computer's hard drive onto this WD 1 TB drive. Vice versa works: I can copy any file from the USB HDD onto the local machine's HDD on any computer. It's like one-way traffic. This HDD is only 1 year old. What are my options? Any suggestions? Regards, Aashish.V

    Read the article

  • How to find malicious IPs?

    - by alfish
    Cacti shows irregular but pretty consistently high bandwidth to my server (40x the normal), so I guess the server is under some sort of DDoS attack. The incoming bandwidth has not paralyzed my server, but it is of course consuming bandwidth and affecting performance, so I am keen to figure out the possible culprit IPs, add them to my deny list, or otherwise counter them. When I run:

        netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

    I get a long list of IPs with up to 400 connections each. I checked the most frequently occurring IPs, but they come from my CDN. So I am wondering what the best way is to monitor the requests each IP makes, in order to pinpoint the malicious ones. I am using Ubuntu Server. Thanks
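
    A sketch for watching the per-IP counts over time and then blocking a confirmed offender (the IP below is a documentation placeholder):

        # Refresh the per-IP connection counts every 10 seconds:
        watch -n 10 "netstat -ntu | awk '{print \$5}' | cut -d: -f1 | sort | uniq -c | sort -n | tail"
        # Drop a confirmed attacker:
        iptables -I INPUT -s 203.0.113.7 -j DROP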

    Read the article

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    Hi, I have an nginx server with 5 backend servers. We serve around 400-500 requests/second. I have started getting a large number of "upstream timed out" errors (110: Connection timed out). The error string in error.log looks like:

        2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {IP}, server: {domain}, request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}", host: "{domain}", referrer: "{referrer}"

    Any suggestions on how to debug such errors? I am unable to find a munin plugin to keep a check on the number of upstream errors. Some days the number of errors is way too high, and some days it's a more decent 3-digit number. A munin graph would probably help us find a pattern or a correlation with something else. How can we bring the number of such errors down to zero?
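
    Lacking a ready-made munin plugin, a crude counter can be scraped from the log itself; a sketch (the log path is the usual default, not confirmed from the post):

        # Count upstream timeouts since the last rotation; cron this and graph it.
        grep -c 'upstream timed out' /var/log/nginx/error.log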

    Read the article

  • In BASH, are wildcard expansions guaranteed to be in order?

    - by ArtB
    Is the expansion of a wildcard in bash guaranteed to be in alphabetical order? I was forced to split a large file into 10 MB pieces so that they would be accepted by my Mercurial repository. So I was thinking I could use:

        split -b 10485760 Big.file BigFilePiece.

    and then, in place of:

        cat Big.file | bigFileProcessor

    I could do:

        cat BigFilePiece.* | bigFileProcessor

    However, I could not find anywhere that guaranteed that the expansion of the asterisk (a.k.a. wildcard, a.k.a. '*') would always be in alphabetical order, so that .aa came before .ab (as opposed to timestamp ordering or something like that). Also, are there any flaws in my plan? How great is the performance cost of cat-ing the file back together?
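
    One way to test the whole plan before trusting it (a sketch): compare a checksum of the original file against the reassembled stream.

        split -b 10485760 Big.file BigFilePiece.
        cat BigFilePiece.* | cksum
        cksum < Big.file    # the two checksum/size pairs should match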

    Read the article

  • disappearing emails

    - by Mike
    I have a few users (about 7 of 400) whose emails are disappearing intermittently in Outlook; the emails are, however, still in OWA, in the correct folders. After running the following shell command in Exchange and restarting Outlook, everything works fine again for about a week:

        [PS] C:\Windows\system32>Set-MailboxCalendarSettings [email protected] -AutomateProcessing:AutoUpdate

    Then, for some odd reason, AutomateProcessing changes from AutoUpdate back to AutoAccept and the problem starts again. They are all using Office 2007 with SP3, but I suspect the problem is in Exchange and not on the local machine. Any help will be appreciated. FrustratedMike

    Read the article

  • WebsitePanel 2 totally NOT working on Windows Server 2012 on Azure

    - by Carmine Giangregorio
    I'm having a lot of trouble installing WebsitePanel on an Azure virtual machine running Windows Server 2012. I followed the steps in http://www.websitepanel.net/documentation/deployment-guide/server-configuration/preparing-windows-server-2008-r2-for-websitepanel-installation/ and installed everything I needed. Then I installed the WebsitePanel Standalone Server package with the installer. I opened the endpoint for port 9002 on Windows Azure and pointed my browser to myhostname.cloudapp.net (note: in Azure you don't have a static IP address; instead you have a hostname like [hostname].cloudapp.net). Loading myhostname.cloudapp.net:9002 fails, and any browser shows something like "Unable to load page". Notice: if I try to load the WebsitePanel portal directly on the server, I get an HTTP 400 Bad Request error. How come? IIS works perfectly on the server; in fact the default website runs without problems on port 80.

    Read the article

  • Sudden slow read & write speed on all IO

    - by user23392
    I have a custom-built rig with 2 storage drives. For the OS: a Western Digital 1.0 TB hard drive (64 MB cache). For other stuff: a Corsair Performance 3 128 GB SSD (expected read speed: 400 MB/s). The system was incredibly fast for a couple of months; then one day I was playing a game and it started to get buggy (some sounds and objects disappearing). I stopped the game, and the system seemed so unstable that I had to shut it down. The next morning I couldn't start it up; it was saying something about a corrupt device. I formatted both disks and installed a fresh copy of Windows. All I can say is that since that day the system has never been like before: it takes 10 minutes to boot up (the icons and desktop slowly appear), but once it's done the slowness isn't as noticeable. I benchmarked both the HDD and the SSD (read speed / write speed; the benchmark screenshots are not reproduced here). Anyone know what the issue could be?

    Read the article

  • How to disable ^C from being echoed on Linux on Ctrl-C

    - by pts
    When I press Ctrl-C in any pseudoterminal (xterm, gnome-terminal, rxvt, the text console, and SSH) in Ubuntu Karmic Koala, the string ^C gets echoed to the terminal. This didn't happen in Ubuntu Jaunty Jackalope. I'd like to get rid of the extra ^C. Example:

        $ cat
        foo
        foo
        ^C
        $ _

    I got the above by typing C, A, T, Enter, F, O, O, Enter, Ctrl-C. I want to get rid of the ^C and get this for the same keypresses:

        $ cat
        foo
        foo
        $ _

    I tried setting stty -echoctl, which created a single-character HT (or a box with Unicode 0003 in it) instead of the ^C. I want to see absolutely nothing when I press Ctrl-C. I'm using: Linux linux 2.6.31-20-generic-pae #57-Ubuntu SMP Mon Feb 8 10:23:59 UTC 2010 i686 GNU/Linux

    Read the article

  • Move /var directories to to /mnt on an EC2 instance

    - by Geoff Lanotte
    I am trying to work out a standard configuration for a set of EC2 instances running Ubuntu 12.04. These servers are going to be primarily web servers for a Ruby on Rails application. When you configure a new large instance, you are given a primary volume of 8 GB and ephemeral storage of 400 GB mounted at /mnt. It seems logical to me to move some directories with growth potential off to /mnt; I was specifically thinking of /var/www and /var/log. My question is twofold: Is this a good idea, or are there pitfalls I cannot see? If it is a good idea, how should I go about configuring it? I do have the ability to configure new instances and take down our old ones. My concern is long-term: doing this in such a way that it prevents downtime. I am a developer with some experience in devops, but mounting drives is something I have not faced before, so explicit directions would be greatly appreciated.
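
    A hedged sketch of one way to do the move with a bind mount (the service name and paths are assumptions; the data then lives on the instance's ephemeral storage):

        # Stop the web server, copy the tree, bind-mount it back into place:
        service apache2 stop
        mkdir -p /mnt/var/www
        rsync -a /var/www/ /mnt/var/www/
        mv /var/www /var/www.old && mkdir /var/www
        mount --bind /mnt/var/www /var/www
        # Persist the bind mount across reboots:
        echo '/mnt/var/www /var/www none bind 0 0' >> /etc/fstab
        service apache2 start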

    Read the article

  • Install multiport module on iptables

    - by tarteauxfraises
    I'm trying to install fail2ban on Cubidebian, a Debian port for the Cubieboard (a Raspberry Pi-like board). The following rule fails due to the "-m multiport --dports ssh" options (it works when I run the command manually without the multiport options):

        $ iptables -I INPUT -p tcp -m multiport --dports ssh -j fail2ban-ssh
        iptables: No chain/target/match by that name.

    When I cat /proc/net/ip_tables_matches, I see that the multiport module is not loaded:

        $ cat /proc/net/ip_tables_matches
        u32 time string statistic state owner pkttype mac limit helper
        connmark mark ah icmp socket socket quota2 policy length iprange
        ttl hashlimit ecn udplite udp tcp

    What can I do to compile or enable the multiport module? Thanks in advance for your help.
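
    If the match extension was built as a loadable module rather than compiled in, loading it by hand is worth a try; a sketch (the module name xt_multiport is from mainline kernels and may not exist in this board's kernel build):

        # Load the multiport match module, then re-check what iptables sees:
        modprobe xt_multiport
        grep multiport /proc/net/ip_tables_matches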

    Read the article

  • Mail being sent as root on Ubuntu 14.04

    - by Benjamin Allison
    I'm really struggling with this. I'm trying to set up this server to send mail using Gmail's SMTP. Google keeps bouncing the messages, saying that authentication is required:

        smtp.gmail.com[74.125.196.109]:25: 530-5.5.1 Authentication Required. Learn more at
        smtp.gmail.com[74.125.196.109]:25: 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257

    But it seems my server is trying to send mail as [email protected]. I'm baffled. Here's what I've done so far. Updated main.cf:

        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_use_tls = yes

    Created /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:password

    Then did the following:

        sudo chmod 400 /etc/postfix/sasl_passwd
        sudo postmap /etc/postfix/sasl_passwd
        cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    I can't for the life of me get a mail message to send, or change the default mail user from [email protected] to [email protected] (FWIW, I'm using Google Apps; that's why it's not a .gmail address).
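
    Since the bounce shows port 25 rather than the configured 587, it may be worth confirming which settings the running Postfix actually sees; a sketch (the log path is the Ubuntu default, an assumption):

        # Show the non-default settings, then watch a delivery attempt live:
        postconf -n | grep -E 'relayhost|smtp_sasl|smtp_tls'
        tail -f /var/log/mail.log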

    Read the article

  • innodb memory usage mysql

    - by Tiddo
    I have a small VPS with only 256 MB of RAM, with a maximum burst up to 512 MB. When I configure my VPS without InnoDB, it uses only 130 MB of RAM, so that is no problem for me. But when I turn on InnoDB, the memory usage grows to about 300-400 MB. Is it possible to run InnoDB such that I won't exceed 256 MB? Preferably I don't want to use more than 100 MB for InnoDB. I have already come across some sites which said I could limit the memory usage, but if I limit it to only 100 MB, will the DB still run well enough (compared to, for example, the MyISAM storage engine)? If 100 MB is too little memory for InnoDB, can you recommend any other storage engine that supports transactions?
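
    For illustration only, a my.cnf sketch that caps the biggest InnoDB memory consumers (the values are guesses for a 256 MB VPS, not tuning advice):

        [mysqld]
        innodb_buffer_pool_size         = 64M
        innodb_log_buffer_size          = 4M
        innodb_additional_mem_pool_size = 2M
        max_connections                 = 30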

    Read the article

  • Under *nix, how can I find a string within a file within a directory?

    - by roberto
    Hi all. I'm using Ubuntu Linux, and I use bash from a terminal emulator every day for many tasks. I would like to know how to find a string or a substring within a file that is somewhere inside a particular directory. If I knew which file contained my target substring, I would just cat the file and pipe it through grep, thus:

        cat file | grep mysubstring

    But in this case, the pesky substring could be anywhere within a known directory. How do I hunt down my substring?
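
    For reference, grep can walk the directory itself, so the file never needs to be known in advance; a minimal sketch:

        # -r recurses into the directory, -n prints line numbers,
        # -l would print only the matching file names:
        grep -rn 'mysubstring' /path/to/directory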

    Read the article

  • What is the best way to configure the number of workers in Apache?

    - by rbm
    My site receives a lot of traffic for 2 hours during the day (2,000 hits per minute). The rest of the day it receives less traffic (500 hits per minute). I have been experimenting with the MaxClients and MaxSpareServers values, but I still get some downtime during peak hours. How can I calculate the best values for my configuration based on the amount of RAM that I have? Each process is around 36-40 MB of memory:

                     total       used       free     shared    buffers     cached
        Mem:          3096        793       2302          0          0          0
        -/+ buffers/cache:        793       2302
        Swap:            0          0          0

    Values that I am using now:

        <IfModule prefork.c>
            StartServers        10
            MinSpareServers     22
            MaxSpareServers     60
            ServerLimit         90
            MaxClients          90
            MaxRequestsPerChild 400
        </IfModule>
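
    A rough capacity check under the usual prefork rule of thumb (memory available to Apache divided by per-process size; the numbers come from the free output above, so treat this as an estimate, not a definitive setting):

        # 2302 MB free per the buffers/cache line, ~40 MB per Apache process:
        echo $((2302 / 40))    # ~57, which suggests MaxClients nearer 55-60 than 90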

    Read the article
