Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • Suspected Router problems [closed]

    - by jordon_user
    I am new to this forum and appreciate any help you can offer. I have been troubleshooting a wired Internet connection problem. Three desktops run through the router with no problems, but two other machines, a laptop and a desktop, experience the issue. If I connect either of them directly to the modem, the internet works fine. Through the router, the computer connects for a period of time, but after waking the machine from sleep or restarting, the yellow exclamation mark returns on the network icon. The only way I can get the connection back up is from the command prompt: ipconfig /release followed by ipconfig /renew. Running these commands constantly is annoying, and I was hoping for a permanent fix. Audio is also not working; I know that is likely a driver error, but it is an old computer we put together, so if there is an easy fix I would be glad to hear it. If you need any more info, just let me know. Thanks!

    Read the article

  • Router failover not detecting outside interface link lost

    - by Matt
    Suppose I have two routers configured as master/slave. They look something like this (addresses are not the real ones):

        123.123.123.10 <===> [eth0] Router 1 (10.1.1.2) [eth1] ===> +----------+
                                                                    | 10.1.1.1 | ===> LAN
        172.123.123.10 <===> [eth0] Router 2 (10.1.1.3) [eth1] ===> +----------+

    The address 10.1.1.1 is the default route for the network (10.1.1.0). What is slightly different in this setup compared to others I've seen is that I don't have an external virtual IP. Also, the 10.1.1.x addresses are public IPs in real life (not the private ones shown here). This is more of a router setup than a firewall setup, so I'm not using NAT. The issue I'm having is that I can't see any way to configure UCARP or VRRP to monitor both eth0 and eth1 and fail over to the backup router should either of them go down. What I'm seeing is that if Router 1 is the master and I unplug eth0 on Router 1, it doesn't fail over to Router 2; however, it does fail over if I instead unplug eth1 of Router 1. In VRRP I see there is a cluster group, but it seems that for this to work you need virtual IPs or VRRP instances assigned to it rather than actual interfaces. I hope my explanation is clear. How do I get around this?
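
    If a VRRP implementation such as keepalived is an option, one way to make a single instance fail over when either physical link drops is to track both interfaces. A minimal sketch, assuming keepalived, the interface roles from the diagram, and that the shared 10.1.1.1 gateway address is allowed to act as the virtual address (path, priorities and router ID are illustrative, not taken from the question):

        # /etc/keepalived/keepalived.conf on Router 1 (assumed layout)
        vrrp_instance LAN_SIDE {
            state MASTER
            interface eth1             # advertisements run on the LAN side
            virtual_router_id 51
            priority 150               # the backup router would use a lower value
            advert_int 1
            track_interface {
                eth0                   # fail over if the WAN link goes down...
                eth1                   # ...or if the LAN link goes down
            }
            virtual_ipaddress {
                10.1.1.1/24 dev eth1   # the shared default-gateway address
            }
        }

    With interface tracking, the per-router WAN addresses can stay fixed and only the LAN-side gateway address moves between machines, which stays close to the "no external virtual IP" layout described above.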

    Read the article

  • Windows XP dual screen problems, user account related

    - by Chris
    I have had this issue with a few laptops now and it looks like it is some sort of user account problem. Specifics of the system:

        - Dell laptop
        - Windows XP Pro SP3
        - Non-domain member computer
        - DLP projector connected to the laptop via VGA

    I use this setup almost daily to do presentations, always in mirrored display mode, where I can see on the laptop monitor the same thing that is displayed on the projector. Today, when I boot up, I get the mirrored display at the login screen, but after I log in it switches to an extended desktop (like two desktops side by side). Fn+F8 just cycles through all the normal settings except the mirrored display. I created a new user account on the computer and it performs normally; the mirrored display works as expected. I have run into this about 4 times now and it can always be solved by creating a new user account on the computer. I would like to either:

        1. Find a way to reset the customized display settings for a specific user account, which would hopefully make this go away, or
        2. Find the specific setting that causes this, so that I can easily fix it when the problem comes up.

    Creating new user accounts is kind of a pain, and an easy fix must be out there somewhere.

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and setting the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit, and all posts say to decrease the heap size and increase the stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information from my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
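
    One difference that stands out between the two ulimit listings above is max user processes: 266 on the Mac versus unlimited on Ubuntu, and on OS X that limit can also cap how many threads a user may create. A minimal sketch of checking and raising it for the shell that launches the tests (the value 2048 is an arbitrary example, not a recommendation from the original post):

        # check the current per-user process limit
        ulimit -u

        # raise it for this shell session before running the tests
        ulimit -u 2048

        # on some OS X versions the hard limit must be raised first (assumption):
        # sudo sysctl -w kern.maxprocperuid=2048

    If the error disappears with the higher limit, that points at this limit rather than the heap or stack settings already tried.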

    Read the article

  • Regarding traffic shaping on juniper SRX550

    - by peilin
    We have implemented a Juniper SRX550 in our company. Now we have an issue with how to restrict internal users' download speed from the internet. As an example, I want to restrict the end user with IP 192.168.1.20/32 to a download speed of up to 1M via my external port ge-0/0/6.0. Below are my settings:

        [edit firewall policer p1M]
        root@SRX550# show
        if-exceeding {
            bandwidth-limit 1m;
            burst-size-limit 15k;
        }
        then discard;

        [edit firewall family inet]
        root@SRX550# show filter limit-user
        term 10 {
            from {
                destination-address {
                    192.168.1.20/32;
                }
            }
            then policer p1M;
        }
        term else {
            then accept;
        }

        [edit interfaces ge-0/0/6]
        root@SRX550# show
        per-unit-scheduler;
        unit 0 {
            family inet {
                filter {
                    input limit-user;
                }
                address <hidden here>;
            }
        }

    With these settings the end user's download speed should not exceed 1M (125 KB/s in Windows), but in practice this user's download speed still reaches 400 KB/s over HTTP/HTTPS. Please advise. Thanks.

    Read the article

  • Web Server Routing Based On Location

    - by Eric
    I have a website that has users from both Hong Kong and Australia. Unfortunately, since the server is located in Australia, users from Hong Kong suffer latency problems: traffic has to go through the US before travelling back to Australia. So I've set up a server in Hong Kong as well, and users using the .hk TLD are redirected to the Hong Kong web server. It shares the same database server with the Australian server, but thanks to aggressive SQL query caching the performance impact of query latency is negligible. However, users accustomed to the Hong Kong website who have since travelled to Australia suffer additional latency, because they go to the .hk site, which sends them to the HK server even when they're in Australia. The website is targeted at international students from Hong Kong, so this is a significant issue for me. Instead of redirecting users to a web server based on the TLD, how do I redirect users based on their location? Currently I am using nginx, Postgres and Django. Assuming I know how to estimate a user's location from their IP address, what is my next step? At what level would I work, and what topics should I read up on?
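
    For the nginx layer specifically, a common approach is the GeoIP module, which maps each client address to a country code that can then drive a redirect. A minimal sketch, assuming nginx was built with ngx_http_geoip_module, a MaxMind country database at the path shown, and placeholder host names for the two sites:

        http {
            # country lookup for every client IP (database path is an assumption)
            geoip_country /usr/share/GeoIP/GeoIP.dat;

            # pick a target host per country; default to the Australian server
            map $geoip_country_code $preferred_host {
                default example.com.au;
                HK      example.com.hk;
            }

            server {
                listen 80;
                server_name example.com.au example.com.hk;

                # send the visitor to the closer server when they land on the "wrong" one
                if ($preferred_host != $host) {
                    return 302 $scheme://$preferred_host$request_uri;
                }

                # ... normal proxying to the Django application goes here ...
            }
        }

    Since the Django application is shared between the two servers, the same decision could instead be made in middleware with a GeoIP library, which keeps the logic out of the web server configuration.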

    Read the article

  • Why can't I see installed AIR applications in /Applications on Mac OS Lion?

    - by Damir Zekic
    Yesterday I did a clean install of Mac OS Lion and installed a bunch of apps, including the HipChat desktop application (an Adobe AIR app). I used it for some time and eventually closed it. Today I wanted to start it again but couldn't find it in Launchpad. I looked in the /Applications folder, but I couldn't find it there either (I still don't have ~/Applications). I went back to the HipChat website (from where I installed it) and a Flash plugin allowed me to run the app immediately. It shows up in my Dock, and if I "keep it" there I can launch it again (it still doesn't appear in my /Applications dir). However, every time I start it, it asks for confirmation about launching an app that has been downloaded from the Internet, which is annoying. I also see the app in the Adobe AIR uninstaller. I then went to my old Snow Leopard installation, found HipChat in its /Applications folder, copied it to the Lion disk, and it works. Now I have two HipChat entries in the Adobe AIR uninstaller. I guess I solved the issue at hand, but I still don't understand where the original app is located and how I can access it and move it to /Applications. I couldn't find it anywhere with either Finder search or the find command in Terminal (I used find / -name "HipChat*" -print). So, how does Adobe AIR store installed applications on OS X Lion, and is there any way to get them to show up in Launchpad?

    Read the article

  • Hyper-V 2008 R2 synthetic networking stops working with linux 2.6.32.15

    - by luxifer
    Hi there, I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers, so I enabled them. At first they work just fine, but after a few minutes they simply fail: no packets go in or out anymore until I reboot the VM, and sometimes even that won't work, so the VM just stays in "Stopping" seemingly forever. Emulated networking works fine but is slow and uses more CPU; that way my firewall routes more slowly than when running under VirtualBox on an Atom N270. My server has an E6750 and the VM is limited to 25%, but that should still outperform that Atom CPU, especially since it never gets anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found this but I can't quite follow the author on the part where he solved the issue - especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?

    Read the article

  • Huge host CPU usage in idle vmware guest. Ubuntu 10.04 host, Vista SP2 guest

    - by themesandmodules
    I'm experiencing huge host CPU usage with an idle VMware guest.

    Host: Ubuntu 10.04 32-bit, kernel 2.6.32-24-generic-pae (very new install, i.e. 24 hours ago). Hardware is a Dell XPS M1530 laptop, 4 GB RAM, Intel Core 2 Duo T9300 2.50 GHz. The "VT" virtualization setting (or something like it) is enabled in my BIOS.

    Guest: Completely fresh install of Windows Vista, upgraded to the latest SP2 with all Windows updates installed. 1024-1512 MB RAM allocated. Absolutely no other software installed on it apart from VMware Tools.

    Situation: When the guest is doing absolutely nothing, a Sysinternals process viewer on the guest shows the System Idle Process between 70 and 99%, usually around 95%, with no actual process doing anything. On the host, top shows CPU usage of 20-80%, usually around 30%.

    What I have tried:

        - Single and dual processors available to the guest - no change.
        - Turning off all peripherals to the guest (no network, drives, USB etc.) - no change.
        - Turning off 3D acceleration for the guest - perhaps a small improvement, or no change.
        - Upping the RAM allocated to the guest from 1024 MB to 1512 MB - no change.
        - Yelling at VMware - no change.

    I have experienced a similar issue in the past, which was solved by setting the guest to have 1 CPU. This time that hasn't worked.

    Read the article

  • Ways to go about optimizing website performance WordPress, Amazon EC2 Apache and RDS MySQL

    - by fuzzybee
    I have 6 WordPress websites running on a single EC2 instance. All of the websites connect to databases on the same single RDS instance. Earlier today, traffic to the largest website peaked and the RDS instance became the bottleneck - CPU utilization was at 100% for over an hour. It affected all of my websites, as they all took forever to load. In order to prevent such an issue from happening again, which of the following matters most, so that I invest time and effort in it first? (I will work on all of them later; I just need to prioritise now.)

        - Improve caching for all websites
        - Fine-tune the database server
        - Fine-tune my Apache server

    What will be the effect on user experience for my websites? Some quick searches suggest I should limit the number of concurrent connections to my web server, but wouldn't that prevent users from accessing my websites?

    More background:

        - My largest website has 140k visits and 660k page views a month; the other 5 websites add up to much less than that.
        - I'm using a large EC2 instance as the web server.
        - I'm using a medium RDS instance as the database server.

    What I've already done: use the W3 Total Cache plugin for caching on most of the websites, especially the largest one (there is barely anything else I could do in terms of caching for the largest website). Am I using my resources wastefully, or are there simply not enough resources for my websites - or rather, how do I answer that question myself?
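
    For the database-tuning option, a cheap first diagnostic is to find out which queries kept the RDS instance at 100%. A minimal sketch of enabling the MySQL slow query log; on RDS these parameters are set through a DB parameter group rather than my.cnf, and the 2-second threshold is an arbitrary example:

        # values to set in the RDS DB parameter group (examples, not recommendations)
        slow_query_log  = 1
        long_query_time = 2       # seconds; log anything slower than this
        log_output      = FILE    # read the log via the RDS console or API

    Once the slowest WordPress queries are known, it becomes much easier to judge whether better caching, database tuning, or a larger RDS instance is the right first investment.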

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I have a lightweight Apache with mod_proxy serving static content, which from time to time hits a "504 proxy error" when proxying to apache/mod_php. The error log says:

        error reading status line from remote server 127.0.0.1:8080
        Error reading from remote server returned by /

    This is what the Apache documentation says about it:

        proxy-initial-not-pooled
        If this variable is set, no pooled connection will be reused if the client
        connection is an initial connection. This avoids the "proxy: error reading
        status line from remote server" error message caused by the race condition
        that the backend server closed the pooled connection after the connection
        check by the proxy and before data sent by the proxy reached the backend.
        It has to be kept in mind that setting this variable downgrades performance,
        especially with HTTP/1.0 clients.

    I am really concerned about this downgrade in performance, so I started looking at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to waste days studying it just to find out it has the same race condition issue. Is nginx affected by this connection pool race condition? Thanks
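
    For reference, the workaround quoted above is just a per-request environment variable on the proxy side, so it can be tried without switching stacks. A minimal sketch, assuming the frontend's proxy rules live in a VirtualHost like this hypothetical one:

        <VirtualHost *:80>
            ServerName www.example.com          # placeholder name

            # Tell mod_proxy not to reuse a pooled backend connection for a
            # client's initial request (trades some performance for reliability).
            SetEnv proxy-initial-not-pooled 1

            ProxyPass        / http://127.0.0.1:8080/
            ProxyPassReverse / http://127.0.0.1:8080/
        </VirtualHost>

    As for nginx: in the versions current at the time it did not pool upstream connections by default (upstream keepalive is an explicit, later feature), so this particular pooled-connection race is generally associated with mod_proxy rather than with nginx.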

    Read the article

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice:

        - Because my hobby is the same as my job, I want to save notes of the things I learn while working.
        - Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life.
        - Only my personal account has admin rights, but work sometimes requires me to install programs.
        - My employer has no preference regarding my choice, so that is a non-issue.

    My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up opinions with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller became degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                 :
        RAID Level           : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                 : 2.727 TB
        Mirror Data          : 2.727 TB
        State                : Optimal
        Strip Size           : 256 KB
        Number Of Drives per span: 2
        Span Depth           : 3
        Default Cache Policy : WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy : WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy    : Disk's Default
        Encryption Type      : None
        Is VD Cached         : No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured
        for write-back mode. Because the battery is not currently usable, these VDs
        will actually run in write-through mode until the battery is fully charged
        or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d)

    So could a battery issue have caused the problem? This is the information I get about the battery:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status              : None
          Voltage                      : OK
          Temperature                  : OK
          Learn Cycle Requested        : No
          Learn Cycle Active           : No
          Learn Cycle Status           : OK
          Learn Cycle Timeout          : No
          I2c Errors Detected          : No
          Battery Pack Missing         : No
          Battery Replacement required : No
          Remaining Capacity Low       : No
          Periodic Learn Required      : No
          Transparent Learn            : No
          No space to cache offload    : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I have disabled the alarms, but I do get them when they are enabled, and I don't know how to find the root of the problem.
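
    The cache policy lines above already show the effect of the warning: the default policy is WriteBack, but the current policy has dropped to WriteThrough ("No Write Cache if Bad BBU"). The output looks like MegaCLI output, so here are a few commands commonly used to dig further into the BBU state (binary name and adapter number may differ on your system):

        # full BBU status, including charge level and relearn state
        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

        # capacity/design info for the battery pack
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -aALL

        # start a manual learn cycle so the controller re-evaluates the battery
        MegaCli64 -AdpBbuCmd -BbuLearn -a0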

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a FreeRADIUS setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using daloRADIUS with MySQL and FreeRADIUS 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all of the accounting information is present. So he started poking around at this link:

        http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA

    He was looking specifically at the following section ("Since our users may be connected for more than 24 hours at a time we keep this in here, it will reset some attributes daily so that the accounting packets work correctly"):

        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references FreeRADIUS 1 and I can't find this in the radiusd.conf file for FreeRADIUS 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup FreeRADIUS server that is also receiving the accounting packets. Although the servers are replicating, it's only a master/slave configuration: if the slave receives accounting packets, it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or FreeRADIUS 2 for clients connected 24/7?
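
    For context on the quoted snippet: those are instances of rlm_always, a module that simply returns the named result code. In FreeRADIUS 2 the default configuration usually defines them in a separate file under the modules directory rather than inline in radiusd.conf; a sketch of what one such instance looks like (the path is the typical default and is an assumption about your install):

        # /etc/raddb/modules/always  (on Debian/Ubuntu often /etc/freeradius/modules/always)
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    Since rlm_always only returns a fixed code, its location is unlikely on its own to explain missing accounting rows; the duplicate accounting destination described in the EDIT looks like the more promising lead.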

    Read the article

  • Enabling mod_rewrite on Amazon Linux

    - by L. De Leo
    I'm trying to enable mod_rewrite on an Amazon Linux instance. My Directory directives look like this:

        <Directory />
            Order deny,allow
            Allow from all
            Options None
            AllowOverride None
        </Directory>

        <Directory "/var/www/vhosts">
            Order allow,deny
            Allow from all
            Options None
            AllowOverride All
        </Directory>

    And then further down in httpd.conf I have the LoadModule directive:

        ... other modules...
        #LoadModule substitute_module modules/mod_substitute.so
        LoadModule rewrite_module modules/mod_rewrite.so
        #LoadModule proxy_module modules/mod_proxy.so
        ... other modules...

    I have commented out all the Apache modules not needed by WordPress. Still, when I restart httpd and then check the loaded modules with /usr/sbin/httpd -l, I get only:

        [root@foobar]# /usr/sbin/httpd -l
        Compiled in modules:
          core.c
          prefork.c
          http_core.c
          mod_so.c

    Inside the virtual host containing the WordPress site I have an .htaccess containing:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The .htaccess is owned by apache, which is the user Apache runs under. The apachectl -t command returns Syntax OK. What am I doing wrong? What should I check?
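
    One note on the check used above: /usr/sbin/httpd -l lists only statically compiled modules, so mod_rewrite loaded as a shared object via LoadModule will never show up there even when it is working. A quick way to list dynamically loaded modules too, assuming the stock Apache 2.2 layout on Amazon Linux:

        # static + shared (DSO) modules loaded by the current configuration
        apachectl -M            # or: /usr/sbin/httpd -M

        # filter for the rewrite module specifically
        apachectl -M 2>&1 | grep rewrite

    If rewrite_module appears in that list, the module itself is fine and attention can shift to AllowOverride and the .htaccess handling for the vhost.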

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
        </Location>

    This works great: any user in our Active Directory can access our Subversion repository. Now I want to limit this to only people in the Active Directory group Development:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
            Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752]
        vauth_ldap authenticate: user dweintraub authentication failed;
        URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter]

    And I get this in my access_log:

        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or at least, how can I confirm that, just to make sure it's not the issue? I have the Sysinternals Suite ADExplorer; it's where I'm getting all of my info.)
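
    Not an authoritative diagnosis, but one detail worth checking in the second block: mod_authnz_ldap expects the group as a single distinguished name, with comma-separated components and the whole value quoted when it contains spaces. A sketch of that form, using the group named in the question (the exact DN is an assumption and should be confirmed, e.g. in ADExplorer):

        # hypothetical corrected directive -- verify the group's real DN first
        Require ldap-group "CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com"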

    Read the article

  • Midnight Commander Woes: Output while panels are active, and tab completion.

    - by Eddie Parker
    I'm trying out Midnight Commander (I loved Norton Commander back in the day!) and I'm finding two things hard to work out. I'm curious whether there are ways around them.

    1) Output while the panels are active. If the panels are active and I issue a command that has a lot of output, it appears to be lost forever. That is, if the panels are visible and I cat something (e.g. cat /proc/cpuinfo), that output is gone forever once the panels get redrawn. Is there any way to see it? I've tried Ctrl-O, but it appears to just give me a fresh sub-shell and wipes the previous output away. Pausing after every invocation is a bit irritating, so I'd rather not use that option.

    2) Tab completion for commands. When mc is running, it consumes the Tab character for switching panels. Is there any way to get around this so I can still complete paths and whatnot on the command line?

    I'm running Cygwin, if that matters at all.

    Read the article

  • Brick-level backup and restore with exchange 2007

    - by V. Romanov
    In the company I'm working for, we use Exchange 2007 and back it up using NetBackup. The backup is a daily complete backup of the information store, and the direct corollary of this is that restores are hell. We need to restore the entire information store (over 80 GB) and somehow merge it back with the original store, which causes problems. Alternatively, we tried using Quest software to emulate Exchange and restore mail from the emulation; however, this proved unreliable. The main problem with this entire situation is that we have to restore the whole information store and walk it through the restore process manually, and it's quite absurd to be forced to spend more than a day restoring even one erased email. (We have erased-mail retention, but sometimes we need to restore older mail.) In comparison, back in the days of Exchange 2003 and Backup Exec 12, we had complete brick-level backup and restore at the push of a button. I've spoken to one of our chief sysadmins, who claimed that the official response from Microsoft to this issue was "sorry guys, no brick-level backup in Exchange 2007", which sounds ridiculous to me. Can someone shed some light on the situation? How do you back up your Exchange 2007 stores? Can you restore a single email quickly? A mailbox, perhaps?

    Read the article

  • Unknown user in terminal

    - by Giles B
    I'm having a strange problem with the Terminal in OS X. When I open the Terminal, the hostname at the command prompt is:

        unknown-04-0c-ce-e3-0d-c2:~

    I can't pinpoint when this first started or why, unfortunately. I usually use iTerm for web development purposes, but this also occurs in the normal OS X Terminal app. Any ideas/help would be really appreciated. Thanks.

    Update: Thanks to @fayadfami and @aliasgar for the correct answers and for steering me in the right direction. This forum post also helped: http://forums.macrumors.com/showthread.php?t=152407 The relevant extract:

        Having run into the exact same issue myself, and having come across this thread
        while attempting to figure it out, I thought I'd post the answer. OS X is
        initially setting your hostname to what's set for your Computer Name in Sharing;
        however, if you're set up for DHCP and you match a current lease on your DHCP
        server (i.e., match the IP address of another recent user), OS X will then set
        your hostname to whatever the DHCP server currently has for that lease.

        This freaked me out incredibly at first, as I had just reformatted (having just
        purchased my first Mac and wanting to see how the installer worked) and knew I
        had not yet changed the Computer Name in Sharing -- yet my system hostname at
        the Terminal prompt was indeed changed to what I had previously set, pre-format.
        I grepped around, not finding the name anywhere save log entries; I thought
        either the format didn't actually properly wipe everything, or I was losing my
        mind. Finally I logged into my router (it's a Linksys WRT54GS running OpenWRT)
        and found the hostname in the current leases file. I then manually set my Mac's
        IP to something different, and voila! -- the hostname was back to what I
        expected. I hope this helps save someone from the same paranoia I went through.
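
    For anyone hitting the same symptom, the hostname shown in the prompt can also be pinned explicitly so a stale DHCP lease cannot override it; a minimal sketch using the stock OS X tools (the name is a placeholder):

        # show the current values
        scutil --get HostName
        scutil --get LocalHostName
        scutil --get ComputerName

        # pin the hostname used by the shell prompt (placeholder name)
        sudo scutil --set HostName mymac.local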

    Read the article

  • Connection reset by peer: mod_fcgid: error reading data from FastCGI server Issues

    - by user145857
    Help is greatly needed for our server. We are experiencing random "Connection reset by peer: mod_fcgid: error reading data from FastCGI server" errors, which cause a 500 internal server error. If the page is then reloaded, it loads normally, as it should. We are running the worker MPM with mod_fcgid to handle PHP. We had the APC cache enabled but disabled it recently to see if that would fix the problem; the random mod_fcgid errors are still continuing, and no other opcode cache is active now. Our settings are below:

        <IfModule worker.c>
            MinSpareThreads      25
            MaxSpareThreads      150
            ThreadsPerChild      25
            ThreadLimit          100
            ServerLimit          700
            MaxClients           700
            MaxRequestsPerChild  0
        </IfModule>

        <IfModule mod_fcgid.c>
            FcgidMaxRequestLen          1073741824
            FcgidMaxRequestsPerProcess  2000
            FcgidMaxProcessesPerClass   100
            FcgidMinProcessesPerClass   0
            FcgidConnectTimeout         300
            FcgidIOTimeout              900
            FcgidFixPathinfo            1
            FcgidIdleTimeout            300
            FcgidIdleScanInterval       120
            FcgidBusyTimeout            300
            FcgidBusyScanInterval       120
            FcgidErrorScanInterval      12
            FcgidZombieScanInterval     12
            FcgidProcessLifeTime        3600
        </IfModule>

    The server is a 64-core, 2.1 GHz machine with 94 GB of RAM, so it has some power. Some of the fcgid timeout settings are higher than usual because we run large reports which take up to 15 minutes. Any help is greatly appreciated! Just to clarify: the random fcgid errors occur when a user clicks a page on our site and the 500 error page loads instantly. This is random and occurs less than 1% of the time, but it is still an issue.
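
    Not a diagnosis of this particular setup, but one frequently reported cause of intermittent "error reading data from FastCGI server" with php-cgi is a mismatch between FcgidMaxRequestsPerProcess and PHP's own PHP_FCGI_MAX_REQUESTS (default 500): if php-cgi exits after fewer requests than mod_fcgid expects, the request in flight is reset. A sketch of a wrapper script that keeps the two in step; the wrapper path and the FcgidWrapper line are assumptions about the layout:

        #!/bin/sh
        # /usr/local/bin/php-fcgi-wrapper  (hypothetical path)
        # Let php-cgi serve at least as many requests as mod_fcgid will send it,
        # matching FcgidMaxRequestsPerProcess 2000 from the configuration above.
        export PHP_FCGI_MAX_REQUESTS=2000
        exec /usr/bin/php-cgi

    and in the Apache configuration, something like:

        FcgidWrapper /usr/local/bin/php-fcgi-wrapper .php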

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly, without any obvious resource issues. Examples of the slow performance:

        - Bringing up the password prompt over ssh takes a few seconds when it was previously instantaneous.
        - Unzipping a zip file that previously took a few seconds now takes around 30 seconds.
        - Compiling VMware Tools has slowed down by a similar factor.

    Neither the VMware console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom

    Edit: As per your questions, I've looked at some of the performance indicators on both the VM host and the VM guest. Firstly, I tried reserving the full amount of memory (3 GB) for this VM; no other machine on this server has a memory reservation.

        - The swap-in and swap-out rates for the VM host and guest are now both zero.
        - Balloon memory on the guest is zero and on the host is 3.5 GB (total memory on the host is 12 GB).
        - The swap rate for the guest is also zero. Swap used by the host is 200 MB on average.
        - Compression and decompression rates for the host and guest are zero.
        - Command aborts for the host are zero.
        - Read latency is very low: maximum 10 ms, average 0.8 ms.
        - Write latency is higher: a few spikes to 170 ms, but mostly around 25 ms - is this bad?
        - Queue command latency is zero.
        - Physical disk read latency averages 5 ms but is often 10 ms.
        - Physical disk write latency averages 15 ms but is often 20 ms.

    I hope this helps - let me know if you need any more information.

    Read the article

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops. How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/VirtualBox VMs on the developers' systems; Time Machine and virtual machines typically don't mix well unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: The virtual machine backups are no longer important; they can be excluded from the process and planning.

    Read the article

  • Internet doesn't work when enable local network

    - by rakesh yadav
    We have the following network setup:

        A) Router, IP 192.168.51.49
        B) Windows Server 2008 R2 with dual NICs:
           LAN A) WAN interface (192.168.51.50), used for internet
           LAN B) LAN interface (192.168.30.228), used for local connectivity

    When I keep both LAN connections enabled, my internet doesn't work, but if I disable the local LAN, the internet works fine. How can I resolve this issue? Do I need to set up routing on my server? Please find the route print result below:

        C:\Users\Administrator>route print
        ===========================================================================
        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0    192.168.51.49   192.168.51.50     276
                  0.0.0.0          0.0.0.0   192.168.30.227  192.168.30.228     266
           192.168.30.224  255.255.255.240          On-link   192.168.30.228     266
           192.168.30.228  255.255.255.255          On-link   192.168.30.228     266
           192.168.30.239  255.255.255.255          On-link   192.168.30.228     266
            192.168.51.48  255.255.255.240          On-link    192.168.51.50     276
            192.168.51.50  255.255.255.255          On-link    192.168.51.50     276
            192.168.51.63  255.255.255.255          On-link    192.168.51.50     276
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
             202.56.230.5  255.255.255.255    192.168.51.49    192.168.51.50      21
             202.56.230.6  255.255.255.255    192.168.51.49    192.168.51.50      21
           192.168.26.124  255.255.255.255    192.168.51.49    192.168.51.50      21
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link    192.168.51.50     276
                224.0.0.0        240.0.0.0          On-link   192.168.30.228     266
          255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link    192.168.51.50     276
          255.255.255.255  255.255.255.255          On-link   192.168.30.228     266
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0   192.168.30.227  Default
                  0.0.0.0          0.0.0.0    192.168.51.49  Default
        ===========================================================================
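
    The table above shows two persistent default gateways (0.0.0.0/0 via 192.168.30.227 and via 192.168.51.49), which is the classic dual-NIC symptom. A hedged sketch of one common fix - keep a single default gateway on the WAN NIC and add a static route for the internal networks via the LAN NIC (the 192.168.0.0/16 summary is an assumption; use whatever ranges actually sit behind 192.168.30.227):

        :: remove the persistent default gateway that points at the LAN side
        route delete 0.0.0.0 mask 0.0.0.0 192.168.30.227

        :: reach internal networks behind the LAN router via the LAN NIC (persistent)
        route add 192.168.0.0 mask 255.255.0.0 192.168.30.227 -p

        :: verify
        route print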

    Read the article

  • Network Explorer Intermittently Fails to Display all Computers in Work Group

    - by graf_ignotiev
    I run a small computer lab of 10 computers, and occasionally, when using Network Explorer (a.k.a. the network browser), some or all of the remote computers fail to appear. If I try to access a remote computer by its name, I get an unspecified error (code 0x80004005), but I am still able to access it by its IP address. The strangest part is that the problem will inexplicably go away after waiting a while. Each computer runs Windows 7 x64 Enterprise and has identical hardware, software and configuration. They are all on the same subnet and in the same workgroup. I've spent days researching the problem and have tried the following:

        1. Updated the BIOS, chipset and network adapter drivers
        2. Changed the power settings in the network adapter properties so that the computer will not turn it off
        3. Disabled the Computer Browser service
        4. Changed the DHCP node type to broadcast
        5. Reviewed the Event Viewer logs

    Steps 3 and 4 seem to have helped a little, but not completely. I'm beginning to suspect that the problem might lie with our router, a ZyXEL ZyWALL 2WG, as the packets sent by Network Discovery may not be returning in time, but I wanted to get some perspective on the issue before I went any further.

    Read the article

  • Postfix selective header_checks: smtpd_relay_restrictions vs. smtpd_recipient_restrictions

    - by luke
    Some of my customers run commercial software that violates the email RFCs, so we have had to relax our header checks; as a consequence, we receive more spam.

    Prologue: I know the domains (customer.com) and IP address ranges (a.b.c.d/C) these emails come from.

    Kind request for help: Is it possible to set up one Postfix (2.11) instance on Linux such that it applies only some header checks to emails from .*@customer.com, but applies all header checks to all other email sources? I thought of a combination of mynetworks that includes the subnet a.b.c.d/C in smtpd_recipient_restrictions - allowing all these messages through - while simultaneously avoiding an open relay with smtpd_relay_restrictions. However, this has not worked out as expected. Any idea or help is highly appreciated. Thanks in advance. Luke

    ==EDIT== For the current issue, I solved the problem by prepending a REDIRECT rule to header_checks as follows:

        /^received: from.*customer.com.*by mail.own.com.*for.*luke@own.*/ REDIRECT [email protected]

    This works as far as needed. Irrespective of that, I am still looking for a Postfix configuration that would turn this text-based setting into an IP-address-range-based forwarding rule... Thanks. Luke

    Read the article
