Search Results

Search found 11311 results on 453 pages for 'opn day viritual'.


  • Magento - Users unable to login from corporate networks with Bluecoat / F5 Load balancers

    - by user1330440
    Hoping someone has come across this issue before with Magento and corporate clients. We have two clients for our Magento site who both have their internal networks set up using Bluecoat security devices and F5 load balancers. Some users within these networks are unable to log in to Magento - Magento eventually sends a 302 redirect to /index.php/ when these users attempt to log in. Through our testing, the problem appears to be isolated to this setup - we can log into the accounts in question from anywhere outside of these networks without issue, and if the client accesses the site without going through the F5 load balancer, they are able to log in successfully. Strangely enough, the issue only started occurring for the two sites the day after we introduced a system upgrade which added a new site to the Magento installation. The system upgrade should not have affected any standard login functionality, and as noted, the problem does not appear to lie with the users in question, but with where the users are accessing the site from. Initially we thought the issue might have something to do with communications between the clients' networks and the network the server was hosted on, so we've tried moving the server to different hosts, but this has not helped. I'm currently waiting for more info from the clients on the exact devices / models used in their network setup, and will update this post if more information becomes available. Magento version is Enterprise Edition 1.9.0.0. Does anyone know of any tucked-away Magento settings that might cause this kind of behavior? Experience with this kind of setup, or ideas for things to look at? All help and ideas would be appreciated, as this is a current production issue for a large number of users. I will respond ASAP to any requests for additional information, but currently am not able to disclose any identifying information on the project in question and/or the clients experiencing issues. Thanks in advance for any assistance offered :) Note: this question has also been posted on the Magento forums: http://www.magentocommerce.com/boards/viewthread/277917/ and on Stack Overflow (moved here as a commenter thought this site may be better suited): http://stackoverflow.com/questions/10133978/magento-users-unable-to-login-from-corporate-networks-with-bluecoat-f5-load
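
    One way to narrow this down is to replay the login from both an affected and an unaffected network with curl and compare the session cookies Magento hands back. A diagnostic sketch only: the hostname and credentials are placeholders, and the form field names assume a stock Magento 1.x frontend login form.

        # Fetch the login page and keep the initial frontend session cookie
        curl -s -c cookies.txt https://shop.example.com/customer/account/login/ > /dev/null

        # Post credentials with the saved cookie jar; -i shows the redirect and Set-Cookie headers
        curl -s -i -b cookies.txt -c cookies.txt \
             -d "login[username]=user@example.com&login[password]=secret" \
             https://shop.example.com/customer/account/loginPost/ | head -n 20

    If the 302 to /index.php/ only appears when the request passes through the F5, the balancer is likely rewriting or dropping the frontend session cookie, or the Bluecoat proxy is coalescing many clients behind one IP and tripping Magento's session validation.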


  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad: the CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?
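
    For reference, these knobs can be inspected and adjusted with sysctl rather than by echoing into /proc. A sketch only; the values shown are the trial values discussed above, not recommendations:

        # show the current route-cache settings in one shot
        sysctl net.ipv4.route.gc_elasticity net.ipv4.route.gc_timeout net.ipv4.route.secret_interval

        # trial values: prune deep buckets more aggressively, flush the whole cache less often
        sysctl -w net.ipv4.route.gc_elasticity=4
        sysctl -w net.ipv4.route.secret_interval=3600

        # persist whatever survives testing
        echo "net.ipv4.route.gc_elasticity = 4" >> /etc/sysctl.conf

    sysctl -w takes effect immediately and is lost on reboot unless persisted, which makes it convenient for this kind of A/B tuning on a live HAProxy box.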


  • Nameserver not resolving or domain not pingable [closed]

    - by Ricky
    Sorry, if anyone can think of a better title please change it! I want to host my own websites from home. For testing purposes, I have a virtual machine running a trial version of Windows Server 2008 Enterprise. Note that I currently run a VPS and host my own websites, but due to a nice speed upgrade on our line I now want to host from home. I have several domains, but I wanted to test with one: rickyoleary.com. Our ISP does not provide static IP addresses unless we have a business account, so I've been looking at no-ip.com. I admit my networking isn't the best, hence this question, but I've been bashing my head all day on this one. I created a host name, muffinbubble.no-ip.org, which runs on IP 86.148.124.15. I've set up IIS on the server with a simple test page. I've then forwarded port 80 traffic from the router, and from what I can see it's working: if I access my website (I was unable to link to this for some reason so please copy and paste this) - http://86.148.124.15/ - I see my test page. So the next step was to create my nameservers. This domain is with namecheap.com, so I created my nameservers, ns1.rickyoleary.com and ns2.rickyoleary.com. Both of these point to the same IP (and yes, that will be changed after testing), the same IP as above: 86.148.124.15. On the server itself I have set up DNS entries which I believe to be correct, and added rickyoleary.com and www.rickyoleary.com in the host headers (or bindings) in IIS 7.0. If I look up my domain, rickyoleary.com shows ns1.rickyoleary.com and ns2.rickyoleary.com as the nameservers. I then tried to use just-ping.com on my nameserver ns1.rickyoleary.com: I get 100% packets lost, but the correct IP address is returned (I'm guessing the router does not allow pings, but is still accessible). I get no response when pinging rickyoleary.com. Here are the problems:

        1. I cannot ping ns1.rickyoleary.com or ns2.rickyoleary.com from a command prompt. I'm not sure if this is an issue.
        2. When I added the nameservers in Windows Server 2008 and clicked 'resolve', a message box displayed "No such host is known".
        3. I cannot ping rickyoleary.com.
        4. rickyoleary.com is not showing my test page on my server.

    Now - please note, I've waited around 6 hours for propagation. From my experience, although you're told to wait 24 - 48 hours, the changes are normally pretty quick, so perhaps I'm being impatient or naive to think it should all be working fine until then. I would really appreciate some help here. Thanks.
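
    dig can separate "the registrar knows my nameservers" from "my nameservers actually answer", which is usually the faster question than propagation. A sketch, runnable from any machine with dig (nslookup works similarly on Windows):

        # ask the public DNS tree who is authoritative for the domain
        dig NS rickyoleary.com +short

        # query your own nameserver directly, bypassing propagation entirely
        dig @ns1.rickyoleary.com rickyoleary.com A +short

        # compare with what the rest of the world sees via a public resolver
        dig @8.8.8.8 rickyoleary.com A +short

    If the direct query against ns1 times out, the problem is not propagation: either port 53 (UDP and TCP) is not forwarded to the Windows box, or the DNS service is not answering on the public interface.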


  • iPhone 3.1 Black Screen

    - by churnd
    Since I updated my 3G iPhone to 3.1, it becomes unresponsive after ~4-5 hours. It looks like it's off, but it's not. None of the buttons do anything that I can see. The only way I can get the screen back is to do a hard reset (Power + Home). Has anyone else had this problem? What can I do to fix it? I've had it happen 3 times already. Very annoying. Apparently, the Discussions forum on Apple.com also shows lots of people having the same problem. One guy said he did a restore and that fixed it for him, which I haven't tried yet because many others also tried that and it didn't help. A few others have said turning off WiFi fixes it for them, so I'm going to try that today. Another thing I've noticed is that now, when I plug my iPhone into my MacBook Pro, iTunes 9 does not launch. It used to before the update.

    ** Further update and possible solution ** I left my iPhone plugged into my MacBook Pro yesterday while at work, and didn't use it much throughout the day. I noticed it didn't lock up once. I suspect this was because it was connected to a power source. I continued to use my iPhone with no problem yesterday and today. Still no lockups as of now. I personally think this was Spotlight and/or Genius "doing its thing" after a new install. Kinda like an OS upgrade on your Mac: Spotlight has to rebuild itself, and those first couple of hours can be kinda sluggish. This would be even more so on the iPhone, where processing power is limited, and I'm sure the processor is clocked down a bit while on battery power, which would further amplify the problem. Again, just my guess. My iPhone is working fine now.

    ** Final Solution ** Update to 3.1.2. Problem solved.


  • HP DAT72x6 autoloader

    - by ericmayo
    Hoping someone here has seen this issue before and can offer some advice... I have an HP DAT72x6 autoloader tape backup unit, the external kind; here is a link to an owner's manual I found for it: http://www.dectrader.com/docs/set2/emr_na-c00070400-1.pdf I purchased the unit used about 6 months ago. The unit stopped working after 3-4 backups; it's used one day a month to do a monthly backup of another system, so suffice it to say the unit gets very little usage. There is an amber light on the front of the unit called the OAR (Operator Attention Required) light. The manual states to call for service when this light comes on and stays on. I've tried a few things to resolve it, but none are working: power cycling, and re-securing the SCSI cables at both ends. The unit was used, so I didn't pay much ($500), and I don't want to spend a lot to have it fixed; I might as well buy something new if fixing this is going to cost more than $100-$150 bucks. I'm curious to see if anyone here has been around these devices, or possibly is an HP repair person who can give me some things to try. The manual states that a solid amber OAR light indicates a hardware failure. When I power cycle the unit I see one of two scenarios so far: 1. The unit powers up, shows self test in the LCD, then the LCD changes to show all possible images and the OAR light comes on. 2. The unit powers up, the LCD is completely blank, the green lights go through some sort of process of going on and off, and later the amber OAR light comes on and stays on. If it's a simple misalignment issue I may be able to fix it myself, but not knowing what could cause the OAR light to come on gives me nowhere to even start, and Googling around gave no help either. I'm hoping someone here has experience with this and can help or point me in the right direction. Also, I don't have the HP diagnostic tools mentioned in many manuals. The unit is connected to a Linux box; the 3-4 backups I've done with it so far have had no issues. We run Amanda backup. Before this incident the unit was backing up and reading tapes fine. Thanks for any help or suggestions.
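
    Since the autoloader hangs off a Linux box, the OS-side SCSI tools can at least confirm whether the drive still talks before assuming a pure hardware fault. A hedged sketch; the device names are placeholders and will differ per system:

        # does the kernel still see the drive and the changer?
        cat /proc/scsi/scsi
        ls -l /dev/st0 /dev/nst0 /dev/sg*

        # ask the drive for its status; errors here point at the drive, silence at the bus/cabling
        mt -f /dev/nst0 status

        # if the mtx package is installed, query the changer half of the unit
        mtx -f /dev/sg1 status

    If mt and mtx both respond normally, the OAR light is more likely a loader-mechanism fault (stuck magazine or picker) than a SCSI problem.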


  • The Cindy Shearin Group: New Scam Targets Renters in the Area

    - by user226089
    MONROE - Craigslist is a popular site when trying to find that perfect deal on a rental home or apartment. Experts warn some of these rental ads aren't what they seem. We decided to take a look. On our Craigslist search we found this house for rent. The problem is this home's not for rent - it's for sale. "I think it's a huge deal," said Shane Wooten, the realtor for this home in Monroe. His properties have become the target of a common scam aimed at taking your money. "It looks like they're trying to scam them out of their deposit and first month's rent," adds Wooten. He says scammers copy and paste the sale ads from legitimate realtor sites to Craigslist as rental ads. "I can usually tell when one hits Craigslist, because I'll usually get 20 to 30 phone calls that day." They then pretend to be out of town on business or personal matters, and give only an email address as a point of contact. Usually they'll ask for money up front on a deal too good to miss. "You'll have a house that's supposed to rent for $950-1000 a month, and they'll have it renting for $600 a month," says Wooten. During our conversation, he shows us text messages from one scammer who says he'll mail the keys to the house if Wooten wires money for a deposit and first month's rent. Jo Ann Deal of the Better Business Bureau says scammers are getting better at making themselves out to be realtors. "We're really concerned for our real estate agents with this scam," says Deal. She says that realtors have to keep closer tabs on their vacant homes in order to protect their businesses. So how can you tell if the house you want is really for rent? She says if the home owner lives out of the country, can't meet face to face, or asks for a payment through a money wire, it's probably a scam. "There are some catch-lines you watch for," says Deal. "If the marketing is really good but there's no phone number, no physical address, and they will communicate with you only by email and you can do it today, then it's probably a scam." You should always report fishy ads to Craigslist or the BBB, and never send money through a wire transfer.


  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY
    I installed Debian (Squeeze) a while back on my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root - so in a rush I used my name as both user and password (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I was always logged in as root to perform configurations, etc. A few days or a week passed, and I forgot to change the password. Then I finally got my web site finished, and I opened the port forwarding on my router and pointed DynDNS to my server at home. I've done this many times in the past and never had issues, but I use a cryptic root password and I guess disabled regular accounts. Today I reformat my Windows 7 machine and, after spending all day tweaking and updating SP1, I look for cloning apps and find Clonezilla, and see it supports SSH cloning. I go through the process only to discover I need a user, so I log into my web server, see I already have the user 'alex', and realize I don't know the password. So I change the password to something cryptic and visit the /home directory, only to realize there are contents such as passfile, bengos, etc. My heart sinks - I've been hacked!!! Sure as hell, there are all sorts of scripts and password files. I run a 'last' command and it seems they last logged in April 3rd. Questions:

        1. What can I do to see if they did anything destructive? Should I reformat and reinstall?
        2. How restrictive is Debian/Squeeze in terms of user permissions out of the box? All my personal website stuff was created using 'root', so changing files does not seem to have occurred.
        3. How did they determine there was a user 'alex' on the machine? Can you query any machine and figure out what its users are?
        4. It looks like they tried to run an IP scan... other nodes on the network are running Windows 7, one of which has seemed a little wonky as of late - is it possible they buggered up that system?
        5. What corrective action can I take to avoid this from happening again, and to figure out what might have changed or been hacked?

    I'm hoping Debian out of the box is fairly secure and at best he managed to read some of my source code. :p Regards, Alex
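
    A few standard first-pass checks help answer "what did they touch" before deciding whether to rebuild. A sketch, assuming Debian's stock tooling; the date window is keyed to the April 3rd login seen above:

        # who logged in, from where, and when (wtmp can be edited by an intruder, so treat as a hint)
        last -ai | head -50

        # verify installed packages against their shipped checksums
        apt-get install debsums && debsums -c

        # files changed in the window around the suspicious login
        find / -xdev -type f -newermt "2012-04-01" ! -newermt "2012-04-08" 2>/dev/null

        # anything listening or scheduled that you did not put there?
        netstat -tlnp
        crontab -l; ls /etc/cron.*

    That said, once an attacker has had shell access, checksum tools running on the compromised system can themselves be lied to; the only trustworthy remedy is to back up the data, reformat, and reinstall.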


  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync over ssh to known MAC-address-named directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to push up the files:

        1. Would I need to manage hundreds of access keys and have each server customized to include one? (Doable, but key management becomes nightmarish?)
        2. Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
        3. Having a bucket per remote machine does not seem feasible due to bucket limits?
        4. I don't think I want to use a single common key, as if one machine were breached then a malicious hacker could potentially get access to the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible? TIA, Aitch
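
    One common pattern is a single bucket with a per-machine key prefix, pushed from cron with s3cmd. A sketch only; the bucket and path names are illustrative, and this alone does not solve write-only credentials:

        #!/bin/bash
        # upload-sensor-data.sh - run hourly from cron on each remote box
        # (eth0 and the spool path are assumptions; adjust per build)
        HOST_ID=$(cat /sys/class/net/eth0/address | tr -d ':')   # stable per-machine ID
        DATADIR=/var/spool/sensordata
        BUCKET=s3://example-sensor-archive

        # sync new files into this machine's own prefix
        s3cmd sync "$DATADIR/" "$BUCKET/$HOST_ID/"

    For the key-management half, AWS's IAM service lets you stamp out one user per server with a policy allowing s3:PutObject only under that machine's prefix, so a stolen key can add junk but cannot read or delete other machines' data; that addresses points 1 and 4 directly.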


  • Weird random application hang problem

    - by haridsv
    I am trying to understand an application hang problem that started up lately on my Windows XP system. The system runs fine for days (sometimes) without ever being shut down or put to sleep, but the problem first shows up as one of the apps hanging. The application's UI stops responding, or one or more background threads hang, so even though the GUI is responding it is not doing anything (e.g., in VirtualDub's case, the UI responds fine, but the job doesn't progress and I can't even abort it). The weird part is that if I try to kill such an app, the program used to kill it goes into the same mode (i.e., it hangs instead of the original). E.g., if I use Process Explorer to kill it, the original program exits, but procexp now hangs. If I use another instance of procexp to kill the one that is hanging, this repeats, so there is always at least one program hanging in that state. This is not specific to procexp; I tried the native Task Manager and even the "End Process" dialog from Windows Explorer that shows up when you try to close a non-responsive GUI (in this case, Explorer itself hangs). The only program that didn't hang after the kill is the command-line taskkill; however, in that case, Explorer hangs instead of taskkill. Also, once this problem starts manifesting, it soon ends up freezing the whole system to the extent that even a clean shutdown is not possible, so I have learned to reboot as soon as I notice it. However, this is very inconvenient, as I often have encoding batch jobs going on which can't continue after the restart. The longer I leave the system running after seeing this problem, the more applications get into this state. I have tried a repair install, but that didn't make any difference. I also uninstalled some of the newer installs, but again no difference. I tried to search online but got inundated with results for generic hang- and crash-related problems. Though I can't see a definite pattern, the problem seems more frequent if I have some video encoding going on at the time. I have had the system running for days doing only browsing and internet audio/video chat, and then the problem starts to show up once I begin encoding something. I am not too sure if it is the encoding program that hangs first, though I have almost always noticed it hanging too (like VirtualDub ceasing to make progress). I also had to reboot 3 times in one day when I was heavily experimenting with encoding. I would appreciate any help in narrowing down this problem and saving me the trouble of reinstalling; I don't especially want to lose my GOTD installs.


  • XP OEM licensing when reinstalling Windows XP

    - by mindas
    My wife has managed to buy a Dell laptop she was using at her ex-employer that just went bust. The problem with it is the OS (Windows XP), which takes ages to boot and is generally disproportionately slow for the hardware of the machine. So my aim is to sacrifice a day and reinstall it. The problem I am slightly worried about is the licensing/registration/activation hell. Apart from the sticker (with the WinXP license key), the laptop has no other paperwork proving this license is legitimate. I believe this was originally an OEM license. Unfortunately, I don't have the installation CD. This computer also has MS Office installed (which I would like to retain), but none of the MS Office apps will launch due to some obscure error complaining about a lack of free disk space (which the computer has plenty of). I have absolutely no clue what kind of license this MS Office was. And because the company has gone into administration, there is no way of getting this information, nor installable media. I believe that by buying the hardware I have also acquired the software, which I can use as I see fit. Correct me if I'm wrong. That said, my question would be: what is the easiest way of reinstalling the XP? By easiest I mean avoiding spending my time proving to Microsoft support that I've got the right to use the software (insert your "computer says noooo" joke here) while still being able to get to a fresh, virgin, activated, legal state of the XP. I used to work as a sysadmin many years ago, so I am not afraid of any technical difficulties. The same question applies to MS Office. I imagine the process would consist of backing up all the data, pulling some bits from the registry and using that on the fresh install. As for the reinstall, I'd expect to use some sort of OEM Windows repair CD from Dell, right? Are those freely available? My other box (HP) has such a thing, and it can't be used on any other brand. I'm sure somebody has had to go through this licensing hell and could share his/her tips. Thanks in advance.


  • My server freezes within a few hours of logging out. Staying logged in keeps the server running

    - by HappyEngineer
    I have an Ubuntu Godaddy server I use to host mail and webapps. It started having problems a couple months ago. It would lock up and stop responding to anything. I couldn't ssh into it, so I'd have godaddy power cycle the server. I have never seen anything that looked suspicious in the var logs (although I'm no expert at reading them). An fsck turned up no problems. Godaddy replaced the ram, but found no hardware problems. I started logging the output from "top" to a log file and found that even that stops running when the server freezes. Now, here is the crazy part: It got so bad that it would actually go down every few hours, but then it stopped going down. I eventually realized I had left an ssh terminal logged into the machine running top. This seemed unlikely to be a reason, but after the server was up with no problems for a full week (remember, it had been going down after just a few hours), I disconnected from the ssh session. Lo and behold, within a few hours the server froze again! I had them power cycle again and then left another ssh session open with top. It has been going without problems for 8 days now. I told others about this and they hardly believe me. I simply can't imagine what is going on. I don't know what else to try other than to just get a new server and reinstall everything. Does anyone have any ideas about what I can look for to determine what the cause is? Is it possible there's some sort of exploit on the server which only runs if everyone is logged out of the system? EDIT: The power management gone haywire sounds plausible, so I've modified the /boot/grub/menu.lst to boot with acpi=off and apm=off. It appears to have prevented kacpid and kacpid_notify from being in the process list, so I assume I did that right. I've disconnected all my sessions from the server. I'll check later tonight to see if it's still up. If it goes down then I'll try the pinging process idea. EDIT: It went down again. It lasted about a day. I've had them reboot, so now I'll try running "nohup ping -i 5 google.com &" and then disconnect. If it goes down again I'll come back. Hopefully someone will have some more ideas.
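
    To make the "stay logged in" workaround less fragile than a live ssh session, the same effect can be approximated with a detached process started on the server itself. A sketch of the idea only - this is a mitigation, not a fix for the underlying bug:

        # keep steady low-level activity going, as the open ssh+top session did
        nohup ping -i 5 google.com > /var/log/keepalive-ping.log 2>&1 &

        # or run top in batch mode under nohup so it survives logout
        nohup top -b -d 5 > /var/log/keepalive-top.log 2>&1 &

    If the box stays up with one of these running and nobody logged in, that strongly suggests something (power management or a tickless-idle quirk) misbehaves only when the system goes fully idle, which fits the acpi=off/apm=off experiment described above.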


  • varnish3, mod_geoip with apache2 using mod_rewrite and mod_rpaf

    - by mursalat
    I am maintaining a website with 3 different versions of the site, in 3 different languages, handled by a single system written in PHP which takes in environment variables based on the domain name being accessed. These are the three sites: myshop.com, the English international version; myshop.eu, the European version; and myshop.ru, the Russian version. When myshop.com is accessed from Russia, it is to be redirected to myshop.ru; when anyone from Europe accesses myshop.com, they are redirected to myshop.eu; international visitors stay at myshop.com, although they can go to the country-specific sites. All these country redirections are done using the GeoIP Apache2 mod to determine the country code, which is used in a RewriteCond to state a RewriteRule. There are some exceptions - IPs that we do not rewrite for, basically the IPs of the developers' PCs. The site had been doing just fine until we decided to set up Varnish to give it a boost. It really did give it a great boost, but the country-specific rewrites have become buggy. What started to happen is that a Russian visitor can go to myshop.com and won't be redirected until he clicks a random link (perhaps a link not cached by Varnish yet), at which point the user is redirected to their specific country. For that I set up mod_rpaf, and for exceptions to the rewrite rule (for the developers' IPs) I used this: RewriteCond %{HTTP:X-FORWARDED-FOR} !^43\.43\.43\.43. I restarted Varnish and Apache2 and it worked for a while, then it messed up again. I have been making changes all day, but I have little to no clue as to what's going on - sometimes it works, sometimes it doesn't, and sometimes it half works. As for GeoIP, I used a PHP script to check the $_SERVER variable; here is the general idea of the output:

        [HTTP_X_FORWARDED_FOR] => 43.43.43.44
        [HTTP_X_VARNISH] => 1705675599
        [SERVER_ADDR] => 127.0.0.1
        [SERVER_PORT] => 80
        [REMOTE_ADDR] => 43.43.43.44
        [GEOIP_ADDR] => 43.43.43.44
        [GEOIP_CONTINENT_CODE] => EU
        [GEOIP_COUNTRY_CODE] => FR
        [GEOIP_COUNTRY_NAME] => France

    Now, thanks to the "random" redirects, I hardly have a clue as to what is going on, so can you guys please give me some ideas as to what tools to use to debug this? I have tried the redirect logs, but they really don't show much, and varnishlog isn't helping much either - although I must admit I am no professional at Varnish. I believe the problem is that Varnish is caching the URL, so the Apache redirects are not being done properly: whoever visits the site first gets a redirect, and based on that, other users are served the cached content, depending on where the first user was when the cache was last updated. Is that correct? If so, how can I solve the problem? Also, I have the option of doing the GeoIP redirects in Varnish 3 instead of in Apache2 - is that the best practice? Any suggestions for debugging or fixing this would be helpful! thnx!
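
    Since the redirect decision happens in Apache but the response may come from Varnish's cache, the key question is whether the cached object varies by country. That can be probed from the command line; the spoofed addresses below are placeholders, and the test assumes the GeoIP lookup ends up keyed off X-Forwarded-For (as it is for Varnish-forwarded traffic with mod_rpaf):

        # request the same URL twice, pretending to come from two different countries
        curl -sI -H "X-Forwarded-For: 195.25.0.10" http://myshop.com/ | egrep "Location|Age|X-Varnish"
        curl -sI -H "X-Forwarded-For: 93.158.0.10" http://myshop.com/ | egrep "Location|Age|X-Varnish"

    If both requests return the same Location (or a cached 200 with a nonzero Age), Varnish is serving one country's response to everyone, which matches the symptoms. The usual cures are to make the redirect responses uncacheable, or to add the country code to the cache key in vcl_hash so each country gets its own cached object.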


  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: currently we use Rackspace cloud servers. We have no intention of stopping, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance the cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two it's not really a problem. Currently we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service; this should be enough. Requirements:

        - Each server runs Linux CentOS 6.3
        - Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
        - Some of the servers are capable of serving static files and Python-driven REST APIs
        - Some of the servers host a Cassandra database cluster
        - One or more of the servers is a Redis database server
        - One or more of the servers is a PostgreSQL server

    Questions:

        1. What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for the servers hosting Redis, which need to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
        2. Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the WiFi cards in the desktops, or any other unexpected hardware limitation?
        3. What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that, or do we need other software for this? (See the sketch below.)
        4. What other things are we missing that we should be concerned about?

    Thanks so much!
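
    On question 3 (imaging): CentOS itself ships kickstart for repeatable installs, Clonezilla handles raw imaging, and plain dd works for a one-off golden image. A sketch only - the device and file names are placeholders:

        # capture a golden image of a configured node to an external drive
        dd if=/dev/sda bs=4M conv=sync,noerror | gzip -c > /mnt/usb/redis-node.img.gz

        # restore onto a fresh desktop booted from live media
        gunzip -c /mnt/usb/redis-node.img.gz | dd of=/dev/sda bs=4M

    For a growing cluster, a kickstart file plus a configuration-management tool (Puppet or Chef, for example) usually scales better than disk images, because images bake in hostnames, IPs and SSH keys that then have to be scrubbed on every clone.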


  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application performance. New Relic reports that across our system "Memcached" takes an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcache gets take up to 30ms; I can't find a single use of Memcache get that returns in less than 10ms. More details on the system architecture:

        - Currently we have four application servers, each of which has a memcached member; all four memcached members participate in a memcache cluster.
        - We're running on a cloud hosting provider and all traffic runs across the "internal" network (via "internal" IPs).
        - When I ping from one application server to another, the responses are in ~0.5ms.

    Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong? Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211            37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211            42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211            37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:                39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/  0.4GB           33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day,  4:56,  1 user,  load average: 0.01, 0.06, 0.05
        Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
        Mem:    501392k total,   424940k used,    76452k free,    66416k buffers
        Swap:   499996k total,    13064k used,   486932k free,   181168k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6519 nobody    20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
            3 root      20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
            1 root      20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
            5 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
            6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
            8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
            9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
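
    To separate what the client library spends from what memcached itself spends, it helps to time a raw get against each node from an application server, outside Python entirely. A rough sketch using nc; "somekey" is a placeholder for a key known to exist:

        # time a single raw get against each node in the cluster
        for h in cache1 cache2 cache3; do
            echo -n "$h: "
            ( time printf 'get somekey\r\nquit\r\n' | nc $h 11211 > /dev/null ) 2>&1 | grep real
        done

    If the raw gets come back in about a millisecond while New Relic reports 10ms+, the time is being spent in the Python client (serialization, connection setup, or dead-node retries) rather than in memcached itself, which would be consistent with the healthy 4-5ms TIME column memcache-top shows.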



  • Encrypted off-site data storage

    - by Dan
    My business has a rather unique problem. We work in China and we want to implement a file server paradigm which does not store any files locally, but rather in a server overseas. Applications would be saved onto our local machines, but data would be loaded directly into memory from the cloud, e.g. I load a docx into word at the beginning of the day, saving periodically to the cloud as I work on it, and turn off my computer at night, with nothing saved locally. Considering recent events, we worry about being raided by the Chinese authorities, and although all our data is encrypted, it would not be hard for the authorities to force us to give up the keys. So the goal is not to have anything compromising physically in China. We have about 20 computers, and we need an authenticated, encrypted connection with this overseas file server. A system with Active-Directory-like permissions would be best, so that only management can read or write to certain files, or workers can only access files that relate to their projects, and to which all access can be cut off should the need arise. The file server itself would also need to be encrypted. And for convenience, it would be nice if this system was integrated with each computer's file explorer (like skydrive or dropbox does, but, again, without saving a copy locally), rather than through a browser. I can't find any solution online. Does anyone know of a service that does this? Otherwise I'll have to do it myself (which kinda sounds fun, but I don't really have the time), and I'm not sure where to start. Amazon maybe. But the protocols that offices would use on their intranet typically aren't encrypted; we need all traffic securely tunneled out of the country. Each computer already has a VPN to a server in California, but I'm unsure whether it would be efficient to pipe file transfers through it. Let me know if anyone has any ideas. And this is my first post; feel free say whether this question is inappropriate/needs to be posted elsewhere.


  • Blank Page: wordpress on nginx+php-fpm

    - by troutwine
    Good day. While this post discusses a similar setup to mine that served blank pages occasionally after a successful installation, I am unable to serve anything but blank pages. My setup: WordPress 3.0.4, nginx 0.8.54, php-fpm 5.3.5 (fpm-fcgi), Arch Linux. Configuration files below.

    php-fpm.conf:

        [global]
        pid = run/php-fpm/php-fpm.pid
        error_log = log/php-fpm.log
        log_level = notice

        [www]
        listen = 127.0.0.1:9000
        listen.owner = www
        listen.group = www
        listen.mode = 0660
        user = www
        group = www
        pm = dynamic
        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 500

    nginx.conf:

        user www;
        worker_processes 1;
        error_log /var/log/nginx/error.log notice;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            include /etc/nginx/sites-enabled/*.conf;
        }

    /etc/nginx/sites-enabled/blog_sharonrhodes_us.conf:

        upstream php {
            server 127.0.0.1:9000;
        }

        server {
            error_log /var/log/nginx/us/sharonrhodes/blog/error.log notice;
            access_log /var/log/nginx/us/sharonrhodes/blog/access.log;
            server_name blog.sharonrhodes.us;
            root /srv/apps/us/sharonrhodes/blog;
            index index.php;

            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { allow all; log_not_found off; access_log off; }

            location / {
                # This is cool because no php is touched for static content
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                include fastcgi_params;
                fastcgi_intercept_errors on;
                fastcgi_pass php;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }
        }
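
    Blank pages from nginx + php-fpm very often mean PHP executed nothing because SCRIPT_FILENAME never reached the FastCGI backend; note the PHP location block above includes fastcgi_params but never sets it explicitly, and not every distro's fastcgi_params file defines it. Two quick checks worth running against this exact config:

        # does the stock fastcgi_params file on this box define SCRIPT_FILENAME at all?
        grep SCRIPT_FILENAME /etc/nginx/fastcgi_params

        # talk to php-fpm directly, bypassing nginx (cgi-fcgi ships with the fcgi package)
        SCRIPT_NAME=/index.php \
        SCRIPT_FILENAME=/srv/apps/us/sharonrhodes/blog/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000

    If the grep comes back empty, adding "fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;" to the PHP location block is the usual fix; if cgi-fcgi also returns nothing, the problem is on the PHP side (check php-fpm.log) rather than in nginx.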


  • How can I prevent an unintentional DDOS running ColdFusion 8 with IIS 6?

    - by Eric Belair
    We had an interesting outage today on one of our client's websites. Out of nowhere, the website became inaccessible. The website runs by itself on a dedicated physical Windows 2000 server (probably overkill, I know, but that's a discussion for a different day). After restarting IIS and the ColdFusion Application Service, the problem came back several times. My initial thought was that it was a DNS issue, which happens occasionally - the last time was after Hurricane Sandy, when our ISP was out and we had to make some network config changes. But it was not a DNS issue. My second thought was that it was a DDoS attack, but there's very little reason anyone would want to take this site down. When we called our ISP, the operator on the other end noted that traffic was spiking significantly. As it turned out, the client had unintentionally caused a DDoS on the website: they had FTPed a very large video file and then mass-emailed a link to it. Hundreds of people clicked the link and brought the site to its knees. I am primarily a website programmer, but I often have to contribute to server administration. Sadly, I'm the resident ColdFusion and IIS expert, but I don't have a lot of experience with this issue. What are some basic steps I can take to prevent this from happening in the future, since we cannot always control what files the client posts to the website? Here are some ideas I had, but I'm unsure of the impact:

        1. Limit the number of connections in IIS.
        2. Put media files on a separate server (like an Amazon site, etc.).
        3. File requests of this type currently go through a server-side script (i.e. www.site.com/viewFile.cfm?fileId=1424545, where the fileId references a file off the webroot) that logs requests and pushes the file to the browser using CFCONTENT. I could edit this script to reject requests when they exceed a certain amount in a given time-frame (e.g. a 5 MB file can be accessed globally 10 times in an hour). This may frustrate some users, but if hundreds of users attempt to view the file the site is going to crash anyway, as it did today, which is far more frustrating, since there is no "pretty" message explaining why they can't get to the file.

    I'm open to any suggestions, as I'm continuing my research to report to the CTO with the best options, so that we can put a solution into effect. Thank you.


  • Restart mysql keeping the data

    - by sitonico
    I'm quite new to MySQL, so let me know if I'm missing something. I took some holidays, and when I got back to work and tried to log in to phpMyAdmin I got: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I never had this problem before, so I browsed around for a solution. I tried some things, and I'm afraid I touched too much. I couldn't solve the problem, and then I realized I had some system updates pending, and I thought they might be helpful for MySQL. Then I also remembered that when I first ran these updates, they stopped because I ran out of disk space, so I restarted them. Then, when the system was configuring MySQL, it didn't advance; I waited a long time, then just stopped it and restarted the computer. After that, I tried to uninstall MySQL with sudo apt-get remove mysql-server-5.1 and install it again, but it didn't work. Now I have two questions: 1. What do you think is happening? Should I remove MySQL completely, and what should I do then? 2. I'm afraid of losing my databases - is there any way to recover the data? Thank you very much in advance.

    -----------EDIT-------
    These are the messages:

        alfonso@alfonso-laptop:/$ tail -F /var/log/syslog | grep
        Feb 15 15:08:01 alfonso-laptop init: mysql post-start process (15192) terminated with status
        Feb 15 15:08:01 alfonso-laptop init: mysql main process (15263) terminated with status
        Feb 15 15:08:01 alfonso-laptop init: mysql main process ended,
        Feb 15 15:08:31 alfonso-laptop init: mysql post-start process (15264) terminated with status
        Feb 15 15:08:31 alfonso-laptop init: mysql main process (15358) terminated with status
        Feb 15 15:08:31 alfonso-laptop init: mysql main process ended,
        Feb 15 15:09:01 alfonso-laptop init: mysql post-start process (15359) terminated with status
        Feb 15 15:09:01 alfonso-laptop init: mysql main process (15447) terminated with status
        Feb 15 15:09:01 alfonso-laptop init: mysql main process ended,
        Feb 15 15:09:32 alfonso-laptop init: mysql post-start process (15448) terminated with status 1

    This is the content of error.log-old:

        110128 13:17:20 [Note] /usr/sbin/mysqld: Normal shutdown
        110128 13:17:20 [Note] Event Scheduler: Purging the queue. 0 events
        110128 13:17:20 InnoDB: Starting shutdown...
        110128 13:17:22 InnoDB: Shutdown completed; log sequence number 0 590872
        110128 13:17:22 [Note] /usr/sbin/mysqld: Shutdown complete
        110214  2:08:18 [Note] Plugin 'FEDERATED' is disabled.
        110214  2:08:19 InnoDB: Started; log sequence number 0 590872
        110214  2:08:19 [Note] Event Scheduler: Loaded 0 events
        110214  2:08:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.41-3ubuntu12.8'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

    Some links to similar problems:

        https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/573318
        http://www.linuxquestions.org/questions/linux-newbie-8/lamp-install-on-lucid-mysqld-sock-missing-mysql-terminating-status%3D1-853152/

    It seems it's a permissions problem... but I don't know which permissions I should change... SOLVED -- mysql error 2002 "cannot connect to socket"
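
    On the data-safety question: the databases live in /var/lib/mysql and survive package reinstalls as long as that directory is preserved. A cautious sketch of the first moves (mysqld is not running anyway, so a cold copy is safe):

        # take a cold copy of the data directory before touching anything else
        sudo cp -a /var/lib/mysql /root/mysql-backup-$(date +%F)

        # a stale socket directory or wrong ownership commonly causes exactly this startup loop
        ls -l /var/run/mysqld/
        sudo chown -R mysql:mysql /var/lib/mysql

        # then watch the real error log while trying to start
        sudo tail -f /var/log/mysql/error.log &
        sudo service mysql start

    Even if the package ends up fully removed and reinstalled, restoring the copied directory over the fresh /var/lib/mysql (with mysql:mysql ownership) brings the databases back, since the MyISAM and InnoDB files are plain files on disk.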


  • How to stop a random ramp in FCGI Processes Killing the server

    - by Andy Main
    So I got the below earlier today... Around that time the logs show a ramp in processes (600) and associated memory (1.2 GB), and a CPU load average of 80, until the server gave out. The server had to be hard reset by the host, as there was no ssh or Plesk panel access. FastCGI is configured as below and is set up for one high-use site. As I understand it, FcgidMaxProcesses 20 should protect against what happened, but it has not. I've read many forums with differing answers and references to many different fcgid directives, but have found nothing conclusive. Has anyone got some definitive answers on how to stop this sort of server process ramping and the subsequent server failure? If you need more info, let me know. Cheers, Andy

    /var/log/apache2/error_log:

        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17650 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17649 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17644 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17643 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17638 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17633 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17627 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17622 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17674 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17673 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17672 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17667 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17666 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17665 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17664 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17659 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17658 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17657 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17656 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL

    Screenshots of the monitoring graphs:

        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VRmFLWEZfR2VBb2M
        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VWTcwZEhoV2Fqejg
        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VUUtVWWFINHZjZ0U
        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VZEVMclh6ZUdaOUE

    The mod_fcgid configuration:

        <IfModule mod_fcgid.c>
          <IfModule !mod_fastcgi.c>
            AddHandler fcgid-script fcg fcgi fpl
          </IfModule>
          FcgidIPCDir /var/lib/apache2/fcgid/sock
          FcgidProcessTableFile /var/lib/apache2/fcgid/shm
          FcgidIdleTimeout 40
          FcgidProcessLifeTime 30
          FcgidMaxProcesses 20
          FcgidMaxProcessesPerClass 20
          FcgidMinProcessesPerClass 0
          FcgidConnectTimeout 30
          FcgidIOTimeout 120
          FcgidInitialEnv RAILS_ENV production
          FcgidIdleScanInterval 10
          FcgidMaxRequestLen 1073741824
        </IfModule>
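
    To catch the ramp while it is happening (rather than after the box dies), a small watchdog that samples the FCGI process count and load is cheap insurance. A sketch only; the process pattern and threshold are placeholders to adjust for the actual FCGI wrapper in use:

        #!/bin/bash
        # fcgid-watchdog.sh - run from cron every minute
        COUNT=$(pgrep -c -f php-cgi)            # adjust the pattern to your fcgi wrapper
        LOAD=$(cut -d' ' -f1 /proc/loadavg)

        echo "$(date '+%F %T') procs=$COUNT load=$LOAD" >> /var/log/fcgid-watch.log

        # if the count runs away, gracefully restart apache before the OOM killer arrives
        if [ "$COUNT" -gt 60 ]; then
            /usr/sbin/apache2ctl graceful
        fi

    This does not explain the ramp, but it keeps ssh reachable and leaves a timeline in the log. Note also that FcgidMaxProcesses caps a single Apache instance: during a graceful restart, old and new process pools can briefly coexist and together exceed the cap.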


  • How to connect to a Virtualbox guest from the host when network cable unplugged

    - by Greg K
    I'd like to work offline (I'm flying to the US twice this month), and to do this I need access to a Linux development server. When I work from home I boot a VirtualBox VM which acts as my dev server for the day (providing Apache, PHP & MySQL to run my server-side code). However, I'd like to work with my VM when I'm not connected to a network. I have my Ubuntu VM guest set up with a bridged connection so it can serve HTTP and provide SSH access from inside my local network. I've tried to manually configure my network settings on both Mac OS X (the host) and Ubuntu (the guest), but I can't even ping my own NIC address in OS X when I unplug the cable (127.0.0.1 I can ping; 192.168.21.x I can't). Manual network settings:

        $ ifconfig en0
        en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
                ether 00:xx:xx:xx:xx:xx
                inet 192.168.21.5 netmask 0xffffff00 broadcast 192.168.21.255
                media: autoselect (100baseTX <full-duplex,flow-control>)
                status: active

    I can ping localhost fine, as well as my VM (.20), and SSH works too:

        $ ping 192.168.21.5
        PING 192.168.21.5 (192.168.21.5): 56 data bytes
        64 bytes from 192.168.21.5: icmp_seq=0 ttl=64 time=0.085 ms
        64 bytes from 192.168.21.5: icmp_seq=1 ttl=64 time=0.102 ms
        64 bytes from 192.168.21.5: icmp_seq=2 ttl=64 time=0.100 ms
        64 bytes from 192.168.21.5: icmp_seq=3 ttl=64 time=0.094 ms

        $ ping 192.168.21.20
        PING 192.168.21.20 (192.168.21.20): 56 data bytes
        64 bytes from 192.168.21.20: icmp_seq=0 ttl=64 time=0.910 ms
        64 bytes from 192.168.21.20: icmp_seq=1 ttl=64 time=1.181 ms
        64 bytes from 192.168.21.20: icmp_seq=2 ttl=64 time=1.159 ms
        64 bytes from 192.168.21.20: icmp_seq=3 ttl=64 time=1.320 ms

    Network cable unplugged:

        $ ifconfig en0
        en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
                ether 00:xx:xx:xx:xx:xx
                media: autoselect
                status: inactive

        $ ping 192.168.21.5
        PING 192.168.21.5 (192.168.21.5): 56 data bytes
        ping: sendto: No route to host
        ping: sendto: No route to host
        Request timeout for icmp_seq 0
        ping: sendto: No route to host
        Request timeout for icmp_seq 1

    Does OS X disable the NIC when the network cable is unplugged? Any way to stop it doing this?
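
    For the stated goal (reaching the guest with no cable plugged in), the textbook VirtualBox answer is a host-only network, which exists independently of the physical link. A sketch; the VM name is a placeholder:

        # host side: create vboxnet0 (the host gets an address such as 192.168.56.1 on it)
        VBoxManage hostonlyif create

        # give the VM a second NIC on that host-only network (VM must be powered off)
        VBoxManage modifyvm "Ubuntu Dev" --nic2 hostonly --hostonlyadapter2 vboxnet0

        # separately, OS X will answer on an alias parked on loopback even with en0 down,
        # which restores pings to the host's own address (but not to a bridged guest)
        sudo ifconfig lo0 alias 192.168.21.5 255.255.255.0

    With the host-only adapter, the guest configures a second interface (e.g. eth1, statically or via DHCP in the 192.168.56.x range), and Apache/SSH stay reachable there regardless of physical connectivity; the bridged adapter used at home can stay exactly as it is.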


  • WinXP - Having trouble sharing internet with 3G USB modem via ICS

    - by Carlos Nunez
    Hi all! I've been banging my head against a wall with this issue for a few days now and am hoping someone can help out. I recently signed up for T-Mobile's webConnect 3G/4G service to replace the faltering (and slow) DSL connection in my apartment. The goal was to put the SIM in one of my old phones and use its built-in WLAN tethering feature to share Internet out to the rest of my computers. I quickly found out that webConnect-provisioned SIMs do not work with regular smartphones, so I was forced to either buy a 4G-compatible router or tether one of my old laptops to my wireless router and share out that way. I chose the latter, and it's sharpening my inner masochist by the day. Here's the setup: GSM USB modem (via hub) into the ICS host laptop; the laptop's 10/100 Mbps Ethernet NIC as the ICS "guest" adapter, connected to the WAN port of my SMC WGBR14N wireless router in bridged mode (i.e. acting as a wireless access point). Ideally, this would make my laptop the DHCP server and internet gateway, with the WAP giving everyone wireless coverage. I can browse the internet on the host laptop fine. However, when clients try to connect, they get a DHCP-assigned IP from the laptop and are able to use the Internet for a few minutes before it completely dies. After that happens, they are able to re-associate with the WAP and get IP addresses, but are unable to use the Internet or resolve IP addresses until the laptop and router are restarted. If they do get access, it's very, very slow. After running Wireshark on the host machine, it turns out that this is because every TCP connection keeps getting RST. DNS seems to work. I would normally suspect the firewall here, but when a firewall drops packets, it drops them completely; the fact that TCP connections are being ACK'ed by the destination rules that out. Of course, the Event Log isn't saying anything about what's going on. I also tried disabling power management on the NIC, since that's caused problems in the past; that didn't help either. I finally disabled receive-side scaling as per a Microsoft KB (one that applied to Windows Server 2003 SP2), to no avail. I'm thinking of trying a different NIC (which will be tough; I don't have a spare Ethernet NIC around for the laptop), but I'm getting the impression that this simply doesn't work. Can anyone please advise? I apologise for the length of this post; all contributions are much appreciated! -Carlos.


  • OpenSSH (sshd) service without PAM support!! How can I add PAM support to sshd? (Ubuntu)

    - by marc.riera
    Hi, I'm using AD as my user account server with LDAP. Most of the servers run with UsePAM yes, except this one: its sshd lacks PAM support.

        root@linserv9:~# ldd /usr/sbin/sshd
                linux-vdso.so.1 =>  (0x00007fff621fe000)
                libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
                libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
                libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
                libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
                libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
                libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
                libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
                /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

    I have these packages installed:

        root@linserv9:~# dpkg -l|grep -E 'pam|ssh'
        ii  denyhosts          2.6-2.1              an utility to help sys admins thwart ssh hac
        ii  libpam-modules     0.99.7.1-5ubuntu6.1  Pluggable Authentication Modules for PAM
        ii  libpam-runtime     0.99.7.1-5ubuntu6.1  Runtime support for the PAM library
        ii  libpam-ssh         1.91.0-9.2           enable SSO behavior for ssh and pam
        ii  libpam0g           0.99.7.1-5ubuntu6.1  Pluggable Authentication Modules library
        ii  libpam0g-dev       0.99.7.1-5ubuntu6.1  Development files for PAM
        ii  openssh-blacklist  0.1-1ubuntu0.8.04.1  list of blacklisted OpenSSH RSA and DSA keys
        ii  openssh-client     1:4.7p1-8ubuntu1.2   secure shell client, an rlogin/rsh/rcp repla
        ii  openssh-server     1:4.7p1-8ubuntu1.2   secure shell server, an rshd replacement
        ii  quest-openssh      5.2p1_q13-1          Secure shell

    What am I doing wrong? Thanks.

    Edit - the PAM configuration for sshd:

        root@linserv9:~# cat /etc/pam.d/sshd
        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account    required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session    required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password
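
    The ldd output above effectively answers the question: that sshd binary was built without libpam, and the 5.2p1 version points at the quest-openssh package rather than Ubuntu's openssh-server (4.7p1). A quick sketch for confirming which binary is which:

        # which package owns the sshd actually being started?
        dpkg -S /usr/sbin/sshd

        # where does each package put its sshd?
        dpkg -L openssh-server | grep 'sbin/sshd'
        dpkg -L quest-openssh | grep -i sshd

        # the distro build links PAM; a PAM-less build shows nothing here
        ldd /usr/sbin/sshd | grep libpam

    Since PAM support is a compile-time option in OpenSSH (--with-pam), no sshd_config setting can add it after the fact: either point the init script at the distro's PAM-enabled sshd, or rebuild/reinstall quest-openssh with PAM support, and then set UsePAM yes in sshd_config.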

    Read the article

  • Forms authentication failed between web server and sql server

    - by Matt Bear
    I've actually found the solution, but I'm trying to understand why it failed, and why my solution fixed the problem.

    We have an application that uses forms authentication between a web server and a SQL server; the web server runs Server 2008, and the SQL server runs Server 2008 R2 and SQL Server 2008. In August the SQL server was patched with .NET 3.5.1, the web server was untouched, and the forms authentication continued to work.

    A week ago we virtualized the web server onto our vSphere server because of failing hardware. Afterwards the forms authentication failed with event code 4005, detail code 50201, "The ticket supplied was invalid" (on the SQL server). In fact the SQL server started generating Schannel errors and began blue screening 3-4 times a day.

    At this point I touched the SQL server for the first time (ever). The errors were non-specific; any reference to them I could find had to do with either ZoneAlarm (which we don't run) or memory errors. So I applied Service Pack 1, which stopped the blue screening but did not fix the forms authentication. At that point we had a workaround, so we put it on the back burner while we completed another project, and I was able to get back on it last night.

    First thing was to adjust some code in the web.config file on the SQL server: nothing. Next was to regenerate and change out the machine key: still no change. Updating the DNS servers: no change. Finally I went through and installed all Windows updates, with two reboots (over RDP I installed a network card driver which failed, and I did not have my server room key; that was fun). After that, forms authentication was working again, and the SQL server stopped generating as many errors; I've gotten two Schannel errors since then.

    In short: forms authentication began failing when the web server was cloned onto a virtual machine, which (somehow?) caused the SQL server to blue screen and forms authentication to fail, and it could only be fixed by applying patches to the SQL server? (I'm wishing I had patched the servers one at a time so I could know for sure which patch on which server fixed it.)

    My question is: why did it fail, and why did patching fix it? I hate fixing something without fully understanding the why and how.
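
    For completeness, when I regenerated the machine key I gave both boxes the same explicit <machineKey> element, since a forms ticket issued on one server can only be validated on another if the validation and decryption keys match. A minimal sketch of what that looks like in web.config (the key values below are dummies, not ours; generate real ones):

        <system.web>
          <!-- both servers must carry identical values for tickets to validate across machines -->
          <machineKey validationKey="...64+ hex chars, elided..."
                      decryptionKey="...48+ hex chars, elided..."
                      validation="SHA1"
                      decryption="AES" />
        </system.web>

    In our case changing the keys in lockstep made no difference, which is part of why the 50201 "invalid ticket" errors were so confusing.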

    Read the article

  • Tomcat starts, but does not load properly

    - by user37136
    Hey guys, I've been working on this for a day now and still don't know what's wrong. I am essentially building a second environment for our web and app server. I got Apache to load up just fine, but Tomcat is proving to be difficult. It appears to start and load just fine, but when it comes to loading our application, it just gets stuck for 2-5 minutes and then shuts down. Here is the log on the original machine, where it works fine:

        2010-02-12 11:52:40,506 INFO  Web application servlet context is initializing...
        2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobType=[{1,Undefined}, {100,Completion}, {200,Plugging}, {300,R+M}, {400,Workover}, {500,Swab - tubing}, {600,Swab - fluid}]
        2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobTaskType=[{1,Undefined}, {100,Rod part}, {200,Tubing leak}, {300,Pump change}, {400,Stripping job}, {500,Long stroke}, {600,A/L optimization}]
        2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_wellType=[{1,Undefined}, {100,Rod pump}, {200,ESP}, {300,Injector}, {400,PC pump}, {500,Co-Rod}, {600,Flowing}, {700,Storage}]
        2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_assetType=[{1,Rig}, {100,Disabled rig}]
        2010-02-12 11:52:40,542 DEBUG Servlet context attribute added: select_state=[{AL,Alabama}, {AK,Alaska}, {AZ,Arizona}, {AR,Arkansas}, {CA,California}, {CO,Colorado}, {CT,Connecticut}, {DE,Delaware}, {FL,Florida}, {GA,Georgia}, {HI,Hawaii}, {ID,Idaho}, {IL,Illinois}, {IN,Indiana}, {IA,Iowa}, {KS,Kansas}, {KY,Kentucky}, {LA,Louisiana}, {ME,Maine}, {MD,Maryland}, {MA,Massachusetts}, {MI,Michigan}, {MN,Minnesota}, {MS,Mississippi}, {MO,Missouri}, {MT,Montana}, {NE,Nebraska}, {NV,Nevada}, {NH,New Hampshire}, {NJ,New Jersey}, {NM,New Mexico}, {NY,New York}, {NC,North Carolina}, {ND,North Dakota}, {OH,Ohio}, {OK,Oklahoma}, {OR,Oregon}, {PA,Pennsylvania}, {RI,Rhode Island}, {SC,South Carolina}, {SD,South Dakota}, {TN,Tennessee}, {TX,Texas}, {UT,Utah}, {VT,Vermont}, {VA,Virginia}, {WA,Washington}, {WV,West Virginia}, {WI,Wisconsin}, {WY,Wyoming}, {ACO,Atlantic Coast Offshore}, {FOAK,Federal Offshore Alaska}, {NGOM,Northern Gulf of Mexico}, {PCO,Pacific Coastal Offshore}]
        2010-02-12 11:52:40,542 INFO  KeyviewContextMonitor.contextInitialized: Loaded drop-down lists: com/key/portal/web/common/lists.properties
        2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.SERVLET_MAPPING=*.do
        2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.ACTION_SERVLET=org.apache.struts.action.ActionServlet@155d578
        2010-02-12 11:52:41,939 DEBUG Servlet context attribute added: org.apache.struts.action.MODULE=org.apache.struts.config.impl.ModuleConfigImpl@e08e9d
        2010-02-12 11:52:41,962 DEBUG Servlet context attribute added: org.apache.struts.action.FORM_BEANS=org.apache.struts.action.ActionFormBeans@b31c3c
        2010-02-12 11:52:41,967 DEBUG Servlet context attribute added: org.apache.struts.action.FORWARDS=org.apache.struts.action.ActionForwards@102c646
        2010-02-12 11:52:41,973 DEBUG Servlet context attribute added: org.apache.struts.action.MAPPINGS=org.apache.struts.action.ActionMappings@127276a
        2010-02-12 11:52:41,974 DEBUG Servlet context attribute added: org.apache.struts.action.MESSAGE=org.apache.struts.util.PropertyMessageResources@18cae13
        2010-02-12 11:52:41,984 DEBUG Servlet context attribute added: org.apache.struts.action.PLUG_INS=[Lorg.apache.struts.action.PlugIn;@f875ae
        2010-02-12 11:52:46,816 INFO  Sucessfully loaded application properties com/key/core/properties/application

    On my second environment, it never executes that last line. I start Tomcat with the exact same script:

        #!/bin/ksh
        export JAVA_HOME=/app/java
        export CATALINA_HOME=/app/tomcat
        export CATALINA_BASE=/app/keyview/appserver

        CATALINA_OPTS=" -Xms128m -Xmx800m \
            -Dapplication.props=com/key/core/properties/application \
            -Dlog4j.configuration=com/key/core/log/log4j.xml \
            -Djava.awt.headless=true \
            -Dlog4j.debug"
        export CATALINA_OPTS

        ${CATALINA_HOME}/bin/startup.sh

    The line I suspect is at fault is that final INFO line (the "Sucessfully loaded application properties" one), which never appears in the new environment. Thanks
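
    In case it helps with diagnosis, the next thing I plan to try is taking a thread dump while the new environment is wedged in that 2-5 minute hang, to see what the webapp init is blocking on. A standard-JVM sketch, with paths taken from the CATALINA_BASE above and assuming the stock catalina.sh redirects stdout to catalina.out:

        # find the Tomcat JVM and ask it for a thread dump (SIGQUIT);
        # the dump lands in catalina.out, not on the terminal
        PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
        kill -3 "$PID"
        tail -n 200 /app/keyview/appserver/logs/catalina.out

    If the dump shows a thread stuck in classloading, a database driver connect, or a DNS lookup, that should narrow down what differs between the two environments.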

    Read the article
