Search Results

Search found 12546 results on 502 pages for 'aidan host'.

Page 418/502 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • VirtualBox management interface unreliability

    - by Arlen Cuss
    I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question or of all VMs.

    For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything. Sometimes restarting /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime no data can be collected using VBoxManage. Such data includes the VM's IP, so if I hadn't recorded it earlier I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it.

    This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have twice as many VBoxHeadless processes running as before.

    Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting how often I call VBoxManage, just in case I accidentally bring 20 VMs to their knees.
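
    For illustration, here is a minimal sketch of the kind of caching wrapper hinted at in the last paragraph, so VBoxManage is only asked for a VM's IP once per interval. The cache path, TTL and guest property name are assumptions, not details from the post:

        #!/bin/bash
        # Cache each VM's IP so VBoxManage is queried at most once per CACHE_TTL
        # seconds per VM, instead of being hammered by every caller.
        CACHE_DIR=/var/cache/vbox-ip
        CACHE_TTL=300   # seconds

        get_vm_ip() {
            local vm="$1" cache="$CACHE_DIR/$1.ip"
            mkdir -p "$CACHE_DIR"
            # Reuse the cached value if it is still fresh
            if [ -f "$cache" ] && [ $(( $(date +%s) - $(stat -c %Y "$cache") )) -lt "$CACHE_TTL" ]; then
                cat "$cache"
                return 0
            fi
            # Otherwise ask VBoxManage once and cache the answer
            VBoxManage guestproperty get "$vm" /VirtualBox/GuestInfo/Net/0/V4/IP \
                | awk '/^Value:/ {print $2}' | tee "$cache"
        }

        get_vm_ip "my-vm-01"   # hypothetical VM name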

    Read the article

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail/calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail there is never a good time to back it up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me:

    1. IMAP (at least the implementation in Outlook) is *very slow*. Furthermore, if I move a large number of messages to the IMAP server, it sometimes blocks the entire Outlook client for hours, which is quite annoying.
    2. I can't use Exchange over HTTP for mail without launching a VPN session, because the client-side rules I have that organize my mail fail, and Outlook disables a rule if the IMAP server can't be reached.
    3. If I reply to a message from my IMAP store, I have to specify an SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing.

    The main advantage is that it is very easy to back up, with a couple of cron jobs that essentially do an rsync. Short of moving the IMAP server to my local host (which seems like it might have the same file-locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3) though.

    For problem (2), would it be possible to somehow tell Outlook that the IMAP server is "offline", and have it synchronize my changes during a periodic "send and receive"? If so, I wonder if it would block the Outlook client like it does in problem (1), and if it would be compatible with the client-only rules I use to sort my mail into folders. I've looked all over the options menu and have not found a way to tell Outlook not to use a certain account for sending mail, which would solve (3). Is anyone else crazy enough to be doing something like this? Any ideas?
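
    As a point of reference, a minimal sketch of the kind of cron-driven rsync backup mentioned above; the Maildir path, destination and schedule are assumptions:

        # crontab entry: mirror the IMAP server's mail store nightly at 02:30.
        # -a preserves ownership/permissions/times, --delete keeps the mirror exact.
        30 2 * * * rsync -a --delete /var/mail/imap/ /backup/imap/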

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible:

    1. Set up a wildcard cert for *.domain.com.
    2. Whenever I need an image, generate a number from a hash mod 5 of the filename, and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any filename always use the same subdomain, and therefore the browser should be able to cache the images.
    3. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.

    I am looking for feedback about this plan. My concerns are:

    - Will I get warnings when my page has https:// links to multiple subdomains?
    - Is the dynamic virtualhost record I'm talking about even possible?
    - Considering the amount of processing this would require, is it likely to produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half being changed on each page refresh.

    Thanks in advance for your feedback.
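
    To make the hash-based mapping in step 2 concrete, here is a small illustrative sketch (the subdomain names and hash length are examples, not part of the plan above):

        #!/bin/bash
        # Map a filename to one of five image subdomains by hashing the name,
        # so the same file is always served from the same host and stays cacheable.
        img_host() {
            local name="$1" hash
            hash=$(printf '%s' "$name" | md5sum | cut -c1-8)   # first 8 hex chars
            echo "img$(( 0x$hash % 5 )).domain.com"            # img0 .. img4
        }

        img_host "logo.png"     # always resolves to the same subdomain for the same name
        img_host "banner.jpg"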

    Read the article

  • _default_ VirtualHost overlap on port 443, the first has precedence

    - by Mohit Jain
    I have two Ruby on Rails 3 applications running on the same server (Ubuntu 10.04), both with SSL. Here is my Apache config file:

        <VirtualHost *:80>
            ServerName example1.com
            DocumentRoot /home/me/example1/production/current/public
        </VirtualHost>

        <VirtualHost *:443>
            ServerName example1.com
            DocumentRoot /home/me/example1/production/current/public
            SSLEngine on
            SSLCertificateFile /home/me/example1/production/shared/example1.crt
            SSLCertificateKeyFile /home/me/example1/production/shared/example1.key
            SSLCertificateChainFile /home/me/example1/production/shared/gd_bundle.crt
            SSLProtocol -all +TLSv1 +SSLv3
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example2.com
            DocumentRoot /home/me/example2/production/current/public
        </VirtualHost>

        <VirtualHost *:443>
            ServerName example2.com
            DocumentRoot /home/me/example2/production/current/public
            SSLEngine on
            SSLCertificateFile /home/me/example2/production/shared/iwanto.crt
            SSLCertificateKeyFile /home/me/example2/production/shared/iwanto.key
            SSLCertificateChainFile /home/me/example2/production/shared/gd_bundle.crt
            SSLProtocol -all +TLSv1 +SSLv3
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        </VirtualHost>

    What's the issue: on restarting my server it gives me output like this:

        * Restarting web server apache2
        [Sun Jun 17 17:57:49 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence
        ... waiting [Sun Jun 17 17:57:50 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence

    Googling why this happens turned up something like this:

        You cannot use name based virtual hosts with SSL because the SSL handshake (when the browser accepts the secure Web server's certificate) occurs before the HTTP request, which identifies the appropriate name based virtual host. If you plan to use name-based virtual hosts, remember that they only work with your non-secure Web server.

    But I'm not able to figure out how to run two SSL applications on the same server. Can anyone help me?
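
    For what it's worth, one hedged way to see which certificate each name is actually handed (and therefore whether the first *:443 vhost is shadowing the second) is an SNI-aware probe with openssl, run on the server itself; the hostnames are the ones from the config above and "localhost" is an assumption:

        # Ask the server for its certificate while sending each SNI name.
        openssl s_client -connect localhost:443 -servername example1.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject
        openssl s_client -connect localhost:443 -servername example2.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject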

    Read the article

  • How should I deploy my JVM-based web application on ubuntu?

    - by Pieter Breed
    I've developed a web application using Clojure/Compojure (JVM based) and while developing I tested it using embedded Jetty running on 0.0.0.0:8080. I would now like to deploy it to run on port 80 on Ubuntu. I do dynamic virtual hosting, so any request for any host that arrives on port 80 should be handled by my application. The issues that worry me are:

    1. I can still run it embedded, but I'm worried about running my app as root (needed for binding to port 80). I'm not sure if I can 'give up root' when in the JVM. Do I need to be concerned by this?
    2. Serving web applications is a known problem and I should be using known solutions for it (Jetty or Tomcat), but Tomcat in particular seems very heavyweight. Besides, I only have one application that listens to /* and does routing internally (with Compojure/Ring). What I'm trying to say is that Tomcat by default assigns WARs to subfolders, which I don't want.

    So basically what I need is some very safe way of binding to port 80 on Ubuntu that can, with minimal interference, send all requests to my app. Any ideas?
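
    One commonly suggested approach, sketched here under the assumption that the app stays on 8080, is to leave the JVM unprivileged and let the kernel redirect port 80 to it:

        # Redirect external HTTP traffic arriving on port 80 to the app on 8080.
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
        # Connections made from the host itself skip PREROUTING, so cover those too.
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080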

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2

    - by Triztian
    Hello all. It seems my involvement with computers has grown and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share; the folder is in the public user folder and its permissions should be (and I believe they are) read/write.

    The problem is that I can't connect from a remote machine; I mean I don't know the way it should be accessed. The server has a public IP and we also use it as the host for our website; I don't know if that affects it. The folder will be used as the "keeper" for the QuickBooks company files and has the database server manager installed. I've tried setting up a VPN connection to the server but with no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. The share also has a location displayed when I right-click and choose Properties.

    Here's what I've tried:

    1. Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "Connection fail error 800". I suppose this is because in the domain field I entered the server's workgroup.
    2. Right-click, add network connection (Windows 7): went through the wizard until I reached the point of entering the location; tried many things, including the name in the share's properties (\\SOMETHING\Share), http://www.example.com, and the IP address.

    I'm quite unfamiliar with this, so I have my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.

    Read the article

  • SharePoint crawl not indexing main site

    - by user22215
    Guys, I'm having some strange search issues going on with my main portal application. First off, let me give you a little background on the problem web app. Our SharePoint environment was originally set up by a consultant who did not follow best practices. She used one web app to house our company's intranet site, SSP, and My Sites. Since then I have provisioned a new SSP that I have segmented correctly, and I moved all of our other sites over to the new SSP without any problems. However, I could not assign the main portal app to the new SSP, since the portal app housed the SSP site collection. So I deleted the SSP site collection, after that I deleted the SSP, and then assigned the portal app to my new SSP.

    Now this is where the problem starts: when I attempt to crawl this application, the crawl starts then stops 5 seconds later with a status of success, and it reports that 1 item was successfully crawled. The funny thing is the main portal app has nearly 30000 items. I have tracked the problem down to the web app: if I create a test web app and restore the content into it, I have no problem crawling all 30000 items. Also, all of my other web apps that use the same SSP have no problem completing crawls. I don't see anything in the ULS logs or Server 2003's event viewer. Also, I'm using a separate dedicated index server that's configured to crawl itself via a host file entry. I would like to fix this problem without having to recreate our main portal site, due to the fact that we have several custom code modifications where DLLs were registered to the IIS bin folder; I don't even want to get into the Silverlight mods that were done. Any help with this problem is much appreciated.

    Same problem as mine: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/MS-SharePoint/Q_23885820.html

    Read the article

  • How to debug modsecurity_audit_log

    - by max87
    I was accessing www.example.com/RestAPI/index.php/tweets.json on my server. The modsec_audit.log showed the following entry, but there are no related errors/warnings in modsec_debug.log. I can see the internal server error logged in example-error_log. How can I debug this Internal Server Error?

        --8560e90b-A--
        [21/Mar/2012:07:01:52 +0000] T2l84H8AAAEAAGxPZ@QAAAAG x.x.x.x 33101 x.x.x.x 80
        --8560e90b-B--
        GET /RestAPI/index.php/tweets.json HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Cookie: __utma=159129855.1463065063.1331789485.1331789485.1331789485.1; __utmz=159129855.1331789485.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 8cb6a414cf5ec1919864de0e80bea4da=0es7dcu0p10cocfpferb2lddi0; 8926e4f3c475bb6fcacb409299f1bd27=53cf8c5e6bf78ea45096945377e6d609
        Connection: keep-alive
        Cache-Control: max-age=0
        --8560e90b-F--
        HTTP/1.0 500 Internal Server Error
        X-Powered-By: PHP/5.3.5
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        --8560e90b-H--
        Apache-Handler: php5-script
        Stopwatch: 1332313312358005 130428 (- - -)
        Producer: ModSecurity for Apache/2.5.12 (http://www.modsecurity.org/); core ruleset/2.0.5.
        Server: Apache
        --8560e90b-Z--
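
    Since the audit entry shows the 500 but no rule hit, one hedged next step (log paths are guesses) is to correlate the transaction elsewhere and look for the PHP-side error rather than a ModSecurity one:

        # Find every log line that mentions this transaction's unique ID.
        grep -R "T2l84H8AAAEAAGxPZ@QAAAAG" /var/log/httpd /var/log/apache2 2>/dev/null
        # A 500 with Content-Length: 0 from php5-script usually points at a PHP fatal;
        # check PHP's own error_log for the application as well.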

    Read the article

  • Xen Bridge only working when IP Assigned

    - by m.sr
    Hey! I've just had an (to me) obscure situation. I have a Xen server with bridged networking. Everything had worked fine for months. A while ago I configured a second bridge; only some DomUs get an interface on this bridge, and my Dom0 doesn't need to / shouldn't use it. Five minutes ago, while rebooting the Xen host (because of an unrelated problem with the UPS), I decided to remove the fixed IP from the Dom0 interface that belongs to the second bridge.

    After the reboot I noticed that none of the interfaces on the second bridge were reachable. I couldn't find a problem; everything was just like before the reboot, except that the Dom0 interface had no IP address. After a while I tried to give the Dom0 interface of this bridge an IP again and... boom... everything was up and running again.

    Why is it important to have the Dom0's interface on a bridge configured? Even when it is configured 'wrong' (completely different network settings from the network actually hanging on the bridge), everything works fine. I don't get it. Could someone please explain? Thanks a lot!
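
    A hedged guess at what changed, with an illustrative check (interface names are examples, not from the post): with no address configured, the distribution's network scripts may simply never bring the Dom0-side interface or the bridge up, so nothing gets forwarded. Whether bringing them up without any IP is enough can be tested like this:

        ip link set dev eth1 up        # the physical NIC enslaved to the bridge
        ip link set dev xenbr1 up      # the bridge itself
        ip addr show dev xenbr1        # should report state UP even with no address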

    Read the article

  • Why do disk images hosted on a read-only HFS+ partition behave differently?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. Problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as the host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
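
    On the hardware-support question, a quick and commonly cited check for whether a given developer PC exposes the Intel VT-x / AMD-V extensions that KVM's full virtualization needs:

        # A non-zero count means the CPU advertises vmx (Intel) or svm (AMD).
        egrep -c '(vmx|svm)' /proc/cpuinfo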

    Read the article

  • HTTP cache for my virtual machines

    - by MathematicalOrchid
    I have several Linux virtual machines running on my home PC. One of the quirks of Linux is that every time you run a package manager, it wants to "refresh" the configured software repositories - which basically means it wants to download a file from the Internet. If I revert to an earlier snapshot of the VM, then next time I run the package manager it will re-download the exact same data again [since it no longer exists in the VM]. It seems a shame to waste bandwidth endlessly downloading the same data over and over again, so I was wondering if there's some way I can set up some kind of HTTP proxy server that caches downloaded files. I have no idea how you would do such a thing though. In particular, it needs to be set up so that the VMs don't need to "know" that the cache is there; it needs to be transparent. But I don't know how to do that. Any suggestions on what software I'd need to use? It would be nice if I could run it under the Windows host OS, but running a small VM with a Linux guest is also possible...
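
    It is not the transparent setup asked for above, but for comparison, a minimal sketch of the low-effort alternative: running a package cache on the host (apt-cacher-ng is one option, default port 3142) and pointing each Debian/Ubuntu guest at it. The host address below is an example:

        # Inside each guest: send all apt downloads through the host-side cache.
        echo 'Acquire::http::Proxy "http://192.168.1.10:3142";' | \
            sudo tee /etc/apt/apt.conf.d/01proxy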

    Read the article

  • LDAP Account Locked Out Sporadically after Password change - Finding the source of invalid attempts

    - by CityView
    On a small network of machines (<1000) we have a user whose account is being locked out after an indeterminate interval following a password change. We are having severe difficulties finding the source of the invalid logon attempts, and I would appreciate it greatly if some of you could go through your thought process and the checks you would perform in order to fix the problem. All I know for sure is that the account is locked out several (5+) times a day; I can't even be sure it's due to failed login attempts, as there is no record of failure until the account is locked.

    So far I have tried:

    - Logging the account out of everything we can think of and back in with the new password
    - Scanning the user's box for any non-standard software which might perform an LDAP lookup
    - Checking all installed services on our production boxes to confirm none are attempting to run under the account
    - Changing the user back to their old password (the problem persists, so perhaps the password change is a red herring)
    - Wireshark on a box where lots of LDAP authentication is performed (rejects only occur after the account is already locked out)
    - Clearing the credential cache in Control Panel - User Accounts - Advanced
    - Looking at the local

    I'm at a loss for what to try. I am happy to try any suggestions you have in order to diagnose the issue. I think my question boils down to a simple request: I need a technique for deriving the source (application/host) of the invalid login attempts which are causing the account to be locked. I'm not sure if that's even possible, but I suspect there must be more I can try.

    Many thanks, CityView

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions:

    1. Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN % numbers in iotop, which I find hard to explain because the target is /dev/null.
    2. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
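
    For comparison, a hedged variant of the same test that bypasses the page cache (the device names are examples); direct I/O usually makes parallel sequential-read figures easier to interpret and avoids the memory pressure that shows up as SWAPIN in iotop:

        # Read each multipath device in parallel with O_DIRECT and wait for all of them.
        for dev in /dev/mapper/mpath1 /dev/mapper/mpath2 /dev/mapper/mpath3; do
            dd if="$dev" of=/dev/null bs=1M count=3000 iflag=direct &
        done
        wait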

    Read the article

  • Deleted printers keep coming back - and multiplying

    - by MojoDK
    My users are on 2012 R2 RDS Session Host servers. I've used "Deploy Printers" (from Print Manager) to deploy 4 printers. Over the last week I've had a lot of problems where users can't print. If I deleted a printer and added it again, they could print just fine. Now I've removed all printer deployment from the GPO, and I have no printers in any login scripts. I did a gpupdate /force, but all 4 printers are now listed 3 times... If I delete the printers and log off and back on, all the printers pop up again. Sigh! This is driving me nuts.

    This script doesn't show any of the "SVFREJA" printers:

        Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
        Set colPrinters = objWMIService.ExecQuery("Select * From Win32_Printer")

        If colPrinters.Count <> 0 Then  ' If there are some network printers
            Dim s
            s = ""
            For Each objPrinterInstalled In colPrinters  ' For each network printer
                s = s + objPrinterInstalled.Name + chr(13)
            Next
            msgbox s
        End If

    It gives me this result... (sorry for the big picture). My problem is not with the "redirected" printers; my problem is that I have several printers with the same name (on SVFREJA) and I can't get rid of them. Any idea why I can't get rid of the orphaned printers?

    Read the article

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 errors after running fine for a few days; the website is a Rails app using thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:

        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }

        server {
            listen 80;  # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
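
    When the 502s start, one simple hedged sanity check (ports taken from the upstream block above) is to hit each thin instance directly, bypassing nginx, to see whether the backends themselves have stopped answering:

        for port in 3009 3010 3011; do
            curl -s -o /dev/null -w "port $port -> %{http_code}\n" http://127.0.0.1:$port/
        done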

    Read the article

  • Updating WordPress 3.6 to 3.7 via admin area on Nginx VPS hangs and fails

    - by harryg
    So I have a few WordPress sites running on my VPS (Ubuntu 12.10, Nginx, php-fpm 5.4). The sites are all on separate vhosts and use their own config files (albeit similar to each other) and vary in complexity. One is very simple and uses minimal plugins. When I try to update core on any site via the admin area, I click the "Update Now" button (which should run the script in wp-admin/update-core.php); the page hangs for a minute or two before going to a blank admin page (i.e. the wp-admin menu bars and header bar are there but there is no content in the body of the page). Visiting another admin page via the still-visible menu bar reveals that the core has not been updated. Checking the error log I see this entry:

        2013/10/29 23:20:48 [error] 9384#0: *5318248 upstream timed out (110: Connection timed out) while reading upstream, client: --.---.--.---, server: www.mysite.com, request: "POST /wp-admin/update-core.php?action=do-core-upgrade HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "mysite.com", referrer: "http://mysite.com/wp-admin/update-core.php"

    This didn't happen in the past on older updates, and the rest of the site, including updating plugins, works fine. Any ideas? Could it be as simple as a timeout error? I find that unlikely, as the server should munch through a WP upgrade in seconds.
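
    Not an explanation for the timeout, but one hedged workaround worth noting (it assumes WP-CLI is installed, which the post does not say): run the core update from the shell, so no fastcgi read timeout is involved at all:

        cd /var/www/mysite     # path is an example
        wp core update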

    Read the article

  • Wake for Network Access Apache server in OS X 10.8, follow-up

    - by Gary
    Sorry, I can't seem to post this response within the same thread. Thank you both (Zoredache and Gordon) for your answers. But the fix seems temporary. I entered the command you suggested, and it seemed to work:

        ...smith$ Registering Service ApacheNoDoz._http._tcp.local port 80
        DATE: ---Fri 14 Sep 2012---
        12:04:15.813  ...STARTING...
        12:04:16.566  Got a reply for service ApacheNoDoz._http._tcp.local.: Name now registered and active

    So, I checked for it on my G5:

        Browsing for _http._tcp
        Timestamp     A/R  Flags  if  Domain   Service Type  Instance Name
        (lots of Bonjour printers omitted)...
        12:07:38.370  Add      2   4  local.   _http._tcp.   ApacheNoDoz
        12:07:45.921  Rmv      0   4  local.   _http._tcp.   ApacheNoDoz

    So, it was running at 12:07:38, at which time the host was asleep. But shortly after, the advertisement seems to have been removed. I don't know why. Does this mean that I can never let the CPU sleep, or is there something else I have to set? Thanks, again.

    Read the article

  • Slow browsing through IE on Windows Server 2012

    - by Volodymyr
    We've run into a strange issue on freshly installed servers. H/W: IBM server x3550 M4 7914; OS: Windows Server 2012 Std. When we try to browse on the servers through IE, not all sites open, or it takes too long to open a page; i.e. very few of them can be opened. Local firewalls are disabled. The servers are in a new subnet and traffic is allowed for it. The VLAN is configured properly. Another Windows Server 2012 host is running OK and Internet access works fine, but it is a VM running on Hyper-V 2012. No proxy is used on the network. At the same time, if one tries to establish a telnet session to any site on ports 80/443, it does work, and Google works as well. I've tried configuring a single QLogic adapter to check if the issue remains; it does. Teaming is configured with the means of QLogic, not by the built-in functionality. IE Enhanced Security is disabled. IE settings were reset, more than once. Why certain sites work while others don't, I don't know. I also tried to disable ECN capability and restart the server, with no luck:

        netsh int tcp set global ecncapability=disabled

    Any thoughts?

    UPD1: VMQ is disabled. The servers are not running Hyper-V.
    UPD2: The servers were rebuilt from scratch (got a mail a few minutes ago). The issue still remains. Teaming is now configured with the means of Windows Server 2012.

    Read the article

  • Login failed for user 'XXX' on the mirrored sql server

    - by hp17
    Hello, we have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:

    1. "Login failed for user 'userid'"
    2. "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"

    We are running SQL 2005 and have a principal and a mirror DB (synchronous). When these exceptions are thrown, I look at the SQL error logs on the mirror DB and notice the failed login messages in there. The principal DB is running fine and the other web apps are working great. This will happen for maybe 10 minutes, then the app pool recycles and it starts hitting the principal DB again. Is there a configuration I have incorrect? My theory is that our principal DB is forwarding the request to the mirror, but that should never happen. Any help??

    Read the article

  • How can I setup a Proxy I can sniff traffic from using an ESX vswitch in promiscuous mode?

    - by sandroid
    I have a pretty specific requirement, detailed below. Here's what I'm not looking for help with, to keep things tidy and on topic:

    - How to configure a standard proxy
    - Any ESX setup required to facilitate traffic sniffing
    - How to sniff traffic
    - Any changes in design (my scope limits me)

    I need to set up a test environment for a network-sniffing based HTTP app monitoring tool, and I need to troubleshoot a client issue, but he only has a prod network, so making changes to the config on the client's system "just to try" is costly. The goal here is to create a similar system in my lab, hit the client's webapp, and redirect my traffic - using a proxy - into the lab environment. The reason I want to use a proxy is so that only this specific traffic is redirected for all to see, and not all my web traffic (like my visits to serverfault :P).

    Everything will run inside an ESX 4.1 machine. In there, there is a traffic-collection vswitch in promiscuous mode that is not on the local network for security reasons. The VM containing our listening agent is connected to this vswitch. On the same ESX host, I will set up a basic Linux server and install a proxy (either apache + mod_proxy or squid, doesn't matter). I'm looking for ideas on how to deploy this for my needs so I can then figure out how to set it up accordingly.

    Some ideas I've had were to set up two proxies and have them talk to each other through this vswitch in promiscuous mode, but it seems like a lot of work. Another idea is a dual-homed proxy, but I've never seen/done that before so I'm not sure how doable it is for what I'd like. I am OK with setting up a second vswitch in promiscuous mode to facilitate this if need be, but I cannot put the vswitch on the LAN (which is used so my browser would communicate with the proxy) in promiscuous mode. Any ideas are welcome.

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. The following are the steps I was planning to perform:

    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    The reason I tried to use Veeam FastSCP rather than VMware Standalone Converter is that it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup and Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow, registering only 1 KB/s as opposed to the normal 1 MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter.

    It seems that the DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.

    Read the article

  • Trying to send email from nagios

    - by batman
    I'm very new to Nagios. I'm trying to send email alerts, but that doesn't seem to be working. In my Nagios log I can see this:

        SERVICE ALERT: Appserver;Tmp directory;CRITICAL;HARD;1;

    Host notifications are generated via email just fine; only service alerts are not working. And when I look at the sendEmail log I can see this:

        Sep 14 12:38:39 x.x.x.x. sendEmail[23005]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:39:39 x.x.x.x.x. sendEmail[23129]: ERROR => You must specify a 'from' field! Try --help.
        Sep 14 12:40:39 x-x-x-x-x sendEmail[23233]: ERROR => You must specify a 'from' field! Try --help.

    Where am I making the mistake? Thanks in advance.
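
    The error suggests the service-notification command never passes a sender address. Before digging into the Nagios command definition, one hedged way to confirm sendEmail itself works when given one (the addresses and SMTP host below are examples):

        sendEmail -f nagios@example.com -t admin@example.com \
                  -u "test alert" -m "test body" -s 127.0.0.1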

    Read the article

  • lighttpd: why does using a port >= 9000 not work properly?

    - by yejinxin
    I had a lighttpd server which works normally. I can access this website from outside(non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without php or mysql. Now I want to copy this website as a test one(using another port) in the same machine. And I do not want to use virtual host. So I just copy the whole files of original server, including lighttpd's bin/ conf/ htdocs/ lib/ and so on folders. And I made some required change, including changing lighttpd.conf. Now what I'm confused is, if change the port to a number that is less than 9000, it works perfectly. But if the port is changed to a number that is equal or greater than 9000, lighttpd can start, but I can not access the new website from outside, while I do can access the new website from INSIDE(I mean in the same LAN or localhost). The access log from INSIDE is like below: vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" " curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6" Command I used to start lighttpd is: bin/lighttpd -f conf/lighttpd.conf -m lib/ -D My lighttpd.conf is like: server.modules = ( "mod_access", "mod_accesslog", ) var.rundir = "/home/work/lighttpd_9876" server.port = 9876 server.bind = "0.0.0.0" server.pid-file = var.rundir + "/log/lighttpd.pid" server.document-root = var.rundir + "/htdocs/" var.cronolog_path = "/home/work/lighttpd_9876/cronolog/sbin/cronolog" server.errorlog = ... accesslog.filename = ... ... So why is this happening? I've tried several diffrent ports, still the same. Isn't that ports between 8000 and 65535 are the same?

    Read the article

  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to serve all my static files and to proxy_pass everything else to a Node.js server. It's working fine, but I'm having difficulty rewriting the URL so that it removes the .html file extension.

        upstream my_upstream {
            server 127.0.0.1:8000;
            keepalive 64;
        }

        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.org/public;
            access_log /var/logs/staging.mysite.org.access.log;
            error_log /var/logs/staging.mysite.org.error.log;

            location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
                rewrite (.*)\.html $1 permanent;
                try_files $uri.html $uri/ /index.html;
                access_log off;
                expires max;
            }

            location / {
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Connection "";
                proxy_http_version 1.1;
                proxy_cache one;
                proxy_cache_key sfs$request_uri$scheme;
                proxy_pass http://my_upstream;
            }
        }

    Read the article
