Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a WatchGuard Firebox), to a server in our office. All of a sudden the app has slowed dramatically. We have determined that the webservice is timing out at random when called externally (it's fine when called from within the office network). I'm pretty certain it's our connection that is dropping the webservice call, so I've written a quick php/curl script which calls the webservice over many iterations and shows the various timings. Below is an example of the output, showing both a failed and a successful call (with a 5-second timeout):

      #  http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
      1  0          0.000096         0.0342        0.0000            0.0000              0.0342
      2  200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure whether this shows that the connection has successfully passed the firewall, or whether the firewall could still be causing the issue. Our firewall is logging a series of "nondata event" messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them on Google, and I'm not sure whether they fit in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
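
    For anyone who wants to reproduce this kind of per-phase timing without PHP, a rough Python equivalent of the php/curl loop described above looks like the sketch below (it assumes the pycurl package is installed; the endpoint URL is a placeholder, not the real service):

      import pycurl

      URL = "https://office.example.com/rest/endpoint"   # placeholder endpoint
      ITERATIONS = 50

      for i in range(1, ITERATIONS + 1):
          c = pycurl.Curl()
          c.setopt(pycurl.URL, URL)
          c.setopt(pycurl.TIMEOUT, 5)                               # same 5-second cap as above
          c.setopt(pycurl.WRITEFUNCTION, lambda chunk: len(chunk))  # discard the response body
          try:
              c.perform()
          except pycurl.error:
              pass  # timed out or failed; the phase timings below still show how far it got
          print(i,
                c.getinfo(pycurl.RESPONSE_CODE),
                round(c.getinfo(pycurl.NAMELOOKUP_TIME), 6),
                round(c.getinfo(pycurl.CONNECT_TIME), 4),
                round(c.getinfo(pycurl.PRETRANSFER_TIME), 4),
                round(c.getinfo(pycurl.STARTTRANSFER_TIME), 4),
                round(c.getinfo(pycurl.TOTAL_TIME), 4))
          c.close()

    A request that dies between connect and pretransfer shows a non-zero connect_time and zeros for the later phases, exactly like iteration #1 in the table above.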

    Read the article

  • Nginx + PHP5-FPM repeated cut outs 502

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all those questions, I'm still unable to find a solution. I'm using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, and roughly one in every four requests ends in a timeout and failure. This isn't a load or heavy-traffic issue; it happens even in a dev environment with one person. I am seeing this across three 1GB machines, each with the same configuration and the same problem.

    fastcgi_params:

      fastcgi_param QUERY_STRING $query_string;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param CONTENT_TYPE $content_type;
      fastcgi_param CONTENT_LENGTH $content_length;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_param REQUEST_URI $request_uri;
      fastcgi_param DOCUMENT_URI $document_uri;
      fastcgi_param DOCUMENT_ROOT $document_root;
      fastcgi_param SERVER_PROTOCOL $server_protocol;
      fastcgi_param GATEWAY_INTERFACE CGI/1.1;
      fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
      fastcgi_param REMOTE_ADDR $remote_addr;
      fastcgi_param REMOTE_PORT $remote_port;
      fastcgi_param SERVER_ADDR $server_addr;
      fastcgi_param SERVER_PORT $server_port;
      fastcgi_param SERVER_NAME $server_name;
      fastcgi_param REDIRECT_STATUS 200;

    /etc/php5/fpm/main.conf:

      ; FPM Configuration ;
      ;include=/etc/php5/fpm/*.conf

      ; Global Options ;
      pid = /var/run/php5-fpm.pid
      error_log = /var/log/php5-fpm.log
      ;log_level = notice
      ;emergency_restart_threshold = 0
      ;emergency_restart_interval = 0
      ;process_control_timeout = 0
      ;daemonize = yes

      ; Pool Definitions ;
      include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:

      [www]
      listen = 127.0.0.1:9000
      ;listen.backlog = -1
      ;listen.allowed_clients = 127.0.0.1
      ;listen.owner = www-data
      ;listen.group = www-data
      ;listen.mode = 0666
      user = www-data
      group = www-data
      ;pm.max_children = 50
      pm.max_children = 15
      ;pm.start_servers = 20
      pm.min_spare_servers = 5
      ;pm.max_spare_servers = 35
      pm.max_spare_servers = 10
      ;pm.max_requests = 500
      ;pm.status_path = /status
      ;ping.path = /ping
      ;ping.response = pong
      request_terminate_timeout = 30
      ;request_slowlog_timeout = 0
      ;slowlog = /var/log/php-fpm.log.slow
      ;rlimit_files = 1024
      ;rlimit_core = 0
      ;chroot =
      chdir = /var/www
      ;catch_workers_output = yes
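
    One way to pin down how often the failures occur is to hammer the dev box with a small script and tally the responses; the sketch below assumes the Python requests package is installed and uses a placeholder URL:

      import collections
      import requests

      URL = "http://dev.example.com/"   # placeholder for the affected vhost
      counts = collections.Counter()

      for _ in range(200):
          try:
              r = requests.get(URL, timeout=35)   # just over request_terminate_timeout = 30
              counts[r.status_code] += 1
          except requests.RequestException:
              counts["timeout/error"] += 1

      print(counts)   # e.g. Counter({200: 150, 502: 50}) would match the "1 in 4" failure rate

    While the loop runs it is also worth watching /var/log/php5-fpm.log: if the pool is being exhausted, php-fpm normally logs a "server reached pm.max_children setting" warning, which would point at the pm.* values rather than at nginx.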

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (bind 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us.

    My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see three possible types of solutions:

      • Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
      • Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
      • Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" DNS servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but bind does not support broadcasting or multicasting, and the docs I could find suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
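
    A quick way to measure what the resolver behaviour actually costs is to time lookups with one of the three servers deliberately stopped; the sketch below assumes the dnspython package (version 2.x, which provides resolver.resolve()) and uses placeholder server IPs and record names:

      import time
      import dns.exception
      import dns.resolver

      resolver = dns.resolver.Resolver(configure=False)
      resolver.nameservers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # placeholder internal servers
      resolver.timeout = 1    # per-server timeout, comparable to "options timeout:1" in resolv.conf
      resolver.lifetime = 3   # total budget across the whole server list

      start = time.time()
      try:
          answer = resolver.resolve("host.example.internal", "A")
          print(answer[0], "in", round(time.time() - start, 2), "s")
      except dns.exception.DNSException as exc:
          print("lookup failed after", round(time.time() - start, 2), "s:", exc)

    Comparing the timings with the first listed server up versus down gives a concrete number for the per-lookup penalty you are trying to engineer away, and the same timeout/attempts/rotate values can then be mirrored in the options line of /etc/resolv.conf.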

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same disk-contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
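
    For reference, adding a dedicated filegroup with several data files is plain ALTER DATABASE DDL; the sketch below simply drives it from Python (it assumes the pyodbc package, a trusted connection, and placeholder database, server, and file-path names, none of which come from the question):

      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes",
          autocommit=True,   # ALTER DATABASE cannot run inside a user transaction
      )
      cur = conn.cursor()
      cur.execute("ALTER DATABASE MyDb ADD FILEGROUP HotTables")
      for i in range(1, 5):   # e.g. one file per core on a four-core box
          cur.execute(
              "ALTER DATABASE MyDb ADD FILE "
              "(NAME = HotTables%d, FILENAME = 'E:\\SQLData\\HotTables%d.ndf', "
              "SIZE = 1GB, FILEGROWTH = 512MB) TO FILEGROUP HotTables" % (i, i)
          )
      conn.close()

    Whether one file per core actually helps is workload-dependent; the general guidance is that multiple files matter most for allocation-heavy workloads (classically tempdb), so it is worth benchmarking before and after rather than assuming the core count is the right number.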

    Read the article

  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load the site, I get the following error:

      Microsoft OLE DB Provider for SQL Server error '80004005'
      [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied

    If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so, and then just fails with the above message. During this waiting period, I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its configuration with regard to Access databases, and I also don't know what version the database is expected to be. The VB code in question:

      set oConn=server.createobject("adodb.connection")
      DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb"
      oConn.Open DSNtemp
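
    One way to separate "the MDB file is unreadable" from "the site's connection string is wrong" is to open the file with a completely different client on the same box; the sketch below assumes Python with the pyodbc package and the classic Jet ODBC driver name that ships with that era of Windows (both assumptions, not part of the question):

      import pyodbc

      conn = pyodbc.connect(
          r"DRIVER={Microsoft Access Driver (*.mdb)};"
          r"DBQ=D:\fullPathGoesHere\db\sitedb.mdb"
      )
      cur = conn.cursor()
      tables = [row.table_name for row in cur.tables(tableType="TABLE")]
      print("opened OK, user tables:", tables)
      conn.close()

    If this opens instantly while the site still hangs, the MDB itself is fine and the error text becomes the more interesting clue: it names the SQL Server OLE DB provider, not Jet, which suggests some other connection string in the site may be the one actually timing out.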

    Read the article

  • Basic OpenVPN setup

    - by WalterJ89
    I am attempting to connect 2 Windows 7 (x64 + x32) computers (there will be 4 in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the client remotely regardless of its location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets even though I have the mask set to 255.255.255.0. The server ends up as 10.8.0.1 255.255.255.252 and the client as 10.8.0.6 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs.

    This is the server config (removed all the extra lines because it was too ugly):

      port 1194
      proto udp
      dev tun
      ca keys/ca.crt
      cert keys/server.crt
      key keys/server.key  # This file should be kept secret
      dh keys/dh1024.pem
      server 10.8.0.0 255.255.255.0
      ifconfig-pool-persist ipp.txt
      client-to-client
      duplicate-cn
      keepalive 10 120
      comp-lzo
      persist-key
      persist-tun
      status openvpn-status.log
      verb 6

    This is the client config:

      client
      dev tun
      proto udp
      remote thisdomainis.random.com 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca keys/ca.crt
      cert keys/client.crt
      key keys/client.key
      ns-cert-type server
      comp-lzo
      verb 6

    Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just the subnet or route issue. Thank you
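
    The masks and addresses quoted above are consistent with OpenVPN's default net30 addressing, in which every Windows client is handed its own /30 out of the 10.8.0.0/24 pool rather than sharing one flat subnet. A standard-library check (no assumptions beyond the addresses already in the question) makes the "different subnets" observation concrete:

      import ipaddress

      server_net = ipaddress.ip_network("10.8.0.0/30")   # 10.8.0.1 with mask 255.255.255.252
      client_net = ipaddress.ip_network("10.8.0.4/30")   # 10.8.0.6 with mask 255.255.255.252

      print(ipaddress.ip_address("10.8.0.1") in server_net)   # True
      print(ipaddress.ip_address("10.8.0.6") in server_net)   # False -- not on the server's /30
      print(ipaddress.ip_address("10.8.0.6") in client_net)   # True

    Client-to-client traffic therefore has to be routed through the server rather than switched on one flat subnet, which is why the addressing looks odd even when the tunnels themselves come up cleanly.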

    Read the article

  • Setting SVN permissions with Dav SVN Authz

    - by Ken
    There seems to be a path-inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user and wish to restrict some /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

      [groups]
      grp_W = a, b, c, g
      grp_X = a, d, f, e
      grp_Y = a, e,

      [/]
      * =
      @grp_Y = rw

      [somerepo1:/projectPot]
      @grp_W = rw

      [somerepo2:/projectKettle]
      @grp_X = rw

    What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. If I flip the access ordering, where I give everyone access and restrict it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it behaves the same with user-specific provisions, even fully defined such as:

      [/]
      a = rw
      b =
      c =
      d =
      e =
      f =
      g = rw

      [somerepo1:/projectPot]
      a = rw
      b = rw
      c = rw
      d =
      e = rw
      f =
      g = rw

      [somerepo2:/projectKettle]
      a = rw
      b
      c
      d = rw
      e = rw
      f = rw
      g

    Which yields the exact same result. According to the documentation I'm following all protocols, so this is insane. Running on Apache2 with dav_svn.

    Read the article

  • Ubuntu 11.10 ATI Drivers vesa park

    - by Matthias
    This is probably not an issue; from all I can tell, my hardware and drivers are properly installed. However, when I go to System Settings - System Info - Graphics, I get Driver: VESA:PARK, Experience: Standard. My graphics card is an ATI Mobility Radeon HD 5470 512MB. I am pretty sure it's not a same-die GPU, since there is a fan exhaust at the side of my laptop which I presume is the exhaust for the GPU. I have no clue whatsoever what this means. I installed the ATI drivers first using the 'additional drivers' method. However, I also decided to look up a manual installation via the terminal, since I've had problems before with Ubuntu and ATI cards. I used wget and something along the lines of sh and dpkg -i; I can't recall exactly, I took them from another Stack Overflow answer. Anyway, it seems everything is installed properly since it shows up with these commands:

      sudo lshw -C video
      fglrxinfo

    However, the first command seems to detect hardware, not the driver per se, although the driver is probably needed to detect the hardware anyway, which would indicate it's properly installed. I am still not sure about that VESA:PARK thing though; I'd like to know what it means. Also, if someone happens to know a good way of testing whether the GPU is connected/being used - some sort of benchmark maybe - I'd like to hear it. P.S. I can find my way around in Ubuntu, but I would probably still be considered a rookie by more experienced users.
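
    For the "is the GPU actually being used" part of the question, a quick check is to compare what the PCI bus reports against the active OpenGL renderer; the sketch below just wraps lspci and glxinfo (glxinfo comes from the mesa-utils package - an assumption, as is having Python available):

      import subprocess

      lspci = subprocess.check_output(["lspci"]).decode()
      for line in lspci.splitlines():
          if "VGA" in line or "Radeon" in line:
              print(line)            # the card the bus sees

      glx = subprocess.check_output(["glxinfo"]).decode()
      for line in glx.splitlines():
          if "renderer string" in line or "direct rendering" in line:
              print(line)            # the renderer actually in use

    If fglrx is really driving the card, the renderer string mentions ATI/AMD; if it still reports a software or generic Mesa renderer, the proprietary driver is installed but is not the one being used.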

    Read the article

  • CIFS mounted drive setting "stick-bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs whose permissions I simply cannot change once it is mounted. Here is a breakdown of what's going on:

      • All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine.
      • I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

      • I've run chmod a million different ways, all of which report successfully changing the permissions. However, it doesn't.
      • The issue began after I updated from 8.04 to 8.10.

    Any idea why this may be happening on one machine? Since it started after an upgrade I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
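
    As a side note on reading that mode: the capital S in the group slot of -rwxrwSrwx means the setgid bit is set while group execute is not (a lowercase s would mean both). The standard library makes this easy to confirm on any file on the mount (the path below is a placeholder, and stat.filemode needs Python 3.3+):

      import os
      import stat

      st = os.stat("/media/backups/somefile")      # placeholder file on the CIFS mount
      print(stat.filemode(st.st_mode))             # e.g. -rwxrwSrwx
      print("setgid set:", bool(st.st_mode & stat.S_ISGID))
      print("group execute set:", bool(st.st_mode & stat.S_IXGRP))

    That at least narrows the question to why this client is synthesizing modes with setgid on, rather than to anything the sticky bit is doing.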

    Read the article

  • IIS 6 + ASP.NET web service - DW20 and stackoverflow exception

    - by pcampbell
    Consider an ASP.NET SOAP web service that starts up fine, but craters hard when receiving its first hit. Please note that this deployment works in the Test environment, but not in the PreProd environment. Both are Windows 2003 SP3 + IIS 6 + ASP.NET 3.5, all up to date. The behaviour that we're seeing is:

      • restart the site & app pool (the app pool is configured to run under Network Service)
      • browsing to the .asmx and .wsdl responds normally, as expected
      • send a normal, well-formed SOAP request / normal payload to the web service
      • 100% CPU usage
      • after 5 seconds, the page request / site returns "Service Unavailable"
      • no entry is created in the IIS log file (i.e. c:\windows\system32\logfiles\W3C-foo)
      • the app pool ends up being stopped

    The processes that hit the CPU hard (visible in Task Manager) are dw20.exe; I am unsure why Dr. Watson is involved here. The Event Log shows an ASP.NET Runtime error. Event log text:

      EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d6968e, P4 errormanagement, P5 1.0.0.0, P6 4b86a13f, P7 24, P8 0, P9 system.stackoverflowexception, P10 NIL.

    Questions: Any thoughts on what this System.StackOverflowException might be? Given that the code is the same between environments, might it be a payload problem? Could it be a configuration issue? You can see the name of my .NET assembly there in the exception message: "ErrorManagement".

    Read the article

  • What does a DHCP-client consider to be the "best" answer?

    - by Nils
    We have training rooms where Windows XP is normally installed (via PXE). The "normal" DNS/DHCP infrastructure consists of Windows servers. The training room has its own VLAN (different from the Windows servers), so there is most probably an IP helper for DHCP requests active on the Cisco router that all PCs in that room are connected to. Now we wanted to convert some of the PCs to Linux instead. The idea was: put our own laptop with a DHCP server into the VLAN of the room and override the "normal" DHCP response. The thinking was that this should work, since a directly attached DHCP server in that VLAN should have a faster response time than the "normal" DHCP server located some hops away from that VLAN. It turned out that this did not work. We had to manually release the lease on the original DHCP server to get it working. On the laptop we did see the client requesting the IP, and "our" DHCP was sending NAKs to the Windows IP request before we sent our own offer.

    Old question: Why did this not work out as expected? What is making the PC regain its old lease?

    Update 2012-08-08: The regain issue has been explained in the DHCP RFC, which explains why the PC regains its old lease. Now we release the IP from the Windows DHCP server before giving it another try. Again, the Windows DHCP server wins. I suspect that there is some algorithm for the DHCP client that determines the "best" DHCP answer for the client. The new question is: how does the client choose the "best" answer?
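
    To see exactly whose offer the client actually puts into its REQUEST, a packet capture on the laptop in that VLAN is the most direct evidence; the sketch below assumes the scapy package and root privileges (both assumptions) and just prints the DHCP message type and the server identifier for each packet it sees:

      from scapy.all import BOOTP, DHCP, sniff

      TYPES = {1: "DISCOVER", 2: "OFFER", 3: "REQUEST", 4: "DECLINE",
               5: "ACK", 6: "NAK", 7: "RELEASE", 8: "INFORM"}

      def show(pkt):
          if DHCP in pkt:
              opts = dict(o for o in pkt[DHCP].options
                          if isinstance(o, tuple) and len(o) == 2)
              mtype = TYPES.get(opts.get("message-type"), "?")
              print(mtype, "client", pkt[BOOTP].chaddr[:6].hex(),
                    "server_id", opts.get("server_id"))

      sniff(filter="udp and (port 67 or port 68)", prn=show, store=False)

    If the client's REQUEST already names the Windows server's identifier (or asks for its old address before any OFFER arrives), that is the lease-reuse behaviour at work rather than a race the local server lost.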

    Read the article

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20KB/s for local network file transfers. If I transfer a very small file (less than 100KB) it starts quickly, then slows down to <20KB/s; all subsequent network file transfers are slow, and a reboot is needed to reset this. If I transfer a large file, it is stuck on "calculating" for a long time and then begins at <20KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1, with Realtek gigabit LAN from the motherboard (ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both through ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website. I have tested network file transfer to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also only happens in one direction - outbound from the desktop - regardless of whether I initiate the file transfer from the origin or the destination. Inbound network file transfer and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). Inbound network file transfer gets around 10-15MB/s. I am hoping this community has some insight to help me troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated; thank you in advance.
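
    A self-contained way to take SMB/Explorer out of the picture is a raw TCP throughput test between the two machines: run the sketch below with "server" on one machine and "client <server-ip>" on the other (standard library only; port 5001 is an arbitrary choice). If raw TCP outbound from the desktop is also stuck near 20KB/s, the problem is below the file-sharing layer (driver, NIC offload, duplex); if it is fast, look at SMB itself.

      import socket
      import sys
      import time

      PORT = 5001                      # arbitrary test port
      CHUNK = b"\0" * 65536
      TOTAL = 64 * 1024 * 1024         # send 64 MB

      if sys.argv[1] == "server":      # usage: script.py server
          srv = socket.socket()
          srv.bind(("", PORT))
          srv.listen(1)
          conn, _addr = srv.accept()
          received, start = 0, time.time()
          while True:
              data = conn.recv(65536)
              if not data:
                  break
              received += len(data)
          print("%.1f MB/s" % (received / (time.time() - start) / 1e6))
      else:                            # usage: script.py client <server-ip>
          cli = socket.socket()
          cli.connect((sys.argv[2], PORT))
          sent, start = 0, time.time()
          while sent < TOTAL:
              cli.sendall(CHUNK)
              sent += len(CHUNK)
          cli.close()
          print("%.1f MB/s" % (sent / (time.time() - start) / 1e6))

    The number printed on the receiving side is the one to trust, since the sender only measures how fast it can hand data to its own socket buffer.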

    Read the article

  • Failed to su after making a chroot jail

    - by arepo21
    On a 64-bit CentOS host I am using the script make_chroot_jail.sh to put a user in a jail, not permitting it to see anything except its home at /home/jail/home/user1. I did it by typing this:

      sudo ./make_chroot_jail.sh user1

    Afterwards, when trying to connect as user1, I was first getting an error like:

      /bin/su: user guest does not exist

    I fixed this by copying some missing libraries:

      sudo cp /lib64/libnss_compat.so.2 /lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /lib64/libxcrypt.so.2 /home/jail/lib64/
      sudo cp -r /lib64/security/ /home/jail/lib64/

    But now, when trying to connect as user1 by typing su user1 and then typing its password, I get this error:

      could not open session

    So the question is how to connect as user1 in this situation. P.S. Here are the permissions of some files; this might be helpful in order to provide a solution:

      -rwsr-xr-x 1 root root /home/jail/bin/su
      drwxr-xr-x 4 root root /home/jail/etc
      -rw-r--r-- 1 root root /home/jail/etc/pam.d/su
      -rw-r--r-- 1 root root /home/jail/etc/passwd
      -rw------- 1 root root /home/jail/etc/shadow

    UPDATE 1: After some modifications I managed to connect as user1, but the session closes immediately! I guess this is a PAM issue, but I can't find a way to fix it. Here is the log entry for the close action from /var/log/secure:

      Oct 6 15:19:42 localhost su: pam_unix(su:session): session closed for user user1

    What makes the session exit immediately after launching?
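
    When a jailed su half-works like this, the usual culprit is a library or module that exists on the host but not under the jail root. The sketch below (assumptions: Python is available, /home/jail is the jail root, and the listed binaries exist inside it) runs ldd against the jailed binaries and reports anything they link against that is missing under /home/jail. Note that PAM modules are loaded at runtime rather than linked, so /home/jail/lib64/security and /home/jail/etc/pam.d still need to be checked by hand:

      import os
      import re
      import subprocess

      JAIL = "/home/jail"
      BINARIES = [JAIL + "/bin/su", JAIL + "/bin/bash"]   # adjust to whatever the jail provides

      for binary in BINARIES:
          out = subprocess.check_output(["ldd", binary]).decode()
          libs = re.findall(r"=>\s*(/\S+)", out) + re.findall(r"^\s*(/\S+)", out, re.M)
          for lib in sorted(set(libs)):
              if not os.path.exists(JAIL + lib):
                  print("missing in jail:", JAIL + lib)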

    Read the article

  • ColdFusion 9 server not restarting - “Permission denied” errors

    - by Xevi Pujol
    I had to restart my ColdFusion 9 server on CentOS because of a memory performance issue, but now the server won't restart. Looking at cfserver.log, I can see "Permission denied" errors all along. The ColdFusion application folder (/opt/coldfusion9/) is owned by nobody:root, as that fixed a similar problem we had a few weeks ago. Also, the last time this CF server was running correctly, the JRE user being used was nobody. Maybe CF is trying to restart using another user (presumably apache) and that creates permission issues? However, I'm not sure how to check this hypothesis. Where is the config file that tells CF which JRE user to utilize? If I can change that, I could try to specify nobody there. Any other ideas also welcome.

    UPDATE: The runtime user that ColdFusion will utilise is defined in /etc/init.d/coldfusion_9. I fixed the problem by being consistent with the users: I needed to revert the ownership of the folder /opt/coldfusion9/ back to apache:root, which matches the init file.

    Read the article

  • WinDbg Problem with ntoskrnl

    - by Wilf
    I've got a similar problem to "BSOD - Unable to verify timestamp for ntoskrnl.exe", in that I can't seem to get the correct symbols to read ntoskrnl. I've followed the advice given by BK1E, but still can't get a result. Text from debug below:

      Loading Dump File [C:\Users\XXXX\AppData\Local\Temp\WER9D78.tmp\Mini030610-01.dmp]
      Mini Kernel Dump File: Only registers and stack trace are available
      Symbol search path is: SRV*c:\Windows\Symbols*http://msdl.microsoft.com/download/symbols
      Executable search path is:
      Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for ntoskrnl.exe
      *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe
      Windows Server 2008/Windows Vista Kernel Version 6002 (Service Pack 2) MP (4 procs) Free x64
      Product: WinNt, suite: TerminalServer SingleUserTS Personal
      Machine Name:
      Kernel base = 0xfffff800`01e59000 PsLoadedModuleList = 0xfffff800`0201ddd0
      Debug session time: Sat Mar 6 14:08:20.516 2010 (UTC + 0:00)
      System Uptime: 0 days 0:42:01.723
      Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for ntoskrnl.exe
      *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe
      Loading Kernel Symbols
      ...
      Loading User Symbols
      Loading unloaded module list
      ...
      *******************************************************************************
      *                            Bugcheck Analysis                                *
      *******************************************************************************
      Use !analyze -v to get detailed debugging information.
      BugCheck A, {11, c, 0, fffff80001ec9489}
      ***** Kernel symbols are WRONG. Please fix symbols to do analysis.

    How do I fix this issue? OS is Windows Vista x64 SP2.

    Read the article

  • Exchange 2007 OWA returns blank page with url xxxxx&reason=0

    - by Dayton Brown
    Hi all: I've just run into an issue with my Exchange OWA. It returns a blank page with the URL string https://www.xxxxxxxx/&reason=0. Nothing in the logs gives me any good reasons. Here's what I've done so far:

      1) Reinstalled Exchange rollup 7.
      2) Recreated the virtual directories.
      3) Rebooted. (This was mostly a shot in the dark, but what the hell.)

    Exchange via RPC/HTTPS is still working great. Anyone run into this before?

    EDIT: Here is the last snippet from the OWA setup log; it doesn't look like anything blew up.

      [09:45:36] *******************************************
      [09:45:36] * UpdateOwa.ps1: 5/27/2009 9:45:36 AM
      [09:45:40] Updating OWA on server HOMER
      [09:45:40] Finding OWA install path on the filesystem
      [09:45:40] Updating OWA to version 8.1.375.2
      [09:45:40] Copying files from 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\Current' to 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\8.1.375.2'
      [09:45:41] Getting all Exchange 2007 OWA virtual directories
      [09:45:42] Found 1 OWA virtual directories.
      [09:45:42] Updating OWA virtual directories
      [09:45:42] Processing virtual directory with metabase path 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'.
      [09:45:42] Metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' exists. Removing it.
      [09:45:42] Creating metabase entry IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2.
      [09:45:42] Configuring metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'.
      [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'
      [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'

    Read the article

  • Do registry issues with Win7 persist through a recovery from a system image?

    - by user59089
    I need a bit of advice, please; here's my situation: I have 1) a system image on a brand new external 1TB SATA drive, which I managed to capture successfully before 2) my primary system drive went down. I realize this is a fairly simple matter of buying a new primary drive and performing the recovery to the fresh disk. However, the issue is that I believe Win7 was also having some significant issues of its own - basically, Update unable to install updates, and Backup continually ditching the auto backup schedule. I'd been trying to address those issues when my system was still working, but it's been so fruitless that I'm convinced a Win7 reinstall would be best, and now I'm concerned that if I was in fact having what I believe are likely registry-related issues before, these will persist through a recovery - would that likely be correct? I'm mainly worried about recovering my files, so if I did a full recovery from the image, should I then be able to access my individual files and copy them manually to an external drive, so I can do a full reinstall of Win7? Sorry if this seems obvious, but I've never done a recovery before and am just trying to make sure there are no red flags with what I have in mind...

    Read the article

  • 2000 Server, User can't logon

    - by Mike I
    I hope you can help me. I recently upgraded a workstation at my office (to a whole new machine) and ran into a pretty serious problem. Until 5:00 PM Friday, I could access my mail on the Exchange 2000 server. When I shut the old workstation down and put in the new workstation, I tried to set up an account. When I put the server name in the appropriate field, type my username, and hit Check Names, my username does not come up. To troubleshoot (it is also an SMB server, and my local credentials are the same as my server account credentials), I try to log on to my file share. When I try to log on to the share, I just get the username/password screen (I had never gotten that before, since the credentials are the same). Again, in troubleshooting mode, I try to log on as my user from another workstation, and I still can't authenticate. Every other user can authenticate and load up their shares/mailboxes. I have restored Exchange from the backup as of 3 days ago (Thursday), but the exact same issue is still there. I really do not understand what is wrong or what else I can do to troubleshoot. If anyone has some pointers for me, I will surely accept them. Thanks, Mike

    Read the article

  • Why am I getting blank error messages in my Apache error log?

    - by Jason Lamoreux
    I am running Apache 2.2 on 64-bit Windows Server 2008 Standard edition with ActivePerl 5.8.9. My error log is filling up with blank error messages like these:

      [Wed Mar 31 14:08:31 2010] [error] [client 10.6.1.164]
      [Wed Mar 31 14:10:32 2010] [error] [client 10.6.1.89]
      [Wed Mar 31 14:13:20 2010] [error] [client 10.6.1.131]

    By looking in the access log I can tell that it occurs when our client machines issue a GET to a very simple Perl script:

      #!perl.exe
      use strict;
      no warnings;
      $|=1;
      use CGI::Carp('fatalsToBrowser');
      use CGI qw(:standard);

      print header;

      my $CRLF = "\r\n<br>";
      my $Port = '10116';
      print "Success!${CRLF}PollInterval=5${CRLF}LMProMode${CRLF}Version=7${CRLF}ConnectionPort=$Port";
      exit;

    The weird thing is that it does not look like this error message is inserted every time a GET to this Perl script occurs. What could cause this error message to appear in the Apache error log?

    Read the article

  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are some XP, Vista, 7, Leopard, and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been added to the Windows domain, and has been used for many years; SBS 2008 is the PDC (the 2003 and 2008 servers are on the domain also). In Leopard there were no problems at all authenticating to the file servers. We've upgraded one of the Leopard iMacs to Snow Leopard, and the same problem occurs on a new MBP which came with the newer OS, as well as on a clean install on another iMac. It does not matter whether it is connected through wired or wireless. In the Finder, when clicking on the server - whether on first boot or after it is connected - it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed"; at that point clicking "Connect As" and typing in the username/password will take some more time and eventually work. Sometimes it will display "Connecting..." indefinitely. (I've left it as long as 15 minutes before trying something else.) Accessing shares on the 2003 and SBS servers has the same problem (so I don't think it's a Samba server issue). The Server 2008 Standard is connecting instantly at the moment. Accessing the share through an alias/stacks doesn't have this problem. Leopard and Windows clients still have no problem. I've searched Google but it hasn't yielded any working result. How do I get rid of this delay?

    Read the article

  • Updating a backup image (.wim and/or Acronis .tib)

    - by Backdraft
    Anyway, I've got a Windows 7 installation that I want to make a generalized backup image of, so I can use it for future installs not only on the desktop the image is derived from, but also on other systems with dissimilar hardware. I've therefore arrived at two options: using either sysprep/imagex from the WAIK (guide here), or the simpler Acronis True Image with their Universal Restore add-on. Of course, they create distinct image file types, .wim and .tib respectively. What I'd like to do is periodically update this image, say with Windows Updates, by booting it to either a physical partition or using virtualization (VirtualBox/VMware), performing the updates, and saving the updated .wim or .tib image file again. What's the simplest way I could do this? Another question: I created this generalized backup image on a 500GB Seagate 7200RPM HDD. Say I get an SSD as an OS drive in the future - can I just deploy this backup image to the SSD normally, or are there any potential problems to be aware of or avoid (i.e. is it best to completely reinstall the OS on the SSD from scratch, or can I use the image created on the normal HDD with no issue)? Thanks and Happy Holidays.

    Read the article

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The VC is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs were down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host and create a regular vSwitch, then connect the VC to that, before the VC would connect to the network. Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems to be a very bad thing. I thought networking would keep working even though the VC is down, because the hosts have the vDS configuration cached. Is there a better way to configure it to prevent this from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick

    Read the article

  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried haproxy 1.4.21 and 1.4.22 (recent upgrade) with the same behavior.

    HAProxy has the forwardfor header set:

      option forwardfor

    The nginx fastcgi_params config defines this header to be passed to the app:

      fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    Anyone have any ideas on what might be going wrong here?

    EDIT: I just started logging the $http_x_forwarded_for variable in the nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or in haproxy not building it properly. I'll keep digging...

    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

      Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O

    Here are the options I set for this in my frontend:

      mode http
      option httplog
      capture request header X-Forwarded-For len 25
      capture response header X-Forwarded-For len 25
      option httpclose
      option forwardfor

    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This is fairly impacting to our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
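
    A quick way to see exactly what header text reaches the application tier, bypassing nginx/fastcgi entirely, is to point a haproxy backend (or curl a test frontend) at a throwaway echo server; the sketch below uses only the Python standard library, and port 8080 is an arbitrary choice:

      from http.server import BaseHTTPRequestHandler, HTTPServer

      class EchoXFF(BaseHTTPRequestHandler):
          def do_GET(self):
              xff = self.headers.get("X-Forwarded-For", "<none>")
              body = ("X-Forwarded-For seen by backend: %s\n" % xff).encode()
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      HTTPServer(("0.0.0.0", 8080), EchoXFF).serve_forever()

    Sending a request through haproxy with a hand-crafted header (for example, curl -H "X-Forwarded-For: 1.2.3.4" against the frontend) then shows directly whether haproxy is appending its own entry, replacing the chain, or whether nginx is collapsing it afterwards.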

    Read the article

  • DNS nameserver A and CNAME records

    - by David
    Hi - I am inexperienced in the configuration of DNS and have an issue with a domain hosting setup. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK. When I accepted the domains I could not access the DNS settings, and enquired with Fasthosts as to why. They replied saying: 'The delegate hosting option for both domains was enabled and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account.' When I log into the Fasthosts control panel now I can access the DNS controls, but both domains have no A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs without adding any records. 'www.mydomain1.com' currently still works, but 'www.mydomain2.com' does not find the site anymore. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated. One thing which is confusing me is that because I am on a shared server I have to put 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site, rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A record only allows an IP to be entered - does it add the mydomain1.com/ onto the end itself? Thanks for any help given - I'm quite worried about this. David

    Read the article

  • Fresh Proxmox VE 2.1 installation with defaults can't be reached or pinged

    - by Damainman
    I am using the latest Proxmox VE 2.1. My server has two NICs, with an uplink only connected to eth0. My server is a co-located server using public IPv4 addresses. It is not behind a firewall or any system which monitors traffic. Via IP-KVM I did a fresh install of Proxmox; I put in the correct IP, mask, gateway, and DNS information. The install went perfectly fine with no errors. Upon completion and rebooting the system:

      • I am unable to reach the web GUI via the browser; it just times out.
      • I am unable to ping the server.
      • I am unable to ping out to the Internet from within the server (tried pinging 4.2.2.2 and yahoo.com).
      • I tried rebooting the server and restarting the network service.

    ifconfig shows my IP information under vmbr0, which also has the same MAC address as the eth0 device. eth0 only displays an IPv6 Scope:Link address, which I did not set up myself. This is my first time installing Proxmox, but after searching for a few hours it doesn't seem like anyone else is having the same issue as me from a fresh install with just the defaults. So far the only thing I did was install it. Also, I know the network cable is good and the IP is good because I was running a Xen XCP server with the same network settings prior to wiping it to install Proxmox.

    Some additional information:

    pveversion -v (installed proxmox-ve_2.1-f9b0f63a-26.iso):

      pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
      running kernel: 2.6.32-11-pve
      proxmox-ve-2.6.32: 2.0-66

    netstat -nr (note: .136 is my network, and .137 is my gateway):

      Destination       Gateway           Genmask
      xxx.xxx.xxx.136   0.0.0.0           255.255.255.248
      0.0.0.0           xxx.xxx.xxx.137   0.0.0.0

    /etc/network/interfaces:

      auto lo
      iface lo inet loopback

      auto vmbr0
      iface vmbr0 inet static
          address xxx.xxx.xxx.138
          netmask 255.255.255.248
          gateway xxx.xxx.xxx.137
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0

    Read the article
