Search Results

Search found 15150 results on 606 pages for 'azure services'.

Page 521 of 606

  • Team Foundation Server 2008 - TF220056 Error during installation

    - by David
    I'm attempting to install Team Foundation Server 2008 on a Windows Server 2003 instance that runs under Hyper-V. The SQL Server database itself is held on the root partition of the Hyper-V server and has Reporting Services installed (so I've already solved the TF220059 error). After typing the name of the SQL Server and hitting "Next", I get this error:

        Microsoft Visual Studio 2008 Team Foundation Server Setup
        TF220056: An unrecoverable error occurred while trying to check the status of the Team Foundation database. Installation cannot continue. Check the install log for more details.

    The error log's stack trace makes it look like a bug in the TFS installer itself:

        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: System.IO.IOException: The directory name is invalid.
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at System.IO.Path.GetTempFileName()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.get_Log()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.Run()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.CommandLine.RunCommand(String[] args)
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: The directory name is invalid.
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe check failed with error code: 100

    I'm running the installer as the domain Administrator. The server is a Terminal Server in Application Mode; might that be the cause of the problem?
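
    The stack trace dies inside System.IO.Path.GetTempFileName(), which fails when the calling account's TEMP directory doesn't exist - something that can happen with per-session temp folders on a Terminal Server. A quick diagnostic sketch (nothing TFS-specific, run in the same session as the installer):

        echo %TEMP%
        dir "%TEMP%"
        rem if the folder is missing, recreate it or point TEMP/TMP at an existing path
        mkdir "%TEMP%"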


  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (com) attached. Behind the router there are multiple computers and a second web server (info), which also serves as a file server. (info) has two NICs in it: one direct to Comcast and one connected to the router. It needs to serve its websites to the world, but it also needs to be able to see all the internal computers and (com)'s served files. With just one NIC (the one connected to the router, not Comcast), (info) works fine but no one outside can see it. (com) serves port 80 and (info) needs to handle port 80 as well. I have two domain names registered and 5 static IPs from Comcast. Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the 2nd NIC in (info), http://www.graceamazing.info runs extremely (horribly) slow; however, http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address via the router, 70.89.233.41; (info) has an IP of 70.89.233.46 via Comcast (2nd NIC) and an internal static IP of 192.168.x.100 behind the router. Any suggestions or changes that would make http://www.graceamazing.info perform with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check / could have missed?


  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios: a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves reserved instances vs. on-demand instances.

    Situation 1: Purchase a reserved multi-AZ setup with an extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17 GB RAM instance all the time, and therefore especially not a hot standby, what savings could a better topology create? For example, situation 2 below.

    Situation 2: Purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby, which receives the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is: if I have a variable read-intensive application load with low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, allowing me to scale up/down and/or out at the read level according to demand as needed. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome of course. Thanks in advance, R


  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine if I can use a single firewall for my entire network, including customer servers, or if each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall: if you need a web node and a database node, you also have to get a firewall and pay another monthly fee for it.

    I have colo space with several KVM virtualization servers hosting VPS services for many different customers. Each KVM host runs a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open, allowing a web VPS to be accessed from anywhere on ports 80 and 443, while blocking a database VPS completely to the outside and only allowing a certain other VPS to access it. The configuration works well for my current needs. Note that there is no hardware firewall protecting the virtualization hosts at this time; however, the KVM hosts only have port 22 open, run nothing except KVM and SSH, and even port 22 cannot be accessed except from inside the netblock.

    I'm now rethinking my network because I have a client who needs to move from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system. Should I require that each dedicated-server customer have their own dedicated firewall? Or can I use a single network-wide firewall for multiple customer clusters? I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of rack space for each firewall, nor the power each firewall server will consume, so I'm also considering a hardware firewall. Any suggestions on what is a good approach?
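
    For what it's worth, the per-VPS restrictions described above boil down to a handful of iptables rules on each host; this is only an illustrative sketch with made-up addresses (10.0.0.10 for a web VPS, 10.0.0.20 for a database VPS):

        # web VPS reachable from anywhere on 80/443
        iptables -A FORWARD -d 10.0.0.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
        # only the web VPS may reach the database VPS on 3306
        iptables -A FORWARD -s 10.0.0.10 -d 10.0.0.20 -p tcp --dport 3306 -j ACCEPT
        iptables -A FORWARD -d 10.0.0.20 -j DROP

    Whether those rules live on each KVM host or on one shared firewall box is exactly the trade-off being asked about here.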


  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. The server will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with it. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static ARP entries again. I haven't narrowed down exactly when it starts or how quickly it progresses, but slowly you start seeing 1 static ARP entry, then 5, then 10.

    The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only thing different between this DNS server and our other DNS server is that "Dynamically update DNS A and PTR records for DHCP clients that do not request updates" is checked on the problematic server. I've done a bit of research and it seems this may happen if any PXE-type services are running, but from what I can tell, nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my workaround is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?
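
    For reference, the 24-hour workaround mentioned above can be created from the command line on Server 2003 roughly like this (task name and start time are arbitrary; adjust the time format if schtasks complains):

        schtasks /create /tn "Flush ARP cache" /tr "cmd /c arp -d *" /sc daily /st 05:00:00 /ru SYSTEM
        rem inspect the current cache and count static entries
        arp -a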


  • Reduce power consumption of gaming computer while idle

    - by White Phoenix
    This is my current build:

        EVGA X58 (first generation) motherboard
        Intel i7 965 clocked @ 3.3 GHz
        3x DDR3-1600 Corsair RAM at stock timings and voltages
        Corsair AX750 80 Plus Gold PSU
        1 optical drive
        1 Seagate 7200.10 500 GB drive
        2x Western Digital Caviar Black 1 TB drives
        OCZ Vertex 1 60 GB
        EVGA GTX 460 oc'd at 800/1600/1850
        Antec 1200 case
        HT-Omega Striker 7.1 sound card
        Windows 7 32-bit Professional (PAE enabled)

    I've already seen the posts "Reduce power use on computer" and "How do I lower power consumption of my computer", and while useful, I'm looking for answers specific to my build and OS. I'm pretty sure this is an energy-intensive build by default, but I want to reduce the amount of energy it uses when I leave it idle (when I go to bed or go out, etc.). The first requirement is that I need to leave the machine on, so I cannot turn it off while it's not in use. I run it as a file server for personal reasons, and I also leave it on in case people leave me messages on various IM services and chat clients (IRC, MSN, Steam, XFire, Pidgin, etc.). I'm also unable to replace the parts in my computer with cheaper "greener" parts.

    What are some ways to minimize the amount of power the machine uses? I'm already using a high-efficiency power supply (80 Plus Gold), but I imagine there are other things that can be done in the BIOS and in Windows' power settings to reduce power usage while I'm not using the computer. From what I can tell, I can't use Sleep since that would disable network access (the whole reason I leave the computer on in the first place). I already turn off my monitor when it's not in use, and I enabled Intel SpeedStep in the BIOS (I know, I have a 965, so why am I enabling SpeedStep?). Should I bring the graphics card back to stock speeds and lower the clock on the processor even more? The main reason I'm asking is that I think this computer alone is why my power bill is high, so I want to reduce its consumption to as low as possible without having to shut the thing down.
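
    Part of the idle-time savings lives in the Windows 7 power plan rather than the BIOS; these are only example commands with arbitrary timeouts, run from an elevated prompt:

        powercfg -list
        powercfg -setactive SCHEME_BALANCED
        powercfg -x -monitor-timeout-ac 10
        powercfg -x -disk-timeout-ac 20
        rem keep standby disabled so the machine stays reachable as a file server
        powercfg -x -standby-timeout-ac 0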


  • Sluggish Windows SBS 2003

    - by TomWilsonFL
    One of my customers has a Windows 2003 Small Business Server which at this point is basically the DC, DNS, file server and Symantec Protection Manager. I have disabled Exchange because I moved their mail to Google Apps. The server is extremely sluggish when doing anything. It is most noticeable when a dialog box is open (say the System properties) and you try to change tabs: this is usually instant, but on this machine it can take 3-5 seconds.

    What additional services / packages can I uninstall from this machine, knowing that it is only performing the above roles? Will removing the "Small Business Server" package in Add / Remove Programs get rid of a few unnecessary things? Any other thoughts? P.S. I know Symantec Endpoint and the Protection Manager are hogs, but I have nothing to replace the solution with at the moment. Thanks, Tom

    UPDATE: I looked over the different performance metrics, but nothing stood out as a problem. One of my friends mentioned that Symantec's log and temp files can get quite huge and slow things down, so I ran CCleaner on the machine and found close to 3 GB of Symantec "stuff". Removed that, and now the machine is MUCH better. I am still unsure why the data just sitting there would cause such a slowdown; the drive is not even near full. The only thing I can imagine is that Symantec must have to run through this stuff now and then.


  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. Well, it looks like the time has come for the passwords to expire, and people are getting locked out... There has been no warning that user passwords were about to expire. They just come in to work and they cannot log in, the phones no longer connect, nothing. Reset the password and all is good. Some of the users are locked out, though most are not; they just cannot log in. When setting the password expiration, I didn't see anything about warning the users of the impending expiration. It seems like it used to warn you 15 days or so before a password would expire. Clients range from WinXP, Vista and Win7 to Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration?

    Resultant Set of Policy for a user that was not prompted (every setting below is won by the Default Domain Policy):

        Account Policies/Password Policy
            Enforce password history: 10 passwords remembered
            Maximum password age: 270 days
            Minimum password age: 0 days
            Minimum password length: 4 characters
            Password must meet complexity requirements: Disabled
            Store passwords using reversible encryption: Disabled

        Account Policies/Account Lockout Policy
            Account lockout duration: 20 minutes
            Account lockout threshold: 5 invalid logon attempts
            Reset account lockout counter after: 15 minutes

        Local Policies/Audit Policy
            Audit account logon events: Failure
            Audit account management: Success, Failure
            Audit directory service access: Success, Failure
            Audit logon events: Failure
            Audit policy change: Success, Failure
            Audit privilege use: Failure

        Local Policies/Security Options - Interactive Logon
            Interactive logon: Prompt user to change password before expiration: 7 days
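
    A quick way to confirm what the domain is actually enforcing for a given account, purely as a diagnostic (the username below is a placeholder):

        net accounts /domain
        net user jsmith /domain | find /i "password"

    The second command shows the "Password last set" and "Password expires" dates, which makes it easy to spot accounts that are about to hit the 270-day limit.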


  • Swap static public IPs without creating DNS conflicts?

    - by Jakobud
    Our ISP is Comcast and we have 5 static public IPs from them that we use for various services, including customers connecting to our network, VPN, web, DNS, etc... We need more IP addresses from Comcast. Unfortunately, Comcast is telling us that they can't just simply give us 5 more addresses. They only give static IP addresses in blocks of 1, 5 or 13. In order for us to get more static IPs, they have to take away our current 5 static IPs and give us 13 new ones. How do we make this transition without causing all sorts of DNS chaos? We run public DNS servers, so we can make the DNS changes ourselves, but it will take some time obviously for those DNS changes to propagate throughout the internet. Are there any easy ways to make this transition? Like create some type of fallback DNS entry or something? Surely there must be some sort of procedure for this kind of thing. The Comcast support guy was useless.
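
    Since the DNS servers here are under our own control, the usual preparation is to shorten the affected records' TTLs well before the cutover so resolver caches expire quickly; a hedged example of checking what a record currently advertises (hostname and resolver are placeholders):

        dig +noall +answer www.example.com A @8.8.8.8

    The number after the name in the answer is the TTL in seconds; dropping the zone's TTL to a few minutes a day or two before the renumbering, changing the records during the swap, and restoring the normal TTL afterwards keeps the window of stale answers small.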


  • How do I force a server to leave a SharePoint farm

    - by Stefan
    I have two web servers in a SharePoint (WSS 3.0) farm with one database server for the config and content databases. I already moved my content databases to a new database server successfully. But when I tried to move the SharePoint config database using the "stsadm -o deleteconfigdb" and "stsadm -o setconfigdb" commands, one of my servers got stuck in an intermediate state. I was able to join one of the web servers to the config database on the new server, but the other server is not able to join because it believes it is already part of the farm (which it used to be, before the move). Central Administration says the status of the services on that server is "stopping". Even after rebooting all servers involved, uninstalling SharePoint and whatnot, this status does not change, and because of it, I am not able to join the second server to the new config database; I get random error messages when trying to join the farm. I believe that if I can get this server unstuck, it will be able to join the farm again. The farm believes the second server is already part of it, but the web server itself knows it's not. Any ideas on how to forcefully kick a server out of the farm?
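
    One avenue worth investigating on the stuck box, offered with heavy hedging since farm state is fragile: the configuration wizard's command-line counterpart can force a server to drop its reference to the old configuration database. Verify against the psconfig documentation for WSS 3.0 before running anything; the server and database names below are placeholders.

        psconfig -cmd configdb -disconnect
        rem then re-join against the new database server (farm credentials required)
        psconfig -cmd configdb -connect -server NEWDBSERVER -database SharePoint_Config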


  • Error regarding DNS - "... must be able to resolve names ..." (Windows Server 2008 R2 installation)

    - by Scolytus
    I'm trying to replace our old Windows 2000 Server with a Windows Server 2008 R2 machine. I followed the guide at MSDN. When I came to the step "Install Active Directory Domain Services...", the option to install the DNS server was grayed out. According to Microsoft Support, I skipped the DNS server installation at this point (because of the single-label DNS name). I then installed the DNS Server role and created a forward lookup zone for the domain. When running the Best Practices Analyzer for the DNS Server role, I get these two messages for both domain controllers (the old Win2k and the new 2008 R2):

        The DNS server [IP address] on [adapter name] must be able to resolve names in the primary DNS domain zone
        The DNS server [IP address] on [adapter name] must be able to resolve names in the forest root domain name zone

    The TechCenter articles suggest using a proper DNS server - which is pointless advice when a proper DNS server is exactly what I'm trying to configure. How do I configure the DNS server so that it resolves these zones? Or are these errors irrelevant? "dcdiag /v /test:DNS" seems to run fine...
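
    A hedged way to see what the BPA is checking is to ask each DC's own DNS service for the relevant records directly; the single-label zone name below is a placeholder:

        nslookup -type=SOA mydomain 127.0.0.1
        nslookup -type=SRV _ldap._tcp.dc._msdcs.mydomain 127.0.0.1

    If the second query returns nothing, the _msdcs forest-root zone isn't resolvable locally, which is what the two warnings are about.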


  • Virtual Wifi Issue Windows 7

    - by Matt
    Lately I've been trying to use my laptop as a wireless router in my room. I have it connected to my school's network through ethernet, and I want to set up wireless so that I can use Wifi on my Android phone and iPod Touch. In the past, I used Connectify, but I started having an issue where my phone would find the network, connect, attempt to get the IP, and then suddenly the network would disappear. Then it'd pop up again, and the same process would happen over and over. I decided that I'd totally uninstall Connectify, but after that, neither Virtual Router Manager nor the command prompt could create a viable network either. My phone and even my iPod now encounter the same problem. Neither can successfully connect. So evidently there is something wrong with the laptop's virtual wifi feature, and I have no idea what that could be. I've tried enabling certain services that virtual wifi supposedly relies on, but some of them don't start, namely Remote Access Connection Manager. But I also have read that these enable on their own and that if they are normally not enabled it's fine. Furthermore, I even uninstalled and reinstalled the drivers for my wireless card. Any ideas as to why my virtual wifi won't function? Anything? I really would love to get this working...
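
    Connectify and Virtual Router Manager both sit on top of the same Windows 7 hosted-network feature, so it can be exercised directly from an elevated prompt to see where it breaks (SSID and key below are placeholders):

        netsh wlan show drivers
        netsh wlan set hostednetwork mode=allow ssid=TestNet key=SomePassphrase99
        netsh wlan start hostednetwork
        netsh wlan show hostednetwork

    The "Hosted network supported" line in the first command's output, and any error from the start command, usually narrow down whether the problem is the wireless driver or the service dependencies.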


  • Iptables: "-p udp --state ESTABLISHED"

    - by chris_l
    Hi, let's look at these two iptables rules, which are often used to allow outgoing DNS:

        iptables -A OUTPUT -p udp --sport 1024:65535 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -p udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    My question is: how exactly should I understand the ESTABLISHED state in UDP? UDP is stateless. Here is my intuition - I'd like to know if, or where, this is incorrect. The man page tells me this:

        state
            This module, when combined with connection tracking, allows access to the
            connection tracking state for this packet. --state ...

    So iptables basically remembers the port number that was used for the outgoing packet (what else could it remember for a UDP packet?), and then allows the first incoming packet that is sent back within a short timeframe? An attacker would have to guess the port number (would that really be too hard?). About avoiding conflicts: does the kernel keep track of which ports are blocked (either by other services, or by previous outgoing UDP packets), so that these ports will not be used for new outgoing DNS packets within the timeframe? (What would happen if I accidentally tried to start a service on such a port within the timeframe - would that attempt be denied/blocked?) Please find all errors in the above text :-) Thanks, Chris
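
    The "state" comes from the kernel's connection-tracking table, which keeps an entry per UDP source/destination address-and-port pair for a short timeout (30 seconds by default for unreplied UDP). A rough way to watch it while a DNS query is in flight - the proc path varies with kernel version:

        # newer kernels
        grep "dport=53" /proc/net/nf_conntrack
        # older kernels
        grep "dport=53" /proc/net/ip_conntrack
        # current timeout for unreplied UDP entries, in seconds
        cat /proc/sys/net/netfilter/nf_conntrack_udp_timeout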


  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is, I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be Snow Leopard) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice: once for Apache if he hasn't used any other protected service on that server, and once for the bug tracker itself so it can distinguish different people. Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like that answer assumes I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. Sorry not to hyperlink the second site - apparently I need 10 reputation points.

    Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.


  • Changing default openVPN IP in linux server

    - by Lamboo
    The problem is that we have a public OpenVPN service. Pay €9.95 and you get an OpenVPN account on currently half a dozen servers for a month. This means there are always, and will always be, some people who create a certain amount of abuse or trouble. In the long run, the external IP every OpenVPN user gets assigned ends up prohibited from editing Wikipedia, and it might be banned by e-gold, some popular web forums, one-click hosters, etc. Not a pleasant experience for the 97% of our customers who use our service responsibly and legitimately to regain their privacy. So even if I could only change the assigned external IP every few months, e.g. from 216.xx.xx.164 to 216.xx.xx.170, it would help us a lot to combat this abuse and to provide our paying clients with "fresh" IP addresses that aren't yet banned or restricted on popular Internet sites and services.

    Does anybody know how to change the first IP address assigned to the public interface in CentOS? So that, e.g., OpenVPN in future doesn't give our OpenVPN clients the external IP 123.xx.xx.164 but rather 123.xx.xx.170?
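
    Assuming the clients' traffic is NATed out of the server (the common OpenVPN setup), the externally visible address is simply whatever source address the NAT rule uses, so one sketch of a swap looks like this - the addresses are placeholders from the documentation range, and any existing MASQUERADE/SNAT rule for the tunnel subnet would need to be replaced rather than duplicated:

        # add the new address as a secondary IP on the public interface
        ip addr add 203.0.113.170/24 dev eth0
        # send the VPN subnet out via the new address
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 203.0.113.170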


  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has a Win2k3 SP2 and a Win2k SP4 server; site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means - no need to send me links to the TechNet article explaining it; I'll improve my backups. What I'm more curious about is why I just started getting this message now. Why haven't I been getting it for the past 3 years?!?

    Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper dcpromo exercises and made sure we didn't lose anything. But could shutting those down be related to why I get this error now? This won't keep me awake at night, but I am curious as to what changed...

        Event Type: Warning
        Event Source: NTDS Replication
        Event Category: Backup
        Event ID: 2089
        Date: 3/28/2010
        Time: 9:25:27 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: RedactedName
        Description:
        This directory partition has not been backed up since at least the following number of days.
        Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
        It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key.
        'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
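
    For completeness, the knob the event text refers to can be set from the command line if the reporting threshold ever needs adjusting; this only changes when the warning fires, not the underlying backup situation, and the value name is taken verbatim from the event:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "Backup Latency Threshold (days)" /t REG_DWORD /d 90 /f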


  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    The boss asked me to check whether I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIS Mailbox Store, category General, Event 9690. The message said (edited to generalize it):

        Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this database (the logical size equals the physical size of the .edb file and the .stm file minus the logical free space in each) is 22 GB. This database size has exceeded the size limit of 22 GB. This database will be dismounted immediately.

    Hmm... it happened at five in the morning, and I'm thinking this is a pretty good hint that it leads to the culprit. Thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.
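
    If memory serves, Exchange 2003 SP2 Standard reads its mailbox-store size cap from a registry value, which would explain the 22 GB figure in the event; treat this as an assumption to verify against Microsoft's documentation before touching a production store (the store-specific part of the key path is a GUID that differs per server):

        reg query "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS" /s | find /i "Database Size Limit"
        rem raising the cap would then be a REG_DWORD change on that value (up to 75 GB on SP2 Standard)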


  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability - if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel.

    EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes:

        1. Support failover scenarios contemplated by the websockets Network Working Group, http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server).
        2. Support scenarios where new subscribers must receive all past messages that were published.

    Of course this can be handled at the application layer... but that is not what I am looking for.


  • Firewall for internal networks

    - by Cylindric
    I have a virtualised infrastructure here, with separated networks (some physically, some just by VLAN) for iSCSI traffic, VMware management traffic, production traffic, etc. The recommendations are of course to not allow access from the LAN to the iSCSI network, for obvious security and performance reasons, and the same between DMZ/LAN, etc.

    The problem I have is that in reality, some services do need access across the networks from time to time:

        The system monitoring server needs to see the ESX hosts and the SAN for SNMP.
        vSphere guest console access needs direct access to the ESX host the VM is running on.
        VMware Converter wants access to the ESX host the VM will be created on.
        The SAN email notification system wants access to our mail server.

    Rather than wildly opening up the entire network, I'd like to place a firewall spanning these networks, so I can allow just the access required. For example:

        SAN -> SMTP server, for email
        Management -> SAN, for monitoring via SNMP
        Management -> ESX, for monitoring via SNMP
        Target server -> ESX, for VMware Converter

    Can someone recommend a free firewall that will allow this kind of thing without too much low-level tinkering of config files? I've used products such as IPCop before, and it seems possible to achieve this with that product if I re-purpose their ideas of "WAN" and "WLAN" (the red/green/orange/blue interfaces), but I was wondering if there were any other accepted products for this sort of thing. Thanks.
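
    If the spanning firewall ends up being a Linux box, the example flows above map onto a few forwarding rules; the subnets and hosts below are invented placeholders for the management, storage and production segments:

        # default deny between segments
        iptables -P FORWARD DROP
        # allow replies to permitted traffic
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # monitoring host -> SAN and ESX hosts, SNMP
        iptables -A FORWARD -s 192.168.10.5 -d 192.168.20.0/24 -p udp --dport 161 -j ACCEPT
        # SAN controller -> mail server, SMTP notifications
        iptables -A FORWARD -s 192.168.20.10 -d 192.168.1.25 -p tcp --dport 25 -j ACCEPT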


  • Appears to be "randomly" switching between the acl matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:

        An NGinx instance
        An in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)

    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to NGinx, which obviously is annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/* and there's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000
            # Default Backend
            default_backend www_backend
            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor   # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing to me why the behaviour appears to be "random", and since it's hard to reproduce it's also hard to debug. I appreciate any insight on this.


  • Exchange 2010 Hub cannot deliver to Exchange 2007 Hub - "451 5.7.3 Cannot achieve Exchange Server authentication"

    - by Graeme Donaldson
    We have an existing Exchange 2007 server in Site A (exch07). I've installed an Exchange 2010 server in Site B (exch10). Both servers have the CAS, Mailbox and Hub roles. Messages sent via SMTP on exch10 which are destined for mailboxes on exch07 are queued with the "Last Error" reported in Queue Viewer as '451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server authentication." Attempted failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts.' I've found that some people have resolved this by creating new Receive Connectors which are scoped specifically to apply to connections from the remote hub/s, but I have had no luck doing this. Specifically I created new receive connectors on both servers with the following settings: Remote IP = IP/s of remote server Authentication = "Transport Layer Security (TLS)" and "Exchange Server authentication" Permission Groups = "Exchange servers" and "Legacy Exchange Servers" This made no difference, I see the same error message. What am I missing? Update: We noticed that the Application log had this error message from MSExchangeTransportService: Microsoft Exchange could not find a certificate that contains the domain name exch07.domain.local in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector exch10 with a FQDN parameter of exch07.domain.local. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key. It turns out that the default self-signed certificate was no longer enabled for the SMTP service for some reason. After enabling the self-signed certificate for SMTP, we no longer get the error in the event logs, but delivery is still failing with the same error message. Update 2: I put a mailbox on exch10 and attempted to deliver a message via SMTP on exch07 and I get the same error.


  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command:

        # firewall-cmd --zone=public --add-port=3000/tcp --permanent

    However I can't access the server on port 3000. From an external box:

        telnet 178.62.16.244 3000
        Trying 178.62.16.244...
        telnet: connect to address 178.62.16.244: Connection refused

    There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000 which works fine externally. My app is definitely listening on the port too:

        Proto Recv-Q Send-Q Local Address    Foreign Address    State    User    Inode    PID/Program name
        tcp        0      0 0.0.0.0:3000     0.0.0.0:*          LISTEN   99      36797    18662/node

    firewall-cmd doesn't seem to show the port either - see how "ports:" is empty. You can see the forward rule I mentioned earlier.

        # firewall-cmd --list-all
        public (default, active)
          interfaces: eth0
          sources:
          services: dhcpv6-client ssh
          ports:
          masquerade: no
          forward-ports: port=80:proto=tcp:toport=8000:toaddr=
          icmp-blocks:
          rich rules:

    However I can see the rule in the XML config file:

        # cat /etc/firewalld/zones/public.xml
        <?xml version="1.0" encoding="utf-8"?>
        <zone>
          <short>Public</short>
          <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
          <service name="dhcpv6-client"/>
          <service name="ssh"/>
          <port protocol="tcp" port="3000"/>
          <forward-port to-port="8000" protocol="tcp" port="80"/>
        </zone>

    What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
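
    One detail that produces exactly this symptom, offered as a hedged suggestion rather than a definitive answer: --permanent writes the change to the on-disk zone file (which is why it appears in public.xml) but does not touch the running firewall until the configuration is reloaded, which is why the runtime "ports:" list is still empty. To apply and verify:

        # load the permanent configuration into the running firewall
        firewall-cmd --reload
        # or add the port to the runtime directly, without waiting for a reload
        firewall-cmd --zone=public --add-port=3000/tcp
        # verify
        firewall-cmd --zone=public --list-ports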


  • Hosting WCF over Internet

    - by karthik
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose this service over the internet. It will be hosted in a corporate network behind a firewall, and I want the service to be accessible from the internet (i.e. it should be able to pass through the firewall). I did some research on this, and some of the pointers I got were:

        1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of the bindings is preferable?
        2. To get past the corporate firewall, I came across the notion of a DMZ server. What is the purpose of this, and do I really need to use it?
        3. I will be passing some files between the client and server, and the client needs to know the progress of the processing on the server and the end result.

    I know this is a very broad question to ask, but could anyone give me pointers on where to start and what approach to take for this problem? Any help will be appreciated. Thanks, Karthik


  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like DropBox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project — estimates, contracts, project plans, budgets — and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of a file that was edited simultaneously, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare — much less to merge! — the changes in two rival versions of a file that two people edited at the same time without each other's knowledge.

    So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but that also let users lock files — even if it's not a real lock but just a flag or indicator — that could prevent remote workers from both editing the same file at once? Having one person wait for another person to finish editing is a very, very small inconvenience compared to the hour or more that it can take to compare two estimates by hand until you find and resolve the rival changes. Given this fact, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide a solution! Does anyone know of a service that does?


  • VirtualBox: Can't get Bridged Networking to work (Win7 host)

    - by MikeTheTall
    I'm trying to set up a virtual LAMP server, including sharing files between the guest OS (Ubuntu Server) and the host OS (Windows 7) using Samba. I think my problem is that I can't get Bridged (or Host-Only) networking to work in VirtualBox. I can boot the Linux VM just fine with NAT, but then I can't access any services on it directly (except port 80 after port-forwarding it; my understanding is that this works because I'm not running a web server on the host OS, so the unused port 80 can be forwarded). I don't think that port-forwarding Samba traffic from the host to the guest will work, since I think the host OS is using those ports.

    When I turn off NAT and turn bridged networking on, the VM fails to boot, with a dialog popping up (title: "VirtualBox - Error") that says:

        Failed to open a session for the virtual machine UbuntuServer.
        Configuration error: Failed to get MAC address (VERR_CFGM_VALUE_NOT_FOUND).

    I'm hoping that once this is resolved, Samba will work ok :) Any advice on this would be great (how to fix it would be wonderful; next steps for troubleshooting would be great, too :) )

