Search Results

Search found 8990 results on 360 pages for 'customer contact'.

Page 294 of 360

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken. In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300,intvl=180,probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes) and the client would respond with an ACK and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes, the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300,intvl=180,probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right? One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection so we think it affects all TCP connections.
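
    One thing worth checking on the server side: the kernel only sends keepalive probes on sockets that have SO_KEEPALIVE enabled, and the tcp_keepalive_* sysctls merely supply the defaults for those sockets, so a capture will show nothing if the application never asks for keepalives. A minimal sketch of enabling them (and overriding the defaults) per socket, assuming Python on Linux and a placeholder hostname:

        import socket

        # Minimal sketch: enable keepalive on one socket and override the system-wide
        # sysctl defaults for that socket only. The TCP_KEEP* constants are
        # Linux-specific; the host below is a placeholder.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)   # idle seconds before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 180)  # seconds between unanswered probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)     # unanswered probes before the drop
        sock.connect(("teradata-server.example.com", 1025))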

    Read the article

  • Proper configuration for Windows SMTP Virtual Server to only send email from localhost, and tracking down source of spam emails

    - by ilasno
    We manage a server that is hosted on Amazon EC2, which has web applications that need to be able to send outgoing email. Recently we received a notice from Amazon about possible email abuse on that server, so I've been looking into it. It's Windows Server Datacenter (2003, I guess), and uses SMTP Virtual Server (you know, the one that requires IIS 6 for admin). The settings on the Access tab are as follows: - Authentication: Anonymous - Connection: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server) - Relay: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server) In the SMTP logs there are many entries like the following: 2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 FROM: 0 0 4 0 26364 SMTP - - - - 2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26536 SMTP - - - - 2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 TO: 0 0 4 0 26536 SMTP - - - - 2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26707 SMTP - - - - ([email protected] is sending quite a lot of emails :-/) Can anyone confirm whether the SMTP server settings look correct? I'm also wondering whether a web application on the machine could be exposing a contact form or something similar that allows this sort of abuse, and I'm looking into that (and how to look into that) further.
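
    A quick way to gauge the abuse is to tally the addresses the virtual server is relaying for. This is only a rough sketch, assuming the default SMTPSVC1 log directory and the W3C log layout shown above; adjust the path and the regex for your server:

        import collections
        import glob
        import re

        # Rough sketch: tally the addresses in outbound MAIL FROM / RCPT TO commands
        # in the IIS SMTP virtual server logs to see how much mail is leaving and for
        # whom. The log directory and field layout are assumptions based on the
        # default setup.
        LOG_GLOB = r"C:\WINDOWS\system32\LogFiles\SMTPSVC1\*.log"
        ADDR = re.compile(r"(FROM|TO):<?([^<>\s]+@[^<>\s]+?)>?\s", re.IGNORECASE)

        senders = collections.Counter()
        recipients = collections.Counter()

        for path in glob.glob(LOG_GLOB):
            with open(path, encoding="latin-1", errors="replace") as log:
                for line in log:
                    if "OutboundConnectionCommand" not in line:
                        continue
                    match = ADDR.search(line)
                    if not match:
                        continue
                    keyword, address = match.group(1).upper(), match.group(2).lower()
                    (senders if keyword == "FROM" else recipients)[address] += 1

        print("Top senders:   ", senders.most_common(10))
        print("Top recipients:", recipients.most_common(10))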

    Read the article

  • Block SMTP connections from mail domains which don't themselves accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the original sender refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
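
    Postfix can do most of this out of the box: the reject_unverified_sender restriction (in smtpd_recipient_restrictions) probes the declared sender address and caches the verdict before accepting mail, which covers the "can we reach them back?" test at the address level. If you want the domain-level callback described above as a separate check, for example behind a check_policy_service hook, the core of it is small. The sketch below is a rough illustration only; the HELO name is a placeholder, and a real implementation should resolve the domain's MX records (e.g. with dnspython) rather than connecting to the bare domain:

        import smtplib

        def domain_accepts_smtp(domain, timeout=10):
            """Rough callback test: will the sender's domain even take an SMTP session?

            Sketch only: a real check should look up the domain's MX records first and
            try those hosts; here we just try the domain itself to stay dependency-free.
            """
            try:
                with smtplib.SMTP(domain, 25, timeout=timeout) as session:
                    code, _banner = session.helo("mail.example.net")  # placeholder HELO name
                    return 200 <= code < 300
            except (OSError, smtplib.SMTPException):
                return False

        print(domain_accepts_smtp("example.org"))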

    Read the article

  • How to install WordPress without a web browser

    - by bvandrunen
    What I am trying to do is to automate WordPress website creation for the company I am working for. We have lots of information in our database for our customers and we want to create a WordPress website for each customer. The process works great and we have no trouble with the creation of websites/transfer of data or anything like that. The problem we do have is when we buy a new domain (http://www.newdomain.com): our process breaks (we call a stored procedure which installs all the data after the URL is called to install WordPress) if the domain takes more than 15 minutes to resolve. We have tried looping (where the process checks whether the domain resolves and keeps trying, but eventually it fails). So what we are looking for is a way to run the install for a URL before the domain actually resolves. I have seen suggestions that you can change the wp-config file, but this doesn't work since we have more than one domain and it changes the source URL for all the domains. What we really need is just a way for us to manually start the install script through a call, either through the database or some other way, that doesn't check whether the domain resolves or points at the server. Thanks for any suggestions. EDIT: All we do to install WordPress is call this URL: http://"newdomain".com/wp-admin/install.php?step=2 - if you change settings in the backend, calling this URL will install WordPress without having to go through the wp-admin/install.php form
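
    One way to reach the installer before public DNS catches up is to talk to the web server by IP address and carry the new domain in the Host header, so the right virtual host answers. A minimal sketch, assuming the requests library; the IP address and domain below are placeholders:

        import requests

        SERVER_IP = "203.0.113.10"      # the web server's public IP (placeholder)
        NEW_DOMAIN = "newdomain.com"    # the domain that has not resolved yet (placeholder)

        # Talk to the server by IP but present the new domain in the Host header, so
        # the virtual host for the new site answers even though DNS is not ready.
        resp = requests.get(
            f"http://{SERVER_IP}/wp-admin/install.php",
            params={"step": "2"},
            headers={"Host": NEW_DOMAIN},
            timeout=30,
        )
        print(resp.status_code)

    A temporary entry in the hosts file on the machine that fires the install request (pointing newdomain.com at the server's IP) achieves the same thing without any code.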

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application, however in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem, however I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file and a custom program folder for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, however it's really messy and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.

    Read the article

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But if for some reason I don't have the option of using my software and instead need the router to update the IP, it becomes troublesome. For example, I needed to set up IPsec from a customer to me, and the customer's router/firewall (a Netgear SRX5308) has a dynamic IP assigned by an ISP that can't offer static IPs, so dynamic DNS is needed for it to work. In this case there really isn't a client to run the software on, since it's a router/firewall. Unfortunately it seems that most routers are rather unfriendly towards custom DDNS solutions and only offer dyndns.com or similar templates, which was the case with this router too, leaving me with no way to use my own dynamic DNS server. I have the option of switching out the customer's router, and I've been looking around for alternatives, so I was wondering if anyone on this great site might have been in a similar situation or might know about some router/firewall that is more friendly towards custom DDNS solutions. Thanks in advance for any help or guidance!
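
    Routers that hard-code dyndns.com can't be redirected, but some firmware (e.g. DD-WRT's custom DDNS option) and standalone clients such as ddclient or inadyn let you name a custom server while still speaking the familiar dyndns2-style update protocol. One workaround is therefore to have your own service answer that protocol. Below is a rough, hypothetical sketch using only the Python standard library; authentication, HTTPS, and the actual DNS zone update are left out, and the /nic/update path and "good <ip>" reply follow the common dyndns2 convention:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        # Hypothetical sketch: expose a dyndns2-style "GET /nic/update" endpoint so a
        # device that only supports a "DynDNS-compatible" custom server can still
        # update a home-grown DDNS service.
        class UpdateHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                url = urlparse(self.path)
                if url.path != "/nic/update":
                    self.send_error(404)
                    return
                params = parse_qs(url.query)
                hostname = params.get("hostname", [""])[0]
                ip = params.get("myip", [self.client_address[0]])[0]
                update_zone(hostname, ip)          # replace with your own DNS update
                body = f"good {ip}".encode()
                self.send_response(200)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        def update_zone(hostname, ip):
            print(f"would point {hostname} at {ip}")

        HTTPServer(("0.0.0.0", 8080), UpdateHandler).serve_forever()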

    Read the article

  • Linux as a router for public networks

    - by nixnotwin
    My ISP had given me a /30 network. Later, when I wanted more public IPs, I requested a /29 network. I was told to keep using my earlier /30 network on the interface facing the ISP, and to use the newly given /29 network on the other interface, which connects to my NAT router and servers. This is what I got from the ISP: WAN IP: 179.xxx.4.128/30 CUSTOMER IP: 179.xxx.4.130 ISP GATEWAY IP: 179.xxx.4.129 SUBNET: 255.255.255.252 LAN IPs: 179.xxx.139.224/29 GATEWAY IP: 179.xxx.139.225 SUBNET: 255.255.255.248 I have an Ubuntu PC which has two interfaces, so I am planning to do the following: eth0 will be given 179.xxx.4.130/30 with gateway 179.xxx.4.129, and eth1 will be given 179.xxx.139.225/29. And I will have the following in /etc/sysctl.conf: net.ipv4.ip_forward=1 These will be the iptables rules: iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT My clients, which have the IPs 179.xxx.139.226/29 and 179.xxx.139.227/29, will be made to use 179.xxx.139.225 as their gateway. Will this configuration work for me? Any comments? If it works, what iptables rules can I use to have a bit of security? P.S. Both networks are non-private and there is no NATing.

    Read the article

  • How to configure my 404 response

    - by Evylent
    How would I be able to correctly redirect a person who visits my site to my 404 page? I have already created my 404.php file as: <!DOCTYPE html> <html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Page not found | Twilight of Spirits</title> <link rel="stylesheet" href="http://forum.umbradora.net/template/default/css/404.css"> <link rel="icon" type="image/x-icon" href="/favicon.png"> </head> <body> <div id="error"> <a href="http://forum.umbradora.net/"> <img src="/forum/template/default/images/layout/404.png" alt="404 page not found" id="error404-image"> </a> </div> <div id="mixpanel" style="visibility: hidden; "></div></body></html> My .htaccess file is: ErrorDocument 404 http://forum.umbradora.net/404.php Now when I go to my site and enter a false link such as mack.php or total.html, I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. Any ideas on how to solve this? I have tried switching from subdomain to my normal path, still get errors.

    Read the article

  • Onboard Ethernet suddenly stopped working to router

    - by AfterschoolHobbist
    Hey guys, yesterday I suddenly had a problem with my Ethernet connection to my router. My computer is a Pentium 4 with Windows XP SP3 running on it, and it was working fine earlier in the day, but yesterday night was the start of the problem. My computer is unable to ping the router or any website, and unable to get an internet connection. Since it was connected through a hub, I connected it directly to the router and the same problem occurred: unable to ping the router (or connect to it) and still no internet access. I then connected it directly to the modem and the problem still persisted. For each connection I tried both my desktop and my laptop; the laptop was able to connect, while the desktop was not. I wondered if it was an OS issue and ran a live Ubuntu CD to see if Ubuntu could connect to the internet, but the issue persisted and I was unable to get internet access. I then set my router's lease time to 1 hour and waited. After 1 hour the lease for my computer was removed and I hoped this would help, but it didn't, and something strange is going on. My desktop is still unable to ping the router or connect to the internet, but for some reason the router and the desktop can still contact each other enough to hand out a lease for a local IP address: the router records a lease to my desktop, and when I run ipconfig, the desktop also shows that it has been given a local IP address. I have concluded that this is a hardware issue and the only solution is to buy a new network adapter, but I am wondering if anyone has an explanation for why this happened, why my MAC address shows as 01-23-45-67-89-ab, and whether there is any way to fix it without buying a new network card? Thanks in advance.

    Read the article

  • Intel HD Graphics vs NVIDIA Quadro FX 380 PCI-E

    - by Michael
    I recently purchased an Acer Veriton which has an i5-650 processor, Windows 7 Pro (64-bit) and Intel HD Graphics listed as the video card. I also purchased a PNY NVIDIA Quadro FX 380 PCI-E card for improved picture and home video viewing and editing. I have already replaced the original 300 watt power supply with a 430 watt Antec TruePower I had on hand and boosted the RAM to 8 GB from the original 4. Question 1) Am I getting any improvement in visual quality or system speed with the Quadro, or is it a waste of money and I should just save up to buy a bigger video card? This card was on sale for $115. If I am getting an improvement then I need to ask another question. Question 2) Instructions for the Quadro installation are as follows... 1--Uninstall the existing VGA driver. -Remove the existing display driver via "Add or Remove Programs". -Shut down your computer. 2--Remove your existing graphics board (or disable the integrated 3D graphics controller). skipping instructions on how to remove an existing graphics board -Systems with integrated (also known as on-board) 3D graphics may require you to disable the integrated 3D graphics system. Consult the owner's or vendor manual that came with your PC on how to properly do this. So is the Intel HD Graphics considered a 3D graphics controller? If so, should I just contact Acer, or can anyone give me instructions? Thanks in advance for any help.

    Read the article

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load, and as a stopgap we want to shift all the static content on the site to a cookieless domain, e.g. http://static.thedomain.com. The application is not well understood. So, to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com), I thought about proxying the site through nginx and rewriting the outgoing responses such that links to /images/... are rewritten as http://static.thedomain.com/images/.... So for example, in the response from Apache to nginx there is a blob of headers + HTML. In the HTML returned from Apache we have <img> tags that look like: <img src="/images/someimage.png" /> I want to transform this to: <img src="http://static.thedomain.com/images/someimage.png" /> so that the browser, upon receiving the HTML page, requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs but nothing jumped out at me except rewriting inbound URLs.

    Read the article

  • My nameserver isn't registered?

    - by jflory7
    My problem is that I am trying to set my domain's nameservers to the nameservers of my dedicated server. My domain is hosted by Namecheap, and every time I try to input the two nameservers for my private server, one of them is rejected for being unregistered. My dedicated server's control panel is Parallels Plesk 11.5, and the nameservers provided to me are one from the actual provider, OVH (sdns1.ovh.ca), and one unique nameserver that points directly to my specific dedicated server. Previously, for another domain I own, I was able to get Namecheap to take the nameserver without an error, so I know this is possible. After being redirected by Namecheap to contact the server provider, I called OVH and they said it was something I would have to do myself. One interesting detail the OVH representative mentioned was that he saw that my port 53 was closed, which is the port that handles DNS. The only problem is that I have no idea how to open this port back up. So, my final question is how can I get this nameserver working in Namecheap and pointing to my dedicated server? If you need any more details, feel free to ask for clarification.
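
    A quick way to confirm the port 53 observation from another machine is a plain TCP connect test; the IP below is a placeholder. Ordinary lookups use UDP/53, which a bare connect can't probe, so follow up with a real query (for example dig @your-server-ip yourdomain.com, or dnspython) once the port looks open:

        import socket

        NS_IP = "203.0.113.53"   # placeholder: your dedicated server's public IP

        # Registrars and resolvers need to reach the nameserver on port 53. TCP/53
        # can be probed with a plain connect; UDP/53 cannot, so test it with a real
        # DNS query instead.
        probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        probe.settimeout(5)
        try:
            probe.connect((NS_IP, 53))
            print("TCP port 53 is reachable")
        except OSError as exc:
            print("TCP port 53 looks closed or filtered:", exc)
        finally:
            probe.close()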

    Read the article

  • Active Directory server down, recovering without reinstalling

    - by whatever
    My Windows 2003 server suddenly ceased to function as a DC (this server is the only DC of the domain). All AD-related services are down. The only way I can log in is physically at the machine. Every time I access an AD-related service (e.g. "AD Users and Computers") I get the below error: Naming information cannot be located because: The specified directory service attribute or value does not exist. Contact your system administrator to verify that your domain is properly configured and is currently online. I found the below system event which matches the time when the issue started; this recurs every time I reboot the server. NTDS General | Global Catalog | Active Directory was unable to establish a connection with the global catalog. Additional Data Error value: 1355 The specified domain either does not exist or could not be contacted. Internal ID: 3200d33 I started the troubleshooting with DNS. Netdiag throws the below error, although I think this is simply a consequence of not being able to access the Global Catalog. The procedure entry point DnsGetPrimaryDomainName_UTF8 could not be located in the dynamic link library DNSAPI.dll. Anyway, DNS seems OK because I can ping the DC FQDN from the DC itself. I found the below solution which is supposed to help by doing some cleanup of the metadata: http://support.microsoft.com/kb/216498 If I follow procedure 1, here is what I get at step 9: no current site Domain - DC=<mydomain>,DC=<com> no current server no current naming context I can continue the procedure until step 14. I haven't tested step 15, as my understanding is that I would have to reinstall the whole AD again. Is there any way I can recover my AD from here without having to reinstall the whole thing? Update: Yes, the server was powered off/on, because a reboot would take forever (not because I thought power cycling the unit would fix it more than a reboot).

    Read the article

  • Scripted redirection for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user and each Friday afternoon it needs to be set back. I'm using the VBS script below to do this, run via the Task Scheduler. Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. E.g. last Thursday the forwarding was enabled and worked correctly. On Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting? The script (VBS):

        ' Call this script with the following parameters:
        '
        '   SrcUser - The logon ID of the user whose account is to be modified
        '   DstUser - The logon account of the person to whom mail is to be forwarded
        '             Use "reset" to clear the email forwarding
        SrcUser = WScript.Arguments.Item(0)
        DstUser = WScript.Arguments.Item(1)

        SourceUser = SearchDistinguishedName(SrcUser)   ' The user login name
        Set objUser = GetObject("LDAP://" & SourceUser)

        If DstUser = "reset" Then
            objUser.PutEx 1, "altRecipient", ""
        Else
            ForwardTo = SearchDistinguishedName(DstUser)   ' The contact common name
            objUser.Put "AltRecipient", ForwardTo
        End If
        objUser.SetInfo

        Public Function SearchDistinguishedName(ByVal vSAN)
            Dim oRootDSE, oConnection, oCommand, oRecordSet
            Set oRootDSE = GetObject("LDAP://rootDSE")
            Set oConnection = CreateObject("ADODB.Connection")
            oConnection.Open "Provider=ADsDSOObject;"
            Set oCommand = CreateObject("ADODB.Command")
            oCommand.ActiveConnection = oConnection
            oCommand.CommandText = "<LDAP://" & oRootDSE.get("defaultNamingContext") & ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
            Set oRecordSet = oCommand.Execute
            On Error Resume Next
            SearchDistinguishedName = oRecordSet.Fields("DistinguishedName")
            On Error GoTo 0
            oConnection.Close
            Set oRecordSet = Nothing
            Set oCommand = Nothing
            Set oConnection = Nothing
            Set oRootDSE = Nothing
        End Function

    Read the article

  • (Windows 7) Shared External Drive Permission Issues

    - by connec
    So, say I share my system (C) drive through windows (E.g. properties -> Sharing -> Advanced Sharing -> Share this Folder). I can then access this drive at \\Comp\C on another networked computer - all is well. However, if I insert a removable (USB) disk, say "E", and proceed to share it the same way, when I attempt to access \\Comp\E (either directly or through browsing) I get an error: Windows cannot access \\Comp\E You do not have permission to access \\Comp\E. Contact your network administrator to request access. Now, the permissions (Advanced Sharing -> Permissions) are set with "Everyone" having read access (same as the internal drive), so this doesn't make a lot of sense. Also of note, I have an SSH server on my computer (through Cygwin) and even through SSH (logging in as an administrator user) I cannot access /cygdrive/e (although /cygdrive/c is accessible). As a final note, the drive is of course accessible on the host machine (E:\), and also at \\Comp\E on the host machine.

    Read the article

  • Windows 8 "Upgrade Offer" eligibilty when running the Consumer Preview in a VM?

    - by Dan Harris
    If I have a VM running the Windows 8 Consumer/Release Preview, am I allowed to take advantage of the Windows 8 upgrade offer and install it on that machine? I would have assumed not, as there was never a licensed version of XP SP3 through Windows 7 installed in that VM; it was a clean installation of the Consumer Preview into a VM. My confusion comes from the notes at the bottom of the download page for the upgrade offer, which state: Offer valid from October 26, 2012 until January 31, 2013 and is for individuals and small businesses needing to upgrade up to five devices. If you are a business customer looking to upgrade more than five devices to Windows 8 Pro, contact your Microsoft partner for more information. To install Windows 8 Pro, customers must be running Windows XP SP3, Windows Vista, Windows 7, Windows 8 Consumer Preview, or Windows 8 Release Preview. I am assuming it's not possible and I'll need to purchase the System Builder edition to install within a VM? My guess is that you can use the downloaded upgrade offer only if you updated Windows 7 to the Release Preview, and therefore had a Windows 7 license on the machine. I used the serial number from the Microsoft website when downloading the Release Preview and did a clean install, so there was never a Windows 7 license on the VM. I have MSDN for development purposes, but I am looking to run the VM for personal use as well, so my MSDN license is not valid for that particular use.

    Read the article

  • How do I tell WebSphere 7 about a front-end load balancer so that redirects are handled correctly?

    - by TiGz
    On WebLogic 11g I can use the console to set the FrontendHost and FrontendPort on a server or on a cluster so that redirects are handled correctly and end up resolving to the front-end load balancer instead of the local host. The MBeans associated with this on WebLogic are, for example: MBean Name com.bea:Name=AdminServer,Type=WebServer,Server=AdminServer Attribute Name FrontendHost Description The name of the host to which all redirected URLs will be sent. If specified, WebLogic Server will use this value rather than the one in the HOST header. Sets the HTTP frontendHost Provides a method to ensure that the webapp will always have the correct HOST information, even when the request is coming through a firewall or a proxy. If this parameter is configured, the HOST header will be ignored and the information in this parameter will be used in its place. Type java.lang.String Readable / Writable RW How is the same thing achieved under WebSphere 7? Follow-up info: So I have 2 use cases actually. One is that I have a web app running under WebSphere on host A on port 9002 and an LB running on host B at port 80; when I visit the home page of the app via the LB at http://hostb/app, the app redirects my browser to http://hostb:9002/app and it 404s. I think this is WebSphere's fault, but I guess it could be the app's fault? The second is that the web app in question needs to send emails containing URLs that the customer can click on to get back into the web app - obviously this needs to be via the LB. On WebLogic the app uses MBeans to derive the LB URL, and I was hoping to use a similar mechanism on WebSphere.

    Read the article

  • TCP/UDP hole punching from and to the same NAT network

    - by Luc
    I was wondering if tcp/udp hole punching would still work when you are in the same network (behind a NAT), and what the packet's path would be. What happens when using hole punching on the same network, is that it will send a packet out with the same destination and source address. Only the source and destination port would differ. I imagine a router with NAT loopback enabled will handle this as it should, but how about other routers? Would they drop the packet, or would a router (the first?) from the ISP bounce the packet back after which it gets handled okay? I'm wondering because I was thinking about using this technique to circumvent a block between peers in a network (like a school network where clients can only access the internet, but any contact with each other is blocked). The only other option is to use a man in the middle as proxy (tunnel?). The disadvantage of this is that you have to have a server with significantly more bandwidth than one that would only do hole punching. Also the latency would increase significantly.
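
    For reference, the technique in question boils down to something like the sketch below (one peer shown): each side learns the other's public address and port out of band, for example from a rendezvous server, and keeps firing datagrams at that pair until one gets through and the NAT mappings line up. Addresses are placeholders; whether this succeeds when both peers sit behind the same NAT is exactly the hairpinning question raised above.

        import socket

        # Minimal UDP hole punching sketch for one peer. The peer's public
        # (ip, port) pair is assumed to be known already, e.g. from a rendezvous
        # server; both sides run the same loop at roughly the same time.
        LOCAL_PORT = 40000
        PEER = ("198.51.100.7", 40001)   # placeholder public endpoint of the other peer

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", LOCAL_PORT))
        sock.settimeout(1.0)

        for attempt in range(30):
            sock.sendto(b"punch", PEER)      # outgoing packet opens/refreshes the NAT mapping
            try:
                data, addr = sock.recvfrom(1024)
                print("got", data, "from", addr)
                break
            except socket.timeout:
                continue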

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task to totally redo the IT-infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with flash games. To say the least, this isn't working for them. Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for openvpn on it? I don't like the Domain Controller for the future AD to also run a VPN-server, because of stability issues when something goes to hell with either of them. There will be no redundancy though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself, when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the windows Domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged in AD-user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if somethings not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT-solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error: A network-related or instance-specific error occurred while establishing a connection to SQL Server I've seen this a few times, but only during the initial setup - it's often caused by one of the following: The database server is turned off The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall) The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled) However, if I assume that no configuration changes have been made, I'm trying to postulate on what the reason might be for getting this error at a random point after the initial setup. My initial thought is: SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server Is this a valid theory? What other possible causes are there of this error that are not related to the initial setup of the server / application connection? Or is it simply impossible that this error could occur without a configuration change having been made (either on the SQL Server side, application side, or somewhere in-between (network))? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).

    Read the article

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: Several Win 2003 SP 2 servers and several Win XP SP2 & SP3 clients. All in the same LAN. Firewall is disabled everywhere. No recent Windows updates or configuration changes. This is the problem: Since last Thursday, I log on to any other server or workstation as any regular (non-admin) user and I fail to be able to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is: "\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I can, however, open the same shares if I use FQDN or IP address: \\server1.domain.local\c$ \\172.0.0.1\c$ Other shares do not have this issue and I can open them without any issue. Any ideas or suggestion would be truly appreciated. Thank you in advance.

    Read the article

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows: 2x XP Pro 32-bit machines, 1x Vista Ultimate x64 machine, and 2x Windows 7 x64 Ultimate machines, all chained into 1x 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed. There is also a router (Netgear RangeMax) chained off the switch. I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put 2x 3 TB hardware RAID 5 arrays in it, plus assorted other spare disks, and I have shared the roots of all of them. The unusual problems start when I get "Access denied, please contact administrator for permission" when trying to access both of the RAID 5 arrays, but not the other standalone drives. I have checked the permission settings, I have added Everyone to the read permission for the root, and I have tried moving things into subdirectories and then sharing them. I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, and the result is always the same. I have tried flushing caches all round, disabling and re-enabling shares, and sharing after a restart, as well as several other things, and the result is always the same... no problem on the individual drives, but access denied on both the RAID arrays from the XP, Vista, and Windows 7 machines. One interesting quirk that may lead to an answer is that there is no "offline status" information regarding the folders when you select the RAID 5s from a Windows 7 machine, yet there is on the normal drives, which say they are online. It is as if the RAID is present but turned off or spun down, but as far as I was aware Windows will spin an array back up on a network request, and on the machine itself the drives seem to be online and can be accessed. Have to admit this has me stumped. Any suggestions anyone? Thanks in advance for any fellow geek assistance. K.A.I.N

    Read the article

  • Is there any functional-like unix shell?

    - by Caruccio
    I'm (really) a newbie to functional programming (in fact I've only had contact with it using Python), but it seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this: $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ] Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc...) from within Python (the idea is to use Python's interactive interpreter as a replacement for bash). EDIT: Maybe a comparative example would clarify. Let's say I have a list composed of dir/file: $ FILES=( build/project.rpm build/project.src.rpm ) And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process): Using bash: $ cp ${files[*]} dist/ $ cd dist && rpm -Uvh $(for f in ${files[*]}; do basename $f; done) Using a "pythonic shell" approach (caution: this is imaginary code): $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist' Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of stuff built in not exist yet? It's a real pain to handle lists in the shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: using syntax/tools/features everybody already knows: sh and Python. IPython seems to be headed in a good direction, but it's bloated: if a var name starts with '$', it does this; if '$$', it does that. Its syntax is not "natural", so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error)
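
    For comparison, here is roughly what that two-step task looks like in plain Python today, which is more or less what a "pythonic shell" would wrap in nicer syntax (a sketch using the paths from the example above):

        import os
        import shutil
        import subprocess

        files = ["build/project.rpm", "build/project.src.rpm"]

        # Copy every package into dist/ ...
        for f in files:
            shutil.copy(f, "dist")

        # ... then install them all in one rpm invocation, run from dist/.
        subprocess.run(
            ["rpm", "-Uvh", *(os.path.basename(f) for f in files)],
            cwd="dist",
            check=True,
        )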

    Read the article

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then, I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then, I proceed to install the Android SDK Starter Package: installer_r08-windows.exe. But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this? Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. None of these solved the problem. It appears that I am not the only experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit I wonder whether there is a 64-bit version of the Android SDK. Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check: 'Contact all update sites during install to find required software'. We'll see how this goes... Update (3): It worked! (going to accept @bubu's answer shortly) -- but why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?

    Read the article
