Search Results

Search found 4183 results on 168 pages for 'provider'.


  • A network-related or instance-specific error occurred while establishing a connection to SQL Server

    - by sf
    Hi, I'm getting the following error when trying to load an ASP.NET MVC app on IIS 7 with SQL Server 2008 Express. The app uses LINQ to SQL. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) I've done some searching and all answers point to enabling TCP connections in SQL Server Configuration, which I have done to no avail. The connection string I am using is:
        Server=SERVERNAME\SQLEXPRESS;Database=DBName;Integrated Security=true
    The catch: I have another app that could already talk to the SQL Server just fine, even before playing around with the SQL Server Configuration settings. The other app uses the following connection string:
        Data Source=SERVERNAME\SQLEXPRESS;Initial Catalog=OtherDbName;Integrated Security=True;Persist Security Info=False;Connect Timeout=120
    I've tried this connection string on the app that isn't working and it still doesn't work. Please help, I think I'm about to go crazy.
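
    A minimal set of checks for this, assuming the SQLEXPRESS instance name from the connection string, that the SQL Server Browser service may be stopped, and that the instance has been given a static TCP port (1433 here is an assumption; Express defaults to a dynamic port):
        sc query MSSQL$SQLEXPRESS
        sc query SQLBrowser
        sqlcmd -S tcp:SERVERNAME,1433 -E -Q "SELECT @@SERVERNAME"
    Forcing tcp: with an explicit port bypasses both Named Pipes and the Browser service, which helps isolate whether the failure is specific to one protocol.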

    Read the article

  • Remote SQL server connection failure

    - by Sevki
    I am trying to connect to my MSSQL Server 2008 Web instance and I'm failing horribly. I get error 26, and before you jump on me, I have already done these:
        - Check the spelling of the SQL Server instance name that is specified in the connection string.
        - Use the SQL Server Surface Area Configuration tool to enable SQL Server to accept remote connections over the TCP or named pipes protocols. For more information about the SQL Server Surface Area Configuration tool, see Surface Area Configuration for Services and Connections.
        - Make sure that you have configured the firewall on the server instance of SQL Server to open ports for SQL Server and the SQL Server Browser port (UDP 1434).
        - Make sure that the SQL Server Browser service is started on the server.
    In addition to these, I have disabled the firewall completely and tried other ports; nothing works. The same credentials work on the server but not on the client. This is the exact error message: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider) Can anybody help?
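
    A hedged sketch of the server-side firewall rules, assuming Windows Firewall with Advanced Security and that 1433 is the instance's TCP port (both assumptions):
        netsh advfirewall firewall add rule name="SQL Server TCP" dir=in action=allow protocol=TCP localport=1433
        netsh advfirewall firewall add rule name="SQL Browser UDP" dir=in action=allow protocol=UDP localport=1434
    Error 26 specifically means the client never got a reply from the SQL Browser on UDP 1434 for the named instance, so it is worth confirming from the client that UDP 1434 actually reaches the server (for example with PortQry) rather than only testing TCP.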

    Read the article

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputation, I won't mention the names. But I'll just use: Business I worked for previously - ABC Web Dev. Hosting company they used - XYZ Hosting. I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data - including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites, after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it and their reputation was ruined. I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace but, although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened to one, the websites would still be live on the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server if the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally which will allow for quick recovery if a provider did have any issues; however, I'd like to avoid any downtime at all if possible. So my questions are: Has anyone tried running two providers simultaneously? Would this be considered good practice or am I going too far? Is there really any way to run two simultaneously where one server acts as a backup?
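
    One hedged way to get the "switch to the second provider when the first is down" behaviour, without serving from both at once, is DNS failover: keep a low TTL on the site record and have an external monitor (or a managed DNS failover service) rewrite it when the primary fails. A minimal sketch in BIND zone syntax, with hypothetical IPs:
        $TTL 300
        www    IN    A    203.0.113.10    ; primary provider
        ; on failure, the monitor rewrites this record to the standby, e.g. 198.51.100.20
    Low TTLs reduce, but do not eliminate, the switchover delay, since some resolvers cache past the TTL.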

    Read the article

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233) Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority order in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason? When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also tried to get someone to log onto the machine with an admin account to restart the SQL Server service. Still no luck.
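
    One way to test each protocol in isolation is to force it with a prefix on the server name; a sketch assuming the instance is the default SQLEXPRESS named instance (the post does not state the instance name, so that part is an assumption):
        rem force Shared Memory
        sqlcmd -S lpc:.\SQLEXPRESS -E
        rem force TCP/IP
        sqlcmd -S tcp:.\SQLEXPRESS -E
        rem force Named Pipes
        sqlcmd -S np:\\.\pipe\MSSQL$SQLEXPRESS\sql\query -E
    Whichever prefix fails points at the protocol that is actually misconfigured; the same lpc:/tcp:/np: prefixes can be typed into the SSMS server name box.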

    Read the article

  • Installing SQL Server 2005 on Windows 7 64-bit

    - by Mostafa
    Hi, I've been trying for 3 days to install SQL Server 2005 under Windows 7 64-bit on my computer. First let me tell you what I've done and what I've got so far.
    1. I installed Windows 7 64-bit on my computer.
    2. I tried to install SQL Server 2005 "Developer Edition".
    2.1 But on the "System Configuration Check" page I received 2 warnings, one for "IIS Feature Requirement" and another for "ASP.NET Version Registration Required".
    2.1.1 I installed "Internet Information Services" from the "Turn Windows features on or off" section in Control Panel.
    2.1.2 I enabled the 32-bit reporting service via Inetpub > AdminScripts > adsutil.vbs.
    2.2 At this stage there were no warnings in the System Configuration Check.
    3. So I installed SQL Server 2005 Developer Edition with all default settings.
    4. I installed SQL Server 2005 Service Pack 3 64-bit.
    Now when I run Management Studio there is no name in the "Server name" section. I typed my computer name or "." and I got this error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2). I googled this error and some people said to follow this instruction: Start > SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration > Surface Area Configuration for Services and Connections. But I got this error: No SQL Server 2005 components were found on the specified computer. Either no components are installed, or you are not an administrator on this computer. (SQLSAC) I'm really tired because of this, and I don't know what's wrong. Some more information: I have no additional software on my computer, like antivirus or proxy. I tried all the steps with "Standard Edition" as well, but no difference. My user is an Administrator. I tried all those steps more than 5 times, including re-installing Windows 7. Please help me, I'm losing all my hair.
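
    A quick hedged check that the database engine itself really installed (the SQLSAC error above can also just mean the tool found no installed components), assuming a default instance:
        sc query MSSQLSERVER
        reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"
        sqlcmd -S . -E -Q "SELECT @@VERSION"
    If the service or the Instance Names key is missing, only the client tools are present and the engine never installed, which would explain both the error 2 connection failure and the empty "Server name" list.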

    Read the article

  • SQL Server 2008 Remote Access

    - by Timothy Strimple
    I'm having problems connecting to my SQL Server 2008 database from my computer. I have enabled remote connections as described in this answer (http://serverfault.com/questions/7798/how-to-enable-remote-connections-for-sql-server-2008), and I have added the ports listed on the Microsoft support page to our Cisco ASA firewall, and I'm still unable to connect. The error I'm getting from SQL Server Management Studio is: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060) Once again, I have double and triple checked that remote connections are enabled under the database properties and that TCP is enabled on the configuration page. I've added TCP ports 135, 1433, 1434, 2382, 2383, and 4022 as well as UDP 1434 to the firewall. I've also checked to make sure that 1433 is the static port that is set in the TCP section of the database server configuration. The ports should be configured correctly in the firewall since HTTP/HTTPS and RDP are all working and the SQL Server ports are set up the same way. What am I missing here? Any help you could offer would be greatly appreciated. Edit: I can connect to the server via TCP on the internal network. The servers are colocated in a datacenter and I can connect from my production box to my development box and vice versa. To me that indicates a firewall issue, but I've no idea what else to open. I've even tried allowing all TCP ports to that server without success.
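
    Since TCP works on the internal network, two hedged checks aimed at the firewall path; the source IP, interface name and server IP below are placeholders:
        telnet <public-ip> 1433
        packet-tracer input outside tcp 203.0.113.50 12345 <sql-server-ip> 1433
    packet-tracer, run on the ASA, shows exactly which ACL or NAT stage drops the flow; if the SQL traffic needs a static NAT entry like the HTTP/HTTPS and RDP rules presumably have, the trace will show the packet being dropped before it ever reaches the server.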

    Read the article

  • WS2008 NTP - Using time.windows.com,0x9 - Time always skewed forwards

    - by David
    I have a domain controller configured to use time.windows.com (with the 0x09 flags set). I've noticed that the system's clock is frequently fast - it varies from 10 minutes to even 45 minutes. I always have to keep resetting the system date/time back to what it should be. When I run "w32tm /query /source" it tells me it's using time.windows.com, and obviously I trust Microsoft not to serve incorrect times, but why is my server's clock fast? EDIT: There are a few Time-Service events in the System log:
    Event ID: 142 Message: The time service has stopped advertising as a time source because the local clock is not synchronized.
    Event ID: 139 Message: The time service has started advertising as a time source.
    These two messages appear in pairs every hour or so. Event 142 appears 14 to 16 minutes after 139 appears. Going back a few months, these events appear:
    Event ID: 35 Message: The time service is now synchronizing the system time with the time source time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).
    Event ID: 37 Message: The time provider NtpClient is currently receiving valid time data from time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).
    Event ID: 47 Message: Time Provider NtpClient: No valid response has been received from manually configured peer time.windows.com,0x9 after 8 attempts to contact it. This peer will be discarded as a time source and NtpClient will attempt to discover a new peer with this DNS name. The error was: The time sample was rejected because: The peer is not synchronized, or it has been too long since the peer's last synchronization.
    These three events only appear once in the log, back in October.
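
    A hedged resync-and-inspect sequence, using only standard w32tm switches:
        w32tm /config /manualpeerlist:"time.windows.com,0x9" /syncfromflags:manual /reliable:yes /update
        w32tm /resync
        w32tm /query /status
        w32tm /stripchart /computer:time.windows.com /samples:5 /dataonly
    A clock that gains 10-45 minutes between polls usually points at the local clock drifting between syncs or at a virtualized host also taking time from its hypervisor; the stripchart output shows how far off the upstream source currently considers the server to be.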

    Read the article

  • SQL Server 2005 - Linked Visual Foxpro Authorization

    - by John
    Here's the Scenario: We have an existing SQL 2000 Server that has a linked server to a share directory (on another server) containing Visual FoxPro tables; all connections work correctly. Porting the SQL 2000 server to a new SQL 2005 server results in questionable behavior: If you connect to the server, remotely, using Windows Authentication, you receive this error when running a query against the linked server: OLE DB provider "MSDASQL" for linked server "[linked server name]" returned message "[Microsoft][ODBC Visual FoxPro Driver]File 'MyTable.dbf' does not exist.". Msg 7350, Level 16, State 2, Line 2 Cannot get the column information from OLE DB provider "MSDASQL" for linked server "[linked server name]". However, logged in locally, the query works fine. The query also works correctly when logged in remotely, but using a SQL login. The only scenario I receive the error is when connected remotely, using windows authentication. As I mentioned before, this works on the SQL 2000 server, and both the old and new servers are running under the same network account (which has access to the folder the FoxPro files are in). Doing a little searching on the internet it looks like others have run into this situation, but I haven't found a resolution. Has anyone run into this before?
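
    A hedged sketch of one common workaround: map remote Windows logins on the linked server so that the connection does not try to impersonate them, letting the provider run under the SQL Server service account that already has rights to the share. The procedure names and parameters are the documented ones, the linked server name is a placeholder, and whether this addresses the underlying double-hop behaviour in this particular setup is an assumption:
        EXEC sp_droplinkedsrvlogin @rmtsrvname = N'LinkedServerName', @locallogin = NULL;
        EXEC sp_addlinkedsrvlogin  @rmtsrvname = N'LinkedServerName',
             @useself = 'FALSE', @locallogin = NULL, @rmtuser = NULL, @rmtpassword = NULL;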

    Read the article

  • Security in shared hosting vs VPS 'virtual appliances'

    - by Pedro Loureiro
    I have to change my hosting provider. Right now I have a shared hosting account, but I'm considering trying the LAMP stack appliance from turnkeylinux.org. I'm very comfortable with using Linux; I've been using it for a long time. I have no problem ssh'ing into remote machines and doing whatever I have to do (coding, reading logs, moving files, deploying, etc.). The problem is that none of those tasks has involved securing the server/firewall. My experience has been as a desktop user or developer deploying apps/files on remote servers. Ignoring the security in the application logic (read: any scripts, frameworks, websites I might have created or installed), I'm worried about things like the base configuration of daemons, the firewall, ports, executable scripts being readable from the outside, and whatnot. My question is: how do you compare the (expected) out-of-the-box security of the LAMP stack from TurnKey and the (expected) security of a "regular" shared hosting provider? I was hoping to find some guides with a list of steps to protect my server, but the only documentation I found simply referred to Ubuntu's documentation.
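
    Out of the box the comparison mostly comes down to who maintains the firewall and the updates; on a TurnKey/Ubuntu appliance that job is yours. A minimal first-pass sketch, assuming ufw is available on the appliance (an assumption) and that only SSH and web traffic should be reachable:
        ufw default deny incoming
        ufw default allow outgoing
        ufw allow 22/tcp
        ufw allow 80/tcp
        ufw allow 443/tcp
        ufw enable
    Adding unattended security updates and key-only SSH logins closes much of the remaining gap between a managed shared host and a self-managed appliance.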

    Read the article

  • Problem with TL-R480T+ and static routes

    - by Globulopolis
    Hi! I have some questions about this router. Before starting, here are some settings specified by my provider.
    WAN1: VPN IP - 192.168.172.84, Mask - 255.255.255.0, Gateway - 192.168.172.253, DNS - 195.110.6.7
    WAN2: Dynamic IP, DHCP - 168.120.1.34, Mask - 255.255.255.0
    Router IP: 192.168.1.1
    Computer IP: 192.168.1.7
    Routes:
        route -p add 192.168.0.0 mask 255.255.0.0 192.168.172.253
        route -p add 195.110.6.0 mask 255.255.254.0 192.168.172.253
        route -p add 88.135.112.0 mask 255.255.240.0 192.168.172.253
        route -p add 178.219.160.0 mask 255.255.240.0 192.168.172.253
    For the first provider I need to set up these routes. Because the router does not support different routes for different WAN interfaces, I put them in "Static routes". But when I try to save them I get an error: "Destination IP address can not be set in a same subnet with the WAN or LAN IP address." If I change the IPs to local ones like 192.168.x.x, the router tells me: "Gateway must be set in a same subnet with WAN or LAN IP address." Changing the mask on the WAN1 interface to 255.255.0.0 doesn't help. Any ideas? PS: Or maybe I should just email TP-Link support?

    Read the article

  • Login failed for user 'XXX' on the mirrored sql server

    - by hp17
    We have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:
    1) "Login failed for user 'userid'"
    2) "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"
    We are running SQL 2005 and have a principal and a mirror DB (synchronous). When these exceptions are thrown, I look at the SQL error logs on the mirror DB and notice the failed login messages in there. The principal DB is running fine and the other web apps are working great. This will happen for maybe 10 minutes, then the app pool recycles and it starts hitting the principal DB again. Is there a configuration I have incorrect? My theory is that our principal DB is forwarding the request to the mirror, but that should never happen. Any help?
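
    Two things usually worth checking in this situation: that the connection string names the mirror as a failover partner (Data Source=Principal;Failover Partner=Mirror;...) rather than relying on redirection alone, and that the SQL logins exist on the mirror with the same SIDs as on the principal, otherwise any connection that does land on the mirror fails to log in. A hedged T-SQL sketch for the login part (login name, password and SID are placeholders):
        -- on the principal: find the login's SID
        SELECT name, sid FROM sys.server_principals WHERE name = N'userid';
        -- on the mirror: recreate the login with that SID
        CREATE LOGIN [userid] WITH PASSWORD = N'placeholder', SID = 0x0123456789ABCDEF0123456789ABCDEF;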

    Read the article

  • Cisco QoS Guidance

    - by Kyle Brandt
    I have a 10M connection to the internet that is hooked into a 100M port. I am getting started with QoS, and am hoping for a little guidance on setting it up on a Cisco 3825 router. Right now I am going forward with the idea that I have to implement it on my router, and the provider can't provide QoS for me. How I envision it working is that the QoS will drop or queue packets on my router, and that will help prevent a situation where the provider has to start dropping a lot of packets. Right now all I am tasked with is making sure that one of the 3 LANs gets a certain slice (say 3M for Gig Lan1) of the 10M internet connection (but ideally this will be more flexible in the future).
                 10M Internet on 100M port on HWIC-4ESW
               +-----------------------+
               |                       |
    Gig Lan1   |      Cisco 3825       |  Lan3 on HWIC-4ESW
               |                       |
               +-----------------------+
                       Gig Lan2
    I need to learn more about QoS, but having a target technology and maybe an example configuration will help me wrap my head around the reading I am doing a little more. Which Cisco QoS technology do you recommend for this particular situation? Have a basic sample config of how this might work? Right now the 10M line is not congested, so this is more to have something in place in case it starts to become mildly congested in the future.
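
    A hedged starting point is hierarchical MQC shaping: shape the 100M port down to the 10M the provider actually delivers, then guarantee Gig Lan1 its slice inside that shaper. The ACL subnet and interface name below are placeholders, and on an HWIC-4ESW switch port the policy may need to sit on the corresponding VLAN interface instead:
        access-list 101 permit ip 192.168.1.0 0.0.0.255 any
        class-map match-all LAN1-TRAFFIC
         match access-group 101
        policy-map INSIDE-10M
         class LAN1-TRAFFIC
          bandwidth 3000
         class class-default
          fair-queue
        policy-map SHAPE-TO-10M
         class class-default
          shape average 10000000
          service-policy INSIDE-10M
        interface GigabitEthernet0/0
         service-policy output SHAPE-TO-10M
    The shaper is what makes the child bandwidth guarantees meaningful: without it the router never sees congestion on a 100M port and simply hands everything to the provider to drop.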

    Read the article

  • Setting up DNS using VirtualMin/WebMin

    - by Nyxynyx
    I am moving from a cPanel server to one where I've installed Virtualmin. The LAMP stack and the website files have been set up properly and I can access the website by its IP address. Problem: Now it's time to point my domain mydomain.com to my new server. After reading many sites describing setting up BIND and master zones, I am pretty confused as to what to do, especially coming from a cPanel server where it's really simple to set this up. Attempt: I tried to register my nameservers ns1.mydomain.com and ns2.mydomain.com at my domain registrar, but I am missing the IPs I need to point these nameservers to. Should I set ns1.mydomain.com to the IP address of my web server, and not register ns2.mydomain.com? When specifying the DNS for mydomain.com, I've set the first one to ns1.apadment.com. On the manager/admin page of my web host provider, I am given the option to create a secondary slave DNS, which I assigned to the IP address of my server. Though I am not sure how the slave DNS will copy the info from my web server? I have assigned this secondary DNS ns.hostprovider.com as the second DNS for mydomain.com. I tried creating a Virtual Server under Virtualmin, but it seems to mess up Apache's DocumentRoot for the site by creating and enabling a new vhost file that ends with .conf. I edited the .conf file to point DocumentRoot back to where it's supposed to be, /var/www/mydomain, instead of /user/mydomain.com. I believe the next step is to set up the zone. Virtualmin has already created a master zone with 8 different addresses (www.mydomain.com, ftp.mydomain.com...). Under Nameservers, there are already 2 records. One is the hostname (a random name given by the host provider, ns12345.ip123-123.net), the other is the secondary slave DNS provided by the host provider. Does having BIND running on my web server make the server the master DNS? Thank you!
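
    For the registrar side, both ns1 and ns2 normally need glue records; with a single server a common hedged workaround is to point both names at the same IP, or to use the host provider's secondary as the second NS once it is slaving the zone. A minimal sketch of the records involved (the IP is a placeholder):
        ; at the registrar: glue records for the nameserver names
        ns1.mydomain.com.   A   203.0.113.10
        ns2.mydomain.com.   A   203.0.113.10
        ; in the BIND master zone on the Virtualmin server
        mydomain.com.       NS  ns1.mydomain.com.
        mydomain.com.       NS  ns.hostprovider.com.
        mydomain.com.       A   203.0.113.10
        www                 A   203.0.113.10
    The provider's secondary copies the zone by zone transfer, so BIND on the web server needs an allow-transfer entry for that secondary's IP; running BIND as master for the zone is exactly what makes the web server the master DNS.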

    Read the article

  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load the site, I get the following error: Microsoft OLE DB Provider for SQL Server error '80004005' [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied. If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so, and then just fails with the above message. During this waiting period, I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think that this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its configuration with regard to Access databases. I also don't know what version it is expected to be. VB code in question:
        set oConn=server.createobject("adodb.connection")
        DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb"
        oConn.Open DSNtemp
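
    Worth noting: the error above comes from the SQL Server OLE DB provider ([DBNETLIB]), not from Jet, which suggests some other connection string in the site (an include file, for example) still points at a SQL Server that no longer exists; the roughly 30-second hang matches a SQL connection timeout. A hedged standalone check that the Jet side itself works (same path as above; run with cscript):
        ' test_jet.vbs - open the MDB directly, outside the web app
        Dim conn
        Set conn = CreateObject("ADODB.Connection")
        conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb"
        WScript.Echo "Jet connection state: " & conn.State
        conn.Close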

    Read the article

  • Monitoring / metric collection for system collectives that change a lot in time (a.k.a. cloud)

    - by Florin Andrei
    When your server fleet doesn't change a lot over time, as when you're using bare-metal hosting, classic monitoring and metric collection solutions (Nagios, Munin) work well. But if the number of systems varies a lot over time, and may in fact vary rapidly, classic software is more difficult to set up and use. E.g., trying to make Nagios (monitoring) keep up with a rapidly evolving cloud infrastructure can be cumbersome. Same for Munin (metric collection). It's not just the configuration: the way the information is conveyed to the user, or displayed, is inadequate for the cloud. What are some possible alternatives that work well with the cloud? The goals are to collect and display metrics (analogous to Munin), and generate alerts when certain metrics go out of bounds or when certain services are unavailable (analogous to Nagios), and do everything in a cloud-friendly manner. Some cloud providers offer monitoring / metric collection as services, but not always, and if you use more than one provider you don't want to become too dependent on just one vendor. So provider-independent solutions are required. EDIT: I am asking this question in a general fashion - not limited to any given cloud infrastructure (like OpenStack), but in the general case of using arbitrary cloud providers.

    Read the article

  • Cisco QoS Guidance

    - by Kyle Brandt
    I have a 10M connection to the internet that is hooked into a 100M port. I am getting started with QoS, and am hoping for a little guidance on setting it up on a Cisco 3825 router. Right now I am going forward with the idea that I have to implement it on my router, and the provider can't provide QoS for me. How I envision it working is that the QoS will drop or queue packets on my router, and that will help prevent a situation where the provider has to start dropping a lot of packets. Right now all I am tasked with is making sure that one of the 3 LANs gets a certain slice (say 3M for Gig Lan1) of the 10M internet connection (but ideally this will be more flexible in the future).
                 10M Internet on 100M port on HWIC-4ESW
               +-----------------------+
               |                       |
    Gig Lan1   |      Cisco 3825       |  Lan3 on HWIC-4ESW
               |                       |
               +-----------------------+
                       Gig Lan2
    I need to learn more about QoS, but having a target technology and maybe an example configuration will help me wrap my head around the reading I am doing a little more. Which Cisco QoS technology do you recommend for this particular situation? Have a basic sample config of how this might work? Right now the 10M line is not congested, so this is more to have something in place in case it starts to become mildly congested in the future. I do have VOIP at one location connected to this one over the Internet that goes through a VPN tunnel. Everything else that is between this location and other offices is on a separate MPLS network.
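
    With VoIP in the mix, the usual addition to a hierarchical shaping policy is a strict-priority class for the voice traffic. A hedged fragment; the DSCP match assumes the phones mark their traffic EF, and the 512 kbps figure is a placeholder:
        class-map match-any VOICE
         match ip dscp ef
        policy-map INSIDE-10M
         class VOICE
          priority 512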

    Read the article

  • Directory directive: AuthType None but still need an AuthProvider?

    - by Steffen Winkler
    For now I just need the server to let me download files from one specific folder (in my case I chose /opt/myFolder for that task). The distribution is Debian 6.0. *edit_start* Apache version is 2.4; according to their official documentation, the Order/Allow clauses are deprecated and should not be used anymore. I'm an idiot: Apache version is 2.2. *edit_end* My directory directives in apache2.conf look like this:
        <IfModule dir_module>
            DirectoryIndex index.html index.htm index.php
        </IfModule>
        ServerRoot "/etc/apache2"
        DocumentRoot "/opt/myFolder"
        <Directory />
            Options FollowSymLinks
            AuthType None
            AllowOverride None
            Require all denie
        </Directory>
        <Directory "/opt/myFolder/*">
            Options FollowSymLinks MultiViews
            AllowOverride None
            AuthType None
            Require all allow
        </Directory>
    When I try to access a file inside that folder (http://myserver.de/aTestFile.zip) I get an Internal Server Error. Apache also writes the following error to its log: configuration error: couldn't check user. Check your authn provider!: /aTestFile.zip. Why would I need an authn provider if I don't want any authentication? Also, I hope someone can explain to me what kind of authentication provider I'd need for that. Every time I search for those things I get pointed at people asking how to protect files/directories with passwords or restrict access to some IP addresses, which doesn't really help me. OK, since I have Apache version 2.2, here is the error I get when using the Order/Deny/Allow commands instead of AuthType/Require: Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration.
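
    Given that the server turned out to be Apache 2.2, the 2.4-style Require directives are what drag in the authn provider; the 2.2 way to express "no authentication, readable by everyone" is host-based access control. A hedged sketch of the same intent in 2.2 syntax:
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>
        <Directory "/opt/myFolder">
            Options FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
    The "Invalid command 'Order'" error at the end would normally mean mod_authz_host is not loaded (or that the running binary really is 2.4 after all), so checking which apache2 binary and modules are actually active is worth doing first.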

    Read the article

  • Export-Mailbox Error

    - by tuck918
    All, I am using export-mailbox to move some data and it is working fine until I get this error: StatusMessage : Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000. This is the command I am using:
        export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA -allowmerge
    We are on SP2 and I am running this under an account that is not a domain or enterprise admin. The account has the Exchange Server Administrator permission on both the source and target Exchange mailbox servers. The account is a member of the local Administrators group on both the source and target Exchange mailbox servers. This account has Full Access permission on both the target and source servers. The issue happens at any time, and I am only trying to run this on one mailbox, the only mailbox I need to run it on. The event log entry is "Error Exchange Migration Export Mailbox Event 1008". The log under migration logs just shows that it was running okay, then it gives the same error as above: "Error was found for mailboxA ([email protected]) because: Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000, error code: -1056749164". Any ideas on what to do/try?
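
    That generic MAPI error is often, though not always, caused by corrupt items aborting the copy. A hedged variant of the same command that tolerates a limited number of bad items and logs verbosely (the limit of 100 is arbitrary):
        export-mailbox -Identity mailboxA -TargetMailbox mailboxB -TargetFolder folderA -AllowMerge -BadItemLimit 100 -Verbose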

    Read the article

  • Prevent Exchange Server from advertising itself on domain

    - by Justin Shin
    I'm in the middle of setting up an Exchange 2010 server. Currently, we use a SaaS provider for Exchange 2007 services. Some (but not all) of my users have been reporting that they are receiving Outlook/Exchange login prompts to log in to the new Exchange server. This is happening without any intervention on the clients' machines. The Exchange server is a member of the domain and connects to the domain site remotely through a site-to-site VPN. What can I do to prevent these login prompts from appearing? Will shutting down the new server until it is time to switch resolve these issues? A little more info: I found that on one of the client computers, all of the settings for Outlook over HTTP had been changed (automatically) from webmail.provider.com to mail.company.com (the latter being the new server). This happened when I enabled Outlook Anywhere access on Exchange 2010. I changed the client's settings back, and everything was groovy. But when I disabled Outlook Anywhere again, the logon prompt came back.
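
    Domain-joined Outlook clients find the new server through the Autodiscover service connection point that was registered in Active Directory when the Client Access role was installed, which is why no client-side change was needed for the prompts to appear. A hedged way to stop that while the SaaS provider is still live (the server name is a placeholder):
        Set-ClientAccessServer -Identity "NEWEXCH01" -AutoDiscoverServiceInternalUri $null
    Shutting the server down also stops the prompts but leaves the SCP in the directory, so clients may keep trying it; clearing or repointing the SCP is the cleaner option until cutover.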

    Read the article

  • Persistent routes for DD-WRT PPTP VPN client

    - by Tim Kemp
    My home network in the USA is behind a Buffalo router (G300NH) running their version of DD-WRT. I use the built-in PPTP VPN client to connect to a VPN provider in the UK. I route certain traffic over the VPN (so it has a UK source address, for various entirely legal reasons), which I achieved by following the instructions in the DD-WRT docs and my VPN provider's own instructions. I placed two commands like this in the firewall script:
        route add -net xxx.xxx.0.0 netmask 255.255.0.0 dev ppp0
        route add -net yyy.yyy.0.0 netmask 255.255.0.0 dev ppp0
    I didn't put any of the iptables rules in since my setup doesn't seem to need them. It works like a charm. Traffic to the xxx subnets goes over the VPN, everything else goes out over my ISP's own pipes. The problem comes when the VPN drops, which it does occasionally. DD-WRT does a fine job of reconnecting it automatically, but the routes are trashed every time that happens. How do I automate the process of re-establishing my routes? I thought about static routes, but the IP address of the VPN connection is dynamically assigned (which is why I'm using dev ppp0). Many thanks, Tim
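
    Since the PPTP client tears the routes down with the tunnel, one hedged approach is a small watchdog in the DD-WRT startup script that re-adds them whenever ppp0 is up; the subnets are the same placeholders used above:
        # re-add the VPN routes whenever ppp0 exists; errors from already-present routes are discarded
        (while true; do
          if ifconfig ppp0 >/dev/null 2>&1; then
            route add -net xxx.xxx.0.0 netmask 255.255.0.0 dev ppp0 2>/dev/null
            route add -net yyy.yyy.0.0 netmask 255.255.0.0 dev ppp0 2>/dev/null
          fi
          sleep 30
        done) &
    DD-WRT can also run a script on PPP/WAN up events, which avoids the polling, but the hook name differs between builds, so the loop above is the more portable sketch.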

    Read the article

  • Debian DNSSEC - how to secure a domain?

    - by Daniel Marschall
    I have a beginner question about DNSSEC. I have a lot of experience with TLS and cryptography and would like to try out this new technology. I have googled a lot about this but haven't found information that is useful for me. I think one source of confusion is that "Debian howto DNSSEC setup" can mean "how to USE DNSSEC for resolving" OR "how to secure your domain with DNSSEC". I am looking for the second. I am running a Debian Squeeze server with root privileges which has a domain name ending with ".de" (which is already signed by the root zone). The network interface on this server uses the gateway IP (DNS resolver?) of the datacentre the server is running in. My domain is hosted at freedns.afraid.org, where I can add DNS RRs for my domain. They are currently NOT capable of adding DNSSEC RRs, but I am bugging them to support this soon. ;-) My simple question is: How do I set up DNSSEC on Debian? Or rather, whom do I have to ask? As far as I understand, all I have to do is to run dnssec-keygen on my Debian server and then add the key to my DNS provider as a DNSSEC RR. (And change it every 30 days?) I have looked at this http://www.isc.org/files/DNSSEC_in_6_minutes.pdf but it looks like you have to be the owner of a ZONE, so I don't think this applies to me. Who needs to sign my domain? My DNS provider or my zone (DeNIC), or can I do it myself? Any help is very appreciated!
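
    Signing has to happen wherever the zone is actually served from, so as long as freedns.afraid.org hosts the zone, they are the ones who would sign it, and the resulting DS record then goes to DeNIC through the registrar. For reference, a hedged sketch of what the zone operator runs with the BIND tools (key sizes and file names are illustrative):
        dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.de            # zone-signing key
        dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE example.de     # key-signing key
        dnssec-signzone -S -K . -o example.de db.example.de              # writes db.example.de.signed and a dsset- file
    The dsset file contains the DS record for the parent zone, and it is the signatures that expire (30 days by default with dnssec-signzone), so the routine task is re-signing, not generating new keys every month.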

    Read the article

  • Postfix with relayhost - relay access denied for bounces

    - by Alex
    I have set up a Postfix mail server; outgoing mail is sent through a smarthost/relayhost which requires authentication. That works great: internal clients can send to external recipients through this relayhost. However, when external mail for a local, non-existent user arrives at the server, Postfix tries to send a non-delivery notification to the sender. This mail is also sent through the relayhost, obviously, but it fails with error 554 5.7.1: Relay access denied. This gets logged to mail.log:
        Nov 9 10:26:42 mail postfix/local[5051]: 6568CC1383: to=<[email protected]>, relay=local, delay=0.13, delays=0.02/0.02/0/0.09, dsn=5.1.1, status=bounced (unknown user: "test")
        Nov 9 10:26:42 mail postfix/cleanup[5045]: 85DF9BFECD: message-id=<[email protected]>
        Nov 9 10:26:42 mail postfix/qmgr[4912]: 85DF9BFECD: from=<>, size=3066, nrcpt=1 (queue active)
        Nov 9 10:26:42 mail postfix/bounce[5052]: 6568CC1383: sender non-delivery notification: 85DF9BFECD
        Nov 9 10:26:42 mail postfix/qmgr[4912]: 6568CC1383: removed
        Nov 9 10:26:43 mail postfix/smtp[5053]: 85DF9BFECD: to=<[email protected]>, relay=mail.provider.com[168.84.25.111]:587, delay=0.48, delays=0.02/0.01/0.26/0.18, dsn=5.7.1, status=bounced (host mail.provider.com[168.84.25.111] said: 554 5.7.1 <[email protected]>: Relay access denied (in reply to RCPT TO command))
        Nov 9 10:26:43 mail postfix/qmgr[4912]: 85DF9BFECD: removed
    According to this error, I suppose that Postfix does not log in at the relayhost when sending those bounces. Why? Normal outgoing mail works just fine. This is how my main.cf looks: http://pastebin.com/Uu1Dryxy And of course /etc/postfix/sasl_password contains the correct credentials for the relayhost. Thanks in advance!
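
    Bounces are submitted by Postfix itself with a null sender, so they go out through the same relayhost; when they alone get "Relay access denied", a frequent cause is that SASL authentication is not actually applied to them, for example because the sasl_password lookup key does not exactly match the relayhost value, or because smtp_sender_dependent_authentication is enabled and the null sender matches no per-sender entry. A hedged check, assuming a hash: map:
        postconf relayhost smtp_sasl_auth_enable smtp_sasl_password_maps smtp_sender_dependent_authentication
        postmap /etc/postfix/sasl_password
        # the lookup key must match the relayhost value byte for byte, including any [brackets] and :587
        postmap -q "[mail.provider.com]:587" hash:/etc/postfix/sasl_password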

    Read the article

  • Mailman delivery troubles

    - by stanigator
    I'm not sure if this is a good place to ask this question. It's about mailing list management software called Mailman from GNU. Here are the details:
    Hosting provider: Vlexofree
    Domain: www.sysil.com with Google Apps
    Mailing list created from the hosting cPanel: [email protected]
    I have registered a list of subscribers and tried sending an email to [email protected]. I got the following error message:
    Delivery to the following recipient failed permanently: [email protected]
    Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 23si6479194ewy.44 (state 14).
    ----- Original message -----
    MIME-Version: 1.0
    Received: by 10.216.90.136 with SMTP id e8mr1469147wef.110.1264220118960; Fri, 22 Jan 2010 20:15:18 -0800 (PST)
    Date: Fri, 22 Jan 2010 20:15:18 -0800
    Message-ID: <[email protected]>
    Subject:
    From: Stanley Lee <[email protected]>
    To: [email protected]
    Content-Type: multipart/alternative; boundary=0016e6dab0931bccc3047dcd2f1e
    Is there any way of fixing this problem? I would like to be able to have this mailing list work through my hosting and domain. Thanks in advance.
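
    The bounce comes from Google, which means mail for sysil.com is being routed to Google Apps, where no "lists" account exists; the Mailman list only receives mail if the domain's MX (or at least the routing for the list address) points at the Vlexofree server. A quick hedged check:
        dig +short MX sysil.com
    If only Google MX hosts come back, either the list address has to be forwarded from Google Apps to the hosting server or the MX has to be changed, otherwise the list will never see incoming mail.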

    Read the article

  • How to set up port forwarding on a dedicated server running CentOS 5.4 to use Ubuntu 9.04

    - by mairtinh
    The basic situation is that I have a dedicated server running CentOS 5.4. At the moment I have one VM running Ubuntu 9.04. Later on, I will want to add another VM running Windows Server 2003, but at the moment I am focusing on getting Ubuntu up and running. The Ubuntu installation is working fine, but I'm seriously struggling to get port forwarding working so that I can access websites to be hosted on the Ubuntu VM. As a newbie to Linux, I am confused about the relationship between iptables and VMware's own port forwarding. Here's what I've tried so far. The IP of my server is xxx.xxx.xxx.xxx and the provider support have told me that the subnet mask is 255.255.255.0, the gateway address is xxx.xxx.xxx.1 and the network address is xxx.xxx.xxx.0. (Those latter two surprise me a bit; I expected private gateway/network addresses rather than public ones.) First of all I tried bridged networking but had no success at all in communicating with the machine other than through the VMware console. I tried pinging it from the host (using ssh into the host) but no joy; also no Internet access from the VM. I changed the interfaces configuration from DHCP to static, using a static address of 192.168.1.100 and setting the gateway to xxx.xxx.xxx.1 as advised by the provider. No real difference: still cannot ping the guest from the host or vice versa, and no Internet access from the guest. Then I tried NAT. The host automatically set the IP address to 192.168.132.128 with a gateway of 192.168.132.2. Now the guest has Internet access out, and when I VNC to the host and open Firefox with 192.168.132.128 I can see the hosted website okay, but I still cannot get into it from outside. I mentioned that I'm a bit confused about iptables and VMware port forwarding; what I meant is that I'm not sure whether iptables forwarding should be set to the IP address of the guest interface (192.168.132.128 in this case) or the gateway address 192.168.132.2. I have a feeling that I'm missing something very simple here; can anybody tell me what it is?
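
    With VMware NAT, traffic arriving on the host's public interface has to be explicitly forwarded into the NAT network; the guest's own gateway (192.168.132.2) only handles outbound traffic. One hedged way to do it entirely with iptables on the CentOS host (the public interface name and port are placeholders):
        # allow forwarding on the host
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # forward incoming web traffic to the Ubuntu guest
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.132.128:80
        iptables -A FORWARD -p tcp -d 192.168.132.128 --dport 80 -j ACCEPT
        # make the replies come back through the host rather than VMware's own NAT device
        iptables -t nat -A POSTROUTING -d 192.168.132.128 -p tcp --dport 80 -j MASQUERADE
    VMware's NAT configuration also has an [incomingtcp] section (in nat.conf or vmnetnat.conf, depending on the product) that forwards ports without touching iptables, which may be the simpler route.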

    Read the article

  • Overriding some DNS entries in BIND for internal networks

    - by Remy Blank
    I have an internal network with a DNS server running BIND, connected to the internet through a single gateway. My domain "example.com" is managed by an external DNS provider. Some of the entries in that domain, say "host1.example.com" and "host2.example.com", as well as the top-level entry "example.com", point to the public IP address of the gateway. I would like hosts located on the internal network to resolve "host1.example.com", "host2.example.com" and "example.com" to internal IP addresses instead of that of the gateway. Other hosts like "otherhost.example.com" should still be resolved by the external DNS provider. I have succeeded in doing that for the host1 and host2 entries, by defining two single-entry zones in BIND for "host1.example.com" and "host2.example.com". However, if I add a zone for "example.com", all queries for that domain are resolved by my local DNS server, and e.g. querying "otherhost.example.com" results in an error. Is it possible to configure BIND to override only some entries of a domain, and to resolve the rest recursively?
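
    Defining a zone makes BIND authoritative for everything under it, which is why the per-host zones work but an "example.com" zone breaks otherhost. On BIND 9.8 or newer, a response policy zone can override just the three names and let everything else recurse normally; a hedged sketch (the internal IPs are placeholders):
        // named.conf
        options {
            response-policy { zone "rpz.internal"; };
        };
        zone "rpz.internal" {
            type master;
            file "/etc/bind/db.rpz.internal";
        };
        ; /etc/bind/db.rpz.internal
        $TTL 300
        @                   IN SOA  localhost. root.localhost. (1 3600 900 604800 300)
                            IN NS   localhost.
        example.com         IN A    192.168.1.10
        host1.example.com   IN A    192.168.1.11
        host2.example.com   IN A    192.168.1.12
    On older BIND versions the existing single-name zone trick remains the workable approach for individual hostnames, though it cannot cover the bare example.com apex without taking over the whole domain.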

    Read the article
