Search Results

Search found 4073 results on 163 pages for 'hosts deny'.

  • How to tell statd to use portmap on a non-localhost IP address?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to 2 different networks (one public, one private), and I want it to provide an NFS share for the private network only. The host is an Ubuntu 8.04 box and the private IP address is 192.168.1.202.

    I changed /etc/default/portmap to add:

        OPTIONS="-i 192.168.1.202"

    The command lsof -n | grep portmap returns:

        portmap 10252 daemon cwd DIR 202,0 4096 2 /
        portmap 10252 daemon rtd DIR 202,0 4096 2 /
        portmap 10252 daemon txt REG 202,0 15248 13461 /sbin/portmap
        portmap 10252 daemon mem REG 202,0 83708 32823 /lib/tls/i686/cmov/libnsl-2.7.so
        portmap 10252 daemon mem REG 202,0 1364388 32817 /lib/tls/i686/cmov/libc-2.7.so
        portmap 10252 daemon mem REG 202,0 31304 16588 /lib/libwrap.so.0.7.6
        portmap 10252 daemon mem REG 202,0 109152 16955 /lib/ld-2.7.so
        portmap 10252 daemon 0u CHR 1,3 960 /dev/null
        portmap 10252 daemon 1u CHR 1,3 960 /dev/null
        portmap 10252 daemon 2u CHR 1,3 960 /dev/null
        portmap 10252 daemon 3u unix 0xecc8c3c0 4332992 socket
        portmap 10252 daemon 4u IPv4 4332993 UDP 192.168.1.202:sunrpc
        portmap 10252 daemon 5u IPv4 4332994 TCP 192.168.1.202:sunrpc (LISTEN)
        portmap 10252 daemon 6u REG 0,12 289 3821511 /var/run/portmap_mapping

    I defined the following in /etc/hosts:

        192.168.1.202 server.local

    In /etc/default/nfs-common I changed STATDOPTS to:

        STATDOPTS="--name server.local"

    Yet when I run /etc/init.d/nfs-common start, it fails to start. The log shows:

        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting
        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags:
        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp).

    An strace -f rpc.statd -n server.local produces a lot of lines, including this one:

        sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
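
    A quick way to confirm what is actually registered with portmap on each address is rpcinfo; this is a hedged suggestion, not part of the original question:

        rpcinfo -p 192.168.1.202   # programs registered via the private address
        rpcinfo -p 127.0.0.1       # statd shows up here if it is still talking to loopback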

  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random, I get 500 Internal Server Errors; restarting Apache brings everything back to normal. Examining the logs, I find messages of this kind:

        [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening?

    UPDATE: I found a similar question whose author reported having solved the problem by disabling APC. I tried following the advice, but I'm still getting the same errors.

    VirtualHost configuration:

        SuexecUserGroup "#1000" "#1000"
        ServerName example.com
        DocumentRoot /home/example/public_html
        ScriptAlias /cgi-bin/ /home/example/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/example/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/example/cgi-bin>
            allow from all
        </Directory>
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 61
        FcgidMaxRequestLen 1073741824

    php5.fcgi:

        #!/bin/bash
        PHPRC=$PWD/../etc/php5
        export PHPRC
        umask 022
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=99999
        export PHP_FCGI_MAX_REQUESTS
        SCRIPT_FILENAME=$PATH_TRANSLATED
        export SCRIPT_FILENAME
        exec /usr/bin/php5-cgi

    Package versions:

        webmin-virtual-server/virtualmin-universal 3.94.gpl-2
        apache2/squeeze 2.2.16-6+squeeze8
        libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
        php5 5.4.7-1~dotdeb.0
        php5-apc 5.4.7-1~dotdeb.0
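
    A pattern often associated with this exact pair of log lines is PHP recycling its FastCGI process (via PHP_FCGI_MAX_REQUESTS) while mod_fcgid still considers the process alive. A hedged sketch of the usual mitigation, keeping mod_fcgid's per-process request cap at or below PHP's; the values here are illustrative assumptions, not taken from the original post:

        # mod_fcgid configuration (sketch)
        FcgidMaxRequestsPerProcess 5000    # keep this <= PHP_FCGI_MAX_REQUESTS
        # and in php5.fcgi, keep PHP's own cap at or above that:
        # PHP_FCGI_MAX_REQUESTS=5000

    That way mod_fcgid retires a process before PHP exits out from under it mid-request.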

  • Attempting to set up XAMPP and Zend Server on the same machine

    - by umbregachoong
    I am attempting to set up Zend Server and XAMPP on the same machine, but I am running into problems. I came across documentation on the Zend site that said you cannot do this; however, the folks over at Apache Friends said you can. I have since discovered that I can run some of the Zend Framework examples within XAMPP by downloading the ZendFramework2 library and the skeleton app from git, and I am doing this right now. However, I would like to know how to set them both up without any conflicts, both for the Apache2 servers and phpMyAdmin. (One of the frustrating things is trying to load phpMyAdmin in the deployment dialog by using the zpk tool in Zend.)

    What I did in trying to set up both servers on Windows 7 is as follows. First, I tried to set up the httpd conf files separately for each server: XAMPP running on port 8082, and Zend running on port 8088. At that point XAMPP would work, but Zend Server would not. This is after setting up the virtual host files separately for each server.

    Question 1: Where are the Zend Server error logs?

    Earlier, I was able to get both of them running by configuring the XAMPP httpd-conf file alone; however, I experienced problems with phpMyAdmin even after configuring phpMyAdmin on XAMPP to work on a port other than 3306.

    Second question: how do I set up the two MySQL/phpMyAdmin instances so they do not conflict with each other?

    Here is the XAMPP virtual host section:

        ##ServerAdmin [email protected]
        DocumentRoot "C:/xampp/htdocs/"
        ServerName localhost 8082
        ##ServerAlias www.dummy-host.example.com
        ##ErrorLog "logs/dummy-host.example.com-error.log"
        ##CustomLog "logs/dummy-host.example.com-access.log" common

    Here is the Zend virtual host section:

        DocumentRoot "C:\Program Files (x86)\Zend\Apache2/htdocs"
        ServerName localhost:8088
        </VirtualHost>

    I have looked at http://httpd.apache.org/docs/2.2/vhosts/ and http://survivethedeepend.com/zendframeworkbook/en/1.0/creating.a.local.domain.using.apache.virtual.hosts but I am obviously doing something wrong here. I also have the Java SDK running on this machine with Tomcat and Apache, and I have no conflicts - too bad this is not the case for Zend Server and XAMPP.

    Thanks
    umbre gachoong
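
    For what it's worth, a minimal sketch of the separation the question is after; every value below is an assumption about the local setup, not taken from the original post:

        # XAMPP httpd.conf
        Listen 8082
        <VirtualHost *:8082>
            ServerName localhost:8082
            DocumentRoot "C:/xampp/htdocs"
        </VirtualHost>

        # Zend Server httpd.conf
        Listen 8088
        <VirtualHost *:8088>
            ServerName localhost:8088
            DocumentRoot "C:/Program Files (x86)/Zend/Apache2/htdocs"
        </VirtualHost>

        # second MySQL instance: my.ini for one of the two servers
        [mysqld]
        port = 3307

    Each Apache needs its own Listen directive and a ServerName carrying the port; the phpMyAdmin instance for the second MySQL would then point at port 3307 via $cfg['Servers'][$i]['port'] in its config.inc.php.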

  • Parallels Plesk returning strange numbers

    - by Jack W-H
    Hi everyone,

    As a relatively new server admin I've become a bit confused by some statistics Parallels Plesk Panel 10.0.1 is returning to me. I have a domain ('subscription') set up, mysite.com. Mysite.com only hosts files, mostly images, and its file contents use up about 390MB of disk space. (Screenshots of what Plesk reports mysite.com to be using, plus some more info, were attached but are not preserved here.)

    Now this is pretty confusing... I thought at first my site might have been hacked and had contents written to disk, but I checked and all is in order; nothing has been hacked into as far as I can tell. So I had a look in the site's CP for some more in-depth statistics. (Another screenshot of what was returned is also not preserved.)

    Now - sod's law - when I go to check my disk space statistics in more depth via the control panel, this morning it says "The data were not collected yet" - not too sure what that means - but last night when I checked, it was reporting something odd. It said Files were using up 390MB, but 1.80GB or so were being used up by 'Mail Accounts'. This is really strange, as there are no mail accounts set up for the domain. The only hint of 'mail' there is, is the catchall set up to forward *@mysite.com to a separate, ISP-hosted email account.

    Any ideas anybody? I can post more details if you need them. Sorry to be a bit vague but I'm not sure what else I can post.

    Thanks, Jack

  • Outlook won't re-connect to exchange after network is re-connected

    - by stan503
    I have a setup at my desk where I connect my computer to an RJ45 switch that switches between two networks. One network is the corporate network, which is maintained by my company's IT, and the other is my own private network where I do testing (the two networks have to be separated). The corporate network hosts the Exchange server where I get e-mail.

    When I switch from the private network to the corporate network, I expect Outlook to re-connect to the Exchange server. However, I have found that sometimes when I come back, Outlook takes an extremely long time to re-connect. Send/Receive gives me back the error 'The server is not available' (0x8004011D), and it will sit there for 10 minutes to a few hours before it finally re-connects. The only other option is to reboot my computer, which is a huge pain for me since I run multiple VMs on it. This usually happens when I've been connected to the private network for a significant amount of time, so I'm thinking it's because Outlook has cached the network status. Is there a way to force Outlook to do a 'hard' re-connect to the Exchange server? I'm using Windows XP SP3 with Outlook 2007.

  • Some process is doing an ICMP port scan on my OS X box and I am afraid my Mac got a virus

    - by Jamgold
    I noticed that my 10.6.6 box has some process sending out ICMP messages to "random" hosts, which concerns me a lot. When doing a tcpdump icmp I see a lot of the following:

        15:41:14.738328 IP macpro > bzq-109-66-184-49.red.bezeqint.net: ICMP macpro udp port websm unreachable, length 36
        15:41:15.110381 IP macpro > 99-110-211-191.lightspeed.sntcca.sbcglobal.net: ICMP macpro udp port 54045 unreachable, length 36
        15:41:23.458831 IP macpro > 188.122.242.115: ICMP macpro udp port websm unreachable, length 36
        15:41:23.638731 IP macpro > 61.85-200-21.bkkb.no: ICMP macpro udp port websm unreachable, length 36
        15:41:27.329981 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36
        15:41:29.349586 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36

    I got suspicious when my router notified me about a lot of ICMP messages that don't get a response:

        [INFO] Mon Jan 10 16:31:47 2011 Blocked outgoing ICMP packet (ICMP type 3) from 192.168.1.189 to 212.25.57.90

    Does anyone know how to trace which process (or worse, kernel module) might be responsible for this? I rebooted and logged in with a virgin user account, and tcpdump showed the same results. Any dtrace magic welcome. Thanks in advance.
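
    Since dtrace was invited: a hedged sketch for attributing outbound traffic to a process, not tested on 10.6 specifically. Note that ICMP "port unreachable" replies are generated by the kernel in response to inbound UDP aimed at a closed port, so looking at what holds (or recently held) UDP sockets can matter as much as watching senders:

        # who is calling sendto(2), aggregated by process name and pid
        sudo dtrace -n 'syscall::sendto:entry { @[execname, pid] = count(); }'
        # what currently holds UDP sockets
        sudo lsof -i UDP -nP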

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi,

    I have deployed a bunch of Windows server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC).

    If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server. I provide the hostname of the desired target server and this fails, but if I provide the IP instead, it works. How can I pass the hostname and have it resolve to an IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup, I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP?

    Also, I don't know the subnet mask because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem?

    Thanks
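
    An aside on the terminology, since the question has it inverted (example names below are made up): forward lookup resolves a name to an IP, and reverse lookup resolves an IP back to a name via PTR records, so an empty reverse zone by itself should not stop name-to-IP resolution:

        nslookup sqlbox.corp.example.local     # forward: hostname -> IP (A record)
        nslookup 10.0.0.21                     # reverse: IP -> hostname (needs a PTR record)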

  • SQL SERVER – Importance of User Without Login

    - by pinaldave
    Some questions are very open ended and it is very hard to come up with exact requirements. Here is one question I was asked in a recent User Group Meeting.

    Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?"

    Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help. I want you to help him as well by adding more value to it.

    Answer: Let us visualize a scenario. An application has lots of different operations, and many of them are very sensitive operations. The common practice was to give the application a specific role which has more permissions and a higher access level. When a regular user logs in (not a system admin), he/she might have very restrictive permissions. The application itself had a username and password, which means the application could log in directly into the database and perform the operation. Developers were well aware of the username and password, as it was embedded in the application. When a developer left the organization, or when the password was changed, the part of the application where the same username and password were used had to be changed. Additionally, developers were able to use the same username and password and log in directly to the same application.

    In earlier versions of SQL Server there were application roles. These were later replaced by "User without Login". Now let us recreate the above scenario using this new "User without Login". In this case, users will have to log in to SQL Server using their own credentials. This means that the user who is logged in will have his/her own username and password. Once the login is done in SQL Server, the user will be able to use the application. Now the database should have another user, without a login, which has all the necessary permissions and rights to execute various operations. The application will then execute the script by impersonating the "user without login - with more permissions". Here it is assumed that the user's own login does not have enough permissions and the other user (without login) has more rights.

    If a user knows how the application is using the database and its various operations, he can switch the context to the user without login, enabling him to do further modifications. Make sure to explicitly DENY the VIEW DEFINITION permission on the database. This will make things more difficult for the user, as he will have to know the exact details to get additional permissions.

    If a user is a system admin, all the details I just mentioned in the above three paragraphs do not apply, as an admin always has access to everything. Additionally, the method described above is just one possible architecture, and if someone is attempting to damage the system, they will still be able to figure out a workaround. You will have to put further auditing and policy based management in place to prevent such incidents and accidents.

    I guess this is my answer. I read it multiple times but I still feel that I am missing something. There should be more to this concept than what I have just described. I have merely described one scenario, but there will be many more scenarios where this situation will be useful. Now it is your turn to help - please leave a comment with additional suggestions about where exactly "User without Login" will be useful, and about anything I missed in the above scenario.
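
    To make the scenario concrete, a minimal T-SQL sketch; the names, schema and permissions are illustrative assumptions, not from the original article:

        -- a database principal with no server-level login
        CREATE USER AppWorker WITHOUT LOGIN;
        GRANT EXECUTE ON SCHEMA::Sales TO AppWorker;

        -- the application elevates only for the sensitive call
        EXECUTE AS USER = 'AppWorker';
        EXEC Sales.usp_SensitiveOperation;
        REVERT;

        -- per the advice above, keep metadata hidden from regular users
        DENY VIEW DEFINITION TO [CORP\RegularUser];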
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain whether the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve.

    Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data. You want to know what percentage of users have used the 'Export as XML' button? No problem. How popular is v5.3 compared to v5.1? There are graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that.

    I recently witnessed the spreadsheet in use here at Red Gate Software.

    My bug is exposed as minor

    While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different from most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course: I would get to see my colleagues put the spreadsheet to work.

    The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the Refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provided evidence that my bug was a minor one.

    On to more serious things

    Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR," and then went back to fixing more serious bugs. If I put myself in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important.

    Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

  • Is this a solution for having multiple SSL certificates on the same IP

    - by Saif Bechan
    I am running CentOS on a VPS. I read some guides on having multiple SSL certificates on the same system, but I cannot get the basics to work. The guide that makes the most sense to me involves the following. In CentOS I can make virtual NICs, so I made 2 virtual NICs to start with: 192.168.10.1 and 192.168.10.2. Now, I work in ISPmanager Pro, and this is listening on my primary IP, 1.1.1.1.

    For each website I have them listening on 192.168.10.1:80 and 192.168.10.1:443. In the hosts file I made the following 2 entries:

        192.168.10.1 1st.com
        192.168.10.2 2nd.com

    Now the strange thing is that when I browse to 1st.com I do not get the website located at 192.168.10.1; I get the website located at my primary IP, 1.1.1.1. Should I do something like forwarding or routing for this setup to work?

    And the basic question: will this setup even work? Are SSL certificates based on the IP address, or are they based on the host names, 1st.com and 2nd.com?
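
    On the closing question: certificates are tied to host names, not IPs, and with SNI a single public IP can serve several certificates. A hedged Apache sketch (paths are assumptions; SNI needs Apache >= 2.2.12 built against OpenSSL >= 0.9.8j, plus SNI-capable clients):

        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName 1st.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/1st.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/1st.com.key
        </VirtualHost>
        <VirtualHost *:443>
            ServerName 2nd.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/2nd.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/2nd.com.key
        </VirtualHost>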

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing, and was hoping someone might be able to help. The problem is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail.

    My current server setup looks like the following:

    ARR:
    - Two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2
    - Anonymous authentication turned on for the Default Web Site

    Web Farm:
    - Two servers running IIS 7.5 on Windows 2008 R2
    - Three web sites set up using port binding to differentiate between virtual hosts; the ports used are 8000, 8001, and 8002
    - Application pools for Windows Authentication all use a common domain account
    - SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?
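
    For reference, a hedged sketch of the SPN registrations the checklist above describes; the host and account names are invented, not from the original question:

        setspn -S http/webapp1:8002 CONTOSO\FarmAppPool
        setspn -S http/webapp1.contoso.com:8002 CONTOSO\FarmAppPool
        setspn -L CONTOSO\FarmAppPool    :: list what is currently registered

    (setspn -S, unlike -A, checks for duplicate SPNs before adding, which is one of the classic causes of intermittent Kerberos 401s.)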

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam that we're receiving, and the number of connections that our Exchange server is making, are preventing legitimate email from being delivered in a timely manner.

    GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting is consuming a minimum amount of server resources and bandwidth.

    My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth.

    Thanks, in advance.
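
    One mitigation often suggested for harvesting against Exchange 2003 (SP1 or later, with recipient filtering enabled) is SMTP tarpitting, which delays the 5xx response to invalid RCPT TO commands and makes harvesting runs expensive for the sender. A hedged sketch; verify the registry value against Microsoft KB 842851 before relying on it:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\SMTPSVC\Parameters /v TarpitTime /t REG_DWORD /d 10
        :: restart the SMTP service for the 10-second delay to take effect
        net stop smtpsvc && net start smtpsvc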

  • Time Drift on VM servers, need a reliable solution

    - by zeroasterisk
    We have some Windows Server 2008 VMware instances on multiple physical servers (hosted), and an application which requires the time to be synced across the server instances. Obviously, VMware has problems with this, and we have never really gotten it working any better. We have set up the servers to poll for an NTP update every minute, which mitigates the problem (in a fairly crude way). Except that every once in a while the update will fail (because there's already too much drift), and then Windows never does an NTP update afterwards, which eventually allows the servers to drift far enough apart that our application breaks, and we notice.

    We are thinking about changing hosts to Xen servers on approximately the same setup, and I anticipate similar problems. So:

    - Can anyone tell me if Xen has the same time-drift issues VMware does, for guests?
    - Can anyone tell me what the best Windows Server settings are for syncing with an external NTP server to keep things in sync? How frequently do you recommend syncing? (assuming every minute)
    - Do you recommend running our own NTP server, even if it has to be on a virtual instance? (assuming not)
    - Is there any way to tell Windows to sync with the NTP server no matter what the time difference is?
    - Any other suggestions for keeping Windows servers' time in sync?

    I have become familiar with http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 and it has helped, but it has not been totally effective (see above). Thanks much!
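
    On the "sync no matter what the difference" question, a hedged w32tm sketch; the peer and the phase-correction approach are assumptions to tune, not a definitive recipe:

        w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /update
        w32tm /resync /rediscover

        :: allow corrections of any size instead of giving up on large offsets
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF
        net stop w32time && net start w32time

    MaxPosPhaseCorrection/MaxNegPhaseCorrection set to 0xFFFFFFFF is what tells w32time to accept any offset, which addresses the "update fails and never recovers" symptom described above.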

  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING. Under that are groups for each company. Each of the hosting servers exists on the same network and is therefore potentially browseable by other terminal servers. This has raised some security issues, and I've found it a little tricky to manage the security. As well, it's possible to see who the other hosted companies are, even though other users cannot see their data.

    What I'd like to do is isolate each client's terminal server(s) into their own VLAN. In addition, I'm thinking that each TS would have its own DC, which could just run on the TS for that company; overhead for a DC is fairly minimal. This would isolate users on that TS from seeing the other companies completely.

    Firstly, does this sound like a sensible plan? Secondly, if it is sensible, how would I go about pulling the accounts from the HOSTING domain to a new domain - ideally, without the need for users to change their passwords?

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to an SBS 2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to 127.0.0.1, and DHCP on the DrayTek sets the server IP as the DNS.

    If I create a new VPN connection in Win7, leaving everything as default apart from the server, username, password and domain, I can:

    - ping everything by IP address
    - resolve IPs with nslookup using their fully-qualified name, as in nslookup fileserver.mydomain.local
    - ping machines by fully-qualified name, as in ping fileserver.mydomain.local

    However, if I try to access a file share:

    - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
    - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."

    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather a solution that doesn't involve me trying to hand-edit system files on computers half a country away.

    If I set the WINS server explicitly in the connection's IPv4 settings, I don't have to use the FQN to ping the machine, but that doesn't change anything else.
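
    Since setting WINS on the client helps with naming, one hedged alternative to hand-editing HOSTS files is scripting that setting instead; the connection name and address below are invented, and netsh syntax varies between Windows versions, so this is a sketch to verify locally:

        netsh interface ip set wins name="Company VPN" source=static addr=10.0.0.2

    Handing out the WINS address from the server side of the PPTP connection (RRAS can supply the DC's WINS to dial-in clients) might remove the per-client step entirely, though that is an assumption about this particular RRAS setup.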

  • pfSense Load Balancer and Virtual IP

    - by jshin47
    I have two identical web servers on 10.2.1.13 and 10.2.1.113. I would like to set up the pfSense load balancer to balance requests to both of these. I set up pools that included HTTP and HTTPS for both of these hosts, then set up virtual servers that responded on HTTP and HTTPS and referred traffic to their respective pools.

    However, I set up the virtual server to listen on 10.2.1.213, a LAN IP rather than a WAN IP, because I want LAN traffic to be able to use the load balancer virtual server as well. So, I set up a Virtual IP for 10.2.1.213 on the LAN interface, and a NAT port forwarding rule for HTTP and HTTPS traffic on a WAN IP to forward to 10.2.1.213.

    It seems like this should work, but it fails. What eventually happens is that when I try to access the page from the WAN, I am directed to the login page for my pfSense device rather than the page I am expecting. When I try to access 10.2.1.213 from the LAN, the request times out. What is going wrong here? I have tried it with and without NAT reflection, to no avail. Please advise.

  • postfix smtp_fallback_relay for deferred messages to a single domain

    - by EdwardTeach
    I use Postfix to send messages to a mail server outside my organization which frequently rejects/defers my mail. My Postfix server sees that these messages are deferred and tries again, eventually getting through. Final delivery can take up to an hour, which makes my users unhappy. In comparison, mail from my Postfix server to other hosts works normally.

    I have now found out about a second, unofficial MX for this domain that does not reject/defer mail. This second MX does not appear when doing a DNS MX query for the domain. Therefore, for the problem domain I would like to use this second MX as a fallback. That is: whenever mail is deferred by the primary MX, try again on the unofficial second MX.

    I see that there is already a Postfix parameter, "smtp_fallback_relay". However, the documentation seems to indicate that I cannot restrict usage of the fallback to a single domain, and it also doesn't mention deferred-message handling. So is there a way to configure a single-domain, deferred-retry fallback host in Postfix?

    For reference, I am including my postconf output (the host names and IP addresses are fake):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/etc/postfix/legacy_mailman, ldap:/etc/postfix/ldap-aliases.cf
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        default_destination_concurrency_limit = 2
        inet_interfaces = all
        inet_protocols = all
        local_destination_concurrency_limit = 2
        local_recipient_maps = $alias_maps
        mailbox_size_limit = 0
        mydestination = myhost.my.network, localhost.my.network, localhost, my.network
        myhostname = myhost.my.network
        mynetworks = 127.0.0.0/8, [::ffff:127.0.0.0]/104, [::1]/128, 10.10.10.0/24
        myorigin = my.network
        readme_directory = no
        recipient_delimiter = +
        relay_domains = $mydestination
        relayhost =
        smtp_fallback_relay = the.problem.host
        smtp_header_checks =
        smtpd_banner = $myhostname ESMTP $mail_name
        virtual_alias_maps = hash:/etc/postfix/virtual
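
    A hedged sketch of the nearest standard mechanism: a per-domain transport entry routes all mail for the problem domain (not only deferred mail) straight to the unofficial MX, which sidesteps the deferrals entirely. The domain name is a placeholder:

        # /etc/postfix/transport
        problem-domain.example    smtp:[unofficial-mx.problem-domain.example]

        # main.cf
        transport_maps = hash:/etc/postfix/transport

    followed by postmap /etc/postfix/transport and a reload. The square brackets suppress MX lookups for the named host. True deferred-only, per-domain failover isn't something transport_maps expresses, so this trades the retry behaviour for direct delivery.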

  • Bypassing Squid on FreeBSD with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (aka goodguys) to bypass the proxy, so that torrents are not logged. Also, I am not sure about transparent mode: it means that I don't have to configure proxy settings on the client side, right?

    I have tried a redirect exemption:

        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck :( Here is a quick look at my conf files.

    Squid, /usr/local/etc/squid/squid.conf:

        http_port 192.168.1.1:8080 transparent

    /etc/rc.conf:

        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    PF, /usr/local/etc/pf.conf:

        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"
        table <goodguys> const { "192.168.1.219", "192.168.1.233" }
        set block-policy drop
        set skip on lo0
        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440
        block in on $ext_if
        pass out on $ext_if keep state
        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (pf).
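
    A hedged sketch of how the exemption is usually written: no rdr only has an effect if it sits above a matching rdr rule, and the existing <goodguys> table keeps the exemptions in one place. The rdr-to-squid line here is an assumption, since none appears in the pf.conf above:

        no rdr on $int_if inet proto tcp from <goodguys> to any port 80
        rdr on $int_if inet proto tcp from $int_if:network to any port 80 -> 192.168.1.1 port 8080

    For rdr rules PF uses first-match semantics, so the ordering is what makes the bypass work.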

  • smtpd_helo_restrictions = ..., reject_unknown_helo_hostname occasionally rejects mail I care about, how to handle?

    - by lkraav
    I have configured my Postfix as follows:

        smtpd_helo_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unknown_helo_hostname

    This is working well, because most spambots don't seem to have correct reverse lookups. But every once in a while I run into mail I care about getting rejected, because the mail source server admin doesn't care about configuring his server correctly. For example, here the server introduces itself as "srv1.xbmc.org", which has no DNS record and fails my basic check:

        Jan 6 04:42:36 mail postfix/smtpd[660]: connect from xbmc.org[205.251.128.242]
        Jan 6 04:42:37 mail postfix/smtpd[660]: NOQUEUE: reject: RCPT from xbmc.org[205.251.128.242]: 450 4.7.1 <srv1.xbmc.org>: Helo command rejected: Host not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<srv1.xbmc.org>

    I have tried to contact the server admin several times, but there is no response. What is the optimal way to handle this from my side? Is adding these "special" hosts to mynetworks = my only option? Is perhaps my whole smtpd_helo_restrictions setup wrong in some significant way?
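
    A hedged sketch of a common middle ground: a HELO whitelist consulted before the reject, so individual broken servers get through without widening mynetworks:

        # main.cf
        smtpd_helo_restrictions = permit_sasl_authenticated,
            permit_mynetworks,
            check_helo_access hash:/etc/postfix/helo_access,
            reject_unknown_helo_hostname

        # /etc/postfix/helo_access
        srv1.xbmc.org    OK

    then postmap /etc/postfix/helo_access and a postfix reload.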

  • BIND master/slave does not respond to queries for its slave zones

    - by Savas
    The systems are all CentOS 6.2. Let's say I have a masterdns with IP 10.2.1.2, authoritative for the 10.2.1.x subnet, and let's say it serves the domain example.com. I have another two subnets, 10.2.2.x and 10.1.2.x. Each one has its own DNS server, dns2 and dns1 respectively, and let's say these serve the domains dom2.example.com and dom1.example.com respectively.

    The masterdns server has slave zones for dns1 and dns2, and responds to requests OK. dns1 and dns2 have the masterdns zones as slaves too, and respond to requests OK. So masterdns has as slave zones all the subordinate domains of example.com. Each of dns1 and dns2 uses masterdns as a forwarder (which in turn uses another DNS cache/proxy server) for resolution of public internet domain names. That works OK too.

    The problem is, and I cannot figure it out: why do queries at dns1 for hostnames of dom2.example.com not resolve? If I use nslookup - masterdns at the dns1 server (i.e. querying masterdns directly), they resolve. If I use nslookup locally, meaning queries are sent to dns1, hosts in dom2.example.com do not resolve. Everything else works OK.
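
    One hedged possibility worth checking (a sketch, with assumed file names): if dns1 only slaves the parent example.com zone and forwards everything else, queries for dom2.example.com can fall into a gap, because the copied parent zone holds delegations that a forward-only resolver never chases. Explicitly slaving the sibling zone on each server closes that gap:

        // named.conf on dns1
        zone "dom2.example.com" {
            type slave;
            masters { 10.2.1.2; };     // or dns2's address, whichever hosts the zone master
            file "slaves/dom2.example.com.db";
        };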

  • Can't write to Samba share

    - by Tiddo
    I am trying to set up a Samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local file server with 3 hard disks mounted at /mnt/share/disk<nr>. 2 of these use the ext4 filesystem, the third one is NTFS. This file server runs Fedora 18 32-bit. The root folders of these hard disks are owned by superman:superman, and testparm outputs the following:

        [global]
            workgroup = WORKGROUP
            netbios name = FILE_SERVER
            server string = Samba Server Version %v
            interfaces = lo, eth0, 192.168.123.191/8
            log file = /var/log/samba/log.%m
            max log size = 50
            unix extensions = No
            load printers = No
            idmap config * : backend = tdb
            hosts allow = 192.168.123.
            cups options = raw
            wide links = Yes

        [share]
            comment = Home Directories
            path = /home/share/
            write list = superman, @users
            force user = superman
            read only = No
            create mask = 0777
            directory mask = 0777
            inherit permissions = Yes
            guest ok = Yes

    I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks, and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try to be able to write to the shares?

    EDIT: The replies I got so far are mostly concerned with smb.conf. I have however tried a lot of different setups, ready-made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.
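
    For completeness, a hedged sketch of the SELinux side, relevant only while SELinux is enforcing; note also that the share exports /home/share/ while the disks are described as mounted under /mnt/share, which may be worth double-checking:

        # allow Samba read/write access regardless of file labels (broad but effective)
        sudo setsebool -P samba_export_all_rw on
        # or label the exported trees persistently and apply the labels
        sudo semanage fcontext -a -t samba_share_t "/mnt/share(/.*)?"
        sudo restorecon -Rv /mnt/share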

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with stonith SBD in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2, configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network.

    The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, and b) even if the switch were out for 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks would be lost.

    I tried increasing the SBD timeout values (to 10sec+ values, dump attached at the end); however, a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to.

    Here is what I would like to know:

    a) Is SBD working as it should, killing nodes when 2 paths are still available?
    b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any configuration specific to it? (as in multipath.conf.defaults)
    c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
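
    On c), a hedged sketch of how SBD timeouts are usually raised; the values are illustrative, the on-disk metadata has to be recreated with the cluster stopped, and msgwait is conventionally about twice the watchdog timeout:

        # inspect what is currently on the device
        sbd -d /dev/mapper/sbd_disk dump
        # recreate with watchdog timeout 20s (-1) and msgwait 40s (-4)
        sbd -d /dev/mapper/sbd_disk -1 20 -4 40 create

    Longer timeouts give multipath time to fail over remaining paths before SBD declares the device dead.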

  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform. I'm pretty sure there's something wrong with my configuration, as it seems to be very slow. SSH sometimes takes over 30 seconds to connect; a WordPress-generated web page could take 5 seconds to load, or it could take 20 seconds, which is pretty awkward. MySQL queries are all executed in less than a second, so I don't think that's the issue. I'm not really sure where the issue lies, but a simple page written in PHP loads instantly while a fresh WordPress installation starts lagging. The same works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I missed something. Could you please direct me to the right tools and articles which would help me out?

    Thanks so much!

    Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as a module for Apache 2.2.9; all websites are based on virtual hosts. I have some ongoing PHP scripts running from time to time in the background via cron.
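
    One hedged lead for the SSH symptom specifically: 30-second connection delays are classically reverse-DNS timeouts on the server. A sketch:

        # /etc/ssh/sshd_config - skip reverse lookups of connecting clients
        UseDNS no

    followed by an sshd restart. If the same instance resolves names slowly in general, PHP code that makes outbound HTTP calls (WordPress does, for update checks and the like) would stall the same way, which fits "plain PHP fast, WordPress slow" - an assumption to verify rather than a diagnosis.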

  • Are there any tests I can run on a network to simulate 100 heavy network users?

    - by marc.gayle
    I will be hosting a Ruby on Rails workshop at a small hotel in the near future, and while they have 'Wifi' everywhere on the property, and the property normally hosts 150-300 people, I am not 100% confident that they have hosted 150 tech people, who tend to have heavy web surfing habits/needs. Their tech department is also 1 or 2 guys.

    Are there any automated tests I can download and run from my laptop, on the network, that would simulate 100 'heavy users' on the network at the same time? Their broadband pipe is a 15 Mbps cable connection. Would that suffice for the general surfing needs of 100-150 techies? I know all it takes is 1 or 2 BitTorrent users to kill the entire network, but assuming we can at the very least block those ports or encourage the attendees not to file-share on the network, would that speed suffice for general surfing needs?

    What are good resources online that would allow me to quickly get up to speed on the IT-related issues, so that I can ask their sysadmins the right questions?

    Edit: Note that I am fairly technical, so assume I can get up to speed quickly even with technical manuals, etc.
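
    A hedged sketch of one such test using iperf, assuming two laptops on the hotel network with one running the server side:

        # on laptop A
        iperf -s
        # on laptop B: 100 parallel TCP streams for 60 seconds
        iperf -c <laptop-A-ip> -P 100 -t 60

    This measures raw LAN/WLAN capacity under many concurrent flows rather than realistic browsing, so treat it as an upper-bound check. For the uplink itself, simple arithmetic says 100 simultaneous surfers sharing 15 Mbps get about 150 kbps each, which is workable for light page loads but tight for anything heavier.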
