Search Results

Search found 2745 results on 110 pages for 'hosts'.


  • Cisco ASA and SixXS IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as tunnel endpoint so it can firewall both IPv4 and IPv6 traffic. Unfortunately the ASA apparently can't create a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead:
    1. Set up a host with its own IP outside the firewall, and have that function as tunnel endpoint. The ASA can then firewall and route the v6 subnet to the LAN.
    2. Set up a host inside the firewall that functions as endpoint, separated via VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed. This seems contrived, but would allow me to use a VM instead of a physical machine as endpoint.
    Any other way? What would you suggest is the optimal way to set this up? P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.
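
    Whichever host ends up as the endpoint, the tunnel side is just a 6in4 (SIT) interface on a Linux box. A minimal sketch with iproute2, assuming placeholder addresses (the PoP IPv4, your public IPv4, and the tunnel /64 would all come from the SixXS tunnel details page):

        # 6in4 tunnel endpoint sketch -- all addresses are placeholders
        ip tunnel add sixxs mode sit local 203.0.113.2 remote 192.0.2.1 ttl 64
        ip link set sixxs up mtu 1280
        ip addr add 2001:db8:1234::2/64 dev sixxs       # your end of the tunnel
        ip route add ::/0 dev sixxs                     # default IPv6 route via the tunnel
        sysctl -w net.ipv6.conf.all.forwarding=1        # route the delegated subnet towards the LAN

    The ASA would then only need a static IPv6 route for the LAN prefix pointing at that host, plus an IPv6 ACL, assuming the ASA software in use supports IPv6 ACLs.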

    Read the article

  • NetApp FAS 2040 LDAP Win2k8R2

    - by it_stuck
    I am trying to get my FAS2040 to action user lookups using LDAP, below is the filer configuration options:
        filer> options ldap
        ldap.ADdomain dc1.colour.domain.local
        ldap.base OU=Users,OU=something1,OU=something2,OU=darkside,DC=colour,DC=domain,DC=local
        ldap.base.group
        ldap.base.netgroup
        ldap.base.passwd
        ldap.enable on
        ldap.minimum_bind_level anonymous
        ldap.name domain-admin-account
        ldap.nssmap.attribute.gecos gecos
        ldap.nssmap.attribute.gidNumber gidNumber
        ldap.nssmap.attribute.groupname cn
        ldap.nssmap.attribute.homeDirectory homeDirectory
        ldap.nssmap.attribute.loginShell loginShell
        ldap.nssmap.attribute.memberNisNetgroup memberNisNetgroup
        ldap.nssmap.attribute.memberUid memberUid
        ldap.nssmap.attribute.netgroupname cn
        ldap.nssmap.attribute.nisNetgroupTriple nisNetgroupTriple
        ldap.nssmap.attribute.uid uid
        ldap.nssmap.attribute.uidNumber uidNumber
        ldap.nssmap.attribute.userPassword userPassword
        ldap.nssmap.objectClass.nisNetgroup nisNetgroup
        ldap.nssmap.objectClass.posixAccount posixAccount
        ldap.nssmap.objectClass.posixGroup posixGroup
        ldap.passwd ******
        ldap.port 389
        ldap.servers
        ldap.servers.preferred
        ldap.ssl.enable off
        ldap.timeout 20
        ldap.usermap.attribute.unixaccount unixaccount
        ldap.usermap.attribute.windowsaccount sAMAccountName
        ldap.usermap.base
        ldap.usermap.enable on
    output of nsswitch.conf:
        hosts: files dns
        passwd: ldap files
        netgroup: ldap files
        group: ldap files
        shadow: files nis
    Error Message(s):
        [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Starting AD LDAP server address discovery for dc1.colour.domain.LOCAL.
        [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found no AD LDAP server addresses using DNS site query (site).
        [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found no AD LDAP server addresses using generic DNS query.
        Could not get passwd entry for name = <random user>
    the filer can ping the FQDN of dc1
    the filer can ping the IP of dc1
    the filer cannot ping "dc1"
    I'm not sure where I'm going wrong, so any pointers would be great.
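
    The trace lines suggest the filer is failing DNS-based AD server discovery rather than LDAP itself: it cannot resolve the short name "dc1" or find the domain via DNS. A hedged sketch of what to check on a 7-mode filer -- the domain name is taken from the question, the name server address is a placeholder:

        filer> options dns.enable on
        filer> options dns.domainname colour.domain.local
        filer> ping dc1
        # the name servers themselves live in /etc/resolv.conf on the filer's root volume,
        # e.g. a line like: nameserver 192.168.10.5

    It may also be worth checking whether ldap.ADdomain should be the AD domain itself (colour.domain.local) rather than a specific DC, since discovery appears to be querying DNS for that value; treat that as an assumption to verify against the NetApp docs.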

    Read the article

  • error reading keytab file krb5.keytab

    - by Banjer
    I've noticed these Kerberos keytab error messages on both SLES 11.2 and CentOS 6.3:
        sshd[31442]: pam_krb5[31442]: error reading keytab 'FILE:/etc/krb5.keytab'
    /etc/krb5.keytab does not exist on our hosts, and from what I understand of the keytab file, we don't need it. Per this Kerberos keytab introduction: "A keytab is a file containing pairs of Kerberos principals and encrypted keys (these are derived from the Kerberos password). You can use this file to log into Kerberos without being prompted for a password. The most common personal use of keytab files is to allow scripts to authenticate to Kerberos without human interaction, or store a password in a plaintext file." This sounds like something we do not need, and it is perhaps better security-wise not to have it. How can I keep this error from popping up in our system logs? Here is my krb5.conf if it's useful:
        banjer@myhost:~> cat /etc/krb5.conf
        # This file managed by Puppet
        #
        [libdefaults]
        default_tkt_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
        default_tgs_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
        preferred_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
        default_realm = FOO.EXAMPLE.COM
        dns_lookup_kdc = true
        clockskew = 300
        [logging]
        default = SYSLOG:NOTICE:DAEMON
        kdc = FILE:/var/log/kdc.log
        kadmind = FILE:/var/log/kadmind.log
        [appdefaults]
        pam = {
            ticket_lifetime = 1d
            renew_lifetime = 1d
            forwardable = true
            proxiable = false
            retain_after_close = false
            minimum_uid = 0
            debug = false
            banner = "Enter your current"
        }
    Let me know if you need to see any other configs. Thanks.
    EDIT: This message shows up in /var/log/secure whenever a non-root user logs in via SSH or the console. It seems to only occur with password-based authentication. If I do a key-based SSH to a server, I don't see the error. If I log in with root, I do not see the error. Our Linux servers authenticate against Active Directory, so it's a hearty mix of PAM, Samba, Kerberos, and winbind that is used to authenticate a user.
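
    Since the hosts are already joined to AD via Samba/winbind, one hedged option is simply to create the host keytab that pam_krb5 is looking for, rather than trying to silence the message; this is a sketch and assumes a working "net ads" join (some Samba versions also want "kerberos method" / "use kerberos keytab" set in smb.conf):

        # create /etc/krb5.keytab from the machine account
        net ads keytab create -U administrator
        klist -k /etc/krb5.keytab        # verify host/ principals were written

    Alternatively, the pam_krb5 documentation for the distribution in use may offer an option to skip TGT verification when no keytab exists, but the exact knob differs between the Red Hat and Allbery/Heimdal pam_krb5 modules, so check the local man page rather than assuming one name.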

    Read the article

  • Apache reverse-proxy intermittent error 113 - No route to host

    - by BonkaBonka
    I've got an Apache 2.0.52 server on CentOS 4 that front-ends a couple of app servers (a mix of Jetty and Tomcat). Apache has a handful of virtual hosts configured like this:
        <VirtualHost www1.example.com:443>
            ServerName www1.example.com
            DocumentRoot "/mnt/app_web/html"
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
            SSLCertificateFile /etc/httpd/conf/ssl.crt/server.crt
            SSLCertificateChainFile /etc/httpd/conf/ssl.crt/chain.crt
            SSLCertificateKeyFile /etc/httpd/conf/ssl.key/server.key
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
            RewriteEngine on
            RewriteRule ^/app1/(.*)$ http://app1.example.com:8080/app1/$1 [P,L]
            RewriteRule ^/app2/(.*)$ http://app2.example.com:8080/app2/$1 [P,L]
        </VirtualHost>
    However, I'm getting the following errors in the logs intermittently:
        [Fri Dec 04 07:19:41 2009] [error] (113)No route to host: proxy: HTTP: attempt to connect to 10.0.0.1:8080 (app1.example.com) failed
    I initially tried turning off IPv6, and that seemed to largely cure it, but I still have sporadic bursts of these messages. Additionally, we're running memcache on the same front-end, and during the times when I'm getting those messages in Apache's log, the following command doesn't work:
        echo stats | nc 127.0.0.1 11211
    No messages are printed, but neither are the stats printed. I am completely lost as to how to proceed with troubleshooting this. =(

    Read the article

  • How to manage sub-domains on WinHost with IIS7 URL Rewrite 2.0?

    - by jrummell
    I'm trying out WinHost and I'm running into some issues with sub-domains. On WinHost, you can have multiple sub-domains per hosting account, but each sub-domain points to the root website. E.g. you can have www.example.com, sub1.example.com, and sub2.example.com, but all of them display the content at http://www.example.com/. Other hosts allow you to point sub-domains to a subfolder in your website. This would allow you to point sub1.example.com to /sub1, sub2.example.com to /sub2 and www.example.com to /. WinHost recommends using an asp/aspx page to redirect http://sub1.example.com to http://sub1.example.com/sub1, which points to /sub1. While that would work, I'd like to not have the subdomain in the URL twice. So I tried using IIS7 URL Rewrite to point http://sub1.example.com to /sub1. Ben Powell describes this in detail on his blog. This is great, except Request.ApplicationPath is now /sub1/path/to/current/page.aspx, which breaks ASP.Net Themes (and probably other stuff too). What can I do to fix the ApplicationPath? Is there a better way to accomplish this?
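
    For reference, the subdomain-to-folder mapping itself is a single URL Rewrite rule in web.config. A hedged sketch for one subdomain (host name and folder come from the question; the negative lookahead stops already-rewritten URLs from being rewritten again):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="sub1 to folder" stopProcessing="true">
                <match url="^(?!sub1/)(.*)$" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^sub1\.example\.com$" />
                </conditions>
                <action type="Rewrite" url="sub1/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The ApplicationPath problem is a separate issue: the whole site is still one IIS application rooted at /, so one common approach (if the host allows it) is to convert /sub1 into its own IIS application so ASP.NET treats it as the application root; otherwise the sub-site content generally needs to build paths with application-relative (~/) references.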

    Read the article

  • Host name or Domain not found

    - by hitesh-4259
    Hi, I have installed amavis + postfix + spamassassin on CentOS 5.4. The /etc/hosts file contains:
        127.0.0.1       localhost.localdomain localhost
        ::1             localhost6.localdomain6 localhost6
        67.215.65.132   mail.sufalamtech.local mail
    When I send mail, the following errors occur:
        Apr 8 06:20:53 mail sendmail[3229]: o380oqu7003229: from=root, size=62, class=0, nrcpts=1, msgid=<[email protected], relay=root@localhost
        Apr 8 06:20:53 mail postfix/smtpd[3230]: connect from mail.sufalamtech.local[127.0.0.1]
        Apr 8 06:20:53 mail sendmail[3229]: STARTTLS=client, relay=[127.0.0.1], version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
        Apr 8 06:20:54 mail postfix/smtpd[3230]: 5A53C1A5989: client=mail.sufalamtech.local[127.0.0.1], [email protected]
        Apr 8 06:20:54 mail postfix/cleanup[3238]: 5A53C1A5989: message-id=<[email protected]
        Apr 8 06:20:54 mail sendmail[3229]: o380oqu7003229: [email protected], ctladdr=root (0/0), delay=00:00:02, xdelay=00:00:01, mailer=relay, pri=30062, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (Ok: queued as 5A53C1A5989)
        Apr 8 06:20:54 mail postfix/qmgr[3107]: 5A53C1A5989: from=, size=587, nrcpt=1 (queue active)
        Apr 8 06:20:54 mail postfix/smtpd[3230]: disconnect from mail.sufalamtech.local[127.0.0.1]
        Apr 8 06:20:54 mail postfix/smtp[3240]: 5A53C1A5989: to=, relay=none, delay=0.63, delays=0.17/0.1/0.36/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=mail.sufalamtech.local type=A: Host not found)
        Apr 8 06:20:54 mail postfix/cleanup[3238]: E73C51A5987: message-id=<[email protected]
        Apr 8 06:20:54 mail postfix/qmgr[3107]: E73C51A5987: from=<, size=2594, nrcpt=1 (queue active)
        Apr 8 06:20:54 mail postfix/bounce[3241]: 5A53C1A5989: sender non-delivery notification: E73C51A5987
        Apr 8 06:20:54 mail postfix/qmgr[3107]: 5A53C1A5989: removed
        Apr 8 06:20:55 mail postfix/local[3242]: E73C51A5987: to=, relay=local, delay=0.15, delays=0.02/0.1/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox)
        Apr 8 06:20:55 mail postfix/local[3242]: warning: host not found: localhost
        Apr 8 06:20:55 mail postfix/qmgr[3107]: E73C51A5987: removed
        Apr 8 06:21:04 mail postfix/qmgr[3107]: warning: connect to transport amavis: No such file or directory
        Apr 8 06:22:04 mail postfix/qmgr[3107]: warning: connect to transport amavis: No such file or directory
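
    Two of the failures here are plain name-resolution problems: the bounce for mail.sufalamtech.local ("type=A: Host not found") and "warning: host not found: localhost". By default Postfix looks hosts up in DNS only, so names that exist only in /etc/hosts are invisible to it. A hedged main.cf sketch that lets it fall back to the hosts file (this is not the poster's full configuration):

        # /etc/postfix/main.cf
        smtp_host_lookup = dns, native
        lmtp_host_lookup = dns, native
        # then reload: postfix reload

    The remaining warning ("connect to transport amavis: No such file or directory") is a separate problem: master.cf defines an amavis transport whose socket/service isn't running, so amavisd needs to be started (or the transport entry removed) as well.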

    Read the article

  • Mac OS X Server (10.5) mail trapped in queue

    - by Meltemi
    We've got mail accumulating in our Leopard Server's queue and not sure exactly why. This machine has required little maintenance over the years, so I'm hoping someone here can spot the obvious and save us some time. Let me know what other information would be helpful. The server appears to be functioning normally except for the "clogged" queue and the following error associated with each "trapped" message. Looking at messages in the queue, each one states something like this:
        Message ID: 4213C3B8B3F
        Date: October 27, 2009 11:33:27 AM
        Size: 1824
        Sender: [email protected]
        Recipient(s) & Status:
        ----------------------
        [email protected]: connect to 127.0.0.1[127.0.0.1]: Connection refused
    Under Settings > Relay we have checked "Accept SMTP relays only from these hosts and networks":
        127.0.0.0/8
        10.0.1.0/24
    The mail in the queue is addressed to users whose accounts are on this server. Mail.app on the client appears to be functioning normally and is checking mail on the server. We did add a virtual domain some time ago, but all that was working fine for some time... This just started happening recently... any ideas?
    Edit: toggling the filter services on and off seems to have fixed this, except for 2 remaining queued messages that show "mail transport unavailable" as an error!?!
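
    "Connection refused" on 127.0.0.1 usually means Postfix is handing mail to a local content filter (the junk/virus filtering services) that isn't listening, which fits the fix of toggling the filter services. A hedged sketch for clearing any stragglers once the filter is back up -- serveradmin, postqueue and postsuper are standard on Leopard Server, while the filter port (10024) is the usual amavisd default and is only an assumption here:

        sudo serveradmin status mail                 # confirm the mail service state
        netstat -an | grep -i listen | grep 10024    # is the content filter actually listening?
        sudo postqueue -f                            # ask Postfix to retry everything deferred
        mailq                                        # see what, if anything, is still stuck

    If the two remaining messages still show "mail transport unavailable", they can be re-queued individually with postsuper (e.g. "sudo postsuper -r ALL") or deleted by queue ID.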

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal dns servers (bind 9). Our problem comes when one of the internal dns servers becomes unavailable. At that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock linux resolver doesn't really have the concept of "failing over" to a different dns server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses our services perform much more slowly if a primary dns server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handled this issue? I can see 3 possible types of solutions:
    1. Use linux-ha/Pacemaker and failover ips (so the dns IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local dns server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks dns servers as up or down, and won't use 'down' dns servers.
    Any-casting seems to work only at the ip routing level, and depends on route updates for server failure. Multi-casting seemed like it would be a perfect answer, but bind does not support broadcasting or multi-casting, and the docs I could find seem to suggest that multicast dns is more aimed at service discovery and auto-configuration rather than regular dns resolving. Am I missing an obvious solution?
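
    For the resolver-tuning angle ("tweak /etc/resolv.conf like this"), the glibc resolver does expose timeout, attempts and rotate options; they don't give true failover, but they bound the damage to roughly a second per lookup instead of the default five-second timeout per dead server. A hedged sketch, with placeholder server addresses:

        # /etc/resolv.conf -- sketch; addresses are placeholders
        options timeout:1 attempts:2 rotate
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13

    For option 3, dnsmasq is another small local cache worth weighing against dnrd: with its "all-servers" setting it queries every upstream in parallel and takes the first answer, so a single dead BIND server stops mattering to clients.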

    Read the article

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to a SBS2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to the NIC's IP (192.168...), and DHCP on the DrayTek sets the server IP as the DNS. If I create a new VPN connection in Win7, leaving everything as default apart from the server, username, password and domain, I can:
    - ping everything by IP address
    - resolve IPs with nslookup using their fully-qualified name, as in nslookup fileserver.mydomain.local
    - ping machines by fully-qualified name, as in ping fileserver.mydomain.local
    However if I try to access a file share:
    - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
    - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."
    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather a solution that doesn't involve me trying to hand-edit system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings I don't have to use the FQN to ping the machine, but that doesn't change anything else.
    EDIT: The PC I'm having the issue on is running Win 7 Home Premium. After more testing I actually have two other PCs that work, one W7HP, one XP Home, and another Vista PC that doesn't work (not tested as much as the others), all four on the same internet connection (behind the same router). All of them were tested with a straight-forward, all defaults, new VPN configuration.

    Read the article

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that utilizes the mod_aspdotnet module. I have several ASP.NET virtual hosts running and am trying to add another. MVC2 has NO default page (like the first version of MVC had, e.g. default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', set 'AspNet' to 'Virtual' -- it will not load the first page, and I always get: '403 Forbidden, You don't have permission to access / on this server.' Below is from my httpd.conf:
        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
        AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo
        <IfModule aspdotnet_module>
            # Mount the ASP.NET /asp application
            #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
            Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
            <VirtualHost *:80>
                DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
                ServerName www.MyWebSiteName.com
                ServerAlias MyWebSiteName.com
                AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
                # Other directives here
                <Directory "D:/ApacheNET/MyWebSiteName.com">
                    Options FollowSymlinks ExecCGI
                    AspNet All
                    #AspNet Virtual Files Directory
                    Order allow,deny
                    Allow from all
                    DirectoryIndex default.aspx index.aspx index.html   #default the index page to .htm and .aspx
                </Directory>
            </VirtualHost>
            # For all virtual ASP.NET webs, we need the aspnet_client files
            # to serve the client-side helper scripts.
            AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"
            <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
                Options FollowSymlinks
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>
    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!

    Read the article

  • Debian Apache2 and SSL

    - by Topher Fangio
    Hello all, I recently took over a server that is using Apache2 with SSL. I have set up a new server to which I am migrating all of the old websites so that we can more easily scale (it's a cloud server) and so that I can set everything up correctly (or at least with some sort of convention). I have read quite a few articles on setting up Apache2 and SSL with virtual hosts, but I'm a bit confused because all of the examples show three files and I only seem to have two. To compound the problem, they are all named differently (do the file extensions actually make a difference?). The examples show something to this effect:
        <VirtualHost X.X.X.X:443>
            ServerAlias something.mydomain.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/project/client/site
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/mydomain-cert.pem
            SSLCertificateKeyFile /etc/ssl/private/mydomain-key.pem
            SSLCertificateChainFile /etc/ssl/certs/mydomain-ca.crt
        </VirtualHost>
    However, the files I have are:
        _.mydomain.com.crt
        gd_bundle.crt
    It is a wildcard certificate that we purchased through GoDaddy, I believe. I believe that the first file is the actual certificate file and the gd_bundle.crt is the chain file, but that leaves me without a key file. There is also a random mydomain.csr file lying around on the old server, but it wasn't one of the files bundled with the download from GoDaddy, so I'm not really sure what it is. Any help in figuring out what I need to do would be greatly appreciated. I am a software developer, so I know my way around computers, but I have only dabbled in server setup/maintenance. Much thanks!
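
    The private key is never part of the certificate authority's download -- it was generated on the old server when that "random" mydomain.csr was created, so it should still exist somewhere on that machine (often under /etc/ssl/private or next to the old vhost config). A hedged way to hunt it down and confirm a match; the key path below is a guess:

        # the moduli must be identical if certificate, CSR and key belong together
        openssl x509 -noout -modulus -in _.mydomain.com.crt | openssl md5
        openssl req  -noout -modulus -in mydomain.csr       | openssl md5
        openssl rsa  -noout -modulus -in /etc/ssl/private/mydomain.key | openssl md5
        # candidate key files can be found with something like:
        grep -rl "PRIVATE KEY" /etc/ssl /etc/apache2 2>/dev/null

    If the key truly cannot be found, the practical fix is to generate a new key and CSR on the new server and have GoDaddy re-key the wildcard certificate against it.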

    Read the article

  • How can I get HTTPD to serve the html/php files and not list/index them when they are in a folder for a virtual host? Using CentOS 6.0

    - by LaserBeak
    My virtual hosts are configured as below. Initially I could not even get to the /public_html/ directory when typing example.com -- Apache would just serve me up the default welcome page, and I would also get this error in the log:
        Directory index forbidden by Options directive: /var/www/html/example.com/public_html/
    After editing welcome.conf (- Index) so it does not show again, when I now type example.com the /public_html/ contents (Index.php) are indexed in the browser, whereas I want it to actually execute and display the index.php page.
    vhost.conf, located in etc/httpd/vhost.d/:
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName localhost
            ServerAlias localhost.example.com
            DocumentRoot /var/www/html/example.com/public_html/
            ErrorLog /var/www/html/example.com/logs/error.log
            CustomLog /var/www/html/example.com/logs/access.log combined
        </VirtualHost>
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName example.org
            ServerAlias www.example.org
            DocumentRoot /var/www/html/example.org/public_html/
            ErrorLog /var/www/html/example.org/logs/error.log
            CustomLog /var/www/html/example.org/logs/access.log combined
        </VirtualHost>
    httpd.conf has the default settings, with this added onto the end:
        Include /etc/httpd/vhosts.d/*.conf
    Root directories:
        DocumentRoot "/var/www/html"
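
    The usual reason index.php gets listed instead of executed in a setup like this is that the vhosts define no DirectoryIndex or per-directory options of their own, so the server-wide defaults apply. A hedged sketch of the missing pieces for one vhost (stock Apache 2.2 directives; paths are taken from the question, and it assumes php/mod_php is installed so .php files are handled at all):

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/html/example.com/public_html
            DirectoryIndex index.php index.html
            <Directory /var/www/html/example.com/public_html>
                Options -Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    "-Indexes" also stops Apache from ever falling back to a directory listing, so a missing index file produces a 403 instead of exposing the folder contents.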

    Read the article

  • iSCSI errors continue after removing inaccessible target portal

    - by Ansgar Wiechers
    By mistake I entered an iSCSI target portal address in the iSCSI Initiator on one of our virtual servers that does not have an address in the network range used for iSCSI. This caused the following errors/warnings to appear in the event log:
        Log Name: System / Source: MSiSCSI / Event ID: 113 / Level: Warning
        Description: iSCSI discovery via SendTargets failed with error code 0xefff0003 to target portal *192.168.23.42 0003260 Root\ISCSIPRT\0000_0 .
        Log Name: System / Source: iScsiPrt / Event ID: 1 / Level: Error
        Description: Initiator failed to connect to the target. Target IP address and TCP Port number are given in dump data.
        Log Name: System / Source: iScsiPrt / Event ID: 70 / Level: Error
        Description: Error occurred when processing iSCSI logon request. The request was not retried. Error status is given in the dump data.
    So far that's expected behavior, so I removed the portal from the iSCSI Initiator as described in MSKB 976072. However, the errors/warnings keep appearing every hour, even though neither the iSCSI Initiator GUI nor iscsicli show any portals:
        C:\>iscsicli ListTargetPortals
        Microsoft iSCSI Initiator Version 6.1 Build 7601
        The operation completed successfully.
    The problem persists after rebooting the server. Uninstalling the Microsoft iSCSI Initiator device via devmgmt.msc, as well as changing the Initiator parameters like this:
        [HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}]
        "MaxPendingRequests"=dword:00000001
        "MaxConnectionRetries"=dword:00000001
        "MaxRequestHoldTime"=dword:00000005
    didn't help either. Each change was followed by a reboot. Disabling the device does prevent the errors/warnings from re-appearing, of course, but I'd rather not have to resort to this. How can I prevent those errors and warnings from appearing (short of disabling the initiator device or re-installing the server)? What am I missing?
    Environment: The virtual machine runs on a Hyper-V cluster managed by SCVMM 2012. Hosts and guests run Windows Server 2008 R2 SP1. The physical machines are Dell PowerEdge M710HD blades.
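
    The hourly retries suggest something is still present in the initiator's persistent configuration even though the portal list comes back empty. A hedged set of checks with iscsicli -- the portal address comes from the event text, 3260 is the standard iSCSI port and is an assumption here:

        iscsicli ListPersistentTargets
        iscsicli ListiSNSServers
        iscsicli RemoveTargetPortal 192.168.23.42 3260

    If a persistent target does show up, "iscsicli RemovePersistentTarget" (see "iscsicli HELP" for its argument order) removes it. The initiator's discovery configuration is also stored in the registry (commonly reported under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\iSCSI\Discovery -- treat that path as an assumption), which can be inspected to confirm the stale SendTargets entry is really gone before the next hourly discovery pass.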

    Read the article

  • wuinstall doesn't work with winrs

    - by wizard
    I've been having issues with psexec, so I've been migrating to winrs (part of the WinRM system). It's a very nice remoting tool which is proving to be more reliable than psexec. Wuinstall is used to install available Windows updates. The two however don't play well together. I'm working on a variety of Windows servers: 2003, 2008 and 2008 R2. Wuinstall behaves the same across all hosts and behaves as expected if executed locally by the same user.
    Command:
        winrs -r:server wuinstall /download
    Produces:
        WUInstall.exe Version 1.1
        Copyright by hs2n Informationstechnologie GmbH 2009
        Visit: http://www.xeox.com, http://www.hs2n.at for new versions
        Searching for updates ...
        Criteria: IsInstalled=0 and Type='Software'
        Result Code: Succeeded
        7 Updates found, listing all:
        Security Update for Windows Server 2008 R2 x64 Edition (KB2544893)
        Security Update for .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2518869)
        Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2539635)
        Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2572077)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2588516)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2620704)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2617657)
        Downloading updates ...
        Error occured: CreateUpdateDownloader failed!
        Result CODE: 0x80070005
        Return code: 1
    Googling "0x80070005" finds "unspecified error", which isn't helpful. Thoughts? Is there a better way?
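
    0x80070005 is E_ACCESSDENIED, and a plain winrs session normally gets a "network" logon token that the Windows Update Agent refuses for downloads -- the classic second-hop/token limitation, which would also explain why the same command works when run locally. A hedged sketch of the CredSSP route (account name is a placeholder; group policy must also allow fresh-credential delegation to the target hosts, and winrs will prompt for the password):

        rem on the machine you launch from:
        winrm set winrm/config/client/auth @{CredSSP="true"}
        rem on each target server:
        winrm set winrm/config/service/auth @{CredSSP="true"}
        rem then run with explicit credentials and delegation (-ad = allowdelegate):
        winrs -r:server -ad -u:DOMAIN\wsusadmin wuinstall /download

    An alternative that avoids CredSSP entirely is to have winrs create and trigger a one-shot scheduled task on the target (schtasks /create ... /ru SYSTEM) so wuinstall runs with a full local token.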

    Read the article

  • DNS slow after losing DNS Server

    - by Tim
    We have set up a small Windows Server 2008 R2 network with a domain controller which is also acting as the DNS server for the network (we opted to install DNS when setting up the domain). This network isn't connected to the Internet in any way, so all machines have been configured to use the IP address of the domain controller as their primary DNS and no secondary DNS server has been configured. If we shut down or unplug the network cable from the domain controller, DNS lookups become quite slow and the performance of the network suffers. For example, running a ping command using a hostname takes around 5-6 seconds to resolve the name. I presume this is because it is looking for the DNS, then falling back to some other method of resolving the names as the DNS server is now gone. All the machines have static IP addresses so we are considering just putting all entries in the HOSTS file of each machine. However, it would be nice to have a centralised DNS in case we one day change the IP of one of the machines. Is there a better way to speed this up?
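
    If standing up a full second domain controller isn't practical, a hedged middle ground is to give every client the domain controller as primary DNS and some always-on box running a simple secondary/caching DNS as the alternate, so the resolver has something to fail over to. On the Windows clients the alternate server can be pushed per adapter with netsh (the adapter name and address below are placeholders):

        netsh interface ipv4 add dnsservers name="Local Area Connection" address=192.168.1.11 index=2

    The HOSTS-file approach works too, but as noted it has to be redistributed whenever an IP changes; keeping resolution on a (redundant) DNS service avoids that maintenance.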

    Read the article

  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel

    Read the article

  • Duplicate ping packets in a Linux VirtualBox machine

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using is running as a VM on a Win7 machine, using VirtualBox running as a service. If I ping the Win7 host I get an OK result:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
        PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
        18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
        18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms
    If I ping localhost I'm OK:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
        PING localhost (127.0.0.1) 10(38) bytes of data.
        18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
        18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms
    But if I ping the gateway I get DUP packets:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)
    If I ping another machine on the same LAN I still get dups. Pinging remote hosts also gives (DUP!) results:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
        PING www.vg.no (195.88.55.16) 10(38) bytes of data.
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)
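
    Duplicates that appear for everything except the host itself and localhost usually mean the guest is seeing each reply twice on its bridged interface (for example two VirtualBox NICs bridged to the same physical adapter, or promiscuous-mode reflection), rather than the remote hosts genuinely answering twice. A hedged way to narrow it down -- the VM name and NIC number below are placeholders:

        # on the Win7 host, check how the VM's NICs are attached
        VBoxManage showvminfo "Virtual-Box" | findstr NIC
        # if two NICs are bridged to the same physical adapter, disable one, or tighten promiscuous mode:
        VBoxManage modifyvm "Virtual-Box" --nicpromisc1 deny
        # inside the guest, confirm whether the duplicate actually arrives on the wire
        tcpdump -n -i eth0 icmp

    If tcpdump in the guest shows only one echo reply per request while ping still reports DUPs, the duplication is happening inside the virtual networking layer rather than on the LAN.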

    Read the article

  • APC fragmention woes on Apache AWS EC2 Small instance with WordPress and W3TC

    - by two7s_clash
    AWS EC2 Small instance, Apache 2 running WordPress and W3TC. Within an hour, my APC fragmentation hits 100%. My APC settings (/etc/php.d/apc.ini) are:
        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 100M
        apc.optimization = 0
        apc.num_files_hint = 512
        apc.user_entries_hint = 1024
        apc.ttl = 7200
        apc.user_ttl = 7200
        apc.gc_ttl = 3600
        apc.cache_by_default = 1
        apc.use_request_time = 1
        apc.filters = "apc\.php$"
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 2M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.rfc1867 = 0
        apc.rfc1867_prefix = "upload_"
        apc.rfc1867_name = "APC_UPLOAD_PROGRESS"
        apc.rfc1867_freq = 0
        apc.localcache = 0
        apc.localcache.size = 256M
        apc.coredump_unmap = 0
        apc.stat_ctime = 0
        apc.canonicalize = 1
        apc.lazy_functions = 0
        apc.lazy_classes = 0
    More poop can be seen here. Mostly cribbed settings from here. The shm was meant to be whittled down from such a high value after some observation, but apparently such a large value isn't even high enough.
    I found a similar question/answer here. I do have some virtual hosts set up, but they aren't being touched much at all. Having users logged into the admin panel of WP does make things worse, but that's certainly not the main culprit. The question asker seems to suggest that it turns out W3TC is probably causing the problem, which the plugin author seems to agree with, but there aren't any helpful details beyond that.
    Why is it causing the problem? Do I just take it for now and turn off object caching with APC? Is there nothing I can do? Does having it turned on without being used for object caching actually help anything? Would memcache be an OK substitute just for object caching here? Finally, maybe I just shouldn't worry so much about the fragmentation?
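
    On the "would memcache be an OK substitute" question: a common pattern on memory-tight instances is to keep APC strictly as the opcode cache and hand W3TC's object/database caching to memcached, since the constant churn of short-lived user-cache entries is typically what fragments the shared-memory segment. A hedged apc.ini sketch of the opcode-only setup (values are illustrative, not tuned for this site):

        apc.enabled = 1
        apc.shm_size = 64M        ; opcode cache only needs to cover the PHP code footprint
        apc.ttl = 0               ; never expire individual entries; APC clears the whole cache when full,
        apc.user_ttl = 0          ; which avoids the expiry churn that drives fragmentation
        apc.stat = 1

    In W3TC the object-cache and database-cache methods would then be pointed at memcached (after installing the memcached daemon and the PHP memcache extension), leaving page and minify caching on disk as before.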

    Read the article

  • Squid configuration for proxy server

    - by Ian Rob
    I have a server with 10 IPs that I want to give access to some friends via authentication, but I'm stuck on Squid's config file. Let's say I have these IPs available on my server:
        212.77.23.10
        212.77.1.10
        68.44.82.112
    And I want to allocate each one of them to a different user, like so:
        212.77.23.10 goes to user manilodisan using password 123456
        212.77.1.10 goes to user manilodisan1 using password 123456
        68.44.82.112 goes to user manilodisan2 using password 123456
    I managed to add the passwords and authentication works OK, but how do I restrict one user to one of the available IPs? I have a basic setup from different bits I found over the internet but nothing seems to work. Here's my squid.conf (all comments are removed to make it lighter):
        acl ip1 myip 212.77.23.10
        acl ip2 myip 212.77.1.10
        tcp_outgoing_address 212.77.23.10 ip1
        tcp_outgoing_address 212.77.1.10 ip2
        http_port 8888
        visible_hostname weezie
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid-passwd
        acl ncsa_users proxy_auth REQUIRED
        http_access allow ncsa_users
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443          # https
        acl SSL_ports port 563          # snews
        acl SSL_ports port 873          # rsync
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl Safe_ports port 631         # cups
        acl Safe_ports port 873         # rsync
        acl Safe_ports port 901         # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access deny all
        icp_access allow all
        hierarchy_stoplist cgi-bin ?
        access_log /var/log/squid/access.log squid
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        refresh_pattern ^ftp:    1440 20% 10080
        refresh_pattern ^gopher: 1440 0%  1440
        refresh_pattern .        0    20% 4320
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        extension_methods REPORT MERGE MKACTIVITY CHECKOUT
        hosts_file /etc/hosts
        forwarded_for off
        coredump_dir /var/spool/squid
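
    The myip ACL matches the local address the client connected to, not who the user is, so it can't tie an outgoing address to a login. One hedged approach is to key tcp_outgoing_address on per-user proxy_auth ACLs instead (usernames and addresses are taken from the question; it assumes http_access has already forced authentication, since tcp_outgoing_address by itself cannot trigger an auth challenge):

        acl user1 proxy_auth manilodisan
        acl user2 proxy_auth manilodisan1
        acl user3 proxy_auth manilodisan2

        http_access allow user1
        http_access allow user2
        http_access allow user3

        tcp_outgoing_address 212.77.23.10 user1
        tcp_outgoing_address 212.77.1.10  user2
        tcp_outgoing_address 68.44.82.112 user3

    The generic "http_access allow ncsa_users" line would then be redundant and could be dropped, so each login only ever leaves the box from its assigned address.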

    Read the article

  • Apache log rotation: logrotate vs rotatelogs vs chronolog

    - by Enrico
    I have been researching log rotation for my server which hosts ~5 fairly high traffic sites. From what I can tell, my options are to use logrotate or to use piped logging with either rotatelogs or chronolog. logrotate requires a restart of apache and both SIGHUP and SIGUSR1 restarts are less than ideal on high traffic sites, because either you drop a bunch of connections or you need to delay compressing the old log until all child processes have died naturally. Also, downtime can be quite significant if compression is enabled. Would using logrotate - without compression and with graceful restart - and compressing old logs after the fact be the best way to minimize downtime? chronolog and rotatelogs sound promising, but are not well documented. I couldn't find examples of using either in combination with vhost specific logs. The chronolog website says, "when the expanded filename changes, the current file is closed and a new one opened". Is this globally? Or is that per AccessLog, CustomLog or ErrorLog directive? Is there a significant difference between chronolog and rotatelogs?
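
    On the "vhost specific logs" gap in the documentation: both piped-logging tools are used directly as the target of each vhost's CustomLog/ErrorLog line, so every site rotates independently and Apache is never restarted. A hedged sketch -- paths, site name and the 86400-second (daily) interval are placeholders:

        # rotatelogs ships with Apache; -l uses local time
        CustomLog "|/usr/sbin/rotatelogs -l /var/log/apache2/site1/access.%Y-%m-%d.log 86400" combined
        # chronolog-style tools take only the strftime pattern and switch files when it changes
        # (binary path is an assumption)
        ErrorLog  "|/usr/bin/cronolog /var/log/apache2/site1/error.%Y-%m-%d.log"

    So "when the expanded filename changes, the current file is closed and a new one opened" applies per pipe, i.e. per CustomLog/ErrorLog directive -- each directive gets its own child process and its own current file. Compressing closed files still has to be handled separately (e.g. a nightly cron gzip of yesterday's logs), but no graceful or hard restart is involved.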

    Read the article

  • How do I make my internal dns forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and connects to root DNS servers for all other domains (the rest of the internet). Here is my config:
        options {
            listen-on port 53 { 127.0.0.1; any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { 192.168.1.0/24; 127.0.0.1; };
            recursion yes;
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        view "internal" {   // What the home network will see
            match-clients { 127.0.0.1; any; };
            match-destinations { 127.0.0.1; any; };
            recursion yes;
            zone "." IN {
                type hint;
                file "named.ca";
            };
            include "internal_zones.conf";
        };
    We need to tweak this to go to our ISP's DNS, x.y.z.w, instead of the root DNS servers if the host cannot be resolved internally.
    Config: Fedora 10 / BIND 9.5.2
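
    The standard BIND mechanism for this is forwarders: recursion still happens, but anything the server isn't authoritative for is sent to the ISP's resolver instead of being chased from the root hints. A hedged sketch using the placeholder address from the question, added to the existing options block (it can equally go inside the view):

        options {
            // ... existing settings ...
            forwarders { x.y.z.w; };
            forward only;    // use "forward first" instead to fall back to the root hints if the ISP DNS fails
        };

    After a reload (rndc reload or service named restart), the internal zones keep resolving locally because the zones included from internal_zones.conf are authoritative on this server, and authoritative data is answered before forwarding applies.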

    Read the article

  • IP address spoofing using Source Routing

    - by iamrohitbanga
    With IP options we can specify the route we want an IP packet to take while connecting to a server. If we know that a particular server provides some extra functionality based on the IP address can we not utilize this by spoofing an IP packet so that the source IP address is the privileged IP address and one of the hosts on the Source Routing is our own. So if the privileged IP address is x1 and server IP address is x2 and my own IP address is x3. I send a packet from x1 to x2 which is supposed to pass through x3. x1 does not actually send the packet. It is just that x2 thinks the packet came from x1 via x3. Now in response if x2 uses the same routing policy (as a matter of courtesy to x1) then all packets would be received by x3. Will the destination typically use the same IP address sequences as specified in the routing header so that packets coming from the server pass through my IP where I can get the required information? Can we not spoof a TCP connection in the above case? Is this attack used in practice?

    Read the article

  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon
    By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache. But now, I would like to add another virtual host. The problem I am having is, I have no idea how to get the system to create a socket on startup in the following folder:
        bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al
        total 12
        drwxr-xr-x 2 root root 4096 Nov  3 20:43 .
        drwxr-xr-x 4 root root 4096 Oct  9 15:39 ..
        srw-rw-rw- 1 root root    0 Nov  3 20:43 joomla.sock
        -rw-r--r-- 1 root root    4 Nov  3 20:43 php5-fpm.pid
        srw-rw-rw- 1 root root    0 Nov  3 20:43 phpmyadmin.sock
        srw-rw-rw- 1 root root    0 Nov  3 20:43 www.sock
    I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file:
        [mywebsite]
        listen=/opt/bitnami/php/var/run/mywebsite.sock
        include=/opt/bitnami/php/etc/common-dynamic.conf
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf
        pm=dynamic
    As can be seen, listen points to mywebsite.sock, which does not currently exist. I did an experiment by removing the .sock files in the /opt/bitnami/php/var/run folder, and they came back on reboot. So how can we configure it to open a socket for mywebsite on startup?
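
    PHP-FPM only creates sockets for pools it actually loads, so the question comes down to making the master PHP-FPM configuration include the new pool file. A hedged sketch -- the exact master config path varies between Bitnami stack versions, so compare it with wherever the existing joomla/phpmyadmin pool files are included from before using it:

        # add the new pool to the master PHP-FPM configuration (path is an assumption)
        echo "include=/opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf" | \
            sudo tee -a /opt/bitnami/php/etc/php-fpm.conf
        # restart just the PHP-FPM service and check that the socket appears
        sudo /opt/bitnami/ctlscript.sh restart php-fpm
        ls -l /opt/bitnami/php/var/run/mywebsite.sock

    The Apache side of the new vhost then proxies PHP requests to that socket the same way the existing joomla.sock/phpmyadmin.sock vhosts do.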

    Read the article

  • Is anyone using KVM in production?

    - by Andy Shellam
    I've been trying to set up a pair of servers utilising KVM on Ubuntu 9.10 to host 8 virtual machines between them and ended up with various issues from the VMs freezing, to not powering on. I had one virtual server set up and running and was setting up a second, when any operation involving OpenSSL would cause the VM to lock up in a weird way - all network traffic would cease, it wouldn't process logins on the console, but it wasn't taking any CPU time off the host. The first virtual server was identical and worked perfectly. Another VM I tried to setup had installed Ubuntu fine then refused to reboot, throwing kernel exceptions to do with XFS. I've now installed Citrix XenServer 5.5 on both hosts, and am now setting up my third VM with absolutely no issues. I also had the same experience when I tried VMware, but I preferred Xen as it appears to give more features on the free license. My question is am I just unlucky with KVM, or is KVM as unstable as it appears? Are you using, or planning on using, KVM in production, and how successful have you been?

    Read the article
