Search Results

Search found 3893 results on 156 pages for 'sbs 2011'.


  • ISA - Why do i suddenly need to specify settings on the client machines?

    - by user40623
    I have been pulling my hair out over this one for hours and am wondering if someone can help. I have a server with two network interfaces that was running ISA 2004 and humming along until I installed the SBS service pack plus ISA 2004 Service Pack 3. Since then I can get the client machines to connect to the internet, but only by installing the ISA Client on each machine or by setting proxy settings in Internet Options. Ideally I want the client machines to just work, with no configuration or proxy server settings, as they did before. Where do I start, and what other information do you need from me, if any, to find out what I'm doing wrong? Thanks!
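
    A minimal sketch of one avenue worth checking — assuming the service packs broke proxy auto-discovery (WPAD), which is what normally lets clients find ISA without any manual settings. The zone, scope and IP below are hypothetical placeholders:

        rem publish a WPAD host record in DNS (run on the DNS server)
        dnscmd /RecordAdd yourdomain.local wpad A 192.168.16.2

        rem hand the WPAD URL to clients via DHCP option 252
        rem (the option may need defining first with "netsh dhcp server add optiondef 252 ...")
        netsh dhcp server scope 192.168.16.0 set optionvalue 252 STRING "http://wpad.yourdomain.local/wpad.dat"

    ISA itself also has to be set to publish automatic discovery information on the internal network for wpad.dat to be served.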

    Read the article

  • Mindtouch broke my Apache2 virtual host configuration.

    - by grenade
    I installed MindTouch using the instructions here and it seems to have broken my virtual host configuration. I have several domains running off the same Apache instance, and this was working fine, but now all my domain names resolve to the virtual host where MindTouch was installed. So MindTouch made all my domain names point to the new MindTouch instance. Grrr! I use Debian's default virtual host mechanism (sites-enabled, etc.). Does anyone know which Apache directive MindTouch is using to ruin my vhost setup? I've scoured all the conf files and there is nothing obvious in apache2.conf or httpd.conf that would cause this behaviour. Did it create a symlink somewhere that I should destroy? I should add that I have already uninstalled the MindTouch packages, but Apache persists in redirecting all domains to the first one mentioned in the sites-enabled folder.

        thini:~# apache2ctl -S
        [Wed Jan 05 13:39:11 2011] [warn] NameVirtualHost *:80 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:* www.openancestry.org (/etc/apache2/sites-enabled/openancestry.org:1)
        *:* www.pragmantra.com (/etc/apache2/sites-enabled/pragmantra.com:1)
        *:* services.pragmantra.com (/etc/apache2/sites-enabled/services.pragmantra.com:1)
        *:* www.subversionreports.com (/etc/apache2/sites-enabled/subversionreports.com:1)
        *:* www.thijssen.ch (/etc/apache2/sites-enabled/thijssen.ch:1)
        Syntax OK
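
    A hedged reading of that output: the warning says the NameVirtualHost *:80 line no longer matches any <VirtualHost> section, and every vhost is listed as *:* — in that situation Apache falls back to the first vhost for every request, which is exactly the symptom described. If the install rewrote the ports or vhost headers, restoring the match should fix it (sketch below; file names taken from the output above):

        # /etc/apache2/ports.conf (or wherever NameVirtualHost is declared)
        NameVirtualHost *:80

        # each file in sites-enabled, e.g. /etc/apache2/sites-enabled/thijssen.ch
        <VirtualHost *:80>        # not <VirtualHost *> or <VirtualHost *:*>
            ServerName www.thijssen.ch
            ...
        </VirtualHost>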

    Read the article

  • Some process is doing an ICMP port scan from my OS X box and I am afraid my Mac has a virus

    - by Jamgold
    I noticed that my 10.6.6 box has some process sending out ICMP messages to "random" hosts, which concerns me a lot. When running tcpdump icmp I see a lot of the following:

        15:41:14.738328 IP macpro > bzq-109-66-184-49.red.bezeqint.net: ICMP macpro udp port websm unreachable, length 36
        15:41:15.110381 IP macpro > 99-110-211-191.lightspeed.sntcca.sbcglobal.net: ICMP macpro udp port 54045 unreachable, length 36
        15:41:23.458831 IP macpro > 188.122.242.115: ICMP macpro udp port websm unreachable, length 36
        15:41:23.638731 IP macpro > 61.85-200-21.bkkb.no: ICMP macpro udp port websm unreachable, length 36
        15:41:27.329981 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36
        15:41:29.349586 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36

    I got suspicious when my router notified me about a lot of ICMP messages that don't get a response:

        [INFO] Mon Jan 10 16:31:47 2011 Blocked outgoing ICMP packet (ICMP type 3) from 192.168.1.189 to 212.25.57.90

    Does anyone know how to trace which process (or, worse, kernel module) might be responsible for this? I rebooted and logged in with a virgin user account, and tcpdump showed the same results. Any dtrace magic welcome. Thanks in advance.
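
    One hedged observation before reaching for dtrace: "udp port ... unreachable" messages are the replies a host sends when something delivers UDP to a closed local port, so the trigger may be inbound traffic (leftover P2P or scan traffic aimed at your IP, for instance) rather than a local scanner. A quick sketch to check both directions (en1 taken from the output above; port 54045 from the log):

        # watch the inbound UDP that provokes the unreachables
        sudo tcpdump -n -i en1 'udp and port 54045'

        # list everything currently bound to a UDP socket
        sudo lsof -i UDP -n -P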

    Read the article

  • Delete ARP cache on Mac OS when moving from one Wifi network to another

    - by Puneet
    I am facing wireless connectivity problems when I move from one Wifi network to another. Here is how it happens: I am at my friend's place and connect to his Wifi; his router's IP address is 192.168.0.1. Everything is fine. I close my laptop, come back to my house, open my laptop and connect to the Wifi network at my place — different ESSID, but the router address is the same 192.168.0.1. At this point I can't get to anything on the internet. To debug, I try to ping the router (192.168.0.1) and can't: I get "no route to host", while AirPort tells me I'm connected to Wifi. I look at the ARP cache and see a permanent entry for 192.168.0.1:

        ? (192.168.0.1) at 5c:d9:98:65:73:6c on en1 permanent [ethernet]

    This permanent bit looks problematic. I go ahead and delete the ARP cache entry and all is fine with the world — until I go back to my friend's place, where the same situation plays out. Now my question is: why the hell is this happening? And if there is no way around it, can I run a script on Wifi connect/disconnect to clear out the ARP cache? I'm using Mac OS X:

        $ uname -a
        Darwin 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
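
    For the scripting half of the question, a minimal sketch (entry and interface taken from the output above):

        # drop the single stale entry
        sudo arp -d 192.168.0.1

        # or flush the whole cache
        sudo arp -a -d

    Hooking that to network changes could be done with a launchd job using WatchPaths on /Library/Preferences/SystemConfiguration/ — that path-watch approach is an assumption on my part, not something verified on 10.6.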

    Read the article

  • DKIM-Filter: No Signature Data

    - by Vineet Sharma
    I have installed DKIM-Filter on Postfix after reading this tutorial: http://www.unibia.com/unibianet/systems-networking/how-setup-domainkeys-identified-mail-dkim-postfix-and-ubuntu-server — my email now has a DKIM signature, but it still lands in the SPAM folder. Here is the header:

        Received-SPF: neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=69.164.193.167;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=hardfail (test mode) [email protected]
        Received: from promote.a2labs.in (localhost [127.0.0.1]) by promote.a2labs.in (Postfix) with ESMTPA id 34858530E8 for <[email protected]>; Mon, 28 Feb 2011 12:23:07 +0530 (IST)
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=a2labs.in; s=mail; t=1298875987; bh=bo+H1VYPIHMja2u7i1lnzr4k/j4Pe8iSf79bVw94XpI=; h=To:Subject:Message-ID:Date:From:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=nhTdlnUwo0iUJ92ycQzKSRjw5Pfya0DJcJrAc8Mr2hIv8OLpgzBCzdOMWTGqR5nuUmAzgCGYBhYAM2XZwVxo9JG/iz7oYKysmNQnskFx0TRyW3UOkDWcfHcPnCL6Y7fGzZWinmsyjsg47k+mKZg/e8jqlwTAMOPYKkt5pBz7SM0=

    Also my mail.err file shows:

        Feb 28 12:17:03 ivineet dkim-filter[32181]: 1F788530E1: no signature data
        Feb 28 12:18:02 ivineet dkim-filter[32181]: 432BA530E2: no signature data

    How do I fix it?
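
    Two hedged checks suggested by that header: "dkim=hardfail (test mode)" means Google fetched a key but the signature did not verify, so it is worth confirming that the published selector record matches the signing key — the signature above uses d=a2labs.in and s=mail, so the record to inspect is:

        dig +short TXT mail._domainkey.a2labs.in

    And the "no signature data" lines in mail.err may mean some messages bypass the filter entirely (for example, locally generated mail taking a different Postfix path) — that second reading is a guess, not a confirmed diagnosis.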

    Read the article

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: drive and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but everything I can find says to use wbadmin to delete old system state backups and vssadmin to delete other backups. As far as I know my backups ARE system state backups, but they use VSS on E: for storage, so I'm worried that trying either technique will lose all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command. For reference, here is the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
          Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
          Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
          Originating Machine: x.y.com
          Service Machine: x.y.com
          Provider: 'Microsoft Software Shadow Copy provider 1.0'
          Type: DataVolumeRollback
          Attributes: Persistent, No auto release, No writers, Differential
        [... repeated a lot ...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
          For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Used Shadow Copy Storage space: 0 B
          Allocated Shadow Copy Storage space: 0 B
          Maximum Shadow Copy Storage space: 5.859 GB
        Shadow Copy Storage association
          For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Used Shadow Copy Storage space: 0 B
          Allocated Shadow Copy Storage space: 0 B
          Maximum Shadow Copy Storage space: 40.317 GB
        Shadow Copy Storage association
          For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
          Used Shadow Copy Storage space: 168.284 GB
          Allocated Shadow Copy Storage space: 171.15 GB
          Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State
        [... repeated a lot ...]
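
    On the /for-versus-/on confusion, a hedged reading of the vssadmin syntax: /for names the volume whose shadow copies are being limited, and /on names the volume that stores those copies — both E: here, per the shadowstorage output showing E:'s storage living on E: itself. Capping the space would then look like this (the 150GB figure is an arbitrary example, not a recommendation):

        vssadmin resize shadowstorage /for=E: /on=E: /maxsize=150GB

    Once the cap is hit, VSS is supposed to age out the oldest snapshots on its own — but given the fear of losing everything, treat this as a sketch to research, not a verified-safe procedure.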

    Read the article

  • Domain Controller died, now I get authentication boxes in IE for SDL Tridion 2009

    - by Rob Stevenson-Leggett
    We had a major network issue where our secondary domain controller (responsible for Win2k3 boxes) died and had to be rebuilt (I believe this is what happened; I am a developer, not a network admin). I am working remotely via VPN at the moment, and since this happened I get an authentication box when trying to access certain areas of SDL Tridion via IE (Tridion 2009 SP1 is IE-only). It seems my credentials are not being passed correctly somewhere, or the ones cached on my laptop no longer match what the domain controller has. This only seems to affect Windows 2003 servers. Our IT support thinks the only way to sort it out is to connect my laptop directly to the network, but I am not planning to be in the office for a few weeks at least, and this issue means I have to work with Tridion via Remote Desktop. We thought changing the password on my account might work, but it didn't help. So basically my question is: is there any way I can reset my credential cache without having to reconnect to the network? Or is it IE that is causing the problem, since I can RDP to servers and use Tridion 2011 instances in other browsers fine? I am on Windows 7 using the SonicWall VPN client.
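
    A hedged thing to try from the Windows 7 side — inspect and clear any stale saved credentials for the affected servers (the server name below is a placeholder):

        cmdkey /list
        cmdkey /delete:tridionserver.company.local

    Credential Manager in Control Panel exposes the same store if the GUI is easier; whether stale cached credentials are actually the cause here is a guess.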

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not as savvy as it could be, but it does what I need at the moment — except that, as the rm command shows, I would like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html and for some reason it does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. If I just issue the second wget command on its own, that page — really the same file under an alternate name — opens up fine. Fixing that would also help to streamline the process.
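
    A sketch of the reject-list approach: wget's -R/--reject takes a comma-separated list of file names (or suffixes and patterns) that are discarded instead of saved, so the known boilerplate images can be named up front — the list below is abbreviated from the rm line above:

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    One caveat worth hedging: when fetching page requisites, wget may still download a rejected file and delete it afterwards, which saves disk but not dial-up time — testing on a single page would confirm which behaviour this wget build gives you.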

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling:

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example when the target SSH server is hidden behind a firewall. The problem is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with runs Ubuntu 11.10 (x86_64); both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
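
    A hedged observation on that number: 1397966893 is 0x5353482D, which is the ASCII bytes "SSH-" — in other words, a second SSH identification banner arrived where binary packet data was expected. Something non-protocol is leaking into the byte stream, and the multiplexing master is one plausible culprit for writing extra output onto the ProxyCommand's stdout. Ruling it out for just the hop would look like this (standard OpenSSH options; whether multiplexing is the actual cause here is a guess):

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -o ControlMaster=no -o ControlPath=none -W %h:%p proxy-host
          ForwardAgent yes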

    Read the article

  • Windows: How to make programs think they're not running in a terminal server session?

    - by sinni800
    I am using the program "SoftXPand 2011 Duo" by Miniframe on my Windows 7 PC. It makes two workstations out of one computer, using the terminal services built into Windows to create the additional session. I use two screens, two keyboards and two mice to create this "illusion" of two computers. It works quite well, and I can even play two different 3D games on the two screens attached to this single machine (using a Radeon HD5770 and a Core i5 2500K with 8 GB of RAM). There are a few downsides, though, and I just found one that is hidden at first glance: the sessions you are in (even on the first workstation) identify themselves as terminal server sessions! Some programs will run with limited (graphical) effects, and some won't run at all — they just say "Cannot be run in a terminal server session" and exit. This also stops some games from running at all. I have already proven that top modern games (DirectX 10, 11) run just as well as on the same machine without SoftXPand, so this is a pretty artificial limitation! So, can I somehow hack my current session so it doesn't look like a terminal server session anymore? I.e., will

        #include <windows.h>
        #pragma comment(lib, "user32.lib")

        BOOL IsRemoteSession(void)
        {
            return GetSystemMetrics( SM_REMOTESESSION );
        }

    return FALSE? (Not a programming question — just an example of how programs detect whether they're running in a terminal server session!)

    Read the article

  • Accessing an internal server (e.g. 192.168.10.10) without using Remote Desktop

    - by bergin
    Hi there. My boss has an intranet he wants his employees to be able to reach from the public internet. There's a SharePoint server running on 192.168.10.10, and the SBS box can be seen from outside at a public address like 81.244.232.22. When you connect, there's a default internal SharePoint site, "companyweb", but we don't want to use that — we want the main SharePoint site, which has all the business content on it. Is this possible? Currently we have to connect to a computer via Remote Desktop, choose the server, and get in that way. Any ideas?

    Read the article

  • Can't connect to EC2 instance: Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (Ubuntu 12.01 on EC2) with my newly generated key:

        sh-3.2# ssh ec2-user@**** -v ****.pem
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to **** [****] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /var/root/.ssh/id_rsa type -1
        debug1: identity file /var/root/.ssh/id_rsa-cert type -1
        debug1: identity file /var/root/.ssh/id_dsa type -1
        debug1: identity file /var/root/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '****' is known and matches the RSA host key.
        debug1: Found key in /var/root/.ssh/known_hosts:4
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/root/.ssh/id_rsa
        debug1: Trying private key: /var/root/.ssh/id_dsa
        debug1: No more authentication methods to try.
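
    Two things stand out in that transcript, offered as a hedged diagnosis: the key file is passed as a trailing argument, which ssh ignores — the debug lines show only the default id_rsa and id_dsa identities being tried, never the .pem — and Ubuntu AMIs normally ship with an "ubuntu" login rather than "ec2-user" (the latter is the Amazon Linux convention). Something like the following ought to behave differently:

        chmod 600 ****.pem        # ssh refuses keys with overly permissive modes
        ssh -v -i ****.pem ubuntu@****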

    Read the article

  • Why is the wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don't even know how this might have happened, so I'd like to learn. I'm trying out HostGator's reseller service and I bought a domain name through it, but I didn't want the default name servers, so I changed them during registration. After registration, the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net — and it looks like the DNS network is using that same information.

        $ whois -h whois.enom.com. example.com
        ...
        Name Servers:
        dns1.name-services.com
        dns2.name-services.com
        dns3.name-services.com
        dns4.name-services.com
        dns5.name-services.com
        ...

        $ whois -h whois.crsnic.net. example.com
        Domain Name: EXAMPLE.COM
        Registrar: ENOM, INC.
        Whois Server: whois.enom.com
        Referral URL: http://www.enom.com
        Name Server: NS1.HOSTGATOR.COM
        Name Server: NS2.HOSTGATOR.COM
        Status: clientTransferProhibited
        Updated Date: 01-jun-2010
        Creation Date: 31-may-2010
        Expiration Date: 31-may-2011
        >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
        ...

        $ dig +norecurse @b.gtld-servers.net. example.com. NS
        ...
        ;; AUTHORITY SECTION:
        example.com.  172763  IN  NS  ns2.hostgator.com.
        example.com.  172763  IN  NS  ns1.hostgator.com.
        ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.

    Read the article

  • Squid3 not caching simple request and response

    - by Nick Spacek
    Hi folks, I've pared down my squid.conf to try to figure this out:

        http_port 80 accel defaultsite=host.to.cache
        cache_peer ip.to.cache parent 80 0 no-query originserver
        acl our_sites dstdomain host.to.cache
        http_access allow our_sites
        refresh_pattern . 1 20% 4320

    Requests are being proxied correctly, so that's a start. Here's a request:

        GET http://host.to.cache/path?some_param=true
        Accept: */*
        Accept-Charset: ISO-8859-1,utf-8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en
        Connection: keep-alive
        Host: host.to.cache
        User-Agent: myuseragent

    And the response:

        Connection: keep-alive
        Content-Length: 585
        Content-Type: application/xml
        Date: Thu, 06 Jan 2011 18:33:11 GMT
        Via: 1.0 localhost (squid/3.0.STABLE19)
        X-Cache: MISS from localhost
        X-Cache-Lookup: MISS from localhost:80

    The response has no caching-related headers, but I thought refresh_pattern would set a default behaviour for responses without them. For my test, I wanted to cache everything for one minute at minimum. Am I missing something obvious? I did take a peek at this question: "Squid isn't caching", and ran briefly through the page at http://www.mnot.net/cache_docs/, but didn't see anything relevant (not to say there isn't — I could have missed something). Thanks for any help.
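
    One hedged suspect: the test URL carries a query string, and stock squid configurations of that era refused to cache anything matching "cgi-bin" or "?" (via an "acl QUERY urlpath_regex cgi-bin \?" plus "cache deny QUERY", or a zero-TTL refresh_pattern for the same regex). If the running config still inherits that default from somewhere outside the pared-down excerpt, something like the following would carve out an exception — a sketch to verify against the full squid 3.0 config, not a confirmed fix:

        # ensure no "cache deny QUERY" fires first, then give query URLs the same 1-minute floor
        refresh_pattern -i \? 1 20% 4320
        refresh_pattern .     1 20% 4320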

    Read the article

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration, and to avoid a common problem I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path:

        /home/myusername/myworkspace/hptargetpath/src/pages

    Consider the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache HTTPd as follows:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I receive a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd debug log, I see the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I suspect this problem is caused by a path permission — not a direct permission on the directory, but on some intermediate directory in the path. There is a directive in httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link"), but I tested it without effect. Can somebody help me? I'd think this is a trivial configuration for a development environment. Best regards, And Past
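
    "Link target not accessible" usually points at directory traversal rights: the Apache user (www-data on Ubuntu) needs execute permission on every directory from / down to the link target, and home directories often block that. A hedged sketch for checking and fixing, using the path from above:

        # show the permission bits along the whole chain
        namei -l /home/myusername/myworkspace/hptargetpath/src/pages

        # the usual culprit on Ubuntu: the home directory itself
        chmod o+x /home/myusername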

    Read the article

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are: Gentoo 2.6.36-r5 Windows (XP/Vista/7) I work on my Windows, I use Gentoo to do all the magic Windows can't do. Therefore I use sshfs to mount the remote public directory for my domain to /mnt/mydomain.com. Authentication is done via keys, so lazy me don't have to type in my password every now and then. Since I do my coding on Windows, and I don't want to upload/download the changed files all the time, I want to access this /mnt/mydomain.com via a samba share. So I shared /mnt in samba, all mounts except mydomain.com is listed on my Windows Explorer. My theories are: sshfs does not set the mountpoint uid/gid to something that samba expects samba does not know that it has to include the uid/gid that /mnt/mydomain.com has been set. All above is wrong, and I don't know. Here are configs and output from console, need anything else just let me know. Also no errors or warnings that I take notice of being relevant to this issue, but I might be wrong. gentoo ~ # ls -lah /mnt total 20K drwxr-xr-x 9 root root 4.0K Mar 26 16:15 . drwxr-xr-x 18 root root 4.0K Mar 26 2011 .. -rw-r--r-- 1 root root 0 Feb 1 16:12 .keep drwxr-xr-x 1 root root 0 Mar 18 12:09 buffer drwxr-s--x 1 68591 68591 4.0K Feb 16 15:43 mydomain.com drwx------ 2 root root 4.0K Feb 1 16:12 cdrom drwx------ 2 root root 4.0K Feb 1 16:12 floppy drwxr-xr-x 1 root root 0 Sep 1 2009 services drwxr-xr-x 1 root root 0 Feb 10 15:08 www /etc/samba/smb.conf [mnt] comment = Mount points writable = yes writeable = yes browseable = yes browsable = yes path = /mnt /etc/fstab sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0 For an easier read: [email protected] /home/to/pub/dir/ /mnt/mydomain.com/ options: comment=sshfs noauto users exec uid=0 gid=0 allow_other reconnect follow_symlinks transform_symlinks idmap=none SSHOPT=HostBasedAuthentication Help!

    Read the article

  • How to authenticate users in nested groups in Apache LDAP?

    - by mark
    I've got working LDAP authentication with the following setup:

        AuthName "whatever"
        AuthType Basic
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://server/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=company,DC=local?sAMAccountName?sub?(objectClass=*)"
        Require ldap-group CN=MySpecificGroup,OU=Security Groups,OU=MyBusiness,DC=company,DC=local

    This works; however, I have to put every user I want to authenticate directly into MySpecificGroup. On the LDAP server I've configured MySpecificGroup to also contain the group MyOtherGroup, with another list of users — but the users in MyOtherGroup are not authenticated. I have to add them all to MySpecificGroup manually and basically can't use nested grouping. I'm using Windows SBS 2003. Is there a way to configure Apache LDAP to do this? Or is there a problem with possible infinite recursion, and it's therefore not allowed?
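
    Two hedged options, depending on what can be upgraded: Apache 2.4's mod_authnz_ldap can walk nested groups itself (with a bounded depth, so no infinite recursion), and Active Directory from Windows 2003 SP2 onward exposes a transitive-membership matching rule that pushes the recursion to the server. Both below are sketches to adapt, not tested against SBS 2003:

        # Apache 2.4: follow sub-groups up to a fixed depth
        AuthLDAPSubGroupClass group
        AuthLDAPMaxSubGroupDepth 2
        Require ldap-group CN=MySpecificGroup,OU=Security Groups,OU=MyBusiness,DC=company,DC=local

        # alternative, AD-side recursion via LDAP_MATCHING_RULE_IN_CHAIN
        Require ldap-filter memberOf:1.2.840.113556.1.4.1941:=CN=MySpecificGroup,OU=Security Groups,OU=MyBusiness,DC=company,DC=local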

    Read the article

  • Preventing DDOS/SYN attacks (as far as possible)

    - by Godius
    Recently my CentOS machine has been under repeated attack. I run MRTG, and the TCP connections graph shoots up like crazy when an attack is going on, to the point where the machine becomes inaccessible. My MRTG graph: [mrtg graph image] This is my current /etc/sysctl.conf:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0
        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1
        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0
        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 1
        # Controls whether core dumps will append the PID to the core filename
        # Useful for debugging multi-threaded applications
        kernel.core_uses_pid = 1
        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1
        # Controls the maximum size of a message, in bytes
        kernel.msgmnb = 65536
        # Controls the default maximum size of a message queue
        kernel.msgmax = 65536
        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736
        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_syncookies = 1
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_source_route = 0
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_max_syn_backlog = 1280

    Furthermore, in my iptables file (/etc/sysconfig/iptables) I only have this setup:

        # Generated by iptables-save v1.3.5 on Mon Feb 14 07:07:31 2011
        *filter
        :INPUT ACCEPT [1139630:287215872]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1222418:555508541]

    Together with the settings above, about 800 IPs are blocked via the iptables file by lines like:

        -A INPUT -s 82.77.119.47 -j DROP

    These have all been added by my hoster when I've emailed them about attacks in the past. I'm no expert, but I'm not sure this is ideal. My question is: what are some good things to add to the iptables file (and possibly other files) that would make it harder for attackers to take down my machine, without locking out legitimate, non-attacking users? Thanks in advance!
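
    A hedged starting point beyond per-IP drops — connection- and SYN-rate limiting, which degrade floods without enumerating sources. The numbers are illustrative and need tuning against your real traffic, and the connlimit match may not exist in every CentOS 5 kernel:

        # cap simultaneous connections per source address (if connlimit is available)
        iptables -A INPUT -p tcp --syn -m connlimit --connlimit-above 30 -j DROP

        # rate-limit the overall influx of new connections, drop the excess
        iptables -A INPUT -p tcp --syn -m limit --limit 25/second --limit-burst 100 -j ACCEPT
        iptables -A INPUT -p tcp --syn -j DROP

    Since tcp_syncookies is already enabled in sysctl.conf, pure SYN floods should mostly exhaust bandwidth rather than sockets; if the graphs show established connections instead, application-level limits (e.g. in the web server) would be the complementary knob.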

    Read the article

  • Symantec CPS / Backup Exec 11D Service stuck in "Starting" Status

    - by user42289
    I have two Windows 2003 servers (one SE, one SBS), both SP2, and both virtual machines under Microsoft Virtual Server 2005 R2. About two weeks ago, Symantec Backup Exec / CPS 11D suddenly stopped working on both of them. One is the media server; the other is our Exchange 2003 server. A third copy of CPS, on our file server, is running fine — but that machine is not a VM. By "stopped working" I mean the "Backup Exec Continuous Protection Agent" service is stuck in "Starting" status. On the non-Exchange server I've tried uninstalling the Windows updates that were applied around the time of failure, repairing the CPS installation, and uninstalling and reinstalling it. The problem is exactly the same in the end.
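
    When a service wedges in "Starting", one hedged diagnostic is to find its process by PID, kill it, and restart it while watching the event log — the service name below is a guess, so take the real one from the "sc query" listing first:

        sc query state= all
        sc queryex "CPSAgent"
        taskkill /F /PID <pid-from-previous-output>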

    Read the article

  • Excel workbook intermittently takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook randomly takes 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:

    - The hang is intermittent and takes exactly 30 seconds;
    - During the hang there is no CPU or disk activity;
    - It only happens during workbook load; everything runs smoothly after that;
    - Windows Explorer.exe hangs on the folder, but all other folders, the system and applications remain responsive;
    - There are no consecutive hangs — I have to wait a while to reproduce the behaviour;
    - All workbooks are located on a local drive (C:\BPI);
    - The workbook has no macros and no add-ins;
    - Office 2003 has been in use for several years;
    - The computer is running Windows XP;
    - The computer has several network mapped drives, all addressed to the main file server;
    - Recently, the main file server was replaced by Windows SBS 2011 Standard Edition.

    What I have done so far:

    - I traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1 — that is how I found the hang takes exactly 30 seconds (for further information, please refer to Oliver Salzburg's tutorial);
    - Using Process Monitor, I also found that five operations accounted for most of the sample's duration; in the sample image below, the Operation column shows one single operation taking 29 seconds;
    - I tried different workbooks (all smaller than 30 KB);
    - I temporarily removed all shortcuts in the user's Documents folder that pointed to network drives or shares;
    - I ran CCleaner to fix registry issues;
    - I made sure there were no external links in the tested workbooks;
    - I reproduced this behaviour for hours and researched the web extensively.

    Process Monitor's collected and filtered data: [image]
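
    A hedged observation on the symptom itself: an exact 30-second, zero-CPU stall is the classic signature of a network name-resolution or SMB connect timeout, which fits the recently replaced file server — Explorer and Office both touch mapped drives and recent-file paths at open time. One way to test that theory (a sketch, not a confirmed fix):

        rem list mappings and look for any still pointing at the old server name
        net use

        rem drop a suspect mapping, then re-open the workbook and time it
        net use X: /delete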

    Read the article

  • Android web browser returns code 500 for a web page on an Nginx web server

    - by Paxxil
    Hey! I've come across some very weird behaviour of the web browser on Android mobile phones (I've tried HTC Wildfire and HTC Desire handsets). I have a web server running Nginx v0.8.54. When I try to open a web page on the phone, it shows me the error:

        The requested item could not be loaded! (Status code: 500)

    BUT it only happens when I request the page over the mobile network; on wifi it works just fine. And there is more: if I stop Nginx and start Apache instead, it works fine on both the mobile network and wifi. I've also tried another mobile network, and the behaviour is the same. Some server facts:

    - The firewall is off;
    - SELinux is off;
    - The web page (served by Nginx) opens normally in any other browser (IE, FF, Opera, Chrome, Safari) on a laptop or PC;
    - There is nothing in Nginx's error.log;
    - This is the only entry in access.log when the page is requested:

        xxx.xxx.xxx.xxx - - [17/Mar/2011:11:19:49 -0500] 200 "GET / HTTP/1.1" 27405 "-" "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-"

    - index.html contains only a "Hello World" string; there is no fishy JavaScript or anything else.

    If I open the same page on another server, with the same Nginx build and the same server and web server configuration, it opens just fine. If anyone has any idea what may be going on, I would really appreciate hearing it. Thanks! EDIT: I forgot to mention that the page opens OK on an iPhone and a Nokia.
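
    One hedged lead: the access.log line shows Nginx answering 200, so the 500 the phone displays is apparently manufactured somewhere between the server and the handset — mobile carriers commonly interpose transparent proxies that can choke on response framing they dislike. Replaying the exact request from a shell and diffing the Nginx and Apache responses (headers, chunking, Content-Length) might show what the proxy objects to:

        curl -sv -o /dev/null \
             -A "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" \
             http://your.server/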

    Read the article

  • Which events specifically cause Windows 2008 to mark a SAN volume offline?

    - by Jeremy
    I am searching for the specific criteria/events that will cause Windows 2008 to mark a SAN volume as offline in Disk Management, even though the host is still connected to that volume via FC or iSCSI. Microsoft states that "A dynamic disk may become Offline if it is corrupted or intermittently unavailable. A dynamic disk may also become Offline if you attempt to import a foreign (dynamic) disk and the import fails. An error icon appears on the Offline disk. Only dynamic disks display the Missing or Offline status." I am specifically wondering whether changing the path to the disk on the SAN (such as presenting the disk to the host via a different iSCSI target IQN or a different LUN number) would cause a volume to be offlined in Disk Management. Thanks!

    Edit: I have already found two documented reasons why a disk might be set offline — disk signature collisions and the SAN disk policy. The bounty goes to whoever can find further documented reasons related to changes in the volume's path.

    - Disk signature collisions: http://blogs.technet.com/b/markrussinovich/archive/2011/11/08/3463572.aspx
    - SAN disk policy: http://jeffwouters.nl/index.php/2011/06/disk-offline-with-error-the-disk-is-offline-because-of-a-policy-set-by-an-administrator/
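
    For reference, the SAN disk policy mentioned in the edit can be inspected and changed from diskpart — a sketch (exact policy names and defaults vary by Windows edition):

        diskpart
        DISKPART> san
        SAN Policy  : Offline Shared
        DISKPART> san policy=OnlineAll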

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large, busy vBulletin forum running on the GigeNET cloud; the database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow, according to iotop, and slowing the site. The last idea I can think of is replication, but I'm not sure how much that would help, and I'm worried about keeping the databases in sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs:

    - Debian Lenny 64bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk
    - Kernel: 2.6.32-4-amd64
    - mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software (PHP and vBulletin are configured to use memcached):

    - vBulletin 3.8.4
    - memcached 1.2.2
    - PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
    - lighttpd/1.4.28 (ssl) — a light and fast webserver

    MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Edit — additional info:

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
        r  b   swpd   free   buff   cache   si  so  bi  bo  in  cs  us sy id wa
        9  0  73140  36336   8968  1859160   0   0  42  15   3   2   6  1 89  5

        > /etc/init.d/mysql status
        Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573  Flush tables: 1  Open tables: 337  Queries per second avg: 61.302
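
    A hedged tuning sketch: the config above sizes only the MyISAM key buffer and shows no InnoDB settings, so a 10 GB database will hit disk constantly. If the hot tables are (or can be converted to) InnoDB, giving the buffer pool most of the machine's spare RAM usually moves the workload from disk to memory — the values below are placeholders to scale to what the box actually has free:

        [mysqld]
        innodb_buffer_pool_size        = 4G       # the single most important knob
        innodb_flush_log_at_trx_commit = 2        # trades ~1s of durability for far fewer fsyncs
        innodb_flush_method            = O_DIRECT # avoids double-buffering through the page cache

    If the tables must stay MyISAM, raising key_buffer and working through the 164 slow queries via the slow query log would be the equivalent first steps.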

    Read the article
