Search Results

Search found 3177 results on 128 pages for 'david mannock'.

Page 32 of 128

  • WebDAV, SVN, Apache2 on OS X 10.6.2

    - by David
    I'm trying to configure an SVN server on OS X 10.6.2 via Apache2. Whenever I enable the module in Web - Settings - Services and then try accessing the site, it fails. So I attempted an apachectl configtest and I'm getting a crazy amount of errors: httpd: Syntax error on line 132 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_dav_fs.so into server: dlopen(/usr/libexec/apache2/mod_dav_fs.so, 10): Symbol not found: _dav_add_response\n Referenced from: /usr/libexec/apache2/mode_dev_fs.so\n Expected in: flat namespace\n in /usr/livexec/apache2/mod_dav_fs.so When I attempt disabling the related services and enabling them one by one (dav_fs_module, dav_module, dav_svn_module and authz_svn_module), each seems to fail. What the heck. :-( Thanks in advance, Dave
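
    Since the missing symbol _dav_add_response is provided by mod_dav itself, this error pattern usually points at mod_dav_fs.so being loaded before (or without) mod_dav.so. As a sketch only, assuming the stock Apple Apache layout (exact paths may differ on your install), the LoadModule ordering in /private/etc/apache2/httpd.conf would need to look roughly like this, with each module loaded after the ones it depends on:

        # mod_dav first, then the modules that depend on it
        LoadModule dav_module       libexec/apache2/mod_dav.so
        LoadModule dav_fs_module    libexec/apache2/mod_dav_fs.so
        # the Subversion modules in turn depend on mod_dav
        LoadModule dav_svn_module   /usr/libexec/apache2/mod_dav_svn.so
        LoadModule authz_svn_module /usr/libexec/apache2/mod_authz_svn.so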

    Read the article

  • Can't Delete Old Windows Directory

    - by David Mullin
    I got a new SSD drive for my computer, and have installed Windows on this drive. This left an old Windows directory on my old normal drive. I am now attempting to delete this old Windows directory, but am getting blocked by security. If I crawl down into each subdirectory, I can manually change the ownership and access rights for each file, but if I attempt to do it from the root directory, I get a "Failed to enumerate objects in the container. Access is denied" error. I have tried logging in as local Administrator, but this had the same effect. I figure that I am missing something stupid, but I just can't determine what it is.
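
    If the goal is simply to take ownership and reset permissions for the whole tree in one pass instead of per-file, the usual command-line approach is a sketch like the one below, run from an elevated Command Prompt; the path D:\Windows is an assumption standing in for wherever the old directory actually lives:

        takeown /F D:\Windows /R /D Y
        icacls D:\Windows /grant Administrators:F /T
        rd /S /Q D:\Windows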

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    Hi folks, when attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the following error: '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()); when I first encounter this exception, creating a new connection by calling MySQLdb.connect() also fails with the same exception. The error does not occur immediately; I can pump a fair amount of data into the db, but it faithfully occurs after I have inserted a couple thousand records, so roughly once the db has committed a certain transaction volume it always falls over like this. If I rerun the script WITHOUT restarting the db server, it resumes where it left off, pumps in some data, then falls over again. Before anyone recommends changing time-out settings: does anyone know why I am not able to establish a new connection after the initial failure, even if I try a couple of times, waiting a couple of seconds between attempts? (By the way, I'm running Windows 7, MySQL Server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6.)
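
    Two things worth noting. Error 2006 during bulk inserts is frequently the server dropping the connection because a single statement exceeded max_allowed_packet, and once that happens the old connection object is dead for good. The sketch below (host, credentials and table names are placeholders, not taken from the question) shows the reconnect-and-retry pattern with a fresh connection per attempt; if a brand-new connection still fails instantly, that suggests the failure is happening at connect time rather than on the insert itself:

        import time
        import MySQLdb

        def get_conn():
            # placeholder connection parameters
            return MySQLdb.connect(host="localhost", user="me", passwd="secret", db="mydb")

        def insert_batch(rows, retries=3):
            for attempt in range(retries):
                conn = get_conn()
                try:
                    cur = conn.cursor()
                    cur.executemany("INSERT INTO mytable (a, b) VALUES (%s, %s)", rows)
                    conn.commit()
                    return
                except MySQLdb.OperationalError as e:
                    # 2006 = "MySQL server has gone away"; re-raise anything else
                    if e.args[0] != 2006 or attempt == retries - 1:
                        raise
                    time.sleep(2)  # give the server a moment before reconnecting
                finally:
                    conn.close()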

    Read the article

  • Hardware for a home server running Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2

    - by David Hayes
    Hi, I'm planning to build a server to do the following:
    - Act as a file server (videos, pictures, music)
    - Run Squeezebox Server
    - Run the Zune software to allow wireless syncing to Windows Phone 7
    I'd also like to aim for:
    - Low power usage (I'd settle for less than the 90-100 watts I'm using at the moment)
    - Flexibility: I might want to add a web server or SharePoint or...
    - Something I can learn/test on; work is mainly a Windows shop but I do have Linux experience too
    - A look at App-V (application virtualization)
    - A total cost of less than $1000
    - Quiet would be nice but not essential (it'll be in the basement)
    I'm thinking of getting a TechNet subscription to get access to Windows Server 2008 R2 at a reasonable price ($199). So my plan was this:
    - Get a bunch of 2TB Caviar Green drives to RAID up (RAID 1 or 6, probably)
    - Get a quad-core CPU (Intel i5/i7, probably)
    - Install a hypervisor
    - Install W2K8 R2 Storage Server for a NAS
    - Install Windows 7 Pro to run Zune/Squeezebox
    - Install any other machines I want to play with
    Questions: Can anyone see any issues with this or have any better ideas? Do you think I'd need an i7 over an i5? Is 4 cores enough/too much? Can anyone suggest a nice, reasonably priced case that will hold 6-8 drives and stay cool? Should I wait for Sandy Bridge parts?

    Read the article

  • Internal disk not correctly recognised by Windows 7

    - by david
    I'm having problems configuring a disk in a brand new, clean Windows 7 install. Here are some system specifics:
    - Disk: Western Digital VelociRaptor WD6000HLHX
    - Motherboard: Gigabyte Z77X-UD3H
    - BIOS SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s SATA)
    - Windows 7 Enterprise SP1 x64
    The disk is recognized by the BIOS and correctly identified (name and size OK). The disk is also recognized by Windows at the hardware level, but it won't show up in Explorer. Windows reports the device is working correctly. Windows Disk Management shows the drive, but says it's uninitialized and has no partitions (which is incorrect). If I try to initialize the drive, Windows throws an error saying that it "cannot find the file specified" (which file?). Before connecting the drive to the new machine, I partitioned and formatted the disk under Windows XP SP2, giving it 2 partitions (MBR, not GPT) and copying over a boatload of data. Obviously none of this appears under Windows 7. Removing the disk from the new machine and placing it back in the Windows XP machine shows the disk and all data are intact and functional. I'd like to have Windows 7 recognize the disk without having to lose the data and start over. Is this possible? If so, how would I do it? I checked this post, but even though the problem seems identical, the information didn't help. Any help appreciated. Thanks!
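
    As a purely diagnostic sketch (it does not write anything to the disk), diskpart from an elevated prompt shows how Windows 7 itself reads the drive's partition table, which helps distinguish a driver/controller issue from a disk that genuinely looks blank to the OS; the disk number 1 here is an assumption taken from whatever Disk Management reports:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> detail disk
        DISKPART> list partition
        DISKPART> exit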

    Read the article

  • What's the performance on USB docking stations, and can they be used when laptop is closed?

    - by David
    I'm looking into a docking station for a Dell Studio laptop. I don't see the traditional docking stations I'm familiar with - the kind (for a Dell Latitude, for example) where you sit the laptop on top of a long row of pins. Instead, I'm seeing a lot of USB docking stations. When I close my laptop, I want it to go into sleep mode. If I then connect a USB docking station to the laptop while it's closed, will it wake up? What's the performance on USB 2.0 docking stations with a new Dell Studio? Can all of the video and internet traffic really go through a USB 2.0 connection while still providing the best video frame rates and internet speeds? When you undock, I assume you'd have to use the "Safely Remove Hardware" feature in Windows. Will that successfully 'remove' everything attached to the docking station - external drives, thumb drives, etc?

    Read the article

  • SAS vs Near-line SAS vs SATA

    - by David
    I'm unsure about the differences between these storage interfaces. My Dell servers all have SAS RAID controllers in them and they seem to be cross-compatible to an extent. The Ultra-320 SCSI RAID controllers in my old servers were simple enough: one type of interface (SCA) with special drives and special controllers, humming along at 10-15K RPM. But these SAS/SATA drives seem like the drives I have in my desktop, only more expensive. Also, my old SCSI controllers have their own battery backup and DDR buffer; neither of these is present on the SAS controllers. What's up with that? "Enterprise" SATA drives are compatible with my SAS RAID controller, but I'd like to know what advantage SAS drives have over SATA drives, as they seem to have similar specs (but one is a lot cheaper). Also, how do SSDs fit into this? I remember when RAID controllers required HDDs to spin at the same rate (as if the controller card supplanted the controller in the drive), so how does that work now? And what's the deal with near-line SAS? I apologise for the rambling tone in this message; it's 5am and I haven't slept much.

    Read the article

  • How come the ls command prints in multiple columns on tty but only one column everywhere else?

    - by David Lou
    Even after using Unix-like OSes for a couple of years, this behaviour still baffles me. When I use the ls command in a directory that has lots of files, the output is usually nicely formatted into multiple columns. Here's an example:
        $ ls
        a.txt  C.txt  f.txt  H.txt  k.txt  M.txt  p.txt  R.txt  u.txt  W.txt  z.txt
        A.txt  d.txt  F.txt  i.txt  K.txt  n.txt  P.txt  s.txt  U.txt  x.txt  Z.txt
        b.txt  D.txt  g.txt  I.txt  l.txt  N.txt  q.txt  S.txt  v.txt  X.txt
        B.txt  e.txt  G.txt  j.txt  L.txt  o.txt  Q.txt  t.txt  V.txt  y.txt
        c.txt  E.txt  h.txt  J.txt  m.txt  O.txt  r.txt  T.txt  w.txt  Y.txt
    However, if I try to redirect the output to a file, or pipe it to another command, only a single column appears in the output. Using the same example directory as above, here's what I get when I pipe ls to wc:
        $ ls | wc
             52      52     312
    In other words, wc thinks there are 52 lines, even though the output to the terminal has only 5. I haven't observed this behaviour in any other command. Can someone explain this to me?
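
    For what it's worth, this is ls checking whether its standard output is a terminal: on a tty it formats names into columns, but when the output is a pipe or a file it falls back to one name per line, which is exactly what wc ends up counting. A quick way to see both behaviours explicitly, using standard ls options, is a sketch like:

        $ ls -1          # force one entry per line, even on a terminal
        $ ls -C | less   # force column output even though it is being piped
        $ ls | cat       # piped: one entry per line, the same thing wc sees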

    Read the article

  • DSL Modem with Wireless Router

    - by David
    I have a D-Link WBR-1310 wireless router and a TP-Link TD-8616 DSL modem. My old DSL modem died recently and I got the TP-Link as a replacement. With my old DSL modem, I plugged it into the WAN port on my D-Link and I could reach the internet through wireless and through the network. However, when I plugged the new TP-Link into the WAN port, I was not able to get any internet connectivity (either on the network ports or through wireless). So I plugged my laptop directly into the TP-Link DSL modem and I was able to get internet connectivity. I'm trying to figure out why my laptop can see the internet connection, but not the D-Link router. I think that the problem is due to the IP networking. My D-Link was originally set to have IP address 192.168.1.1. According to the documentation for the TP-Link DSL modem, it also uses 192.168.1.1 as its IP address. I do not believe that my old DSL modem had an IP address. I logged into my D-Link router and changed its IP address to 192.168.1.2 and restarted it. Unfortunately, I still could not see the internet from my wireless devices. I've read a few forum postings which implied that I needed to set up a "bridge" between the two networks. Does that sound correct? Why didn't my old DSL modem require a bridge? I read pp. 12-13 of my D-Link's manual, and they suggest that I need to disable UPnP and DHCP, and then plug the DSL modem into one of the LAN ports on my router. I'm concerned about doing this since I don't think that the firewall will work if I plug my DSL modem into one of the LAN ports. I also have a home NAS on my network and I wouldn't want that to be available over the internet. Does anyone have any advice about how I can get my TP-Link DSL modem to work with my D-Link router? Thanks!

    Read the article

  • Install Oracle Driver and TNS for Windows XP?

    - by David.Chu.ca
    I am building a box with Windows XP and some applications. One application requires a connection to an Oracle database on a remote server. I have installed OracleXEClient.exe from the Oracle download site. The installation does install the "Oracle Provider for OLE DB" driver. My problem is that I still cannot make connections to the remote Oracle db. The test I have done is to create a UDL file using the Oracle OLE DB provider connection. The error message is: --------------------------- Microsoft Data Link Error --------------------------- Test connection failed because of an error in initializing provider. ORA-12154: TNS:could not resolve the connect identifier specified I think I may be missing TNSNAMES.ora on the box. I can find this file on another box where the Oracle connection works fine. I am not sure what package I should install (from Oracle) so that a default TNSNAMES.ora is installed along with the related files, and so that the path for locating the TNS file is set up?
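
    If no package puts a tnsnames.ora in place, a hand-written one also works; the sketch below is only an illustration (the alias, host, port and service name are placeholders for your remote database). It can live in the client's network\admin directory, or in any directory pointed to by a TNS_ADMIN environment variable, and the alias on the left is what goes into the UDL's data source field:

        REMOTEDB =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
            )
          )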

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data have been sent. For example, I've got a view which looks like this:
        def debug_big_file(request):
            return HttpResponse("x" * 500000)
    But when I access that URL through nginx, I only get 65283 bytes:
        $ curl https://example.com/debug/big-file | wc
        …
        curl: (18) transfer closed with outstanding read data remaining
             0       1   65283
    Note that everything works as expected when accessing gunicorn directly:
        $ curl http://localhost:1234/debug/big-file | wc
        …
             0       1  500000
    The relevant nginx config:
        location / {
            proxy_pass http://localhost:1234/;
            proxy_redirect off;
            proxy_headers_hash_bucket_size 96;
        }
    The nginx version is 1.7.0. Some other facts:
    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283).
    - 110k bytes are sent correctly (i.e., "x" * 110000 returns all 110,000 bytes), but 120k bytes are not.
    - tcpdump suggests that nginx is sending a RST packet to gunicorn.
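
    One common cause of responses being cut off at roughly the combined size of the proxy buffers is nginx being unable to spool the rest of the upstream response to its proxy temp directory; a permission error there shows up in error.log while the request is reproduced. As a sketch only, with illustrative sizes rather than recommended values, the directives involved look like this:

        location / {
            proxy_pass        http://localhost:1234/;
            proxy_redirect    off;

            # anything that does not fit in these buffers is written to
            # proxy_temp_path, which the worker process must be able to write to
            proxy_buffer_size 16k;
            proxy_buffers     32 16k;
            # proxy_buffering off;   # alternative: stream straight to the client
        }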

    Read the article

  • How do I fix libdispatch problem crashing Mac OS X apps?

    - by david-ocallaghan
    In the last day I have started having a lot of brokenness on my Mac (MacBook Air running Mac OS X 10.6.2 with all software updates). Most noticeably, iTunes no longer syncs with my iPhone. It fails with a crash dialog reporting "AppleMobileDeviceHelper quit unexpectedly" and an error dialog "iTunes was unable to load dataclass information from SyncServices. Reconnect or try again later." I've attempted the fix at support.apple.com/kb/HT1747 but it failed. I've also been having problems (at first seemingly unrelated) with the horrible Cisco VPN client, which started giving me this error: Error 51: Unable to communicate with the VPN subsystem I followed the steps at www.anders.com/cms/192/CiscoVPN/Error.51:.Unable.to.communicate.with.the.VPN.subsystem which don't seem to work for me, although I can connect if I use the command line with sudo: sudo vpnclient connect MyProfile I had a look in the Console app at the diagnostic messages and I noticed a pattern: a number of apps were reporting "BUG IN CLIENT OF LIBDISPATCH". The affected programs are: AppleMobileBackup AppleMobileDeviceHelper Safari Webpage Preview Fetcher cvpnd (the Cisco VPN daemon) Of these, only the last is non-Apple software! The common text in the diagnostic messages is: Exception Type: EXC_BAD_INSTRUCTION (SIGILL) Exception Codes: 0x0000000000000001, 0x0000000000000000 Crashed Thread: 1 Dispatch queue: com.apple.libdispatch-manager Application Specific Information: BUG IN CLIENT OF LIBDISPATCH: Do not close random Unix descriptors I'm beginning to wonder if there's a permissions problem, or corruption of an important library... I should note that I've rebooted several times and verified the disk permissions and the disk. Any help would be great!

    Read the article

  • How can I expire non-active sessions on my Netscreen SSG140?

    - by David Mackintosh
    I have a Juniper Netscreen SSG-140. While experimenting with a VoIP service, I defined a custom policy that was to be used to permit the possible ports in use to be sent back to the VoIP server from systems connecting across the internet. Because I'd had problems in the past with VoIP systems getting broken when their UDP sessions were expired out faster than their keep-alives were generated, I set the timeout on this custom service to be 'never'. After much experimentation, I happened to notice that my session count on the firewall has grown from a couple thousand to over 36000. After discussion with the VoIP "expert", I set the timeout to be 30 minutes; however, all the sessions set up during the experimentation process are still there, more than 3 days later. Is there a way I can force these old sessions to get expired and removed from the session table, or am I looking at resetting my firewall? (Both firewalls, actually -- they are in a cluster.)
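
    If the firmware supports it, the ScreenOS CLI can usually drop matching entries from the session table without a failover or reboot. This is a sketch from memory rather than verified syntax (and the port is just an example for the VoIP service), so it is worth checking the available filters with `get session ?` and `clear session ?` on the SSG-140 first:

        get session dst-port 5060
        clear session dst-port 5060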

    Read the article

  • How can I get Thunderbird to automatically move messages?

    - by David Heffernan
    I have Thunderbird 15. I'd like to automatically move messages from one folder to another. My mail account is an IMAP account. My Blackberry is also connected to the account and when it sends mail, it places a copy on the IMAP server in a folder named Sent Items. I'd like those messages to be moved to my Inbox automatically. By default message filters are only applied automatically to the Inbox. There is an extension to do this, Filter Subfolders, but it's only for TB3. What I have tried so far is: Use the FiltaQuilla add-on to be able to filter messages for folder name. Set the string property mail.server.default.applyIncomingFilters to true. As recommended here: http://blog.mozilla.org/bcrowder/ But I can't get these filters to run automatically. I have a suspicion that filters only run automatically for incoming mail. And these are sent items. Perhaps that's it. I just don't know. On the other hand, if I run the filters manually on that folder, it does indeed move the mail. Or perhaps the issue is that these messages are saved into the Sent Items folder marked as read. Is it possible that filters are only automatically applied to unread items? If I could install an add-in that automatically ran the message filter on my folder, that would do it. Anyway, I'm at a loss now. Any suggestions are welcome. I'm not at all wedded to using filters. I just want to find a way to get these messages moved without human interaction!

    Read the article

  • FTPS SSH Host Key after IP Address Change

    - by David George
    I have a Secure FTP (FTPS) server that my remote sites upload files to daily via scripted routines. I have had issues in the past when upgrading hardware and deploying new servers, causing the RSA fingerprint to change for that server. Then none of my remote sites can connect until I have the old key removed (usually via ssh-keygen -R myserver.com). I now have to change the IP address for myserver.com, and I wondered if there is any way to proactively generate new host keys so that when the server address changes, all my FTPS client remote sites don't break?
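
    One thing that may help frame this, assuming the service is actually SFTP/SSH under the hood (as the ssh-keygen reference suggests): the server's host keys live in files like /etc/ssh/ssh_host_rsa_key and are not tied to the IP address at all, so changing the address does not change the key. What breaks is each client's known_hosts entry, which may be cached against the old name or IP. Assuming OpenSSH-style clients (the IP below is a placeholder), the per-client cleanup and pre-seeding is a sketch like:

        # drop whatever is cached for the old name/address
        ssh-keygen -R myserver.com
        ssh-keygen -R 203.0.113.10

        # pre-seed known_hosts with the server's current key at the new address
        ssh-keyscan -t rsa myserver.com >> ~/.ssh/known_hosts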

    Read the article

  • xp vpn client dns issue

    - by David Archer
    Hi all, I have a problem with DNS when connected to my work VPN. For ease of explanation I'll use the following in my outline of the problem:
    - name of my machine on the work network: REMOTE_XP (original, I know)
    - IP of my machine on the work network: 192.168.2.80
    - name of my machine on my local network: LOCAL_XP
    - IP of my machine on my local network: 10.0.0.3
    What I want to be able to do when connected to the VPN:
    - browse the internet from LOCAL_XP
    - ping REMOTE_XP by name
    So far it seems I can have one or the other, but not both. If I go to my VPN connection's network properties (on LOCAL_XP) and uncheck "use default DNS on remote network", then I can browse the internet from my local machine but can't ping REMOTE_XP (though I can ping 192.168.2.80). If I check "use default DNS...", then I can ping REMOTE_XP but can't browse the internet from LOCAL_XP. Is there a way I can have my DNS cake and eat it, or will I have to accept that it will be an either/or situation? Thanks in advance.
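
    For a single machine name, one low-tech workaround is a static hosts-file entry on LOCAL_XP, so the name resolves locally regardless of which DNS servers the VPN hands out. A sketch (it only helps for names listed explicitly, and it assumes the remote IP stays stable):

        # add to C:\WINDOWS\system32\drivers\etc\hosts on LOCAL_XP
        192.168.2.80    REMOTE_XP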

    Read the article

  • Apache and mod_authn_dbd DBDPrepareSQL error

    - by David Brown
    I am running Apache 2.2.17 on Suse Linux 11. I have installed the mod_authn_dbd module to allow authentication using a MySQL database. Simple digest authentication works absolutely fine using the following directive: AuthDBDUserRealmQuery "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s" However, I would like to both generalize this statement and prepare it for performance and security reasons. Thus, I used the following directives instead: DBDPrepareSQL "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s" digestLogin AuthDBDUserRealmQuery digestLogin These directives generate the following errors: [...] [error] (20014)Internal error: DBD: failed to prepare SQL statements: [...] [error] (20014)Internal error: DBD: failed to initialise Why does my actual SQL statement work when used directly, but not when I try to prepare it? (Note I have tried using '?' and '%%' in place of '%', to no effect.)
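
    One quick thing to rule out is whether MySQL itself will prepare that exact statement, since mod_dbd prepares it when it opens its connections and only reports a generic failure. A check in the mysql client, using MySQL's own ? placeholders, would look like this sketch (table and column names taken from the question):

        PREPARE stmt FROM 'SELECT loginPassword FROM login WHERE loginUser = ? AND loginRealm = ?';
        DEALLOCATE PREPARE stmt;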

    Read the article

  • Cannot access Tomcat application remotely, but can access Apache applications

    - by David Keaveny
    I am installing Atlassian's Confluence 4.2 on a clean Ubuntu 12.04 server. Confluence runs on Tomcat 6, and uses PostgreSQL 9.1 as its datastore. I've installed and configured phpPgAdmin to manage PostgreSQL, and Zentyal to manage the server generally. Both these applications use Apache. The problem that I am experiencing is that while I can access phpPgAdmin and Zentyal without problem from a remote PC, I can only access Confluence when running locally (either specified by localhost, IP address or host name). Instead I get an HTTP 502 Connection Failed error. By way of experimentation, I also installed Ajenti, which appears to use lighttpd rather than Apache or Tomcat, and it too works fine when connected to locally, but gives me the same HTTP 502 error when connected to remotely. So applications served from Apache work fine, but applications served from other services do not - does that ring a bell with anyone? It's been over 10 years since I last sysadmin'ed a Linux box, so I'm more than a little rusty!
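
    Given that the Apache-served apps work remotely while the Tomcat and lighttpd ones don't, two quick things to check on the server are whether those services are listening on all interfaces (rather than only 127.0.0.1) and whether Zentyal's firewall has rules allowing their ports from the internal network. A diagnostic sketch, assuming Confluence's default port 8090 (adjust if it was changed during setup):

        sudo netstat -tlnp | grep 8090
        sudo iptables -L -n | grep 8090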

    Read the article

  • How to configure Farsun USB barcode scanner to not auto-trigger

    - by David Grayson
    At my company we have several USB barcode scanners. I'm not sure exactly what model they are, but I think they are the FG9800 from Farsun because that's what they look like on the exterior. They came with a programming manual that is very similar to this document from the Farsun website. When I scan the "Output Firmware Version" barcode, my scanner types the following into the computer: Farsun V2.00 2011-01-01 Is it possible to configure these scanners so they only read barcodes in response to the trigger button being pressed? I don't want them to automatically read barcodes. Additionally, I want this setting to be remembered while the scanner is turned off. Since this scanner only has a USB port, the only way to configure it that I know of is to scan bar codes from the manual (or make your own). I have tried scanning the configuration bar codes for Single Scan (013300), Single Scan No Trigger (013301), and Laser/CCD Timeout - 5 Seconds (0134005) from this document. Sometimes (but not often) this puts the scanner in to the right mode, where it only scans when the button is pressed. Unfortunately, the scanner seems to always leave this mode when it is power cycled. I have also scanned the "Reset Configuration To Defaults" barcode (0B) many times. We have three different scanners like this and I have not been able to successfully configure any of them. If the things I want are not possible with these Farsun-based scanners, is there some other scanner we can use?

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and can not be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!

    Read the article

  • o3d javascript uncaught referenceerror

    - by David
    Hey, I'm new to JavaScript and am interested in creating a small o3d script:
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Test Game Website</title>
        </head>
        <body>
        <script type="text/javascript" src="o3djs/base.js"></script>
        <script type="text/javascript" id="myscript">
        o3djs.require('o3djs.camera');
        window.onload = init;
        function init(){
            document.write("jkjewfjnwle");
        }
        </script>
        <div align="background">
        <div id="game_container" style="margin: 0px auto; clear: both; background-image: url('./tmp.png'); width: 800px; height:600px; padding: 0px; background-repeat: no-repeat; padding-top: 1px;"></div>
        </div>
        </body>
        </html>
    The browser can't seem to find o3djs/base.js in this line:
        <script type="text/javascript" src="o3djs/base.js"></script>
    and gives me an uncaught ReferenceError at this line:
        o3djs.require('o3djs.camera');
    Obviously that's because it can't find o3djs/base.js. I have installed the o3d plugin from Google and they say that should be it. I've tried Firefox, IE and Chrome. Thanks
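
    Worth noting that the ReferenceError is just a knock-on effect of the failed script load: the src is a relative path, so the o3djs directory has to sit next to the HTML page as served by the web server, and installing the O3D browser plugin by itself does not make the o3djs utility library available to the page (it ships with the O3D samples/SDK). A quick way to confirm is to point the src at wherever the library actually lives; the path below is only an example:

        <!-- adjust the path so that it resolves to your copy of base.js -->
        <script type="text/javascript" src="/scripts/o3djs/base.js"></script>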

    Read the article

  • VPN Device behind router/firewall

    - by David Steven
    ROUTER A: Peplink 310 serving as the gateway/router/firewall at one location. ROUTER B: Linksys RV082 serving as the gateway/router/firewall at another location. I want to VPN these two locations together. The Peplink has a PPTP server and proprietary site-to-site VPN if you have another Peplink device. The Linksys has an IPsec VPN server. VPN A: I also have another spare Linksys RV082. I'm trying to set up the other RV082 (VPN A) behind the Peplink (ROUTER A) and get VPN A to talk to ROUTER B. I set up VPN A with a LAN IP address and plugged one of its LAN ports into the LAN. I was able to get to its web interface fine. On ROUTER A I one-to-one NAT mapped one of our public IPs to the LAN IP for VPN A. I opened TCP 50-51 and UDP 500 to VPN A. I configured the VPN settings on VPN A to connect to ROUTER B. I did the opposite for ROUTER B. But the VPN doesn't connect. Then I tried plugging VPN A's WAN port into the LAN and gave it another LAN IP. I thought perhaps VPN A didn't want to send VPN traffic out over its LAN ports and wanted to send it over its WAN. The VPN still doesn't connect. Is what I'm trying to do even possible?

    Read the article

  • Configure VirtualHost to Rewrite HTTP://subdomain... to HTTPS://internaldirectory

    - by David Kaczynski
    How do I configure Apache to rewrite an http request for a subdomain to an https request for the correct directory? For example, I have the following VirtualHost configuration: However, this turns http://redmine...us into https://redmine...us/redmine. Also, changing RewriteRule ^(.*)$ https://%{HTTP_HOST}/redmine [R] to RewriteRule ^(.*)$ https://%{HTTP_HOST} [R] seems to simply redirect the HTTP request to HTTP://...us, which is currently the default /var/www/index.html page. Any suggestions?

    Read the article

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in english as I run Chrome with swedish localization). I do however have two problems. My first problem is that when I'm visiting one of my local news paper's site (and surely other), cookies from www.facebook.com is allowed for some reason. I suspect that the reason is that I have added an exception to the www.facebook.com domain but as the setting "block all third party cookies without exception" implies, that shouldn't matter. My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my white list. Primarily from ad services. My expectations from enabling the above mentioned settings was that only cookies that fulfill the two folling requirements would be accepted: the cookies must be from the domain in my address bar the cookies must be from a domain on my whitelist Apparently this isn't the case. The question is, have I completely misunderstood the settings or is this a bug? And, either way, is there a way to accomplish my desired behavior?

    Read the article
