Search Results

Search found 4073 results on 163 pages for 'hosts deny'.

Page 122 of 163

  • Keep local MS SQL 2008 DB table and remote SQL Azure DB table in sync

    - by Boomerangertanger
    Hi there, I have a dedicated server which hosts a Windows Service that does a lot of very heavy-load work and populates a number of SQL Server database tables. Of all the database tables it populates and works with, I want only one to be synchronised with a remote SQL Azure DB table. This is because this table holds what I call Resolved data, which is the end result of the Windows Service's work. I would like to keep a SQL Azure database table in sync with this database table. As far as I understand, my options are: move everything onto Azure (but that involves a massive development overhead and risk), or have another Windows Service on the dedicated server which essentially looks at records changed since the last update and then manually updates the SQL Azure table.
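    A minimal sketch of the second option (not the author's code): assuming the Resolved table has, or can be given, a LastModified datetime column and an Id key, a small job can push rows changed since the previous run up to SQL Azure. Table names, columns and connection strings below are placeholders.

      # Hedged sketch: incremental one-way push of changed "Resolved" rows via pyodbc.
      # The LastModified column, Id key and Payload column are assumptions, not facts
      # taken from the question.
      import pyodbc

      def sync_resolved(local_cs, azure_cs, last_sync):
          src = pyodbc.connect(local_cs)
          dst = pyodbc.connect(azure_cs)
          rows = src.execute(
              "SELECT Id, Payload, LastModified FROM Resolved WHERE LastModified > ?",
              last_sync).fetchall()
          for r in rows:
              # upsert each changed row on the Azure side
              dst.execute(
                  "MERGE Resolved AS t USING (SELECT ? AS Id, ? AS Payload) AS s "
                  "ON t.Id = s.Id "
                  "WHEN MATCHED THEN UPDATE SET Payload = s.Payload "
                  "WHEN NOT MATCHED THEN INSERT (Id, Payload) VALUES (s.Id, s.Payload);",
                  r.Id, r.Payload)
          dst.commit()
          return max((r.LastModified for r in rows), default=last_sync)

    Before hand-rolling that second service, the hosted options of the time (SQL Azure Data Sync, or the Microsoft Sync Framework) are worth evaluating for the same one-table job.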

    Read the article

  • Cisco IOS ACL: Don't permit incoming connections just because they are from port 80

    - by cjavapro
    I am going largely from memory here and may not be correct on all of this. This is on a Cisco 851 (IOS) that uses a BVI or a bridge-route (the servers on the inside are configured with static, public IP addresses). I would apply two access lists (both ending with deny ip any any log) on FastEthernet4 (the WAN port): one for FA4 in and another for FA4 out. FA4 out would have a line like access-list 110 permit 98.76.54.0 0.0.0.255 gt 1023 any eq http I think this means hosts in 98.76.54.* with a source port of at least 1024 can connect to any other machine on destination port 80. So then I have to allow the response to the HTTP connection. FA4 in would have a line like access-list 120 permit any eq http 98.76.54.0 0.0.0.255 gt 1023 Now the problem with that is that anybody on the outside can set their source port to 80 and then connect to any inside port that is at least 1024. How do we prevent this and require the incoming data to be a response to the outgoing data?
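    A hedged sketch of the usual fix, using the same (memory-based) addressing as the question: extended ACL entries that match ports need the tcp keyword, and adding established to the inbound line makes it match only segments with ACK or RST set, i.e. replies to connections opened from the inside, not fresh connections that merely claim source port 80.

      ! outbound: inside hosts may open connections to any web server
      access-list 110 permit tcp 98.76.54.0 0.0.0.255 gt 1023 any eq www
      ! inbound: only return traffic of those flows, not new connections arriving from port 80
      access-list 120 permit tcp any eq www 98.76.54.0 0.0.0.255 gt 1023 established

    For genuinely stateful filtering on IOS, reflexive ACLs (reflect/evaluate) or CBAC (ip inspect) are the heavier alternatives.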

    Read the article

  • php is not recognized as an internal command (using Windows)

    - by Tristan
    Hello, I want to develop using a framework called symfony BUT I do not have a Mac and I don't want to dual-boot with Debian. I tried virtual hosts via VirtualBox but it's too crappy, so I decided to stay on Windows. So when the tutorial tells me to do php lib/vendor/symfony/data/bin/check_configuration.php I do, in the Windows cmd: php lib\vendor\symfony\data\bin\check_configuration.php and it tells me: 'php' is not recognized as an internal or external command I use UwAmp (php 5.2.12 + mysql + apache) which is stored in E:\logiciels\UwAmp\ Inside I have a bunch of files, but I guess this is the important one: E:\logiciels\UwAmp\apache\php_5.2.11 How can I make the php command work properly in the Windows cmd, please?
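    The error just means php.exe is not on the PATH of that cmd window. A hedged example; the folder is a guess based on the path above, so point it at wherever UwAmp's php.exe actually lives:

      rem adds PHP to the PATH for the current cmd session only
      set PATH=%PATH%;E:\logiciels\UwAmp\apache\php_5.2.11
      php -v
      php lib\vendor\symfony\data\bin\check_configuration.php

    To make it permanent, add the same folder to the PATH variable under System Properties > Environment Variables and open a new cmd window.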

    Read the article

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around Apache configuration to fix a problem I'm having, but after a few hours I've decided to ask here. This is what I've got at the moment: DocumentRoot "/var/www/html" <Directory /> Options None AllowOverride None Deny from all </Directory> <Directory /var/svn> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Directory /opt/hg> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Location /hg> AuthType Digest AuthName "Engage HG" AuthDigestProvider file AuthUserFile /opt/hg/hgweb.users Require valid-user </Location> WSGISocketPrefix /var/run/wsgi WSGIDaemonProcess hg processes=3 threads=15 WSGIProcessGroup hg WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi" <Location /svn> DAV svn SVNPath /var/svn/repos AuthType Basic AuthName "Subversion" AuthUserFile /etc/httpd/conf/users require valid-user </Location> I'm trying to get my head around how it's all laid out and how directories relate to locations, etc. For /hg I get asked for a password, but for /svn I get a 403 Forbidden; the error I get is: [client 10.80.10.169] client denied by server configuration: /var/www/html/svn When I remove the entry it works fine. I can't figure out how to get it pointing at the /var/svn directory.
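    One thing worth checking (a guess, not a confirmed diagnosis): the error shows Apache mapping /svn onto the filesystem under DocumentRoot (/var/www/html/svn), which is what happens when mod_dav_svn is not actually handling the <Location /svn> block; the 403 itself then comes from the Deny from all on <Directory /> once the URL falls through to the filesystem. On CentOS those modules are normally pulled in by /etc/httpd/conf.d/subversion.conf with lines like:

      LoadModule dav_svn_module     modules/mod_dav_svn.so
      LoadModule authz_svn_module   modules/mod_authz_svn.so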

    Read the article

  • CURL alternative - Design ideas

    - by Vincent
    All, I am looking for some web application design ideas here. I have a server X that hosts an SDK which can talk to a piece of hardware. When I make an HTTPS request from an external PHP web application (hosted on Server Y) to Server X through cURL, Server X gives JSON data as a response. I use this data to render the UI for the web app on Server Y. The above method seems to be slow and has a tendency to fail in production if there are too many concurrent requests. Can anybody let me know if there is an alternative to cURL, or any other design people are using to pull data like this from servers? Thanks
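    cURL itself is rarely the bottleneck; a common design for this pattern is to cache Server X's JSON on Server Y for a short window, so concurrent page loads do not each open their own HTTPS request to the SDK. A hedged PHP sketch; the URL, cache path and TTL are placeholders:

      <?php
      // return cached JSON if it is fresh enough, otherwise fetch once and re-cache it
      function fetch_json($url, $cacheFile = '/tmp/serverx.json', $ttl = 30) {
          if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
              return file_get_contents($cacheFile);
          }
          $ch = curl_init($url);
          curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
          curl_setopt($ch, CURLOPT_TIMEOUT, 10);
          $json = curl_exec($ch);
          curl_close($ch);
          if ($json !== false) {
              file_put_contents($cacheFile, $json, LOCK_EX);
              return $json;
          }
          // fall back to a stale cache rather than failing the page outright
          return is_file($cacheFile) ? file_get_contents($cacheFile) : false;
      }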

    Read the article

  • installing mod_wsgi giving 403 error

    - by John Smiith
    Installing mod_wsgi is giving a 403 error. In httpd.conf I added the code below: WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py" <Directory "C:/xampp/www/htdocs/wsgi_app/"> AllowOverride None Options None Order deny,allow Allow from all </Directory> wsgi_handler.py status = ‘200 OK’ output = ‘Hello World!’ response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] Note: localhost is my virtual host domain and it is working fine, but when I request http://localhost/wsgi/ I get a 403 error. <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "C:/xampp/www/htdocs/localhost" ServerName localhost ServerAlias www.localhost ErrorLog "logs/localhost-error.log" CustomLog "logs/localhost-access.log" combined </VirtualHost> Error log: [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] Options ExecCGI is off in this directory: C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache Note: my Apache is not in c:/xampp/bin/apache; it is in c:/xampp/bin/server-apache/
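    For comparison, a minimal self-contained wsgi_handler.py (not the poster's file): mod_wsgi looks for a callable named application, and the quotes must be plain ASCII quotes; if the curly quotes shown above are really in the file, rather than blog auto-formatting, they are a syntax error on their own. The "Options ExecCGI is off" log line also suggests the request is being handled as CGI rather than WSGI, so it is worth confirming which <Directory>/WSGIScriptAlias block actually serves /wsgi.

      def application(environ, start_response):
          status = '200 OK'
          output = 'Hello World!'
          response_headers = [('Content-type', 'text/plain'),
                              ('Content-Length', str(len(output)))]
          start_response(status, response_headers)
          return [output]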

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access-lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access-list for every interface on the device. The access-lists are applied to the interfaces via the access-group command. Let's say I have these: access-group WAN_access_in in interface WAN access-group INTERNAL_access_in in interface INTERNAL access-group Production_access_in in interface PRODUCTION WAN has security level 0, INTERNAL security level 100, PRODUCTION security level 50. What I want is an easy way to poke holes from Production to Internal. That seems to be pretty easy, but then the whole notion of security levels doesn't seem to matter any more, and I then can't exit out of the WAN interface. I would need to add an ANY ANY entry, which in turn opens access completely to the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite the hassle. How is this done in practice? In iptables I would use logic something like this: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.
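    That trade-off is expected: as soon as an access-group is applied to an interface, the implicit "higher security level may reach lower" behaviour is replaced by the ACL, which then has to spell everything out. The usual pattern is exactly the explicit-deny one, and it is less hassle than it sounds because it is one deny line per protected subnet. A hedged sketch with made-up subnets:

      ! poke the specific holes from PRODUCTION into INTERNAL first...
      access-list Production_access_in extended permit tcp 10.0.50.0 255.255.255.0 host 10.0.100.20 eq 1433
      ! ...then block the rest of the INTERNAL subnet...
      access-list Production_access_in extended deny ip any 10.0.100.0 255.255.255.0
      ! ...and finally let everything else out (Internet via the WAN interface)
      access-list Production_access_in extended permit ip any any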

    Read the article

  • Sending mail via sendmail from python

    - by Nate
    If I want to send mail not via SMTP, but rather via sendmail, is there a library for python that encapsulates this process? Better yet, is there a good library that abstracts the whole 'sendmail -versus- smtp' choice? I'll be running this script on a bunch of unix hosts, only some of which are listening on localhost:25; a few of these are part of embedded systems and can't be set up to accept SMTP. As part of Good Practice, I'd really like to have the library take care of header injection vulnerabilities itself -- so just dumping a string to popen('/usr/bin/sendmail', 'w') is a little closer to the metal than I'd like. If the answer is 'go write a library,' so be it ;-)
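    A hedged sketch of the plain standard-library approach (no extra package): build the message with the email module and either pipe it to the sendmail binary or hand it to smtplib, chosen per host. The explicit newline check is there because that is the header-injection vector; the email module formats headers but should not be relied on to reject hostile values.

      import subprocess
      import smtplib
      from email.mime.text import MIMEText

      def send_mail(subject, body, sender, recipient, use_sendmail=True):
          for value in (subject, sender, recipient):
              if '\r' in value or '\n' in value:
                  raise ValueError('newline in header value (possible header injection)')
          msg = MIMEText(body)
          msg['Subject'] = subject
          msg['From'] = sender
          msg['To'] = recipient
          if use_sendmail:
              # -t: read recipients from the headers; -oi: a lone "." does not end input
              # (the binary may live in /usr/sbin or /usr/lib depending on the host)
              proc = subprocess.Popen(['/usr/sbin/sendmail', '-t', '-oi'],
                                      stdin=subprocess.PIPE)
              proc.communicate(msg.as_string().encode())
              return proc.returncode == 0
          server = smtplib.SMTP('localhost')
          try:
              server.sendmail(sender, [recipient], msg.as_string())
          finally:
              server.quit()
          return True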

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GETs work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.

    Read the article

  • Windows thinks outgoing connections are incoming connections?

    - by Slayer537
    I have a rather weird issue. I'm trying to configure Windows Firewall to block all outgoing connections to a certain app, but allow all incoming. This app is used to transfer files across a network. The reason for this type of setup is to only allow certain users (by IP address) access to the files I have, but to still allow others to see what's available. Since Windows Firewall defaults to allowing all outgoing connections, I made a rule to deny all outgoing connections that were not in the IP ranges I specified. For the incoming connections, I'd like to leave it at allow all, but at the moment it is set to only allow the connections that also have outgoing permissions set. If I blanket-allow all incoming connections, I observe that unauthorized IP addresses are actually able to download files, even though their IP was blocked in the outgoing rules. To get a little more visibility into this, I used NetLimiter to see what was going on. NetLimiter showed me that the connection was an incoming connection. Shouldn't this be an outgoing connection, as I am uploading files to them, not the other way around? Is there a way to make the connection type be correct and show up as outgoing instead of incoming?
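    Direction in Windows Firewall (and in NetLimiter) is decided by which side initiates the TCP connection, not by which way the file bytes flow, so remote users connecting to your transfer app count as inbound even while they are downloading from you; that is why the outbound-only rule never matches them. A hedged sketch of the usual shape instead; the program path and address range are placeholders, and it relies on the profile's default inbound action staying at Block:

      rem one inbound rule scoped to the trusted range is enough when inbound defaults to Block
      netsh advfirewall firewall add rule name="Transfer app (trusted range)" dir=in action=allow program="C:\Apps\transfer.exe" remoteip=192.168.1.0/24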

    Read the article

  • Remote access to phpmyadmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf to make CentOS listen on ports 80 and 8080: Listen 80 Listen 8080 I set up phpMyAdmin on my CentOS 6.4 recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please? config.inc.php $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'http'; /* Server parameters */ $cfg['Servers'][$i]['host'] = 'localhost'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = false; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysql'; $cfg['Servers'][$i]['AllowNoPassword'] = false; phpmyadmin.conf <Directory /var/www/html/phpmyadmin/> order allow,deny allow from all </Directory> Furthermore, I can access the web pages stored on the CentOS box from my other computer without problems. After using wireshark and tcpdump, I found that the server (the CentOS box) keeps resetting the connection. (192.168.1.106 is my other computer, 192.168.1.101 is my CentOS box.) 23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0 23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0 I have disabled the iptables service on the CentOS box already.

    Read the article

  • Apache vhosts config: Host Name instead of IP Address

    - by Johe Green
    I have a domain (example.com) hosted at an external provider. I directed the subdomain sub.example.com to my ubuntu server (12.04 with apache2). On my ubuntu server I have a vhost setup like this. The rest of the config is basically apache 2 standard: <VirtualHost *:80> ServerName sub.example.com ServerAlias sub.example.com ServerAdmin [email protected] DocumentRoot /var/www/sub.example.com ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined WSGIScriptAlias / /home/application/sub.example.com/wsgi.py <Directory /home/application/sub.example.com> <Files wsgi.py> Order allow,deny Allow from all </Files> </Directory> </VirtualHost> When I enter http://sub.example.com in my browser my application shows up fine. But the domain is replaced by the IP address of my server. Do I have to configure my server somewhere else to deliver all its content under my domain sub.example.com?

    Read the article

  • 403 Error when accessing vhost directive

    - by Ortix92
    I'm having some trouble setting up my web server (CentOS 5.8). It's a brand-new server and I'm trying to point a vhost at the following dir: /home/exo/public_html However, whenever I restart httpd I get the following warning: Starting httpd: Warning: DocumentRoot [/home/exo/public_html] does not exist Yes, the directory does exist. So whenever I visit the domain exo-l.com it gives me a 403 error. This is my config file (I put this inside my httpd.conf because the files in conf.d were not included for some reason, or at least my newly created vhost conf file wasn't, but that has 0 priority for now): <VirtualHost *:80> DocumentRoot /home/exo/public_html ServerName www.exo-l.com ServerAlias exo-l.com <Directory /home/exo/public_html> Order allow,deny Allow from all </Directory> </VirtualHost> I'm completely clueless because this should work as far as I know. httpd is being run as apache:apache. I tried chowning the public_html directory (also recursively) to exo:apache, apache:apache and root:root with no success; chmod 777 doesn't do anything either. A tail from the log: [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied I also found something about SELinux and that disabling it might help, but do I really want to do that?
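    Before disabling SELinux outright, two narrower things are worth trying (a hedged checklist, not a confirmed diagnosis): the apache user must be able to traverse /home/exo, and the content needs an SELinux type httpd is allowed to read.

      # let apache traverse the home directory without opening it up
      chmod 711 /home/exo
      # label the web content so httpd may read it, instead of setenforce 0
      chcon -R -t httpd_sys_content_t /home/exo/public_html
      # inspect the current permissions and SELinux labels
      ls -ldZ /home/exo /home/exo/public_html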

    Read the article

  • make mongrel_rails (localhost:3000) visible to a virtual machine

    - by Max Williams
    I develop Rails on Ubuntu and I just set up a VirtualBox Windows XP virtual machine for IE testing. I'd like to be able to run mongrel_rails in Ubuntu and then jump into the VM to check it out, so I can jump back, make a change, jump into the VM again, reload the page and test it, etc. Is this possible? In this sort of situation in the past I've had to set up an Apache server on my dev machine and run Mongrel under that, in order to get an externally visible (i.e. visible to my local network) IP address that I then paste into the address bar of IE in the VM. Is this really necessary? Is there a simpler way? Can I do something with my /etc/hosts or sites-available files to just make up some arbitrary network address which points to localhost:3000 on Ubuntu? Or something? thanks, max
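    It is simpler than the Apache route: bind Mongrel to all interfaces instead of just 127.0.0.1, and remember that from a VirtualBox NAT guest the host machine is reachable at 10.0.2.2. A hedged sketch:

      # on the Ubuntu host, listen on all interfaces (or: script/server -b 0.0.0.0)
      mongrel_rails start -b 0.0.0.0 -p 3000

    Inside the XP guest, browse to http://10.0.2.2:3000/, or add a line such as "10.0.2.2 devbox" to C:\Windows\system32\drivers\etc\hosts if you want a friendlier name in the address bar.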

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application tshirtshop I have following configuration in /etc/apache2/sites-enabled/tshirtshop <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/tshirtshop <Directory /var/www/tshirtshop> Options Indexes FollowSymLinks AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> and following in .htaccess file in location /var/www/tshirtshop/.htaccess <IfModule mod_rewrite.c> # Enable mod_rewrite RewriteEngine On # Specify the folder in which the application resides. # Use / if the application is in the root. RewriteBase /tshirtshop #RewriteBase / # Rewrite to correct domain to avoid canonicalization problems # RewriteCond %{HTTP_HOST} !^www\.example\.com # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L] # Rewrite URLs ending in /index.php or /index.html to / RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L] # Rewrite category pages RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L] RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L] # Rewrite department pages RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L] RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L] # Rewrite subpages of the home page RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L] # Rewrite product details pages RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L] </IfModule> the site is working on localhost and is working as if there is no .htaccess rule specified i.e. if I were to view a page as http://localhost/tshirtshop/nature-d2 then I get a 404 Error but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. What is the mistake if any one can point out in above configuration, or else I need to check any thing else? sudo apache2ctl -M Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) deflate_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) reqtimeout_module (shared) rewrite_module (shared) setenvif_module (shared) status_module (shared) Syntax OK I am using Apache2 on Ubuntu 12.04
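    A quick way to narrow this down (a diagnostic, not a fix): confirm whether the .htaccess file is being read at all by making it deliberately invalid for one request.

      # temporarily add a nonsense line to /var/www/tshirtshop/.htaccess
      ThisIsNotARealDirective
      # then reload any page: a 500 Internal Server Error means the file is parsed,
      # so the rewrite rules themselves need attention; a normal page means the file
      # is being ignored (AllowOverride or path issue), despite the vhost shown above.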

    Read the article

  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences

                       Absolute   Relative   File   Directory   UNC
    Symbolic link      yes        yes        yes    yes         yes
    Junction           yes        no         no     yes         no

    Scenario

    Let's assume we're creating a reparse point to create the redirect C:\SomeDir => D:\SomeDir Since this scenario only requires local, absolute paths, either a junction or symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward-compatibility (prior to Vista, symlinks are not supported).

    Update

    I have found another difference. Symbolic link: the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions. Junction: the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs. The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
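    For this purely local, absolute-path scenario either works; the practical differences are that creating a symbolic link needs the symlink privilege (an elevated prompt, by default), while a junction does not, and only symlinks can be relative or point at individual files. A hedged example of creating each (run one or the other, since both claim C:\SomeDir):

      rem directory symbolic link (needs the symlink privilege / elevated prompt)
      mklink /D C:\SomeDir D:\SomeDir
      rem junction (no special privilege required)
      mklink /J C:\SomeDir D:\SomeDir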

    Read the article

  • Failed none and iptables

    - by Michael
    The problem is that when I ssh to my host with PuTTY and enter the user name, the password prompt is delayed. I found this is directly related to my iptables rules and can be solved by changing the default policy to ACCEPT. If the default INPUT policy is ACCEPT, then the password prompt comes immediately: Mar 13 00:05:01 server-ubuntu sshd[6154]: Connection from 192.168.0.10 port 26304 Mar 13 00:05:06 server-ubuntu sshd[6154]: Failed none for acid from 192.168.0.10 port 26304 ssh2 However, if the default INPUT policy is DROP, I get a slight delay in getting the password prompt after I enter the username: Mar 13 00:07:12 server-ubuntu sshd[6177]: Connection from 192.168.0.10 port 26333 Mar 13 00:07:35 server-ubuntu sshd[6177]: Failed none for acid from 192.168.0.10 port 26333 ssh2 For the second case, I tried to set the default policy for the FORWARD and OUTPUT chains to ACCEPT, but it didn't help. The only rule in this case is: -A INPUT -i eth1 -m mac --mac-source 00:26:XX:XX:XX:XX -j ACCEPT 00:26:XX:XX:XX:XX is the MAC address from which I am trying to ssh to the server's LAN (eth1). I'm sure there has to be some rule I can use, while the default INPUT chain policy is DROP, to get the password prompt immediately. I realize that the error message in the log is something normal and part of some verification procedure.
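    The delay is consistent with sshd's own lookups (reverse DNS, ident) going out fine but their replies being dropped by the INPUT policy until they time out. A hedged sketch of the rules that usually keep a default-DROP INPUT chain usable without opening anything new:

      # accept loopback and any traffic belonging to a connection this host initiated
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # existing rule: ssh (and the rest) from the known MAC on the LAN
      iptables -A INPUT -i eth1 -m mac --mac-source 00:26:XX:XX:XX:XX -j ACCEPT

    Setting UseDNS no in sshd_config removes the reverse lookup from the equation as well.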

    Read the article

  • Session troubles when used on BT hosting

    - by YsoL8
    Hello, I have developed a site for a client with a pre-existing BT hosting package. Since going live there has been a problem where $_SESSION loses its data between pages. I have previously fixed the problem, but somehow it has become unfixed. Last time this problem happened, my research indicated that there is something funny with BT's setup when using sessions, and that article provided this code: if(ini_get('register_globals') == 1) if(is_array($_SESSION)) foreach(array_keys($_SESSION) as $var_to_kill) unset($$var_to_kill); Which, even though it looks broken to me, did in fact reduce the problem to a very low level (i.e. one drop-out a day). However, today my client is in touch again and, surprise surprise, the problem is back. Does anyone know of a solution? (My client has already stated they will not change hosts!) Oliver
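    A hedged workaround sketch rather than a root-cause fix (the path is a placeholder): on shared hosting, pinning the session files to a private directory of your own before session_start() takes the host's defaults, and their cleanup jobs, out of the equation.

      <?php
      // store this site's sessions in a directory only this site writes to (hypothetical path)
      ini_set('session.save_path', '/home/sites/examplesite/private_sessions');
      session_start();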

    Read the article

  • Windows authentication with Silverlight custom binding

    - by sfx
    Hello, I am trying to set up security within a web.config file for a WCF service hosted in IIS but keep getting the error message: Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service. I have read Nicholas Allen’s blog (link text) and it appears that this is the route that I need to take. However, I am using “binaryMessageEncoding” in a customBinding for my Silverlight service, and as such, I’m not sure how to apply this type of security to such an element. This is how my custom binding looks in config at present: <customBinding> <binding name="silverlightBinaryBinding"> <binaryMessageEncoding /> <httpTransport /> </binding> </customBinding> Has anyone had any experience getting Windows authentication to work with a custom binding using binaryMessageEncoding? Cheers, sfx
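    Untested, but the transport element inside a customBinding does take an authentication scheme directly, and that is normally where Windows authentication is switched on for this style of binding; the scheme chosen has to match what is enabled on the IIS site (the error quoted above is IIS and WCF disagreeing about Anonymous). A hedged sketch:

      <customBinding>
        <binding name="silverlightBinaryBinding">
          <binaryMessageEncoding />
          <httpTransport authenticationScheme="Negotiate" />
        </binding>
      </customBinding>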

    Read the article

  • How to redirect a name-based VirtualHost to a different port?

    - by Andra
    I have a Virtuoso SPARQL endpoint installed, which I want to make available through a hostname (e.g. www.virtuosoexample.com). The thing with Virtuoso is that there is no document root; the endpoint is started by the daemon and made available on a port (e.g. localhost:1234/). I know how to set up a virtual host pointing to a document root, but I don't know how to do this for a server with a port number. Any advice would be appreciated. Below is how I would do it with a document root; I tried to change that (naively) into localhost:1234/sparql, but that didn't work. <VirtualHost *> ServerName www.virtuosoexample.com ServerAlias www.virtuosoexample.com ErrorLog /var/log/apache2/error.wp-sparql.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.wp-sparql.log combined DocumentRoot /var/www/endpoint/sparql/ <Directory /var/www/endpoint/sparql> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost>
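    Since there is no document root to point at, the usual approach is to make Apache a reverse proxy for the port the daemon listens on (this needs mod_proxy and mod_proxy_http enabled). A hedged sketch:

      <VirtualHost *:80>
          ServerName www.virtuosoexample.com
          ProxyRequests Off
          ProxyPass        / http://localhost:1234/
          ProxyPassReverse / http://localhost:1234/
      </VirtualHost>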

    Read the article

  • Using a NAT rule to translate 80/443 traffic to a web server, but internal users cannot access it using the external IP/domain name

    - by Josh
    I am using Cisco ASDM for the ASA. I have my internal network called soa; my outside interface is called outside. Let's say the outside IP given to me by my ISP is y.y.y.y. I have a web server inside my network with a static IP of x.x.x.110. I have configured 2 static NAT rules (one for http, the other for https): source is x.x.x.110, interface is outside, service (http or https). Maybe I am doing this wrong, but when I run the packet tracer, I choose the outside interface, and for the source IP I use 8.8.8.8 and the destination IP is my outside IP address, y.y.y.y. When I run that, it shows the packet traversing successfully, using 9 steps. For my other test, I switch to the soa interface, input an IP on that network, and leave the destination the same. This test comes up with 2 steps and then fails on my access list. The rule that fails is my catch-all, which is source: any, destination: any, service: ip, action: deny. What rule do I need to make to allow my soa network to go out and come back in via my external IP address (using a domain name attached to that IP in my DNS, of course)?
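    What this runs into is hairpin NAT: inside (soa) hosts resolving the public name get y.y.y.y, and by default the ASA will not take soa traffic out and straight back in. The cleanest fix is usually split DNS, i.e. an internal record for the domain name that points directly at x.x.x.110, which needs no firewall change at all. The ASA-side alternative is DNS doctoring, sketched very loosely below; whether it applies depends on the existing static NAT entries and on DNS inspection being enabled, so treat it as a pointer rather than a recipe.

      ! hedged sketch: rewrite DNS replies for the public IP as seen by inside clients
      static (soa,outside) y.y.y.y x.x.x.110 netmask 255.255.255.255 dns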

    Read the article

  • Session being reset when using sub-domain.

    - by Adam Witko
    Hi, I'm trying to use sub-domains in my ASP.NET website but I'm running into a few problems with the session being reset. I've edited my hosts file to have 'localhost', 'one.localhost' and 'two.localhost'. I can go to any of these URLs, do what I need to do and log in to my system. The session mode is defined as follows in the web.config: I'm using SQLServer as the website will be run as a web farm. What I'm finding is that when I click something that causes a postback, all the session data is lost and a new session id is created; when this occurs my website is now 'localhost' rather than the logged-in 'one.localhost', for example. Does anyone know what might be causing this? Cheers
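    One hedged thing to check: the ASP.NET session cookie is host-scoped by default, so a postback that lands on a different host name (localhost instead of one.localhost) arrives without the cookie and gets a brand-new session. Sharing the cookie across sub-domains is normally done in web.config, shown here with a stand-in production domain; bare names like localhost have no dot, and browsers are picky about domain-scoped cookies for them, so the test setup may still misbehave even with this in place.

      <system.web>
        <!-- hypothetical domain; the leading dot scopes the cookie to all sub-domains -->
        <httpCookies domain=".example.com" />
      </system.web>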

    Read the article

  • Ping with explicit next-hop selection (aka Monitoring multiple default gateways)

    - by Michuelnik
    I have a Linux (Debian) router with two internet connections, (A) and (B). (A) is preferred, (B) is fallback. I want to monitor the internet connection (and not only the availability of the gateways!) and change the default route appropriately: (1) if (A) is not providing internet, switch to (B); (2) if (A) is providing internet again, switch back to (A). The only problem I have is in case (2): my routing table points towards a working internet connection, so I cannot easily detect whether internet is working over link (A) again. I am searching for a ping or traceroute (or other diagnosis tool) which can select the next-hop explicitly. ping -r looks promising, but can only ping a host on the LAN. (It only has to write another destination address in the packet, damnit!) traceroute -g gateway looks even more promising and nearly does what I want, but sets source-routing options which my next-hops deny. (Not within my administrative boundary...) I just want a ping that can: select a source interface (and address), select a next-hop on that interface, and ping any arbitrary IP address. I could do evil trickery with policy-based routing, but that would have production impact for all users. I would like to see a side-effect-free solution...
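    This can be done without touching the routing table that production traffic uses: give the probe its own source address and its own routing table, so only packets sourced from that address are forced out via (A). A hedged sketch with placeholder addresses (link (A) on eth0, gateway 192.0.2.1):

      # a dedicated probe address on the interface towards gateway (A)
      ip addr add 192.0.2.10/24 dev eth0
      # probes sourced from that address always use gateway (A), whatever the main table says
      ip route add default via 192.0.2.1 dev eth0 table 100
      ip rule add from 192.0.2.10 lookup 100
      # -I binds the source address, so this leaves via (A) even while (B) holds the default route
      ping -c 3 -I 192.0.2.10 8.8.8.8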

    Read the article

  • session_start() Hangs The Server

    - by Narcissus
    Totally confused by this one... We have a WAMPServer installation set up, running a number of virtual hosts from various document roots. Just recently, one particular domain has started hanging the server. We traced it down to session_start(). If we comment it out, there are no problems (except, of course, for the fact that we can't do anything with the session). With it uncommented, it will hang the page load and, with enough reloads, will hang the entire server. All of the other sites still work perfectly with their sessions. As far as I know, there is nothing different with the way sessions are being worked with. I am looking further into it (in case someone changed something) but right now I'm hoping for some direction :) So, any thoughts?
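    A hedged diagnostic rather than a fix: with the default files handler, session_start() blocks while another request holds the lock on the same session file, so a long-running script on that vhost which never finishes (or never calls session_write_close()) will stall every later request that shares the session. Two quick checks:

      <?php
      // where is this vhost writing its sessions, and is that location usable?
      var_dump(session_save_path(), is_writable(session_save_path()));
      // if a long-running script only needs to read the session, release the lock early
      session_start();
      // ... read what is needed from $_SESSION ...
      session_write_close();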

    Read the article

  • SQLite subquery syntax/error/difference from MySQL

    - by Rudie
    I was under the impression this is valid SQLite syntax: SELECT *, (SELECT amount AS target FROM target_money WHERE start_year <= p.bill_year AND start_month <= p.bill_month ORDER BY start_year ASC, start_month ASC LIMIT 1) AS target FROM payments AS p; But I guess it's not, because SQLite returns this error: no such column: p.bill_year What's wrong with how I refer to p.bill_year? Yes, I am positive table payments hosts a column bill_year. Am I crazy or is this just valid SQL syntax? It would work in MySQL wouldn't it?? I don't have any other SQL present so I can't test others, but I thought SQLite was quite standardlike.
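    For what it's worth, the year/month comparison can be folded into a single ordering key, which makes the intent ("the target in effect at or before the bill month") explicit; note this orders descending to take the most recent match, where the original ascending ORDER BY would take the earliest. Whether the correlated reference to p works still depends on the SQLite version in use, so this is a sketch, not a confirmed fix:

      SELECT p.*,
             (SELECT t.amount
                FROM target_money AS t
               WHERE t.start_year * 12 + t.start_month <= p.bill_year * 12 + p.bill_month
               ORDER BY t.start_year DESC, t.start_month DESC
               LIMIT 1) AS target
      FROM payments AS p;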

    Read the article
