Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 538/1167 | < Previous Page | 534 535 536 537 538 539 540 541 542 543 544 545  | Next Page >

  • empty file from tomcat https redirect?

    - by amertune
    I am using Tomcat 6.0.20, with JDK 1.6.0_18, on 64-bit Linux (Tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and from 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app> tags):

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Protected Context</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
          </user-data-constraint>
        </security-constraint>

    I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this I get a prompt to download an empty file with an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?
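    A plausible explanation for the empty download is Tomcat's own trailing-slash redirect: when /mywebapp is requested without a slash, Tomcat issues the redirect itself and builds that URL from the connector's configured ports rather than the iptables-visible ones. A hedged sketch of server.xml connectors that declare the external ports (the proxyPort="80" on the plain connector is the assumption to verify):

        <!-- HTTP connector: externally reachable on port 80 via iptables -->
        <Connector port="8080" protocol="HTTP/1.1"
                   proxyPort="80" redirectPort="443" />
        <!-- HTTPS connector: externally reachable on port 443 via iptables -->
        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   scheme="https" secure="true" proxyPort="443" />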

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a valid configuration to open a connection to a third machine, passing through another one, using an id_rsa file (which asks me for a passphrase) to connect to that third machine. I've asked this question in another forum, but received no answer that could be considered very helpful. The problem, better described, goes as follows:

        Local machine:        user1@localhost
        Intermediary machine: user1@inter
        Remote target:        user2@final

    I'm able to make the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

    (this asks me for the passphrase of the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can connect to machine "final" simply with:

        ssh final

    What I've got so far, which does not work, is this in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one asks for a passphrase). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but I could not get any configuration to work. Hope somebody can help me here! Regards! P.S.: the thread where I tried to get some help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
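    One detail that often trips up this setup: with ProxyCommand, authentication to "final" is performed by the local ssh, so id_rsa_2 must live on the local machine, not on "inter"; the intermediary only relays the TCP stream. A minimal sketch of that layout, assuming OpenSSH 5.4 or newer (where -W replaces the nc trick):

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            # this key is read, and its passphrase asked for, on the local machine
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh -W %h:%p inter

    Loading id_rsa_2 into a local ssh-agent (ssh-add ~/.ssh/id_rsa_2) would then reduce "ssh final" to a single passphrase-free step.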

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine to themselves, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Correct me if I'm wrong, but I think the idea is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain. The local user (usually just one local administrator per machine) is the one who takes care of the machine's configuration. But in my case the domain login is just for being able to access directories/servers in the domain (I don't really know the details; all I know is that logging into the domain user account is necessary). So overall I've got a local admin account and a domain account in use on my machine. While working I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update/install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in my control panel and putting it into the Administrators group. The only thing I want to know about this: is there something REALLY bad about doing it? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user? PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)
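    For reference, the control-panel change described above has a one-line equivalent from an elevated command prompt; a sketch, where MYDOMAIN\myuser stands in for the actual domain account:

        net localgroup Administrators MYDOMAIN\myuser /add

    This is a common setup on single-user developer machines; the trade-off is simply that anything running under the domain account can now change that one machine's configuration.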

    Read the article

  • credit or minclass does not work well with pam_cracklib.so in common-password (openSUSE 11.3)

    - by Mario
    I'm trying to implement password complexity rules on my PDC. It's a Samba PDC with an OpenLDAP backend. I tried cracklib-check, but it looks like I would need a decent, localized version of the password dictionary, since the library out there usually comes only in English. I have another consideration as well: we will allow users to use any kind of password, even dictionary-based ones, as long as the password mixes lower/uppercase letters, digits, and other characters such as '$' or '_' (pam_cracklib.so calls these classes). So here is my /etc/pam.d/common-password:

        #password requisite pam_pwcheck.so nullok cracklib
        password   requisite pam_cracklib.so minclass=4 reject_username
        ##password requisite pam_cracklib.so \
        ##    dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 reject_username
        password   optional  pam_gnome_keyring.so use_authtok
        password   required  pam_unix2.so use_authtok nullok

    The first commented line (with #) was the default configuration of openSUSE 11.3. The 2nd/3rd (with leading ##) is an alternative configuration I use when the minclass=4 line is commented out. By the way, I have 'check password script' = /usr/local/sbin/crackcheck -d /usr/share/cracklib/pw_dict and passdb backend = ldapsam:ldap://127.0.0.1 in smb.conf, and cracklib-check works fine too. So here is the test I conduct: I log on to Windows and then change my password. Sometimes it works fine, in that it throws an error message, which is what I wanted, but a simple password with only lowercase letters can still pass the Windows password change. Maybe I should build a new dictionary that incorporates local vocabulary, but a guy out there (raise your hand please if you read this :)) experienced the same trouble with English words. Besides, what we really want is to let users choose passwords drawing on 2 or 3 of the 4 classes. Is there a bug or something with the pam module in openSUSE 11.3? Thank you in advance. Regards, Mario
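    For the "2 or 3 of the 4 classes" requirement, minclass can express that directly, since it names the minimum number of character classes rather than which ones; a hedged sketch (the value 3 is the assumption to adjust):

        # accept any password drawing on at least 3 of the 4 classes
        password requisite pam_cracklib.so minclass=3 reject_username

    Note that pam_cracklib still runs its dictionary check independently of minclass, so allowing dictionary-based passwords outright may additionally require relaxing or replacing that check.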

    Read the article

  • One Windows Domain workstation can ping gateway but gets no internet access

    - by dindeman
    One of the (Windows XP SP3) workstations in our Windows domain suddenly lost internet access overnight. The domain controllers (there are three of them) all run Windows Server 2008. First I compared the output of ipconfig /all on the faulty workstation with the output of a working workstation, and it was just fine, as it had always been. In particular, the default gateway was correct and always remained pingable from the faulty workstation. I guessed that something was wrong with the DHCP service, so I restarted the DHCP server service on all three of our DCs as well as the DHCP client service on the faulty workstation. This didn't solve the issue. I then thought of renewing the DHCP lease with ipconfig /release and ipconfig /renew, and here is my first question: why did this never work? The same IP address (192.168.0.45) kept being assigned despite all my attempts to renew it (note that all our workstations get their TCP/IP settings automatically). Even after leaving the domain and changing the computer name, the same address was obtained yet again... Anyway, I then switched the machine's TCP/IP configuration manually to another free, valid IP address (192.168.0.41)... and the internet access came back! I then cleared any traces of the previous IP from the DHCP lease list and the DNS tables of our DCs and, after setting the TCP/IP configuration back to 'automatic', the new lease (192.168.0.41) was finally granted, along with internet access. My second question: what went suddenly wrong with the original IP address?

    Read the article

  • Apache RewriteEngine, redirect sub-directory to another script

    - by Niklas R
    I've been trying to achieve this for about 1.5 hours now. I want the following transformations when requesting pages on my website:

        homepage.com/                    => index.php
        homepage.com/archive             => index.php?archive
        homepage.com/archive/site-01     => index.php?archive/site-01
        homepage.com/files/css/main.css  => requestfile.php?css/main.css

    The first three transformations can be done with the following:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?(.*)$ index.php?$1

    However, I'm stuck at the point where all requests to the files subdirectory should be redirected to requestfile.php. This is one of the attempts I've made:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?files/(.+)$ requestfile.php?$1
        RewriteRule ^/?(.*)$ index.php?$1

    But that does not work. I've also tried putting [L] after the third line, but that didn't help, as I'm using this configuration in .htaccess and sub-requests will transform that URL again, etc. I fiddled with the RewriteCond command but couldn't get it to work. What does the configuration need to look like to achieve what I want?
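    One hedged sketch of a way around the sub-request problem: add an exclusion so the catch-all never re-rewrites a request that already targets one of the two entry scripts, and stop each pass with [L]:

        RewriteEngine on
        RewriteRule ^/?$ index.php [L]
        RewriteCond %{REQUEST_URI} !^/(index|requestfile)\.php
        RewriteRule ^/?files/(.+)$ requestfile.php?$1 [L]
        RewriteCond %{REQUEST_URI} !^/(index|requestfile)\.php
        RewriteRule ^/?(.*)$ index.php?$1 [L]

    This is untested against the exact layout above, but the pattern (guarding the catch-all with a RewriteCond on %{REQUEST_URI}) is the standard .htaccess idiom for front-controller rewrites.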

    Read the article

  • Debugging nginx URL rewrite: How do I figure out where the problem is?

    - by pjmorse
    I have a specific URL pattern on a site which needs to be redirected to the HTTPS version. This is a Django site; nginx checks each URL in memcached, and if it doesn't find a cached version it proxies the request to Apache/mod_python for Django to render the page. The relevant configuration block is:

        rewrite ^/certificate https://mysite.com/certificate ;
        rewrite ^/([a-zA-Z]{2})/certificate https://mysite.com/certificate ;

    ...and it doesn't appear to be working at all. Nginx is:

        $ nginx -V
        nginx version: nginx/0.7.65
        built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        TLS SNI support disabled
        configure arguments: --prefix=/usr/local/nginx --pid-path=/var/run/nginx.pid --with-http_gzip_static_module --with-http_ssl_module

    How can I figure out if the problem is my patterns not matching, or a more obscure configuration problem? (The site is localized into three languages, and the localization is in the URL string, e.g. /US/news/, /DE/about, etc. It tracks localization in the session as well, defaulting to US, so if you just request /news, Django will rewrite to /US/news unless the user has a cookie indicating they're using a different localization. Django handles this, though, not nginx.)
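    nginx can answer this question itself: the rewrite module logs every rule evaluation at the notice level once rewrite_log is switched on. A minimal sketch (the log path is an assumption):

        error_log /usr/local/nginx/logs/error.log notice;
        rewrite_log on;

    After a reload, each request produces lines in the error log showing which rewrite patterns matched and what they produced, which separates "pattern never matches" from "matches but something else interferes".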

    Read the article

  • Apache2 doesn't serve PHP scripts correctly [closed]

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the source of the script. This is of course not a good thing. It does serve plain HTML files perfectly... I've been at this for quite a while now, and after all the googling and troubleshooting I thought it would be a good time to ask you guys. Here's what I've got:

        - The php5 and libapache2-mod-php5 packages are installed
        - /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory
        - The /etc/php5/ directory is untouched since the installation

    Here's the contents of /etc/apache2/mods-available/php5.load:

        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php5.conf:

        <IfModule mod_php5.c>
          <FilesMatch "\.ph(p3?|tml)$">
            SetHandler application/x-httpd-php
          </FilesMatch>
          <FilesMatch "\.phps$">
            SetHandler application/x-httpd-php-source
          </FilesMatch>
          <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
              php_admin_value engine Off
            </Directory>
          </IfModule>
        </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings that cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
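    A few low-risk checks that would narrow this down (a sketch, not a diagnosis): confirm the module is really loaded, and look for a vhost or .htaccess that turns the PHP engine off, since a php_admin_value engine Off that applies to the affected sites produces exactly this source-download behaviour:

        apache2ctl -t -D DUMP_MODULES | grep php5
        grep -ri "engine off" /etc/apache2/sites-enabled/ /etc/apache2/conf.d/
        /etc/init.d/apache2 restart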

    Read the article

  • Router intermittently failing

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to handle routing for my home network. I have a few complications, but I adapted my configuration from a previously working one, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop. I am unable to use the router's DNS server. I can't ping the router. Etc. All of these things work most of the time, but break down intermittently, for a few minutes at a time. (I can provide more details, but I'm not sure what would be helpful.)

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Gigabit ethernet, internal network
        auto eth0
        allow-hotplug eth0
        iface eth0 inet manual

        # USB ethernet, internet
        auto eth1
        allow-hotplug eth1
        iface eth1 inet dhcp

        # Xen Bridge
        auto xlan0
        iface xlan0 inet static
            bridge_ports eth0
            address 10.47.94.1
            netmask 255.255.255.0

    As I understand it, this is sufficient to create the network interfaces, and even do some switching between the Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing between the bridge and my internet-facing interface:

        /etc/shorewall/zones
            fw   firewall
            net  ipv4
            lan  ipv4

        /etc/shorewall/interfaces
            net  eth1   detect  dhcp,tcpflags,nosmurfs,routefilter,logmartians
            lan  xlan0  detect  dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge

        /etc/shorewall/policy
            net  all  DROP    info
            fw   net  ACCEPT  info
            all  all  REJECT  info

        /etc/shorewall/rules
            DNS(ACCEPT)   fw   net
            DNS(ACCEPT)   lan  fw
            Ping(ACCEPT)  lan  fw

    ...and so on; these all work, when the router is accepting traffic at all.

        /etc/shorewall/masq
            eth1  10.47.94.0/24

    Also, the router is currently "working", and I checked on a problematic client:

        $ arp infrastructure
        infrastructure.mydomain (10.47.94.1) at 0:23:54:bb:7d:ce on en0 ifscope [ethernet]

    I tried it when the router was down, and I (eventually) got the same response. It took about 30 seconds to return, though.
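    Intermittent, minutes-long outages that take ping, DNS and SSH down together often sit below the firewall, at the ARP/bridge layer, and the slow 30-second arp response above points the same way. A hedged way to narrow it down while the problem is happening:

        # on the router, during an outage:
        tcpdump -ni xlan0 arp        # are ARP requests arriving and being answered?
        brctl show xlan0             # is eth0 still attached to the bridge?
        ip link show eth1            # has the USB NIC flapped?

    If ARP goes unanswered or an interface has dropped out, Shorewall is exonerated and the bridge/NIC is the place to dig.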

    Read the article

  • Apache forwarding to tomcat shows a blank page

    - by MNS
    I have an application running on Tomcat at http://www.example.com:9090/mycontext. The Host name in server.xml points to www.example.com; I do not have localhost anymore. I am using Apache to forward requests to Tomcat using mod_proxy. Things work fine as long as the proxy path is /mycontext: the ServerName set up in the virtual host is www.abc.com, and http://www.abc.com/mycontext works fine. However, I would like to ignore the context path and simply use http://www.abc.com/ to forward requests to http://www.example.com:9090/mycontext. When I do this, Apache shows me a blank page. What am I missing here? I have not changed anything in server.xml except the default host, now www.example.com.

        <VirtualHost *:80>
          ServerName www.abc.com
          ProxyRequests Off
          ProxyPreserveHost On
          <Proxy *>
            Order deny,allow
            Allow from all
          </Proxy>
          ProxyPass / http://www.example.com:9090/mycontext
          ProxyPassReverse / http://www.example.com:9090/mycontext
        </VirtualHost>

    Thanks
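    Two details of the configuration above are worth double-checking (a hedged sketch, not a confirmed fix). First, mod_proxy is picky about trailing slashes when mapping / onto a sub-path. Second, ProxyPreserveHost On forwards Host: www.abc.com to Tomcat, which then cannot match the www.example.com <Host> in server.xml:

        ProxyPreserveHost Off
        ProxyPass / http://www.example.com:9090/mycontext/
        ProxyPassReverse / http://www.example.com:9090/mycontext/
        # only needed if the app sets cookies scoped to its context path:
        ProxyPassReverseCookiePath /mycontext /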

    Read the article

  • Allowing directory view/traversal for a specific VirtualHost in Apache 2.2

    - by warren
    I have the following vhost configured:

        <VirtualHost *:80>
          DocumentRoot /var/www/myvhost
          ServerName myv.host.com
          ServerAlias myv.host.com
          ErrorLog logs/myvhost-error_log
          CustomLog logs/myvhost-access_log combined
          ServerAdmin [email protected]
          <Directory /var/www/myvhost>
            AllowOverride All
            Options +Indexes
          </Directory>
        </VirtualHost>

    The configuration appears to be correct from the apachectl tool's perspective. However, I cannot get a directory listing on that vhost:

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    The error log shows the following:

        [Wed Mar 07 19:23:33 2012] [error] [client 66.6.145.214] Directory index forbidden by Options directive: /var/www/******

    Update 2: More recently, the following is showing up in the error log:

        [Wed Mar 07 20:16:10 2012] [error] [client 192.152.243.233] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/error/noindex.html

    Update 3: Today, the following is getting kicked out:

        [Thu Mar 08 14:05:56 2012] [error] [client 66.6.145.214] Directory index forbidden by Options directive: /var/www/<mydir>
        [Thu Mar 08 14:05:56 2012] [error] [client 66.6.145.214] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/error/noindex.html
        [Thu Mar 08 14:05:57 2012] [error] [client 66.6.145.214] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    This is after modifying the vhosts.conf file thusly:

        <VirtualHost *:80>
          DocumentRoot /var/www/<mydir>
          ServerName myhost
          ServerAlias myhost
          ErrorLog logs/myhost-error_log
          CustomLog logs/myhost-access_log combined
          ServerAdmin admin@myhost
          <Directory "/var/www/<mydir>">
            Options All +Indexes +FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    What is missing?

    Update 4: All subdirectories of the root directory produce directory listings properly; it is only the root that cannot.

    Read the article

  • Proxmox: VMs and different public IPs

    - by Raj
    I have a server which has two NICs, both directly connected to the internet. I have five public IP addresses available for the VMs. The host machine (Proxmox) doesn't need to use any of them (it will use a private IP, and that's all) but must have an internet connection. I've gone through the Proxmox documentation and I'm not able to understand the big picture well enough to set up the right network configuration for my needs. In short, what I have is:

        - One server (Proxmox, host machine)
        - On that server, 5 VMs
        - 5 public IP addresses available (one for each VM), say: 80.123.21.1, 80.123.21.2, 80.123.21.3, 80.123.21.4, 80.123.21.5

    What I have now for the host is the following:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto eth1
        iface eth1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.101
            netmask 255.255.255.0
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

        auto vmbr1
        iface vmbr1 inet manual

    It can be reached from the internal network, so that's OK. It has an internet connection, which is also OK. vmbr1 is going to be used by the VMs; each VM will have its own IP in its network interface configuration file. For some reason, the VMs have no internet access and cannot hold a public IP address. If I use NAT, it works correctly, but then they do not use the public IP addresses allocated to them. Am I missing something?
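    One thing that stands out in the interfaces file above: vmbr1 has no bridge_ports line, so it is a bridge with no physical uplink, and VM traffic never reaches the wire. A hedged sketch of a bridged layout, assuming eth1 is the NIC that should carry the VMs' public addresses:

        auto vmbr1
        iface vmbr1 inet manual
            bridge_ports eth1
            bridge_stp off
            bridge_fd 0

    Each VM would then configure its public IP and the provider's gateway directly on its own interface. Whether this works also depends on the provider: some route the extra IPs only to registered MAC addresses and require a routed setup instead of a bridged one.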

    Read the article

  • Public Folders - Delete Public Folders from 2003 after migrating to 2010 (via Adsiedit) - safe?

    - by HeavenCore
    Similar question: How do I delete a public store in Exchange 2003? We are ready to remove our Exchange 2003 server after having migrated all public folders and mailboxes to 2010. We ran for a week with the Exchange 2003 server shut down and everything seemed to work. When I try to delete the PF database from 2003, it says it contains replicas. While migrating I only had one-way sync working (from 2003 to 2010), so I believe 2003 hasn't received the responses from 2010 saying the replicas were removed. When I look in Public Folders on the 2003 box, none are listed; when I look in PF Instances, they are all listed. I know everything has moved to the 2010 server, and I know 2010 is not showing the 2003 server as a replica for any folders. I am looking to use ADSI Edit to remove the public folder database from the 2003 server, but want to ensure I am going to delete the right thing so that the folders do not get deleted from the 2010 database. Should I delete:

        Configuration > Services > Microsoft Exchange > <Company Name> > Administrative Groups > First Administrative Group > Servers > <Server Name> > Information Store > First Storage Group > Public Folder Store (<Server Name>)

    Or something else? I have checked, and the only public folder with the old Exchange server listed as a replica is SYSTEM CONFIGURATION. Thanks in advance.

    Read the article

  • Can anyone help me make my reverse proxy actually cache?

    - by Lenary
    Hi folks, I'm trying to configure a reverse caching proxy but so far have had no luck. I would prefer to use Apache (that will be all it is used for), but am open to solutions using other software that can also run on Mac OS X 10.6 (I have also tried Varnish and Squid, with no more luck). We're running a system with about 80 Mac mini clients that will be requesting lots of video from a server. To reduce load, we thought we could use Apache (which comes on the Macs by default) to cache this video forever (or at least as long as possible) on the Macs' disks. I have managed to get a reverse proxy set up with Apache using ProxyPass etc., but when I tried to add CacheEnable disk / to the configuration, nothing happened (I do have mod_disk_cache included). Can anyone help with my issue? The apache config file is here. Thanks in advance. Edit: So far I have been testing with smaller text files, and those haven't been cached either. This suggests the problem has nothing to do with the video itself, but with the cache configuration.
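    For reference, a minimal mod_disk_cache block for Apache 2.2 looks roughly like this (a sketch; the module paths and cache root are assumptions, and the CacheRoot directory must exist and be writable by Apache):

        LoadModule cache_module modules/mod_cache.so
        LoadModule disk_cache_module modules/mod_disk_cache.so

        CacheRoot /var/cache/apache-proxy
        CacheEnable disk /
        CacheDirLevels 2
        CacheDirLength 1
        # the default CacheMaxFileSize is about 1 MB, which silently skips video
        CacheMaxFileSize 2000000000

    Two common silent failures are that size limit, and origin responses lacking validators or carrying Cache-Control headers that forbid storage; CacheIgnoreNoLastMod On (and, cautiously, CacheIgnoreCacheControl On) make mod_cache less strict about those.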

    Read the article

  • Host is missing hostname and/or domain

    - by anlawang
    I use Puppet 0.25.4 on Ubuntu 10.04. When Puppet runs, I get the information below:

        Nov 29 10:30:30 puppet puppetmasterd[4422]: Host is missing hostname and/or domain: pclient.example.com
        Nov 29 10:30:30 puppet puppetmasterd[4422]: Compiled catalog for pclient.example.com in 0.02 seconds

    I don't know how to fix it; who can help me? Thank you! My configuration: I used apt-get to install Puppet, so some of the configuration was already set up. puppet.conf on the client:

        [main]
        server=puppet.example.com
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=false
        templatedir=$confdir/templates
        prerun_command=/etc/puppet/etckeeper-commit-pre
        postrun_command=/etc/puppet/etckeeper-commit-post
        certname=pclient.example.com
        node_name=cert

        [puppetd]
        runinterval=30

    puppet.conf on the server:

        [main]
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=true
        templatedir=$confdir/templates
        prerun_command=/etc/puppet/etckeeper-commit-pre
        postrun_command=/etc/puppet/etckeeper-commit-post

    I use the default node in site.pp. I am new to Puppet, so I don't know the reason for these problems! Thank you again!
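    That warning generally means Facter cannot determine the node's FQDN, leaving the hostname and/or domain facts empty. A quick hedged check on the client:

        hostname -f        # should print pclient.example.com
        facter fqdn        # should be non-empty

    If the FQDN comes back empty, the usual Ubuntu fix is an /etc/hosts entry such as:

        127.0.1.1   pclient.example.com   pclient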

    Read the article

  • OpenVPN - Cannot browse ipv4 websites

    - by user1494428
    I have set up an OpenVPN tunnel on my VPS (OpenVZ - Ubuntu 12.04). The problem is that I can only browse websites which support IPv6, like Google. http://whatismyv6.com/ reports that I have an IPv6 address, so I guess this is the problem. Server configuration:

        dev tun
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/server.crt
        key /etc/openvpn/easy-rsa/keys/server.key
        dh /etc/openvpn/easy-rsa/keys/dh1024.pem
        push "route 10.8.0.0 255.255.255.0"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        push "redirect-gateway def1"
        comp-lzo
        persist-tun
        persist-key
        status openvpn-status.log
        log /var/log/openvpn.log
        verb 3

    Client configuration:

        client
        remote xx.xx.xx.xx 1194
        dev tun
        comp-lzo
        ca ca.crt
        cert client1.crt
        key client1.key
        redirect-gateway def1
        verb 3

    I have configured NAT with this command:

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to xx.xx.xx.xx

    Can someone explain how I can make it work (forcing IPv4?)? I had the same problem with another VPS, and I also tried another client (all Windows 7).
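    One thing missing from the description above is IPv4 forwarding on the VPS; without it, the SNAT rule never sees forwarded packets, tunneled IPv4 dies on the server, and only IPv6 keeps working (redirect-gateway only captures IPv4, so IPv6 bypasses the tunnel entirely). A hedged check and fix, using standard Linux sysctls:

        sysctl net.ipv4.ip_forward           # 0 means forwarding is off
        sysctl -w net.ipv4.ip_forward=1
        echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf   # persist across reboots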

    Read the article

  • How to create location blocks in nginx for a single file but have it follow the rules of another location block in addition to its own?

    - by Ryan Detzel
    I have a location block for / that does all of my fastcgi stuff, with a normal timeout of 10s. I want to have different timeouts for certain paths (/admin, sitemap.xml). Is there an easy way to do this without copying the entire location block for each location?

        location /admin {
            fastcgi_read_timeout 5m;
            # also use the location info below
        }

        location /sitemap.xml {
            fastcgi_read_timeout 5m;
            # also use the location info below
        }

        location / {
            fastcgi_pass 127.0.0.1:8014;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
        }
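    A common pattern for exactly this, sketched under the assumption that the fastcgi_* lines above are the shared part: move them into a separate file and include it from every location, so each block only adds its own timeout:

        # /etc/nginx/fastcgi_app.conf (hypothetical file collecting the lines above)
        fastcgi_pass 127.0.0.1:8014;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # ...remaining shared fastcgi_param/fastcgi_* lines...

        location /admin {
            include /etc/nginx/fastcgi_app.conf;
            fastcgi_read_timeout 5m;
        }

        location / {
            include /etc/nginx/fastcgi_app.conf;
            fastcgi_read_timeout 10s;
        }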

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken. I got it to work without forcing the installation of any packages and without modifying the packages. However, after the upgrade, Compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow: it takes a few seconds for each switch. In addition to this, every effect that Compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen: empty screens switch at almost normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl. My Compiz configuration should be pretty basic. Xorg is likewise configured without anything special; the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and CompizConfig, trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce 200 series, so graphics card performance should naturally be no problem. EDIT: In the end nothing really worked, and I got really annoyed with the state of Compiz and its support in Debian. Many nVidia driver revisions have passed and I am using GNOME 3 now, so I am accepting the best answer to this question even though the issue was not resolved.

    Read the article

  • XAMPP server giving 404 error when requested by ipv4 connection

    - by boyb
    This is in reference to a previous question that I asked, which was answered by womble: http://serverfault.com/a/406280/127729

        So, now we have the real DNS records, we can do some diagnosis.
        dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good.
        dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good.
        This means that the problem is not DNS-related, as those records are in order.
        wget -6 bastosforum.strangled.net/ gives a 200 OK response.
        wget -4 bastosforum.strangled.net/ gives a 404 Not Found response.
        This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4.
        Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself.

    I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from Freenet6, and my domain is akosiboybastos.broker.freenet6.com. There is nothing fancy about the installation, just an out-of-the-box install (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As suggested by womble, this is a misconfigured webserver. To be honest I don't know where to start checking on the server, as it works fully if you use the domain given by Freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers. Regards, boyb
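    A plausible first thing to rule out (a sketch, since the actual XAMPP config isn't shown): if Apache only knows the Freenet6 name, a request arriving with Host: bastosforum.strangled.net can fall through to a different default vhost on the IPv4 listener, while the IPv6 listener happens to hit the right one. Adding the test domain as an alias in httpd-vhosts.conf would test that theory:

        <VirtualHost *:80>
            ServerName akosiboybastos.broker.freenet6.com
            ServerAlias bastosforum.strangled.net
            DocumentRoot "C:/xampp/htdocs"
        </VirtualHost>

    The DocumentRoot shown is the XAMPP default and an assumption here.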

    Read the article

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent considerable time reading as many blogs and articles as I can to help me work out why my DirectAccess VM (running on Hyper-V) has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Up until last week, DirectAccess was fully functional and had no issues. This is a recent problem, preceded by a number of consistent crashes on the DA machine when access was attempted. Upon reboot all seemed well, until recently. I am not certain whether it is relevant, but before this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Active Directory Lightweight Directory Services (AD LDS) instance, which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service again. For good measure I re-bound the network adapters (virtualised through Hyper-V), and DirectAccess claimed to be all happy again. However, as it stands, the machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, but the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

    Read the article

  • Strange issue in header location redirect

    - by hd01
    I have three websites (example1.com, example2.com, example3.com) hosted on a server. There is a page (test.php) on example1.com with just the code below inside it:

        <?php
        header('Location: http://example2.com/a.php');
        ?>

    When I browse test.php, it goes to http://example1.com/a.php. It doesn't treat this as another domain's URL; it tries to find the page on itself. But when I put http://google.com instead of example2.com/a.php, it works correctly. I'm really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.) P.S.: The server is behind a Pound server. Here's the Firebug Net output for example1.com/test.php:

        Response headers:
        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

        Request headers:
        Accept            text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding   gzip, deflate
        Accept-Language   en-us,en;q=0.5
        Connection        keep-alive
        Cookie            mycookie
        Host              example1.com
        User-Agent        Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
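    The headers above exonerate PHP: Apache emits a 302, but the Location value reaching the browser has been rewritten to the requesting host, which is what Pound does by default to Location headers it believes point at one of its backends. If that is the cause here, Pound's RewriteLocation directive is the hedged fix (the listener/backend addresses below are placeholders):

        ListenHTTP
            Address 0.0.0.0
            Port    80
            # 0 = never rewrite Location headers in backend responses
            RewriteLocation 0
            Service
                BackEnd
                    Address 127.0.0.1
                    Port    8080
                End
            End
        End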

    Read the article

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server

    I've seen this a few times, but only during initial setup, where it's often caused by one of the following:

        - The database server is turned off
        - The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall)
        - The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled)

    However, if I assume that no configuration changes have been made, I'm trying to work out what the reason might be for getting this error at a random point after the initial setup. My initial thought is that the SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server. Is this a valid theory? What other possible causes are there of this error that are not related to the initial setup of the server/application connection? Or is it simply impossible that this error could occur without a configuration change having been made (on the SQL Server side, the application side, or somewhere in between on the network)? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).

    Read the article

  • Cannot browse ipv4 websites (OpenVPN )

    - by user1494428
    I have set up an OpenVPN tunnel on my VPS (OpenVZ - Ubuntu 12.04). The problem is that when I'm connected to the VPN, I can only browse websites which support IPv6, like Google. IPv4 sites don't load (no error, just waiting indefinitely). http://whatismyv6.com/ reports that I have an IPv6 address, so I guess this is the problem. Server configuration:

        dev tun
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/server.crt
        key /etc/openvpn/easy-rsa/keys/server.key
        dh /etc/openvpn/easy-rsa/keys/dh1024.pem
        push "route 10.8.0.0 255.255.255.0"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        push "redirect-gateway def1"
        comp-lzo
        persist-tun
        persist-key
        status openvpn-status.log
        log /var/log/openvpn.log
        verb 3

    Client configuration:

        client
        remote xx.xx.xx.xx 1194
        dev tun
        comp-lzo
        ca ca.crt
        cert client1.crt
        key client1.key
        redirect-gateway def1
        verb 3

    I have configured NAT with this command:

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to xx.xx.xx.xx

    Can someone explain how I can make it work (forcing IPv4?)? I had the same problem with another VPS, and I also tried another client (all Windows 7).

    Read the article

  • glassfish timeout

    - by Stefano
    Environment:

        Windows 2008 Server Edition
        NetBeans 6.7.1
        GlassFish 2.1
        Apache 2.2.15 for win32

    Original problem (almost fixed): an HTTP/1.1 GET request used to send data fails if I wait for more than 30 seconds. What I did: I added these lines to Apache's httpd.conf:

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 9000

        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

    I then went to the GlassFish admin panel (localhost:4848), under Configuration > HTTP Service, and set:

        Request timeout: 9000 seconds (it was 30)
        Standby time: -1 (it was 30 seconds)

    Problem: I am not able to set a GlassFish timeout bigger than 2 minutes for a GET request. I found an article about GlassFish settings, but I'm not able to find WHERE I should put those parameters, or whether they would work at all. Can anybody help me raise this timeout? Maybe it's even a different setting? New attempted solution: I went to the admin panel, under Configuration > Subprocesses > "thread-pool-name", and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the GlassFish service (both from Administrative Tools and with asadmin), but it still goes idle after 120 seconds. I even tried restarting the whole server, still no results. Maybe some setting in Postgres? Or in the connection from NetBeans to Postgres through GlassFish?

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing, because it provides a true hardware failover solution, with the DB stored on a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or with RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware (hardware iSCSI cards, TOE NICs), will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

    Read the article
