Search Results

Search found 33151 results on 1327 pages for 'www browser'.

  • USB forwarding from dom0 to domU

    - by Karolis T.
    What are my options for forwarding two USB-connected phones to a Xen guest? I've read about PCI passthrough (http://www.wlug.org.nz/XenPciPassthrough), but I'm sure the USB controller in the server isn't a PCI card. There's device-level forwarding, but I need to forward two devices, and this post doesn't say how to do that: http://www.olivetalks.com/2008/02/03/usb-forwarding-on-xen-it-just-does-not-work/ Would something as simple as this work?

        usbdevice = [ 'host:xxx', 'host:yyy', ]

    EDIT: I'm now starting a bounty. This is really important for me and for other people too; I'm hoping someone who has resolved this will be able to help.
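
    To make the idea concrete, here is a hedged sketch of such a guest configuration in xm syntax. The vendor:product IDs are placeholders, and whether usbdevice accepts a list rather than a single value depends on the Xen/qemu-dm version, so treat this as something to test, not a confirmed fix:

        # Hypothetical sketch for an HVM guest config file (xm syntax).
        # The IDs below are placeholders -- substitute the vendor:product
        # pairs that `lsusb` reports for the two phones in dom0.
        usb = 1
        usbdevice = [ 'host:0fca:8004', 'host:0421:00aa' ]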

  • Overriding Apache auth directive

    - by Machine
    Hi! I'm trying to allow public access to a method that generates a WSDL file for our API. The rest of the site is behind basic auth protection. Can you guys take a look at the following virtual-host configuration and see why the override does not take place?

        <VirtualHost *:80>
            ServerName xyz.mydomain.com
            DocumentRoot /var/www/dev/public
            <Directory /var/www/dev/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                SetEnv APPLICATION_ENV testing
            </Directory>
            <Location />
                AuthName "XYZ Development Server"
                AuthType Basic
                AuthUserFile /etc/apache2/xyz.passwd
                Require valid-user
            </Location>
            <Location /api/soap/wsdl>
                Satisfy Any
                allow from all
            </Location>
        </VirtualHost>
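
    For comparison, a minimal sketch of one common variant of that override block under Apache 2.2: Satisfy Any lets a request pass if either the auth rule or a host-based rule succeeds, so the open Location needs its own Order/Allow pair to supply that host-based rule. This is a guess at the missing piece, not a confirmed fix:

        <Location /api/soap/wsdl>
            # Host-based rule for Satisfy Any to fall back on (Apache 2.2 syntax)
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>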

  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I reached the point where I wanted to switch from Apache to nginx, because I kept reading that it performs much better. Now I've done the switch and, I have to say, nginx is keeping up just fine. However, PHP-FPM is becoming a problem. Where the PHP pages used to take 0.1 seconds to generate, under the same load they now take around 3 seconds! Furthermore, the error.log from nginx is being spammed with errors like:

        upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using Unix sockets instead, but those complain about the following:

        connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there, but nothing seems to work. Changing pm.max_children doesn't seem to help much either, but at its current value of 350 it seems to be the lesser of all evils. The server has 3 GB RAM (not all of it free, since a MySQL server is also running) along with two dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough?

    EDIT: Here is the nginx server block:

        server {
            listen 80;
            listen [::]:80 default ipv6only=on;
            root /var/www;
            index index.php index.html index.htm;
            server_name localhost;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                try_files $uri = 404;
                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    And the php-fpm pool:

        [www]
        user = www-data
        group = www-data
        listen = 127.0.0.1:9000
        ;listen = /tmp/php5-fpm.sock
        listen.backlog = -1
        pm = dynamic
        pm.max_children = 350
        pm.start_servers = 200
        pm.min_spare_servers = 10
        pm.max_spare_servers = 350
        pm.max_requests = 1536
        rlimit_files = 65536
        rlimit_core = unlimited
        chdir = /
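
    As a rough illustration of the sizing concern: 350 workers on a 3 GB box shared with MySQL will start swapping long before all of them are busy, which is consistent with both the connect timeouts and the socket's "Resource temporarily unavailable". A hedged sketch of a more conservative pool, with purely illustrative numbers (the real ceiling is free RAM divided by the measured per-worker resident size):

        ; Illustrative values only -- measure your own per-worker RSS first,
        ; e.g. with `ps -o rss,cmd -C php5-fpm`, and size max_children from that.
        pm = dynamic
        pm.max_children = 40
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500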

  • Apache Virtual Host Configuration

    - by Carl
    I have been searching the internet for an hour now, and I was hoping for a quick hint here so that I could solve my problem a wee bit faster. My virtual server is so far only accessible through an IP address; there is no DNS entry yet, and so far none is needed either. The problem I have is with Apache2: the virtual hosts are puzzling me. What I need is: access to my project (based on Symfony2) from the outside via the IP address, and access to my project from localhost. What I have got: access from the outside renders the websites in /var/www/vhosts/htdocs/default, while access from the inside renders the websites in /var/www. Why the difference? What is a recommended configuration for my use case?
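
    A minimal sketch of one way to satisfy both access paths, assuming Apache 2.2 and a Symfony2 project whose web root lives at an illustrative /var/www/myproject/web (the path is a placeholder, not taken from the question): a single default catch-all vhost answers both the public IP and localhost, so both render the same tree.

        # The first (default) vhost catches any Host header, including the bare IP.
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /var/www/myproject/web
            <Directory /var/www/myproject/web>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>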

  • ISP issue browsing "sonos.com" - need to diagnose and prove [closed]

    - by john
    I am unable to browse to the website "sonos.com" with my ISP (Virgin). I have ruled out browsers, PCs, Macs, routers, wifi, etc. Other ISPs (even other Virgin connections in different areas!) serve this site no problem. I am 99% convinced there is a DNS issue lurking here; there is something fishy about the DNS for the site. What I notice is that online DNS sites tell me the right IP address for "sonos.com", but not for "www.sonos.com". Anyway, when I type "sonos.com", the browser (any/all of the 4 I tried) fails to display the page. Firefox gives a "connection was reset" error. If I browse to sonos.com using the IP address, it works OK. Browsing to www.sonos.com or sonos.com works fine with other ISPs, of course. Questions:

    1. Does anyone have any idea what might be going on here?
    2. Any suggestions as to tools/monitors to help investigate/prove what is going on? I can then take this up with Virgin and/or Sonos.

    Thanks
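
    As a starting point for the "tools to prove it" part, a sketch of a shell session that separates the DNS question from the connectivity question. The public resolver address is illustrative, and curl's --resolve option needs curl 7.21.3 or later:

        # Compare what your ISP's resolver and a public resolver return:
        dig sonos.com +short
        dig www.sonos.com +short
        dig @8.8.8.8 www.sonos.com +short
        # Then test HTTP against the known-good IP while sending the right
        # Host header, which tells DNS problems apart from path/filtering ones:
        curl -v --resolve www.sonos.com:80:<ip-from-dig> http://www.sonos.com/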

  • Troubleshooting Internet Explorer 7.0 Issues

    Introduction: Internet Explorer 7 (IE7) is light years ahead of its predecessors, but by no means does that proclamation mean that the browser is perfect. You are still going to encounter issues wit... [Author: SalemHassan - Computers and Internet - September 03, 2009]

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I'm very pleased to announce that I'll be delivering a pre-conference at PASS Summit 2012. I'll speak about Business Intelligence again (as I did in 2010), but this time I'll focus only on the Data Warehouse, since it's a big topic even on its own. I'll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, drawing on the experience I have gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I'm sure you'll like it, especially if you're starting to create a BI solution and you're wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and in-memory databases, and what's the correct path to follow in order to have a successful project up and running. Besides this pre-conference, I'll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we'll dive into the most useful DMVs, so that you'll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!

  • mod_rewrite issue | Request exceeded the limit of 10 internal redirects

    - by Chris Anarko Meow
    OK, what I'm doing normally works, but since my rule "includes" itself it's giving me issues, and I can't find a solution after hours of working on different options. I have a .htaccess with:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} !^/3.15.0/(.*)
        RewriteRule ^(.*)$ /3.15.0/$1 [L]

    This is for my software versions. I have a program that sometimes requests versions that are a couple behind the latest one on the server, so I want to be able to say: whatever is coming in, forward it to the latest version, which in this example is 3.15.0 (/var/www/nameblabla/3.15.0). My .htaccess is at /var/www/nameblabla/.htaccess, so the first condition is there to ignore requests that already have the right path and version, and the rule should grab all other requests and forward them to 3.15.0, without losing the path to the files inside it. So far I can only get it to redirect to that directory but lose the path, or else I get "Request exceeded the limit of 10 internal redirects". I guess this is because I'm including the 3.15.0 path. Any help, or another way to do this without mod_rewrite?
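
    One hedged variant that often breaks this kind of loop: in per-directory (.htaccess) context the rule pattern sees the path without its leading slash, so testing the rule's own capture avoids any mismatch with %{REQUEST_URI}, and escaping the dots keeps "3.15.0" from matching unintended paths. A sketch, not a verified fix:

        RewriteEngine On
        RewriteBase /
        # $1 refers to the capture of the RewriteRule below; Apache evaluates
        # the rule pattern before its conditions, so this stops re-rewriting.
        RewriteCond $1 !^3\.15\.0/
        RewriteRule ^(.*)$ /3.15.0/$1 [L]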

  • Does the Ubuntu One sync work?

    - by bisi
    I have been at this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I have removed all devices, removed stored passwords, killed all U1 processes, logged out and back in online... and still, the tick in the file browser ("Synchronize this folder") is loading and loading and loading. I have also logged out and rebooted countless times, and this is after me somehow managing to get the U1 preferences to finally "connect" again. I have checked the status of your services, and none of them is close to what I am experiencing. And I have checked the suggested related questions above! So please just confirm whether it is a problem on my side or on your side.

    EDIT: In the meantime, here is what has changed, on top of what is mentioned just above:

    • My files went from 0 MB to 71.9 MB, and the number is still rising.
    • My first folder of 400.2 MB is being filled with the data as I write this. The second folder has the folder sub-structure in place.
    • Both folders now show in the file browser that they will be synchronized.

    I believe that right now it is all back to normal and working fine; I guess that's what a good night's sleep can do ;). We're now only back to the point where synchronizing is slow, but that will pick up with the release of Natty (https://wiki.ubuntu.com/UbuntuOne/FAQ/WhyIsItTakingSoLongForMyFilesToSync). But to get to the questions: my About dialog says I use 11.04, Natty Narwhal, but I am quite sure the last distribution I installed was 10.10. Folder A is 400.2 MB and folder B is 29.5 MB. I am on a DSL line, behind a regular fritz.box setup. No proxy servers are in use, and I did not install any particular firewall features; no physical firewall, just the router (on which I have a TV signal as well) and 2 switches to get to this floor. Status: inactive. The ubuntuone-indicator opens the same window as when I click on my name in the top-right corner and select Ubuntu One, or choose Ubuntu One in the Control Center. It wasn't supposed to go further than this, was it?

  • Rsync and Windows 7

    - by Nate
    Can someone give me any tips on setting up some sort of rsync server/client on Windows 7, to run rsync between both my web hosting server and a backup server that I have running Ubuntu? I've tried setting it up with this tutorial: http://www.youtube.com/watch?v=CvwdkZLNtnA using copssh and cwRsync. I ran into all sorts of trouble, including not being able to get cwRsync to run (it installs properly but never starts up) and copssh not generating the keys at all. The guy in the video was running Windows Server 2003, though, so I'm guessing the problems could just be because I'm running Windows 7. I've been trying to set it up with my Windows machine as the rsync server, and then Ubuntu and my web hosting VPS as the clients, but I realize it may be easier (and make more sense) to just set up the rsync server on Ubuntu and an rsync client on Windows 7. Can anyone point me in the right direction? I'm thinking of using this guide: http://www.gaztronics.net/rsync.php It seems a bit outdated, though.
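
    If the roles are flipped as suggested (sshd or an rsync daemon on the Ubuntu box, cwRsync as a plain client on Windows 7), the Windows side reduces to one command. A sketch with placeholder host and paths; cwRsync exposes Windows drives under Cygwin-style /cygdrive paths:

        # Push C:\sites to the Ubuntu backup server over SSH (placeholders throughout).
        rsync -avz --delete /cygdrive/c/sites/ user@ubuntu-backup:/srv/backups/win7/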

  • Authentication issues setting up iRedMail on Debian

    - by Sergio Rinaudo
    I'm setting up a mail server using iRedMail. Following the official iRedMail installation guide (http://www.iredmail.org/install_iredmail_on_debian.html) and the Digital Ocean guide (https://www.digitalocean.com/community/articles/how-to-install-iredmail-on-ubuntu-12-04-x64), I was able to install iRedMail without any problems, so I have all the services up and running. I can configure domains and emails using iRedAdmin, BUT I have problems both sending and receiving email: what I get from Roundcube is an 'Authentication error' when trying to send an email, and I can't receive anything either. I also tried to connect to the MX server using telnet; it connects, but after the STARTTLS command, when I start to write "MAIL FROM:", the connection is lost. Something in the configuration is not working (at the moment I have the configuration written by the iRedMail installation), but I do not know where. I hope someone can enlighten me! Thank you
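
    One detail worth ruling out before blaming the configuration: plain telnet cannot continue a session after STARTTLS, because the channel switches to TLS at that point, so "connection is lost" is the expected symptom with telnet. A sketch of the equivalent test with a TLS-aware client (the host name is a placeholder):

        # Negotiate STARTTLS properly, then try EHLO / MAIL FROM inside the TLS session.
        openssl s_client -connect mail.example.com:25 -starttls smtp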

  • Desktop Fun: Mountain Travel Wallpaper Collection

    - by Asian Angel
    Traveling in the mountains can be an invigorating experience, whether you are climbing to a specific height or going on an extended journey across them to the other side. Start your own epic journey to the heights of beauty on your desktop with our Mountain Travel Wallpaper collection.

  • Lighttpd mod_rewrite conversion from .htaccess format

    - by hoball
    Hello, I am using lighttpd as my web server and am having an issue with mod_rewrite. Currently I have a set of Apache .htaccess rewrite rules from a PHP script:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^(.*)$ index.php [QSA,L]

    In my understanding, if the requested URI is not a file/directory/symlink, it is appended to index.php, e.g. www.a.com/hello/world --> www.a.com/index.php/hello/world. I attempted to convert this to the lighttpd equivalent:

        url.rewrite-if-not-file = ( "^(.*)$" = "index.php/$1" )

    However, it doesn't work. I suspect that is due to misuse of $1. I tried $0/%0 and a few other things, but they fail. Would you please give me a hint on making the syntax work? Thank you!
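
    For what it's worth, a hedged sketch of the usual lighttpd (1.4.24+) form of this rule: the key-value arrow must be =>, not = (the version quoted above would be rejected as a syntax error), and note that url.rewrite-if-not-file only exempts existing files, not directories or symlinks, so it is a close but not exact match for the three Apache conditions:

        # "=>" is required; leading slashes keep the rewritten path absolute.
        url.rewrite-if-not-file = ( "^/(.*)$" => "/index.php/$1" )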

  • Logfile software for making queries, extracting and other operations

    - by Juw
    I have written an app that connects to an IIS 6 server to retrieve information. When doing this I collect data (phone model etc.) and send it to the server with a regular HTTP GET call like this:

        http://www.myserver.com/getData.php?phonemodel=userphone&appversion=2&id=20

    This is logged in the IIS log files. I thought of writing my own parser for the log files, but why reinvent the wheel? I'm looking for software that can read the IIS 6 log files. I would like it to be able to do:

    • Extraction - extract all lines that contain: www.myserver.com/getData
    • Filtering - view all lines where the HTTP code is not 200
    • Queries - view all lines where phonemodel=iphone

    Any tips on free software that can help me with this? Thanks in advance!
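
    One free tool that covers all three operations is Microsoft's Log Parser 2.2, which queries IIS W3C logs with SQL-like syntax. A sketch of the three queries; the exact field names depend on which W3C fields the site actually logs:

        LogParser -i:IISW3C "SELECT * FROM ex*.log WHERE cs-uri-stem LIKE '%getData%'"
        LogParser -i:IISW3C "SELECT * FROM ex*.log WHERE sc-status <> 200"
        LogParser -i:IISW3C "SELECT * FROM ex*.log WHERE cs-uri-query LIKE '%phonemodel=iphone%'"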

  • configuring cgi-bin using .htaccess

    - by Alexandru
    I'm trying to configure a directory as cgi-bin using .htaccess, but when I try to access the executables, the files are downloaded. I'm using apache2.2. What is the problem? My .htaccess looks like:

        # cat www/cgi-bin/.htaccess
        Options +ExecCGI
        AddHandler cgi-script cgi pl

    File permissions are:

        # ls -1la www/cgi-bin/
        total 60
        drwxr-xr-x 2 root root  4096 iun 10 19:22 .
        drwxr-xr-x 5 root root  4096 iun 10 19:18 ..
        -rw-r--r-- 1 root root    46 iun 10 19:23 .htaccess
        -rwxr-xr-x 1 root root 15358 iun 10 19:23 paperload.cgi
        -rwxr-xr-x 1 root root 12728 iun 10 19:23 papers.cgi
        -rwxr-xr-x 1 root root 12593 iun 10 19:23 paperview.cgi
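
    A hedged guess at the culprit: if the enclosing <Directory> block in the main config says AllowOverride None (the default in many distributions), the .htaccess is silently ignored and the scripts are served as plain files, which matches the download symptom. A sketch for the main server config; the path follows the question's layout:

        <Directory /var/www/cgi-bin>
            # "Options" permits the Options directive, "FileInfo" permits AddHandler.
            AllowOverride Options FileInfo
        </Directory>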

  • How to solve virtual host issue

    - by Webnet
    I have multiple sites, all set up the same as below, except that "bk" is something else in each case:

        NameVirtualHost *:80
        <VirtualHost bk:80>
            ServerName bk
            DocumentRoot /var/www/bk.com/
        </VirtualHost>

    and I get these errors when restarting Apache:

        [Mon Jan 17 10:28:56 2011] [error] VirtualHost bk:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Mon Jan 17 10:28:56 2011] [warn] NameVirtualHost bk:80 has no VirtualHosts

    I don't get it... the other 2 sites I have virtual host configurations for, set up this exact same way, don't throw any errors.

    Update: one error message fixed; here's where I'm at now:

        <VirtualHost bk:80>
            ServerName bk
            DocumentRoot /var/www/bk.com/
        </VirtualHost>

        [Mon Jan 17 10:28:56 2011] [error] VirtualHost bk:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
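
    A hedged sketch of the shape that usually silences both messages: the error complains about mixing the wildcard in NameVirtualHost *:80 with the non-wildcard address bk:80, so the fix is to make the two match. Note that a bare name like bk also has to resolve (hosts file or DNS) for requests to reach it at all:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName bk
            DocumentRoot /var/www/bk.com/
        </VirtualHost>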

  • How to install PyQt on Mac OS X 10.6

    - by Albert
    I want to install PyQt. This seems kind of complicated to install on OS X. I haven't found any precompiled packages of it (are there any? I would really prefer those). So I downloaded PyQt, and SIP, because it depends on that. These files:

        http://www.riverbankcomputing.co.uk/static/Downloads/PyQt4/PyQt-mac-gpl-4.7.3.tar.gz
        http://www.riverbankcomputing.co.uk/static/Downloads/sip4/sip-4.10.2.tar.gz

    On SIP I did a python configure.py && make && sudo make install; it installed without any problems. I tried the same on PyQt, and it failed, of course:

        /Library/Frameworks/QtCore.framework/Headers/qglobal.h:288:2: error: #error "You are building a 64-bit application, but using a 32-bit version of Qt. Check your build configuration."

    OK, so I tried with python configure.py --use-arch=i386. Same error. Any ideas?
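
    A hedged sketch of the usual workaround from that era: since the installed Qt is 32-bit, both SIP and PyQt must be configured by a Python process that is itself running 32-bit, which `arch -i386` forces on OS X 10.6. The extra arch flags are from memory of sip 4.10 / PyQt 4.7 and should be checked against each configure.py --help:

        # Build SIP 32-bit, then PyQt 32-bit, under a 32-bit interpreter.
        cd sip-4.10.2
        arch -i386 python configure.py --arch=i386
        make && sudo make install
        cd ../PyQt-mac-gpl-4.7.3
        arch -i386 python configure.py --use-arch=i386
        make && sudo make install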

  • Server speed: sharing one script.php or using many copies of the same script.php

    - by Marco Demaio
    Let's assume I have thousands of domains on the same Apache server. Each domain is in a folder under the server's public_html document folder, so it can be accessed by calling "www.somedomain.com" or by calling "www.serverdomain.com/somedomain_folder". In each domain there is a website that needs a certain script.php (identical for each domain). From a coding point of view, it's obvious that it's better to use a single script.php: when I update it with new features/bug fixes etc., I only need to update one file on the server, and it will work for all domains. But what about from a server point of view? If I use a single script, all domains will access it at the same time. Will the server run slower compared to the situation where each domain calls its own copy of the script?

  • How to Disable Compatibility Mode in Internet Explorer

    - by Taylor Gibb
    Compatibility mode in IE is a feature that helps you view webpages that were designed for previous versions of the browser; however, having it enabled can break newer sites that were designed for modern browsers. Here's how to disable it and make sure it only runs for older sites.

  • Application outside document root in Apache/CentOS

    - by liz
    I have a PHP application running under Apache on CentOS 6. The document root points to a specific app folder: /var/www/my-project/app. I'm trying to get phpMyAdmin running on the same server, but I don't want to put it in the application folder. Instead I'd like to put it here: /var/www/apps/phpmyadmin. I'm using a subdomain for the server. What's the easiest way for me to get access to phpMyAdmin? Another subdomain? A sub-subdomain? Redirecting a folder?
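
    Probably the least invasive route is an Alias inside the existing vhost, which maps a URL path onto a directory outside the document root. A sketch using Apache 2.2 syntax (as shipped with CentOS 6); the URL prefix is arbitrary:

        Alias /phpmyadmin /var/www/apps/phpmyadmin
        <Directory /var/www/apps/phpmyadmin>
            Order allow,deny
            Allow from all
        </Directory>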

  • Troubleshooting a slow database server with no load

    - by user1721724
    I'm getting ready to soft-launch my website, and I've run into some problems with what I think is my MySQL database running on Fedora. All websites run fine, just as I'd expect, but any page that establishes a database connection hangs until the connection is established, and then, bang, the site loads as it should. For example, my landing page (http://www.thrusong.com) doesn't make a database connection and loads quickly, while user profile pages (http://www.thrusong.com/john) make a database connection and load slowly, even though most of the data comes from memcached and the database currently has no load on it. This problem came up yesterday, when my router died and I began using my Pace 2Wire modem with its built-in router. Before, my old router was set to handle everything. My ISP says the settings in the modem are correct. Any ideas? Thanks in advance.
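
    One hedged guess that fits the symptoms (a fixed delay only on connections, appearing right after a router change): MySQL performs a reverse-DNS lookup on each connecting client, and if the new modem's DNS is slow to answer, every connect stalls until that lookup times out. A sketch of the usual mitigation; check first that no GRANTs rely on hostnames, since they stop matching with this setting:

        # /etc/my.cnf
        [mysqld]
        skip-name-resolve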

  • Will Beej's Guide to Network Programming point me in the right direction to be able to make multiplayer games and a web browser?

    - by Logan545
    I'm new to socket programming in C, and I've found Beej's Guide to Network Programming. It looks fine and all; however, I just wanted to ask whether this tutorial will point me in the right direction in terms of network programming. I plan to build a game in OpenGL that will be multiplayer, using C++, and possibly a web browser. I know this tutorial would by no means teach me how to do all that, but would it be a good way to start off on my path?

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules, and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise):

        var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi"
        var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi"
        var.viewvcstatic = "/var/www/vcs/templates/docroot"
        var.vcs_errorlog = "/var/log/lighttpd/error.log"
        var.vcs_accesslog = "/var/log/lighttpd/access.log"

        $HTTP["host"] =~ "domain.tld" {
            $SERVER["socket"] == ":443" {
                protocol = "https://"
                ssl.engine = "enable"
                ssl.pemfile = "/etc/lighttpd/ssl/..."
                ssl.ca-file = "/etc/lighttpd/ssl/..."
                ssl.use-sslv2 = "disable"
                setenv.add-environment = ( "HTTPS" => "on" )
                url.rewrite-once += ( "^/mercurial$" => "/mercurial/" )
                url.rewrite-once += ( "^/$" => "/viewvc.fcgi" )
                alias.url += ( "/viewvc-static" => var.viewvcstatic )
                alias.url += ( "/robots.txt" => var.robots )
                alias.url += ( "/favicon.ico" => var.favicon )
                alias.url += ( "/mercurial" => var.hgwebfcgi )
                alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi )

                $HTTP["url"] =~ "^/mercurial" {
                    fastcgi.server += ( ".fcgi" => ( (
                        "bin-path" => var.hgwebfcgi,
                        "socket" => "/tmp/hgwebdir.sock",
                        "min-procs" => 1,
                        "max-procs" => 5
                    ) ) )
                } else $HTTP["url"] =~ "^/viewvc\.fcgi" {
                    fastcgi.server += ( ".fcgi" => ( (
                        "bin-path" => var.viewvcfcgi,
                        "socket" => "/tmp/viewvc.sock",
                        "min-procs" => 1,
                        "max-procs" => 5
                    ) ) )
                }

                expire.url = ( "/viewvc-static" => "access plus 60 days" )
                server.errorlog = var.vcs_errorlog
                accesslog.filename = var.vcs_accesslog
            }
        }

    Now, when I access domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), they are of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index-file mechanism somehow? The goal is to keep the /mercurial alias functional. So far I've sifted through the lighttpd book from Packt and through the lighttpd documentation, but found nothing that seemed to match the problem.
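
    A hedged sketch of one approach: instead of rewriting only "/" to the fcgi, mount ViewVC at the root for everything that is not one of the other aliases, and let lighttpd's fix-root-scriptname option (added around 1.4.23; verify against your build) keep SCRIPT_NAME empty so ViewVC generates links without the viewvc.fcgi segment. The exclusion list mirrors the aliases above:

        $HTTP["url"] !~ "^/(mercurial|viewvc-static|viewvc\.fcgi|favicon\.ico|robots\.txt)" {
            fastcgi.server = ( "/" => ( (
                "bin-path" => var.viewvcfcgi,
                "socket" => "/tmp/viewvc.sock",
                "check-local" => "disable",
                "fix-root-scriptname" => "enable"
            ) ) )
        }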
