Search Results


  • Not recognizing second monitor after hibernate (Windows 7, Dell D630 laptop)

    - by Brooks Moses
    I have a Dell Latitude D630 laptop which I've recently updated to Windows 7 64-bit. (The Dell site confirms that it's Windows-7-compatible.) Normally it lives in a docking station with a second monitor connected to the DVI port on the docking station, and I use the second monitor in a multi-monitor configuration with the laptop screen. Sometimes I undock the laptop and use it separately.

    Here's the problem: if I hibernate the laptop while undocked, and then power it back up in the docking station, it does not recognize the second monitor. By which I mean that not only does it not share the desktop onto the second monitor, but if I go into the control panel for display settings and press "Detect", it does not even detect the existence of the second monitor. I can tell it to "use the VGA port anyway" for a second monitor, but the monitor is connected to a DVI port on the docking station, so that doesn't do anything useful.

    If I entirely reboot the laptop while it's connected to the docking station, it has no problem recognizing the second monitor and using it. But then, if I hibernate, undock, de-hibernate while undocked and re-hibernate, and then re-dock and de-hibernate, it's back to not recognizing the second monitor again.

    I'm reasonably certain that this is not a limitation of the hardware; this worked fine on Windows XP. I'm currently using the Windows 7 driver for my video card. I attempted to use the video driver from the Dell website for this laptop, but Dell only provides Vista 64-bit drivers, not Windows 7 64-bit drivers. Their "Windows 7 compatibility" page suggests that the Vista drivers should work, but when I attempted to install the driver, it gave me a "this operating system not supported" error and refused to install. Any further ideas?

  • iTunes iPhone sync stuck on "Waiting for items to copy"

    - by lindes
    superuser crowd, I've found a number of threads about this, but have yet to find a satisfactory answer. Perhaps I'll have better luck on this site?? I hope so...

    I'm having a problem with iTunes where syncing my iPhone ends up seeming to basically finish, but it says "Waiting for items to copy" after everything else, and just stays there for... well, a very long time, if I let it. Oh, and I'm not sure this is always the case, but at the moment, it's doing this while "Syncing Genius Data to [my iPhone] (Step 8 of 8)". If I click the little x in iTunes, it then gets stuck on "Canceling sync", for an equally indefinite sort of period. If I simply unplug the phone, everything seems to be synced and happy, but the next time I sync, it happens again.

    I presume this is a bug of some sort on Apple's part, but it seems like people have found workarounds... I'm just having trouble tracking one down that (a) is well described enough that I can actually follow it, (b) has enough detail that I can do it without losing data (i.e. tells me pathnames that I might want to copy first, or the like, before telling me to remove something) (note: see also point (a) -- I can't remove it if it's telling me to remove something that I don't know where it is!), and (c) otherwise seems sane.

    I'm hoping that here perhaps I'll get better luck -- with either a workaround, and/or debugging tips for figuring out how to find myself a workaround. Note: I'm busy with some other things at the moment, but can try to add some additional information later, if necessary.

  • Page reload needed several times before loading normally

    - by tim peterson
    Sorry my question is so vague; I just have no idea where to start in solving it, and I'm quite a novice with servers. Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2 instance) has this issue where I need to load a page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple of seconds). It will stall again if I don't load pages for a few minutes.

    It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only thing to my knowledge that I changed is that I installed mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned it off by setting my pagespeed.conf to "mod_pagespeed off". Unfortunately, that didn't solve the problem.

    I'm looking for general advice on how to troubleshoot this problem. For instance, are there Linux commands to check page loading performance? Also, it looks like I have lots of new error logs in my /var/log/apache2 directory, which I believe weren't there a few months ago. Lots of this:

        error.log        RewriteLog.log.24.gz  ssl_access.log.40.gz
        error.log.1      RewriteLog.log.25.gz  ssl_access.log.41.gz
        error.log.10.gz  RewriteLog.log.26.gz  ssl_access.log.42.gz
        error.log.11.gz  RewriteLog.log.27.gz

    Any thoughts? Thank you, Tim.
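    One widely available way to see where a load stalls is curl's timing variables, which break a request into DNS, connect, TLS, first-byte, and total phases (a minimal sketch; the URL is a placeholder for the real site):

        # Time each phase of an HTTPS request
        curl -o /dev/null -s -w 'dns: %{time_namelookup}s\nconnect: %{time_connect}s\ntls: %{time_appconnect}s\nfirst byte: %{time_starttransfer}s\ntotal: %{time_total}s\n' https://example.com/

    Running it during a stall and again during a fast load shows which phase accounts for the difference.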

  • Managing access to multiple Linux systems

    - by Swartz
    I searched for answers but have found nothing on here... Long story short: a non-profit organization is in dire need of modernizing its infrastructure. The first thing is to find an alternative for managing user accounts on a number of Linux hosts. We have 12 servers (both physical and virtual), about 50 workstations, and 500 potential users for these systems.

    The individual who built and maintained the systems over the years has retired. He wrote his own scripts to manage it all. It still works, no complaints there. However, a lot of the stuff is very manual and error-prone, the code is messy, and after updates it often needs to be tweaked. The worst part is that there is little to no documentation; there are just a few ReadMes and random notes which may or may not be relevant anymore. So maintenance has become a difficult task.

    Currently accounts are managed via /etc/passwd on each system. Updates are distributed via cron scripts to the correct systems as accounts are added on the "main" server. Some users have to have access to all systems (like a sysadmin account), others need access to shared servers, while others may need access to workstations or only a subset of those.

    Is there a tool that can help us manage accounts that meets the following requirements?

    - Preferably open source (i.e. free, as the budget is VERY limited)
    - Mainstream (i.e. maintained)
    - Preferably has LDAP integration, or could be made to interface with an LDAP or AD service for user authentication (this will be needed in the near future to integrate accounts with other offices)
    - User management (adding, expiring, removing, lockout, etc.)
    - Allows us to manage which systems (or groups of systems) each user has access to; not all users are allowed on all systems
    - Support for user accounts that could have different home directories and mounts available depending on what system they are logged into. For example, a sysadmin logged into the "main" server has main://home/sysadmin/ as the home directory and all shared mounts; a sysadmin logged into a staff workstation would have nas://user/s/sysadmin as the home directory (different from above) and a potentially limited set of mounts; a logged-in client would have his/her home directory at a different location and no shared mounts.

    If there is an easy management interface, that would be awesome. And if this tool is cross-platform (Linux / MacOS / *nix), that will be a miracle! I have searched the web and have found nothing suitable. We are open to any suggestions. Thank you.

    EDIT: This question has been incorrectly marked as a duplicate. The linked-to answer only talks about having the same home directory on all systems, whereas we need different home directories based on what system the user is currently logged into (MULTIPLE home directories). Also, access needs to be granted only to some machines, not the whole lot. Mods, please understand the full extent of the problem instead of merely marking it as a duplicate for points...
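    For reference, one assumed (and certainly not the only) way to express per-host access in plain OpenLDAP is the host attribute on each account entry, enforced by pam_ldap's pam_check_host_attr option; the entry below is a hypothetical sketch:

        # Hypothetical LDIF: a user allowed onto two named hosts only.
        # Needs pam_check_host_attr yes on the clients so pam_ldap rejects
        # logins on any machine not listed in a host attribute.
        dn: uid=jdoe,ou=People,dc=example,dc=org
        objectClass: posixAccount
        objectClass: account
        uid: jdoe
        cn: John Doe
        uidNumber: 10042
        gidNumber: 10042
        homeDirectory: /home/jdoe
        host: mainserver.example.org
        host: workstation01.example.org

    Per-system home directories are usually layered on top of this with automount maps rather than stored in the account entry itself.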

  • Python easy_install confused on Mac OS X

    - by slf
    Environment info:

        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts

        $ which easy_install
        /usr/bin/easy_install

    Specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point):

        $ sudo easy_install simplejson
        Searching for simplejson
        Reading http://pypi.python.org/simple/simplejson/
        Reading http://undefined.org/python/#simplejson
        Best match: simplejson 2.1.0
        Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365
        Processing simplejson-2.1.0.tar.gz
        Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa
        The required version of setuptools (>=0.6c11) is not available, and
        can't be installed while this script is running. Please install
        a more recent version first, using 'easy_install -U setuptools'.
        (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python))
        error: Setup script exited with 2

    OK, so I'll update setuptools...

        $ sudo easy_install -U setuptools
        Searching for setuptools
        Reading http://pypi.python.org/simple/setuptools/
        Best match: setuptools 0.6c11
        Processing setuptools-0.6c11-py2.6.egg
        setuptools 0.6c11 is already the active version in easy-install.pth
        Installing easy_install script to /usr/local/bin
        Installing easy_install-2.6 script to /usr/local/bin
        Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg
        Processing dependencies for setuptools
        Finished processing dependencies for setuptools

    I'm not going to speculate, but this could have been caused by any number of environment changes, like the Leopard to Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
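    A quick hedged check of which interpreter and which setuptools each easy_install actually binds to (plain shell; nothing assumed beyond the paths above):

        # show the shebang interpreter of each easy_install on the machine
        head -1 /usr/bin/easy_install /usr/local/bin/easy_install

        # ask Python which setuptools wins on sys.path, and from where
        python -c "import setuptools; print setuptools.__version__, setuptools.__file__"

    The output above suggests the upgrade landed in /Library/Python/2.6/site-packages and put its scripts in /usr/local/bin, while the easy_install being invoked is the Apple one in /usr/bin, still pinned to setuptools 0.6c9 under /System.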

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: what I need to do is have a permanent link to a printer, normally only accessible through Terminal Services (printer redirection), to allow Sage Line 50 layouts to see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number, and therefore the Sage layout sees it as a different printer.

    History behind the question:

    - Users connect via Terminal Services to a Sage server on a different site.
    - Sage Line 50 v15 runs on that server.
    - Users want to print invoices (Sage layouts) locally.
    - The Sage server cannot see the users' local printers; to get around this, users use the printer redirection feature of Terminal Services.
    - The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted, and it automatically prints that invoice to the default printer specified.

    The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)"; note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as the user disconnects from the Terminal Services session and then reconnects in the morning and goes to print, it has lost the connection to that printer. I understand why it fails: the printer exists on a per-session basis, and the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...

  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP service/role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using the cmd prompt and IE FTP. On the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display page".

    Using FileZilla, I get the messages below. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site; I am not sure if I need to. I set up basic authentication, non-SSL, not allowing anonymous. I tested with Windows Firewall turned off and on (I added a Windows Firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server.

        Status:   Resolving address of ftp.technologyblends.com
        Status:   Connecting to 75.149.66.201:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER *
        Response: 331 Password required for .
        Command:  PASS ***
        Response: 230 User logged in.
        Command:  SYST
        Response: 215 Windows_NT
        Command:  FEAT
        Response: 211-Extended features supported:
        Response:  LANG EN*
        Response:  UTF8
        Response:  AUTH TLS;TLS-C;SSL;TLS-P;
        Response:  PBSZ
        Response:  PROT C;P;
        Response:  CCC
        Response:  HOST
        Response:  SIZE
        Response:  MDTM
        Response:  REST STREAM
        Response: 211 END
        Command:  OPTS UTF8 ON
        Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I.
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing
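    The trace dies exactly at PASV, the classic sign that the passive-mode data ports are blocked while only port 21 is forwarded. A hedged sketch for the Windows side (the port range is an assumption; the same range would also need to be set in the FTP site's Firewall Support settings and forwarded on the Cisco):

        REM Allow an assumed passive data-port range through Windows Firewall
        netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=50000-50100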

  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am beyond any further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown:

    - The SQL Server instance is running and works perfectly when working locally.
    - In SQL Server Management Studio, I have checked the box "Allow Remote Connections to this Server".
    - I have removed any external hardware firewall settings from the 1and1 admin panel.
    - Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433.
    - In the SQL Native Client configuration, TCP/IP is enabled. I also made sure the "IP1" entry with the server's IP address had a 0 for the dynamic port, but I deleted it and added 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433.
    - SQL Server Browser is also running, and I also tried adding an alias there as well. I restarted SQL Server after I set this value.
    - Doing "netstat -ano" on the server machine returns:

        TCP  0.0.0.0:1433  LISTENING
        UDP  0.0.0.0:1434  LISTENING

    - A port scan from my local computer says that the port is FILTERED instead of LISTENING.
    - I also tried to connect from Management Studio on my local machine, and it throws a connection error. I tried the following server names, with both SQL Server and Windows Authentication configured in the database security:

        ipaddress\SQLEXPRESS,1433
        ipaddress\SQLEXPRESS
        ipaddress
        ipaddress,1433
        tcp:ipaddress\SQLEXPRESS
        tcp:ipaddress\SQLEXPRESS,1433
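    Since the server shows 1433 LISTENING locally but a remote scan shows FILTERED, something in between is dropping packets. A hedged way to separate the network path from SQL Server itself (the IP is a placeholder):

        REM From the remote machine: does TCP 1433 open at all?
        telnet 203.0.113.10 1433

        REM If it opens, test an actual SQL login by IP and port
        REM (sqlcmd ships with the SQL Server client tools)
        sqlcmd -S tcp:203.0.113.10,1433 -U sa -P yourpassword -Q "SELECT @@VERSION"

    If telnet cannot open the port while netstat still shows LISTENING on the server, the filter is upstream (hoster or network ACL) rather than in the SQL Server configuration.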

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

        mysql.connect_charset = latin1
        mysqli.connect_charset = latin1
        pdo_mysql.connect_charset = latin1

    Specific more modern sites could override this in their bootstrapping code with:

        <?php
        mysql_set_charset("utf8", $dsn);

    ...and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as their OS of choice. In setting up this new server I discovered to my surprise that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them!

    Now how do I set PHP to connect to MySQL with the latin1 charset? I thought about setting a default in my.cnf, but would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the touch 'm and you break 'm kind. We simply don't have the time to go fix them all.

    How would I set a default mysql/mysqli/pdo_mysql connection charset to latin1 for PHP, while still allowing individual scripts to override this to utf8 with mysql_set_charset()?
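    Without Gentoo's patch, stock PHP only offers per-connection substitutes rather than an ini-wide default; a hedged sketch (DSN and credentials are placeholders):

        <?php
        // PDO: force latin1 at connect time via an init command
        $pdo = new PDO('mysql:host=localhost;dbname=legacy', $user, $pass,
            array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES latin1"));

        // mysqli: same idea, right after connecting
        $db = mysqli_connect('localhost', $user, $pass, 'legacy');
        mysqli_set_charset($db, 'latin1');

    An auto_prepend_file doing this for the legacy vhosts only would approximate the old server-wide default without touching each site's code.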

  • Maximum RAM on Biostar P4M8PM7 Socket775 mATX board

    - by Alex Balashov
    I have a server with a Biostar P4M8PM7 ("Pro-M7") board based on a VIA chipset. It's a strange board to put in a server, because it seems like more of a desktop board to me, but alas! It takes DDR2-667 (PC5300) RAM.

    What I can't figure out is the maximum amount I can put in it, as I cannot find the manual anywhere online. I've found a few marketing broadsheets from online retailers that say "up to 2 GB of RAM!", but I am not sure whether to believe them. They also do not seem to be for quite the same board, as they indicate DDR2 400/533 RAM, for example: http://www.geeks.com/details.asp?invtid=P4M8P-M7. The manufacturer's web site says the same thing, but does not elaborate.

    It's a 64-bit CPU and board; is there a technical reason why the board would not be able to address more than 2 GB? Can someone tell me what sort of reason that would be? I bought this server from someone really hoping I could put 8 to 16 GB in it, and wanted to do some research before I gave up.

    On a related note, it's not indicated anywhere whether it can take ECC RAM. The existing chips are not ECC, but most memory sold in the range I'm looking for (e.g. DIMMs with enough chip density to do 8 GB) seems to be server-class and for that reason ECC. Any ideas? Thank you very much for your consideration in advance!

  • phpBB configuration problem under Nginx

    - by zvikico
    Hi, I have a phpBB site running with Nginx (PHP via FastCGI). It works OK. However, some specific actions like moving or deleting a topic fail; instead, I'm redirected to the forum index. I think it is a problem with the URL redirection or rewriting. My rewrite rule looks like this:

        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
            break;
        }

    Any help would be appreciated. My full configuration file is:

        server {
            listen 80;
            server_name forum.xxxxx.com;

            access_log /xxxxx/access.log;
            error_log /xxxxx/error.log;

            location = / {
                root /xxxxx/phpBB3/;
                index index.php;
            }

            location / {
                root /xxxxx/phpBB3/;
                index index.php index.html;
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php?q=$1 last;
                    break;
                }
            }

            error_page 404 /index.php;
            error_page 403 /index.php;
            error_page 500 502 503 504 /index.php;

            # serve static files directly
            location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
                root /xxxxx/phpBB3/;
                break;
            }

            # hide protected files
            location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
                deny all;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:8888;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /xxxxx/phpBB3/$fastcgi_script_name;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_param REMOTE_ADDR $remote_addr;
                fastcgi_param REMOTE_PORT $remote_port;
            }
        }
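    For comparison, the pattern most nginx+phpBB write-ups use instead of an if/rewrite block is try_files, which avoids the known pitfalls of if inside location (a hedged sketch; the root and the q parameter are taken from the config above):

        location / {
            root /xxxxx/phpBB3/;
            index index.php index.html;
            # hand anything that is not a real file or directory to phpBB,
            # keeping the original query string
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }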

  • Apache httpd Problem

    - by Christopher
    Hey, I am getting intermittent issues with my site. Pages often hang with huge loading times and sometimes fail to load. The httpd error logs contain the following:

        [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5871 for worker proxy:reverse
        [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5871 for (*)
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5872 for worker proxy:reverse
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5872 for (*)
        [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5954 for worker proxy:reverse
        [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized

    The server is currently running with 800 MB of free memory, so this is not caused by lack of RAM. The current number of httpd processes is 11; this does increase while the problem persists and can rise to 25+. Also, I am running Apache/2.2.3 (CentOS). Any suggestions would be greatly appreciated. Many thanks, Chris.
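    Those entries are debug-level chatter rather than errors, so they may not point at the cause. A more telling first step is usually mod_status (a hedged sketch in Apache 2.2 syntax; assumes mod_status is loaded, which it is on a stock CentOS build):

        # httpd.conf: enable a status page, restricted to localhost
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then "curl http://localhost/server-status?auto" during a hang shows how many workers are busy and what they are doing; setting LogLevel back to warn also quiets the [debug] proxy noise.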

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET project with its own error handling, and a web.config for the domain which contains:

        <customErrors mode="RemoteOnly">
          <error statusCode="500" redirect="/Error"/>
          <error statusCode="404" redirect="/404"/>
        </customErrors>

    On a subdirectory of that domain I am ignoring all routes (routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET project), and I want to put a custom 404 error page on that directory and any of its subdirectories. The directory contains static content like images, CSS, JS, etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that directory looks like the following:

        <system.webServer>
          <httpErrors>
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>

    But when I navigate to an unknown URL under that directory, I still see the default IIS 404 page. I am also seeing an alert in IIS that reads:

        You have configured detailed error messages to be returned for both
        local and remote requests. When this option is selected, custom error
        configuration is not used.

    Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to override customErrors in the subdirectory's web.config, but nothing changes. Any help would be appreciated. Thanks.
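    That IIS alert maps to the errorMode attribute of httpErrors: while it is set to Detailed, custom error pages are ignored. A hedged sketch of the same directory web.config with the mode forced (one added attribute; everything else as above):

        <system.webServer>
          <httpErrors errorMode="Custom">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>

    The ASP.NET customErrors section is separate from IIS-level httpErrors, so mode="RemoteOnly" there would not by itself trigger that alert.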

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this Linux:

        Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux

    And I run the Selenium stand-alone server on my box with this command:

        java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 &

    Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but I get the same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else. All I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive server load, so something has changed, but I'm not sure what. My server is a managed VPS, so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem.

    (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.)

    Any suggestions for what is going on, why it's started doing this and/or, most importantly, how I can make Selenium not kill the server when it starts up would be GREATLY appreciated!
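    A hedged first step is to confirm whether the load is CPU inside the java process, one of its children (Xvfb, Firefox), or I/O wait (plain shell; nothing assumed beyond the command above):

        # per-thread CPU inside the Selenium JVM
        top -H -p "$(pgrep -f selenium-server-standalone)"

        # snapshot of the whole box, sorted by CPU
        ps -eo pid,ppid,pcpu,pmem,args --sort=-pcpu | head -15

    High load with low CPU would point at I/O or at processes stuck in uninterruptible sleep rather than at the JVM itself.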

  • How to create custom content for the nginx 502 error page, keeping the original URL in the browser

    - by user123862
    I'm trying to serve a custom-language message for the nginx 502 error page while keeping the original URL in the browser, without success. For example, I go to the URL xaluan.com/aaa/bbb.html at a time the backend server is down; nginx shows error 502, and I want the same URL in the address bar but my custom message in my own language.

    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but when the error occurs the site shows the default nginx error page at domain.com/502.html (the content of the page is not the same as what I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2: I then created the same page in my www domain folder at /home/xaluano/public_html/502.html, but this still redirects me to domain.com/502.html. The content is now the same as I created, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT/UPDATE for more detail, 10/06/2012: please download my nginx config from http://pastebin.com/7iLD6WQq and the vhost config from http://pastebin.com/ZZ91KiY6.

    The test case: if the Apache httpd service stops:

        # service httpd stop

    then open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456. I want to see the 502 error with that same URL in the browser address bar.

    The custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page at the same URL (the refresh I can do easily with JavaScript; as long as nginx doesn't change the URL, the JavaScript can work it out). Any help will be great.. thanks in advance.
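    One hedged possibility to rule out, given the proxy setup in the linked configs: when nginx proxies to Apache, a backend-generated error passes straight through to the client unless proxy_intercept_errors is on, so the error_page rule never fires for it. A minimal sketch combining that with an internal error page (paths assumed from test 2):

        proxy_intercept_errors on;   # let error_page handle upstream 5xx too
        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;                # served in place: no redirect, URL unchanged
        }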

  • Firefox Won't Save My Google Cookies Between Restarts

    - by Tom Purl
    I'm using Firefox 11 on Ubuntu. For some strange reason, Firefox won't save my Google cookies between browser restarts. I have to log in to Gmail every time I restart my browser, even if I click the check box that tells Google to remember me.

    The strange thing is that Firefox does actually store some Gmail cookies when I log in; it's just that those cookies disappear after restarting Firefox. The especially strange thing is that this only seems to happen with *.google.com URLs. I haven't noticed this problem with any other site that I use.

    Please note that I checked whether this was a plugin-related problem: I started Firefox in safe mode and turned off all plugins, logged into Gmail and told it to remember me, then shut down Firefox and started it the same way in safe mode. I got the same bad results.

    Has anyone else ever seen anything like this before? Is there a reason that Firefox seems to be blacklisting Google cookies?

  • JAWStats statspath error on Windows

    - by crosenblum
    I have AWStats working fine, and I am trying to get JAWStats working as well. I have tried backslashes and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of the Program Files folder, in case it had problems with folder names containing spaces. Here is my config file:

        // core config parameters
        $sConfigDefaultView = "thismonth.all";
        $bConfigChangeSites = true;
        $bConfigUpdateSites = true;
        $sUpdateSiteFilename = "xml_update.php";

        // individual site configuration
        // awstats092012.noname.jumpingcrab.com.txt
        $aConfig["site1"] = array(
            "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
            "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
            "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
            "siteurl"    => "http://yourexample.com",
            "theme"      => "default",
            "fadespeed"  => 250,
            "password"   => "",
            "includes"   => ""
        );

    (Domain names changed to protect the innocent... :P) Here is the error message:

        An error has occured: No AWStats Log Files Found
        JAWStats cannot find any AWStats log files in the specified directory:
        C:\Program Files\AWStats\DirData\
        Is this the correct folder? Is your config name, site1, correct?
        Please refer to the installation instructions for more information.
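    One hedged thing to check from the snippet itself: the comment names a real data file, awstats092012.noname.jumpingcrab.com.txt, while statsname expects awstats[MM][YYYY].yourexample.com.txt, and the site part of that pattern must match the real filenames exactly. A hypothetical test script to verify what PHP can actually see in that directory:

        <?php
        // List the AWStats data files PHP can read, to compare against statsname
        $dir = "C:/Program Files/AWStats/DirData/";  // forward slashes are fine on Windows
        foreach (glob($dir . "awstats*.txt") as $f) {
            echo $f, "\n";
        }

    If this prints nothing, the problem is the path or permissions; if it prints files, the problem is the statsname pattern.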

  • mod_wsgi, .htaccess and RewriteRule

    - by hadaraz
    I'm using several Django projects running on the same Apache instance through mod_wsgi, configured with a virtualhost for each site; see the httpd.conf here. For one of the sites I want to use a static cache (staticgenerator), so I set up a directory with an .htaccess file which contains:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}/index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]

    where 3456 is the Django port on the server. Using this rewrite rule, the request is always forwarded to the mod_wsgi handler, even if the file or directory exists, and if the file index.html exists the request shows as request-path/index.html. I tried another setup:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond $1 !-d
        RewriteCond $1index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]

    but got almost the same results. All requests are transferred to the mod_wsgi handler, but the request path is now the original one. To sum it up: what is the correct RewriteCond to use here? How do you transfer a request to the mod_wsgi handler, and is this the right way? If not, then how do you serve static files from a directory when they exist, and fall back to Apache/mod_wsgi when they don't? Thanks for your help.
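    In per-directory .htaccess context, $1 on its own is only a relative path, so the -d/-f tests need a full filesystem prefix; a commonly suggested hedged variant builds the checks on the document root (a sketch, assuming the generated cache files live under the vhost's document root):

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        # only proxy to Django when no generated file exists on disk
        RewriteCond %{DOCUMENT_ROOT}/$1 !-d
        RewriteCond %{DOCUMENT_ROOT}/$1/index.html !-f
        RewriteRule ^(.*)$ http://127.0.0.1:3456/$1 [P,L]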

  • How can a CentOS 6 guest running in VirtualBox be configured as a LAMP server that can be accessed from the Windows host?

    - by jtt89
    I was able to connect CentOS 6 on VirtualBox to Windows (I can ping in both directions) with a host-only adapter (for the connection between the two) and a NAT adapter (to let Linux in VirtualBox reach the Internet). I want to set up httpd, MySQL, and vsftpd servers, and in the end easily connect to httpd from a Windows-based browser and to the FTP server with a Windows-based client. I would also want access through SSH. I have a general idea of the steps involved, but there is also some configuration I am not sure about at this point. Let's say I follow these steps:

        yum install httpd
        yum install php php-pear php-mysql
        yum install mysql-server
        mysql_secure_installation
        yum install vsftpd
        yum install mod_ssl

    Technically I have everything installed, but what would be the next steps I need to take (from the networking point of view, so to speak) to get it all working? I know I need to configure at least Apache and the FTP server, but I am not sure how it is going to work: for instance, where I will be uploading the sites (I know this can vary), and how I will know what address to use in a browser to reach a given website x, y, or z on that installation. This sounds like I need some kind of DNS setup, and that is where I am stuck.

    If somebody could give me a general outline of the things that need to be done, that would be great. (I have looked at a lot of websites, and I know about /etc/sysconfig/network, httpd.conf (not much about it on Apache's site), hostname, hostname -f, etc., but it is hard to piece it all together at this point.) I will also be looking at books, but they do not always reflect the setup that I have (VirtualBox). Thank you.
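    A hedged sketch of the usual next steps on CentOS 6 (the service names are the stock ones; the host-only IP below is a placeholder, check the guest's with ifconfig):

        # start the services and have them survive reboots
        service httpd start  && chkconfig httpd on
        service mysqld start && chkconfig mysqld on
        service vsftpd start && chkconfig vsftpd on

        # open HTTP, FTP and SSH in the guest firewall
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        iptables -I INPUT -p tcp --dport 21 -j ACCEPT
        iptables -I INPUT -p tcp --dport 22 -j ACCEPT
        service iptables save

    From Windows, the sites are then reachable at the guest's host-only address (e.g. http://192.168.56.101/) with no DNS at all; entries in the Windows hosts file can map friendly test names onto that IP.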

  • Can one config LDAP to accept auth from ssh-agent instead of from Kerberos?

    - by Alex North-Keys
    [This question is not about getting your LDAP password to authenticate you for SSH logins. We have that working just fine, thank you :-) ]

    Let's suppose you're on a Linux network (Ubuntu 11.10, slapd 2.4.23), and you need to write a set of utilities that will use ldapmodify, ldapadd, ldapdelete, and so on. You don't have Kerberos, and you don't want to deal with its timeouts (most users don't know how to get around them), quirks, etc. This reduces the question to where else to get credentials to feed to LDAP, probably through GSSAPI (which technically doesn't require Kerberos, despite Kerberos's dominance there) or something like it.

    However, nearly everyone seems to have an SSH agent program running, complete with its key cache. I'd really like an ssh-add to be sufficient to allow passwordless use of the LDAP commands. Does anyone know of a project working on using the SSH agent as the source of authentication to LDAP? It might be through an ssh-aware GSSAPI layer, or some other trick I haven't thought of, but it would be wonderful for making LDAP effortless.

    All this assumes I haven't simply missed a way to use ldapmodify and kin without having to type my LDAP password; using -x is NOT acceptable. At my site, the LDAP server only accepts ldaps connections and requires authentication for modifying operations. Those are requirements, of course. Any ideas would be greatly appreciated. :-)
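    Not ssh-agent, but the nearest existing passwordless mechanism, named plainly: OpenLDAP's SASL EXTERNAL over the local ldapi:// socket authenticates by Unix peer credentials instead of a typed password (a hedged sketch; it only works on the LDAP host itself and needs a matching authz-regexp mapping in slapd's config):

        # who does slapd think I am, based on my uid/gid over the unix socket?
        ldapwhoami -H ldapi:/// -Y EXTERNAL

        # same mechanism for modifications: no password prompt
        ldapmodify -H ldapi:/// -Y EXTERNAL -f change.ldif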

  • SSH session closing whilst virtualenv session stays open (I think)

    - by ing0
    I've been developing some sites using Flask recently (running on Debian within a virtualenv), and when I am testing I can run it on a port, let's say port 5000. So I run the script like so:

        . env/bin/activate   # go into the virtual environment
        python file.py       # run the Python script

    And I will be given this message:

        Running on http://0.0.0.0:5000/

    So this all works great and I can access my site on this port fine. However... my rubbish ISP always does this thing where it resets something around 1am every morning. I have no idea what this is; everything runs like normal, but I always get disconnected from any open SSH sessions. This leaves the script running, and all I can do is call:

        lsof -i

    which will show me the process. But if I kill it and then rerun the script, things get weird. The "Running on http://0.0.0.0:5000" message still shows, but I cannot connect to it anymore. I've tried changing the port number, and it seems the only thing that works is trying again later on, or on another day. Now I'm assuming that something on my server resets in between these times. I would like to think it is maybe the virtualenv session timing out, but I cannot find out how to do that manually. Does anyone know?
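    A hedged workaround for the dropped-SSH half of this: run the dev server detached from the login session, so the nightly disconnect cannot leave it half-orphaned (plain shell; file.py as above):

        # option 1: survive the SSH drop with nohup
        . env/bin/activate
        nohup python file.py > flask.log 2>&1 &

        # option 2: a named screen session to reattach to after reconnecting
        screen -dmS flask bash -c '. env/bin/activate && python file.py'
        screen -r flask    # reattach later

    If the port still refuses connections after killing the old process, "lsof -i :5000" confirms whether anything is genuinely still bound to it.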

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third-party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in English, as I run Chrome with Swedish localization). I do however have two problems.

    My first problem is that when I'm visiting one of my local newspaper's sites (and surely others), cookies from www.facebook.com are allowed for some reason. I suspect that the reason is that I have added an exception for the www.facebook.com domain, but as the setting "block all third-party cookies without exception" implies, that shouldn't matter.

    My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my whitelist, primarily from ad services.

    My expectation from enabling the above-mentioned settings was that only cookies fulfilling the two following requirements would be accepted:

    - the cookie must be from the domain in my address bar
    - the cookie must be from a domain on my whitelist

    Apparently this isn't the case. The question is, have I completely misunderstood the settings, or is this a bug? And, either way, is there a way to accomplish my desired behavior?

  • Please insert a disk into SD/MMC - Vista problem [closed]

    - by Naunidh
    Possible Duplicate: Please insert a disk into SD/MMC - Vista problem

    Hi, I tried pushing my 2GB microSD card into the built-in card reader. On clicking the drive I get "Please insert a disk into SD/MMC". This problem is really frustrating. The card works fine on other computers, as does the microSD-to-SD adapter. I have done the following to fix it:

    - Updated Vista and installed SP1.
    - Updated the TI drivers for FlashDrive.
    - Checked the Vaio site for updates (none required).
    - Added a new registry value, SDParam=1, under HKLM\SYSTEM\ControlSet001\Services\tifm21\Parameters (took the hint from http://tinyurl.com/nk33tp).

    I have restarted the PC multiple times. As soon as I put the card in, the SD/MMC device icon blips, so the hardware is at least detecting something. The card reader was working fine a few days back; I guess some Windows update has broken something. Does anyone have any idea how to proceed? My laptop is a VGN-N365E.

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error:

        Unable to open http://blah... Cannot download the information you requested.

    The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite having Chrome as my browser. Things I've checked already:

    - The URLs are good; they work fine when pasted manually into Chrome, IE, or Firefox.
    - IE is not set to Work Offline (already found that suggestion on Google).
    - I checked Program Access and Defaults and verified that Chrome is selected.
    - Nothing in the URL requires URL encoding, so it's not a goofy encoding issue.

    I've had reports from other users about the same problem now and then, but this is the first time I've experienced it myself.

  • Pure-FTPD accounts and permissions for websites

    - by EddyR
    I'm having trouble setting up the appropriate Pure-FTPd accounts and permissions. I have the following sites set up on my Debian server:

        /var/www/site1
        /var/www/site2
        /var/www/wordpress

    The permissions are 775 for folders and 664 for files. The owner is currently admin:ftpgroup. WordPress also requires special permissions for file uploads in /var/www/wordpress/wp-content/uploads. What I need is:

    - a general admin group with access to /var/www
    - a group for each site (site1, site2, wordpress)
    - a group or user, not www-data (?), with permissions to write files to the WordPress upload folder

    I ask because the restrictions on Linux groups (you can't have groups within groups) make this a little confusing, and also because many of the tutorial sites have conflicting information; for example, some recommend the use of www-data and some don't. Also, I'm not sure I understand how Pure-FTPd is supposed to work exactly. I create a Pure-FTPd account and assign it a directory (/var/www) and a system user (ftpuser) and group (ftpgroup):

    - Can I assign more than one path? For example, if a user requires access to two sites.
    - Is it better to assign ftpgroup to all FTP locations and let Pure-FTPd manage account access?
    - Why would anyone have more than one ftpuser or ftpgroup? (Doesn't that mean users have access to everyone else's files, if they could get there?)

    Sorry for so many questions at once. I've been reading lots of tutorials, but I think they've ended up making me more confused!
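    For reference, a hedged sketch of how per-site logins are often handled with Pure-FTPd's virtual-user backend: each login is chrooted to one site but maps to the same system user/group on disk (the names follow the question; the -d path is the user's jail, and the login names are hypothetical):

        # one virtual user per site, all mapped to ftpuser:ftpgroup on disk
        pure-pw useradd site1admin -u ftpuser -g ftpgroup -d /var/www/site1
        pure-pw useradd wpadmin    -u ftpuser -g ftpgroup -d /var/www/wordpress
        pure-pw mkdb    # commit the password file to the database pure-ftpd reads

    Because every virtual user maps to the same Unix uid/gid, on-disk ownership stays uniform while FTP-level access is limited by each jail; a user who needs two sites would get either a jail at /var/www or a second login.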
