Search Results

Search found 10695 results on 428 pages for 'none'.

Page 299/428 | < Previous Page | 295 296 297 298 299 300 301 302 303 304 305 306  | Next Page >

  • Why isn't Apache Basic authentication working?

    - by Brad
    I just upgraded Apache from its 2003 build to a squeaky-clean, brand-new 2.4.1 build. All seems pretty good except for one glaring thing: in my httpd.conf file I have the following:

        <Directory />
            AllowOverride none
            Options FollowSymLinks
            AuthType Basic
            AuthName "Enter Password"
            AuthUserFile /var/www/.htpasswd
            Require valid-user
        </Directory>

    This should allow only users in the specified auth file to access the server - just as it had under the older version of Apache. (Right?) However, it's not working. Requests are granted with no authentication provided. When I switch logging to LogLevel Debug, for the accesses it says:

        [Sat Mar 24 21:32:00.585139 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of Require all granted: granted
        [Sat Mar 24 21:32:00.585446 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of <RequireAny>: granted

    I really don't know what this means - and I (to the best of my knowledge) don't have any "Require all granted" or "<RequireAny>" statements in any of my files. Any ideas why this isn't working, or where to debug?
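
    For anyone hitting the same wall: Apache 2.4 replaced the 2.2 authorization model with mod_authz_core, and the stock 2.4 config files ship their own "Require all granted" blocks - which is exactly what the debug log shows winning. A reasonable first step, sketched here with guessed config locations, is to hunt down the stray directive in whatever tree applies:

        grep -rn "Require all granted" /etc/httpd /etc/apache2 /usr/local/apache2/conf 2>/dev/null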

    Read the article

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether the rules make sense - and whether I've left out anything essential. In addition to port 80, I also need to have port 3306 (mysql) and 22 (ssh) open for external connections. Any feedback is highly appreciated!

        #!/bin/sh
        # Clear all existing rules.
        iptables -F
        # ACCEPT connections for loopback network connection, 127.0.0.1.
        iptables -A INPUT -i lo -j ACCEPT
        # ALLOW established traffic
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # DROP packets that are NEW but do not have the SYN bit set.
        iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
        # DROP fragmented packets, as there is no way to tell the source
        # and destination ports of such a packet.
        iptables -A INPUT -f -j DROP
        # DROP packets with all tcp flags set (XMAS packets).
        iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
        # DROP packets with no tcp flags set (NULL packets).
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        # ALLOW ssh traffic (and protect against DoS attacks)
        iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT
        # ALLOW http traffic (and protect against DoS attacks)
        iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT
        # ALLOW mysql traffic (and protect against DoS attacks)
        iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT
        # DROP any other traffic.
        iptables -A INPUT -j DROP
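
    One refinement worth considering, sketched with placeholder addresses: the "-m limit" matches above throttle every packet, not just new connections, so a busy but legitimate HTTP client will trip the limit and fall through to the final DROP. Limiting only NEW connections, and restricting MySQL to known hosts, is the more common pattern:

        # rate-limit only NEW ssh connections instead of every ssh packet
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m limit --limit 1/s -j ACCEPT
        # allow mysql only from a known application server (placeholder address)
        iptables -A INPUT -p tcp --dport 3306 -s 203.0.113.10 -j ACCEPT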

    Read the article

  • Connecting to Server 2008 shares fails

    - by Chris J
    I'm having problems getting a reliable share working on an x64 Server 2008 R1 SP1 server. All works well after a reboot, but after some time (within a day) the shares become unavailable to XP and Server 2003 machines. Interestingly, they remain available to other Server 2008 servers. On trying to access \\server\share, Server 2003 returns immediately and simply gives me the message "The specified network name is no longer available"; XP takes a minute or two to time out before giving the same message. There doesn't seem to be anything in the event logs indicating a problem. Doing some googling over the last day or two I've seen the following blamed:

    - Bad network drivers ... I've updated to the latest drivers with no result.
    - Symantec anti-virus ... we're not using it (currently no AV on the server).
    - Receive window auto-tuning ... I've disabled it with netsh int tcp set global autotuninglevel=disabled and netsh int tcp set global rss=disabled.

    None of these have had an effect. Windows Firewall is currently disabled. As other Server 2008 boxes (both x32 and x64) can connect, I can only assume that there's some new security configuration that's not quite right - or there's an AD issue that I need to trace, but I don't know where to start. Even if no one knows how to resolve this, if someone knows what I need to look for with Wireshark, that would be a help.
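
    On the Wireshark question, a starting point (the client address is a placeholder): capture the SMB conversation from one affected client and look for where the TCP session dies. A RST from the server right after session setup tends to point at SMB-level settings (e.g. signing), while an unanswered SYN points lower in the stack:

        host 192.0.2.50 and (port 445 or port 139)     (capture filter)
        smb || smb2 || tcp.flags.reset == 1            (display filter)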

    Read the article

  • 7-Zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7-Zip archive using 7za.exe. This should be simple, but it has turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command line buffer [Edit: this was likely misinformation on my part; either way, it was not the reason], which is far too small (the number of files to add is more than one million). Splitting the process up by adding the files one by one is not feasible due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy, and finally replaces the original. This is terribly slow once the archive gets to a couple of hundred MB in size. So far I am using a combination of the two approaches by adding a dozen files each time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7-Zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably and I was repeatedly advised to just use 7za instead.
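
    A middle ground that may help, sketched in PowerShell under the assumption that list files only fail past some size: feed 7za large fixed-size chunks of the list, so the archive is rewritten once per chunk rather than once per file (file names and chunk size here are placeholders):

        $chunkSize = 10000
        $lines = @(Get-Content list.txt)
        for ($i = 0; $i -lt $lines.Count; $i += $chunkSize) {
            $end = [Math]::Min($i + $chunkSize, $lines.Count) - 1
            $lines[$i..$end] | Set-Content chunk.txt
            & 7za.exe a out.7z "@chunk.txt"    # add this chunk to the same archive
        }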

    Read the article

  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, et al.) and renders its contents in the browser. For example, this URL to a PDF:

        http://research.google.com/archive/bigtable-osdi06.pdf

    when passed to Viewer, returns this link:

        http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When passing a link to a Google Docs document to the Viewer, the result is not as expected: it renders the link's HTML source instead of the document's contents. The reason I want to do this is that I want to be able to use the "embed" feature of Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature? P.S. Here is a sample snippet for an embedded document. This is what I want, but pointing to an existing Google Docs document.

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true" width="600" height="780" style="border: none;"></iframe>
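
    One avenue to check, stated as an assumption rather than a confirmed answer: documents already in Google Docs are usually embedded not through the Viewer but through File > Publish to the web, which produces its own embeddable URL. A sketch of what the resulting markup might look like, with a placeholder document id:

        <iframe src="https://docs.google.com/document/pub?id=YOUR_DOC_ID&embedded=true" width="600" height="780" style="border: none;"></iframe>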

    Read the article

  • ASP.Net Session Timing Out Rapidly

    - by Zac
    We have an ASP.Net 3.5 website running on Windows Server 2008 with IIS7. The session timeout period for this site is configured to be 20 minutes - however, it is currently lasting for between 40 and 50 seconds. After researching the problem we investigated several configuration values which could be involved in the timeout period, but none of them are set to less than 20 minutes. The areas we looked at are as follows:

    - web.config system.web/sessionState element (20 minutes).
    - web.config system.web/authentication/forms element (not present, defaults to 30 minutes).
    - Sites/{website}/ASP/Session Properties/Time-out (20 minutes).
    - Application Pools/{appPool}/Advanced Settings/Process Model/Idle Time-out (20 minutes).

    We've also noted that the CPU is staying around 0% and that RAM usage is flat-lining around 1.07 GB (of 8 GB available) - so there is no performance-based reason for IIS to be recycling the Application Pool as far as we can tell. Are there any settings we've overlooked which could cause the session timeouts to expire so quickly? EDIT: A couple of additional points: This is not occurring in development, only on the server. The session is not sliding (i.e. if we refresh the page a few times, it still times out approximately 40-50 seconds after the session was created).
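
    One setting that matches these symptoms and is easy to miss, offered as a hedged guess: with InProc session state, a web garden (Maximum Worker Processes above 1) makes sessions appear to evaporate whenever a request lands on a different worker process. A quick check, assuming the default appcmd location and a placeholder pool name:

        %windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:processModel.maxProcesses

    Any value above 1 combined with InProc sessions would explain losses far shorter than any configured timeout.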

    Read the article

  • Nginx load balancing and maintaining URLs

    - by Steve Klabnik
    I'm trying to use nginx as a load balancer, and it's working great. One problem, though. The load balancing box is at 123.123.123.123, and the backend box is 456.456.456.456. So I have this config:

        upstream backend {
            server 456.456.456.456;
        }

        server {
            listen 80;
            server_name 123.123.123.123;
            access_log off;
            error_log off;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://backend;
            }
        }

    This works great. I hit 123.123.123.123 in my browser, and the page comes up. But now the URL in the browser says http://456.456.456.456. Do I need to use a rewrite rule or something to keep the URL correct? I don't want it to be different when going to different backend servers. None of the tutorials I've read have mentioned anything about this.
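
    If the address in the bar changes, the backend is almost certainly issuing a redirect to its own address; plain proxying never exposes the backend to the browser. A sketch of the usual countermeasure, using the same (placeholder) addresses - proxy_redirect rewrites the Location header on responses passing back through nginx:

        location / {
            proxy_set_header Host $host;
            proxy_pass http://backend;
            # rewrite redirects that the backend issues against its own address
            proxy_redirect http://456.456.456.456/ http://123.123.123.123/;
        }

    Fixing the backend's idea of its own hostname (e.g. Apache's ServerName) removes the need for this rewrite entirely.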

    Read the article

  • "Ethernet" doesn't have a valid IP configuration

    - by Xuzuno
    I'm using an ethernet cord to connect to my internet and it had been working well until Thursday morning, when I turned on my laptop (Windows 8) to see a yellow triangle in the bottom right hand corner, in front of the ethernet connected symbol. Since then I haven't been able to access the internet from my computer. When I hover over it, it says that it is an "Unidentified network" and there is "No internet access". I've run the Windows 8 troubleshooting and it says that the problem found was ""Ethernet" doesn't have a valid IP configuration", but I'm unsure how to fix it. I'm thinking that the problem is with my computer rather than my network, because I've tried another laptop (Windows 7) through the same ethernet cable and connection and the internet works fine on the other laptop. I've tried so many fixes that I've found online, with none of them actually working. Yesterday I even tried a full system reset, where I re-installed Windows 8, re-partitioned, and wiped everything off the hard drive, but it still appears to have the exact same problem. Today I also purchased and tried a new ethernet cable, which didn't work, so I then purchased a USB to Ethernet adapter to make sure that it wasn't the ethernet port on my laptop that was faulty. That didn't work either, and the same problem still remains. I feel like I've tried everything, so can someone please help me?
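
    For anyone working through the same message, the usual first-line resets are below (run from an elevated command prompt, then reboot). Given that a full reinstall and a USB adapter didn't help here, they may well not - which would shift suspicion to the router side, e.g. a stale DHCP lease or MAC filtering tied to this particular machine:

        netsh winsock reset
        netsh int ip reset
        ipconfig /release
        ipconfig /renew
        ipconfig /flushdns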

    Read the article

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access in order to enable an external user to upload the content of his web page to his folder on my server, which is served by Apache to the web. This way he can update his web page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course .htaccess breaks WebDAV, probably because it overrides settings (new files cannot be uploaded any more via WebDAV if the .htaccess specified below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to fixing this problem. The idea was to specify an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid in <Location />. The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like to have the web page accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? Also, if possible, I would like a solution that permits the existence of the .htaccess, so that the user still has the power to set up access rules for his web page.
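
    One way out, sketched and untested: AllowOverride is only legal inside <Directory>, and a <Directory> block on the docroot would also disable the .htaccess for normal web traffic. Giving the DAV share its own port (or vhost) sidesteps that - the DAV vhost gets a <Directory> with AllowOverride None, while the regular vhost keeps honoring the .htaccess:

        Listen 8080
        <VirtualHost *:8080>
            DocumentRoot "d:/wamp/www/somewebsite"
            <Directory "d:/wamp/www/somewebsite">
                AllowOverride None
                Dav On
                AuthType Digest
                AuthName DAV-upload
                AuthUserFile "D:/wamp/passtore/user.passwd"
                AuthDigestProvider file
                Require valid-user
            </Directory>
        </VirtualHost>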

    Read the article

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e9623

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   480278527   240138240   83  Linux
        /dev/sda2       480280574   488396799     4058113    5  Extended
        /dev/sda5       480280576   488396799     4058112   82  Linux swap / Solaris

    This seems correct. But df -h /dev/sda (and /dev/sda1 and /dev/sda2 and /dev/sda5) returns:

        Filesystem      Size  Used Avail Use% Mounted on
        udev             10M     0   10M   0% /dev

    The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of two entries under /dev/disk/by-uuid returns the correct volume size:

        df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796
        Filesystem                                              Size  Used Avail Use% Mounted on
        /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796  229G   22G  196G  11% /

    Contents of /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point>   <type>  <options>       <dump>  <pass>
        # / was on /dev/sda1 during installation
        UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796 /    ext4        errors=remount-ro  0  1
        # swap was on /dev/sda5 during installation
        UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1 none swap        sw                 0  0
        /dev/sr0   /media/cdrom0   udf,iso9660  user,noauto       0  0
        /dev/fd0   /media/floppy0  auto         rw,user,noauto    0  0

    It seems every reference other than the UUID points to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?
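
    Likely not a bug, but df's matching behavior: given a path argument, df reports the filesystem containing that path, and it only maps a block device to its mount point when the argument string matches the device recorded in /etc/mtab. Since this root was mounted by UUID, only the by-uuid path matches; /dev/sda1 is otherwise just a file living on udev's /dev filesystem. A couple of commands that answer the question directly:

        df -h /              # ask about the mount point itself
        findmnt /dev/sda1    # map a device to its mount point, if mounted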

    Read the article

  • Receiving a 500 Internal Server error after login success?

    - by Jeremy Quick
    I created my first member login form, which takes the typical username and password and then sends them to the code below, checklogin.php:

        mysql_connect($db_host, $db_username, $db_password) or die(mysql_error());
        mysql_select_db($db_database) or die(mysql_error());

        $username = $_POST['username'];
        $password = $_POST['password'];
        $username = stripslashes($username);
        $password = stripslashes($password);
        $username = mysql_real_escape_string($username);
        $password = mysql_real_escape_string($password);

        $sql = "SELECT * FROM users WHERE username='$username' and password='$password'";
        $result = mysql_query($sql);
        $count = mysql_num_rows($result);

        if($count == 1) // ERROR APPEARS TO TAKE PLACE HERE
        {
            session_start();
            $_SESSION['username'] = $username;
            $_SESSION['password'] = $password;
            header('login_success.php');
        }
        else
        {
            header("location:login_fail.php");
        }

    If I type in the wrong information everything works properly, so I know the error appears to take effect in the marked if statement. I have been searching the internet for solutions, but none seem to match my situation, or I am overlooking them. I've made a few changes to get to this point; before them I was receiving deprecation warnings. Also, I have checked the logs and they are empty of errors relating to this.
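
    The success branch is the suspicious one: header('login_success.php') emits a malformed raw header (no "Name: value" form and no Location:), which is a plausible source of the 500, whereas the failure branch sends a well-formed redirect. A sketch of the corrected branch - session_start() is also best moved before any output:

        session_start();
        $_SESSION['username'] = $username;
        header('Location: login_success.php');
        exit;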

    Read the article

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtualhost:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;
            location / {
                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $host;
                    proxy_pass http://127.0.0.1:8080;
                    proxy_cache one;
                    proxy_cache_use_stale error timeout invalid_header updating;
                    proxy_cache_key $scheme$host$request_uri;
                    proxy_cache_valid 200 301 302 20m;
                    proxy_cache_valid 404 1m;
                    proxy_cache_valid any 15m;
                }
            }
            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is Apache resolves the domain just fine (site.com:8080), but nginx shows a 502 Bad Gateway instead (site.com:80). I tried looking at the error_log and access_log, but I can't find any hint as to why nginx can't work. EDIT: The problem was that this isolated nginx config was not actually being included.
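
    For the record, a quick triage sequence for 502s in this kind of setup (log path per the config above):

        curl -I http://127.0.0.1:8080/      # is Apache itself answering on 8080?
        nginx -t                            # does the active config parse - and is this file even loaded?
        tail -f /var/log/nginx.error.log    # a real 502 logs a connect()/upstream error here

    Note also that "proxy_cache one" requires a matching "proxy_cache_path ... keys_zone=one:..." somewhere in the loaded config; nginx -t catches its absence.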

    Read the article

  • PHP Page Stopped outputting content After Running "yum install php-devel" Command

    - by stwhite
    This error is bizarre, but after running the "yum install php-devel" command (after a long day of trying to install Facedetect and OpenCV for face detection) my site stopped functioning. The site uses mysql and php. When you hit the URL, the page executes the mysql and the php, but it appears to randomly stop outputting the content of the page. None of the code was changed, and the site was working flawlessly prior to running the mentioned ssh command. I do use output buffering in the site, but removing the calls to "ob_flush", "ob_end_flush" and "ob_start" didn't appear to help - still having issues with the site. Any ideas what this could be? Here is the output from the terminal:

        [myserver ~]# cd Facedetect-4b1dfe1
        [myserver Facedetect-4b1dfe1]# phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        [myserver Facedetect-4b1dfe1]# configure
        bash: configure: command not found
        [myserver Facedetect-4b1dfe1]# phpize && configure && make && make install
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        bash: configure: command not found
        bash: Read: command not found
        [myserver Facedetect-4b1dfe1]# make
        make: *** No targets specified and no makefile found.  Stop.
        [myserver Facedetect-4b1dfe1]# yum install php5-devel
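
    Two incidental fixes visible in that transcript, independent of the output problem: "configure" fails because the current directory is not on root's PATH, so it needs an explicit ./ prefix, and php5-devel is not a package name on Red Hat style systems (php-devel is). The intended build sequence would look like:

        cd Facedetect-4b1dfe1
        phpize
        ./configure
        make && make install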

    Read the article

  • USB drive was bootable, but no longer is

    - by i-g
    I'm trying to install a new OS onto a computer from a bootable USB stick. I previously installed Ubuntu Linux and it was a piece of cake -- I downloaded the ISO image, used UNetbootin to copy it to the USB drive and make it bootable, and that was that. Now, however, no matter what I try, I can't make the same USB drive bootable again! I've tried formatting it as FAT32 and NTFS. I've tried several different Linux distributions and Windows 7. I've tried using UNetbootin, Windows 7 USB Download Tool, WinToFlash, and manually making it bootable with diskpart/bootsect/bootrec. (Yes, I've tried bootsect /nt60 x: /force.) None of this seems to be working! When I try to boot from the drive, the machine reads from it (I can see the drive's LED blinking) and then gives me the same "Insert system disk and press Enter" message. (I've disabled booting from the hard drive.) Am I missing something I need to do to make the USB drive bootable again? The USB drive in question is a SanDisk Cruzer 8GB SDCZ6. The computer I'm working on is running Windows Vista SP1.
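
    One detail the tools sometimes skip, offered as a guess: after reformatting, the drive's MBR may still lack boot code even when the partition boot sector is fine, and bootsect can rewrite both. A sketch of the full sequence - the disk number is a placeholder, so confirm it with "list disk" before cleaning, since this wipes the stick:

        diskpart
          select disk 1
          clean
          create partition primary
          active
          format fs=fat32 quick
          assign
          exit
        bootsect /nt60 X: /force /mbr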

    Read the article

  • How to remotely connect using perfmon?

    - by user36914
    You'd think there would be a ton of information on Google when you search for this, but there is not. Lots of people ask the question, but none of the answers are any good. I have a remote computer running Hyper-V (server) running a Windows 7 x64 guest (guest). Occasionally I won't be able to remote desktop to guest. I will then remote to server and see that the guest instance is constantly using about 25% of the CPU. When I try to connect directly from server, I will get the login screen, but as soon as I type the password in, it will just stay at the Windows 7 login screen; the account names will disappear and it will not log in. It responds to pings, though. I don't know how else to diagnose this other than trying to run perfmon remotely. It only happens about every 3 weeks, and I run it 24/7. So I'm trying to run perfmon remotely. I tested this out on a local VM I have running under VMware. When I try to connect using perfmon to my local VM I get this error: "When attempting to connect to the remote computer, the following system error occurred: The network path was not found". I found in another post to start the Remote Registry service, and when I start the service I get this error: "No such interface supported". Anyway: how do I remotely connect to another machine with perfmon - or, if anyone has a better idea of how to diagnose the problem above, let me know.
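
    The two quoted errors are consistent with the Remote Registry service being stopped and the remote-management firewall rules being closed on the target. A sketch of the usual prerequisites, run on the target machine (the rule group names are the English ones and vary by Windows language):

        sc config RemoteRegistry start= auto
        sc start RemoteRegistry
        netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
        netsh advfirewall firewall set rule group="Performance Logs and Alerts" new enable=yes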

    Read the article

  • Windows Small Business Server 2003: SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily. It is set up with a wizard in the Monitoring and Performance part of the "Server Management" console. It often fails with a "The page cannot be displayed" error. The event log shows:

        Event Type:     Error
        Event Source:   ServerStatusReports
        Event Category: None
        Event ID:       1
        Date:           1/16/2011
        Time:           6:03:14 AM
        User:           N/A
        Computer:       ALPHA
        Description:
        Server Status Report:
        URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
        Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
        Stack Trace:
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
            at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
            at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
            [plus lots more stack trace]

    This has been happening for years :) I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run - if I haven't run it for a while, it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly. The time that the Server Cleanup Wizard takes seems ridiculous. Thanks

    Read the article

  • Ubuntu+Win7--disk error press any key to restart

    - by Siddharth
    Apparently, none of the solutions in any other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        (/dev/sda1)  (fat32)  900 MiB     --- (MBR, I suppose)
        (/dev/sda2)  (ntfs)   70 GiB      --- (Windows 7)
        (/dev/sda3)  (ntfs)   314.88 GiB  --- (Personal file storage)
        (/dev/sda4)  (ext4)   80 GiB      --- (Ubuntu 13.04)
        (unallocated)         1.31 MiB

    So, after moving (cut-paste) everything (for backup) from the fat32 partition using Win7, I booted into Ubuntu and copied the remaining 3 files (hidden in the Win7 file explorer) -- bootmgr, bootsect.bak, and one more which I do not remember. TERRIBLE MISTAKE. After this I again booted into Windows, deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB and, using the command prompt, ran bootrec /fixmbr and bootrec /fixboot. Restarting showed me the GRUB; choosing Windows 7 showed me "Disk Error. Press any key to restart." I also installed a fresh Win7 installation on the 80 GiB partition, expecting a Windows legacy bootloader with two Win7 options, but that did not work. Then I used a Ubuntu LiveUSB to put things back to the present configuration (above), since all methods to restore the MBR failed. I copied back the fat32 partition's backup files, but couldn't copy those 3 files. Somehow they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh installation. I have used boot-repair; its Restore MBR option brings it back to "Disk error..." without even going through GRUB, so I reinstalled GRUB and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)".

        paste.ubuntu.com/5753710
        paste.ubuntu.com/5775999
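
    For completeness, the full repair sequence from the Win7 install USB's recovery prompt is below; the first two steps were already tried, so /rebuildbcd is the untested part. This assumes the 900 MiB fat32 partition really is the Windows system partition, as the GRUB entry suggests:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    If that still fails, it is worth verifying in diskpart that disk 0, partition 1 is the one flagged active, since "Disk Error" right after the MBR usually means the boot code was pointed at the wrong (or an empty) partition.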

    Read the article

  • External Dell Display doesn't work with MacBook Pro (2011) after Thunderbolt Firmware Update (1.0 and 1.2)

    - by tom
    Today two Thunderbolt firmware updates (1.0 and 1.2) became available for my MacBook Pro (Early 2011). After installing both, my external monitor, a Dell U2713HM, no longer works. The system detects the display, but the display shows only black. An Apple Thunderbolt Display works fine, and a MacBook Air can use the Dell monitor without problems. My MacBook Pro can use the Dell monitor just fine when I boot from a USB stick. Therefore the Thunderbolt firmware update clearly seems to be the problem. Does anyone have the same problem? Any solutions or workarounds? I guess there is no way to remove a Thunderbolt firmware update once it's installed, right? Update 24.10.2013: Is there no one else with this problem? In the meantime I have tried three different cables - none worked. My colleague with the same generation MacBook Pro also can't use my display after installing the firmware update. All colleagues with MacBook Airs and newer MacBook Pros (which didn't receive the firmware update) can use the display. Update 29.10.2013: Wow, OK, today my new MacBook Pro Retina 13' (Late 2013) arrived. Guess what: I cannot use the display with it either. Only HDMI works - and not at the full resolution.

    Read the article

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all

            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?

    Read the article

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD from a file once it is done working with it. It is preferable that a file would be uploaded to the worker only once, processed in place, and then deleted, without being copied somewhere else - to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.) Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free HDD from the file once processed" stuff - but maybe I'm missing something... A preferable solution should work with files (from the POV of our business logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to - except Java.)

    Read the article

  • Motherboard Dying? AHCI Drive Init and boot loop intermittent failure

    - by Adam Heath
    My computer is now intermittently failing to boot up. For the last couple of days, when I turn it on it hangs on "AHCI Drive Init...", and when powered off and on again, it boots up fine. Today it did the same, but then failed in a few other ways too, seemingly at random:

    - Hangs on "AHCI Drive Init..."
    - Boot loop (after "AHCI Drive Init..." appears for a split second (no drives listed))
    - Black screen (after "AHCI Drive Init..." appears for a split second, a black screen with all fans still running)

    The interesting part is that the above is not affected by what drives are connected, or what to. I have tried both disks, each disk individually, and no disks (along with trying the primary and secondary SATA controllers); none of this has any effect on what happens. After about 20+ attempts with different combinations, it suddenly decided to boot up, and I hadn't touched anything for about 2 cycles.

    Motherboard: Gigabyte GA-870A-USB3
    Processor: AMD Phenom II X6 1090T
    RAM: 8GB Corsair 1600
    Primary disk: Plextor 128GB SSD
    Secondary disk: Western Digital Black 1TB
    OS: Windows 8.1

    Is this my motherboard dying? Or could something else be the cause? Thanks!

    Read the article

  • Weekly cron executing 2 times:

    - by yes123
    Hi guys, I have placed a .sh file that runs a php script weekly. This script should run only once, but every Sunday it runs twice, at:

    - 1:30 am
    - 6:50 am

    Any way to fix this? Add1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * * root /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
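
    Two runs at different times usually mean the job is registered twice. The 6:50 run sits close to the "47 6 * * 7" weekly line above (anacron can shift it slightly), so the 1:30 run is the one to hunt for - a likely suspect being a second reference in another crontab. A sketch of how to track it down:

        # where else is this script referenced?
        grep -r cronweek /etc/cron* /etc/anacrontab /var/spool/cron 2>/dev/null
        # if anacron is installed, it - not the crontab line - runs cron.weekly
        ls -l /usr/sbin/anacron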

    Read the article

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this:

        |--------|               |---------------|  /-- https --> | server 1 |
        | Client | --- https --> | Load Balancer | ---- https --> | server 2 |
        |--------|               |---------------|  \-- https --> | server 3 |

    And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers:

        <VirtualHost *:8666>
            DocumentRoot "/usr/local/apache/ssl_html"
            ServerName vmbigip1
            ServerAdmin [email protected]
            DirectoryIndex index.html

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /usr/local/apache/conf/server.crt
            SSLCertificateKeyFile /usr/local/apache/conf/server.key

            <Proxy balancer://mycluster>
                BalancerMember http://1.2.3.1:80
                BalancerMember http://1.2.3.2:80
                # technically we aren't blocking anyone, but could here
                Order Deny,Allow
                Deny from none
                Allow from all
                # Load Balancer Settings
                # A simple Round Robin load balancer.
                ProxySet lbmethod=byrequests
            </Proxy>

            # balancer-manager
            # This tool is built into the mod_proxy_balancer module and allows you
            # to make simple modifications to the balanced group via a GUI web interface.
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
            </Location>

            ProxyRequests Off
            ProxyPreserveHost On

            # Point of Balance
            # Allows you to explicitly name the location in the site to be
            # balanced; here we balance "/", or everything in the site.
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
        </VirtualHost>

    What I need is for the servers in my load balancer to be:

        BalancerMember https://1.2.3.1:443
        BalancerMember https://1.2.3.2:443

    But that does not work: I get SSL negotiation errors. Even when I do get that to work, I will need to pass client certificates. Any help would be appreciated.
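
    A sketch of the https-member form, using the same addresses as above. Two caveats stated plainly: the SSLProxy* directives assume mod_ssl's proxy support is available, and no HTTP-level balancer can forward the client's certificate to the backends - the TLS session terminates at the balancer, so true end-to-end client-cert authentication needs a TCP passthrough balancer (or re-authentication between balancer and backends) instead:

        SSLProxyEngine On
        # needed if the backends' certs aren't signed by a CA the balancer trusts
        SSLProxyCACertificateFile /usr/local/apache/conf/backend-ca.crt
        <Proxy balancer://mycluster>
            BalancerMember https://1.2.3.1:443
            BalancerMember https://1.2.3.2:443
            ProxySet lbmethod=byrequests
        </Proxy>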

    Read the article

  • Recommendation for a simple no-frills Windows PDF printer driver?

    - by Scott Bussinger
    I'm looking for an extremely simple Windows PDF printer driver that I can recommend to clients. Ideally it would have these characteristics:

    - When you print something, it just creates the PDF as a temporary file and then displays it in the default PDF viewer, with no prompting. If the user wants to save it, they can save it manually from inside the viewer. This workflow should need no special post-install configuration.
    - Installation should be very simple. A double-click on the installer and a click on "Finish" sort of thing. No complicated multi-step installation, no asking questions your grandmother wouldn't know the answer to (preferably no questions at all), no extra crap being installed. An option for a completely silent installation would be nice, but not necessary.
    - Ideally it would be free, to simplify installing it on a small network, but low cost is an option.

    I've tried quite a few, but none really fit the bill. Some can achieve the first goal, but only after careful configuration; some try to install extra toolbars; some have other installation complexities that would make it hard for extremely novice users to succeed. Any suggestions? Thanks!

    Read the article

  • How do I restore tab-completion on shell variables on the bash command-line?

    - by Eric
    I've long set my most-recently visited directories to shell variables d1, d2, etc. On an ancient Fedora machine I could type a command like

        $ cp $d1/<TAB>

    and the shell would replace $d1 with text like /home/acctname/projects/blog/ and would then show me the contents of .../blog, like any tab-completion. Now, both Ubuntu wheezy/sid and Fedora 16 just backslash-escape the '$', and naturally there are no completions to show. You can see this behavior in action in an OS X Terminal window: on 10.8, do something like ls $HOME/<TAB> to see what I mean. Is there a bash shell variable or option that can restore the old behavior? man bash suggests this is a bug:

        complete (TAB)
            Attempt to perform completion on the text before point. Bash attempts
            completion treating the text as a variable (if the text begins with $),
            username (if the text begins with ~), hostname (if the text begins
            with @), or command (including aliases and functions) in turn. If none
            of these produces a match, filename completion is attempted.

    I get the above described completion when a token starts with '~' or a letter. It's just '$'-completion that's broken.
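
    Bash 4.2 made this behavior switchable, and the old expand-on-TAB behavior is what the direxpand shell option restores (assuming the distribution's bash is 4.2 or newer):

        shopt direxpand                             # check the current state
        shopt -s direxpand                          # restore expansion for this session
        echo 'shopt -s direxpand' >> ~/.bashrc      # make it permanent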

    Read the article
