Search Results

Search found 40479 results on 1620 pages for 'binary files'.

  • Backup server (OS X) like Time Machine to back up a remote Ubuntu 12.04 server

    - by Mad
    I've searched my ass off for a good solution to back up my Ubuntu server that sits in a datacenter. Locally we have an OS X server with some external drives attached; it handles Time Machine for the local workstations. What I'd like to do is fetch the files (or mount the root of my Ubuntu server) and make a Time Machine backup of it. The one problem is that if my OS X server crashes, I can't restore the system, because the backup would contain not only the OS X server but also the Ubuntu server from the datacenter. I've used Back In Time on Ubuntu to do exactly this, but that was Ubuntu (local) pulling from Ubuntu (datacenter). So, does anybody have a solution? Here are my requirements:

    - Set time intervals for backups; it needs to be backed up nightly.
    - Set retention intervals for keeping backups: hourly, weekly, monthly, etc.
    - Able to back up all computers and servers from the offsite location to the local OS X server (10.9).
    - Manageable from that one location, e.g. logging in with SSH to run rsync or rsnapshot.
    - Has a GUI (OS X).
    - Acts like Time Machine: backs up only the files that have changed, and can restore to a point back in time.
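
    If rsync/rsnapshot from the OS X box is acceptable for everything except the GUI requirement, a minimal rsnapshot sketch might look like the following (hostname, paths and the MacPorts install location are assumptions; rsnapshot's config fields must be tab-separated):

        # /opt/local/etc/rsnapshot.conf (fields below must be separated by TABs)
        snapshot_root   /Volumes/Backups/ubuntu-server/
        retain          daily    7
        retain          weekly   4
        retain          monthly  6
        # pull the server's root over SSH; hard links make unchanged files free
        backup          root@server.example.com:/    datacenter/    exclude=/proc,exclude=/sys

        # crontab entries on the OS X box to drive the nightly rotation
        30 2 * * *    /opt/local/bin/rsnapshot daily
        0  4 * * 1    /opt/local/bin/rsnapshot weekly
        30 4 1 * *    /opt/local/bin/rsnapshot monthly

    Like Time Machine, each snapshot directory looks complete but only changed files occupy new space, so restoring to a point back in time is just a copy.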

  • Created new torrent but client unable to find seeds

    - by ehfeng
    I tried creating a new torrent, using uTorrent, following the directions from torrentfreak (http://torrentfreak.com/how-to-create-a-torrent/). I used a bunch of trackers (just in case any individual one was having troubles) and uTorrent shows it as "seeding" with 100% of the file. It shows: seeds 0(1), peers 0(0).

        http://exodus.desync.com/announce
        http://eztv.tracker.prq.to/announce
        http://open.tracker.thepiratebay.org/announce
        http://www.torrent-downloads.to:2710/announce
        http://denis.stalker.h3q.com:6969/announce
        udp://denis.stalker.h3q.com:6969/announce
        http://www.sumotracker.com/announce

    I've also tried removing the torrent and re-adding it, pointing uTorrent at the already existing files so that it rechecks them and begins seeding again. Yet when I share the .torrent file with my other computer and attempt to download, it is unable to find any peers or seeds. To confirm that both clients are working, I downloaded a torrent I found on isohunt on both computers, using their respective clients. This tells me there must be something wrong in how I'm creating my torrents. Any help is appreciated.

  • How to utilize 4TB HDD, which is showing up as 2.72TB

    - by mason
    I have two internal HDDs. They're both 4 TB, both formatted with the GPT partitioning scheme, and both basic disks (not dynamic). I'm on Windows 8 64-bit, with UEFI rather than a legacy BIOS. When I view the disks in the Disk Management MMC, each shows a single NTFS partition taking up the entire drive. Disk Management's bottom pane gives each drive a capacity of 3725.90 GB, but the top pane says 2794.39 GB. In "My Computer"/"This PC" they show up as only 2.72 TB, which matches the capacity I get from some 3 TB HDDs I have. Why are they showing up as only 2.72 TB, and will I be able to use the full 4 TB capacity?

    Also of note, although I'm not sure it's relevant: I often get corrupted files on these two HDDs. None of my other HDDs give me corrupted files. Usually the problem is fixed by running chkdsk /f on the drives, but it's extremely annoying. In the picture below, it's the X: and Y: drives.

    Steps I've tried: flashed the latest BIOS (MSI J.90 to K.30).
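
    For reference, a quick unit check (plain awk arithmetic, nothing vendor-specific) explains the first number but not the second:

        awk 'BEGIN {
            printf "4 TB (decimal) = %.2f GiB\n", 4e12 / 2^30          # ~3725: the bottom-pane figure
            printf "2.72 TiB       = %.2f TB\n",  2.72 * 2^40 / 1e12   # ~2.99: only ~3 TB visible
        }'

    So the bottom pane is simply the honest size of a 4 TB drive expressed in binary units, while 2.72 TiB corresponds to roughly 3 TB. Something between the controller/driver and Explorer appears to be capping the disk at 3 TB, rather than at the classic 2 TiB MBR limit, which GPT already rules out.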

  • Mplayer no sound when playing some movies

    - by Ivan Peevski
    OK, this is a bit of a strange problem that somehow crept into my system; it used to work fine. When I try to play certain video files with mplayer, there is no sound. As far as I can tell, it is only an issue with AC3 and DTS sound tracks (using the ffmpeg decoder). Mplayer says:

        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        AUDIO: 48000 Hz, 6 ch, s16le, 1536.0 kbit/33.33% (ratio: 192000->576000)
        Selected audio codec: [ffdca] afm: ffmpeg (FFmpeg DTS)
        ==========================================================================
        [AO_ALSA] Playback open error: Device or resource busy
        Failed to initialize audio driver 'alsa'
        Could not open/initialize audio device -> no sound.
        Audio: no sound

    (It's similar with AC3 sound, but using the ffac3 audio codec.) Trying a different audio output (-ao oss/pcm/sdl) doesn't fix the problem. The strange thing is that if I play these files directly with ffplay, they work fine, and mplayer sound with mp3/ogg is fine. My ALSA configuration is standard (no /etc/asound.conf or ~/.asound*).

        OS: Linux Gentoo
        Mplayer: 1.0_rc4_p20100213 (SVN-r30554-4.3.4)
        FFMpeg: 0.5_p20601-r1 (SVN-r20601)

    Any other information I can provide?
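
    Since ALSA reports the device as busy rather than a decoding failure, one generic diagnostic is to see which process is holding the PCM device (fuser is in psmisc; this is a sketch, not mplayer-specific):

        fuser -v /dev/snd/*          # list processes with ALSA device nodes open
        lsof /dev/snd/* 2>/dev/null  # alternative view of the same thing

    If a sound daemon shows up there, that may explain why stereo mp3/ogg plays (it can be mixed in software) while 6-channel AC3/DTS, which wants the hardware device directly, fails with "busy".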

  • "File" exists or not

    - by SnailTang
    Running ls -il gives me:

        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ?            éaj/p+st.ó·e
        ? -????????? ? ? ? ? ?            éaj/p+st.ó·e
        ? -????????? ? ? ? ? ?            é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ?            é@j/p¦ft.¦·N

    When I try to display these files, all I get is the names:

        p+st.ó·e
        p¦ft.¦·N

    Where do these files (or whatever they are) actually exist, and what makes them show up here?
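
    When names are this mangled, working by inode number instead of by name is usually the safest route; a sketch (GNU ls/find assumed, and 1234567 is a made-up inode number):

        ls -lib                                   # -i prints inodes, -b escapes unprintable bytes
        find . -maxdepth 1 -inum 1234567 -ls      # confirm you have the right entry
        find . -maxdepth 1 -inum 1234567 -delete  # then remove it

    The ? columns mean stat() itself is failing on those directory entries, though, which points at filesystem corruption; if the inode route fails too, fsck on that filesystem is the next step.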

  • How to setup a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being sent to the document root? In case someone asks, the app is based on the Laravel 4 framework. It's really bad right now, because anyone can access the files from the browser.
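
    ServerName only takes a bare hostname, so the value apps.mydomain.com/myapp never matches anything. A hedged sketch of the intended mapping (the site file name is an assumption; Apache 2.2 syntax to match the config above):

        sudo tee /etc/apache2/sites-available/apps <<'EOF'
        <VirtualHost *:80>
            ServerName apps.mydomain.com
            # expose the app under /myapp, but only Laravel's public/ directory
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All        # lets Laravel's .htaccess do the routing
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        EOF
        sudo a2ensite apps && sudo /etc/init.d/apache2 reload

    Pointing the alias (or DocumentRoot) at public/ rather than at /var/www/myapp is also what stops the rest of the framework files from being browsable.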

  • Browser sends http request with RANGE

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, resources (CSS and JS files) don't seem to work. Looking at Firebug, I see that the browser sends the HTTP request with "Range: bytes=0-". The server responds with either an empty 200 OK or an empty 206 Partial Content. Here is an example:

        Response Headers
        Date            Mon, 23 Nov 2009 23:33:26 GMT
        Server          Apache/2.2.13 (Fedora)
        Last-Modified   Thu, 19 Nov 2009 22:58:55 GMT
        Etag            "18-3aec-478c14dbee138"
        Accept-Ranges   bytes
        Content-Length  15084
        Content-Range   bytes 0-15083/15084
        Connection      close
        Content-Type    text/css

        Request Headers
        Host            fedora.test
        User-Agent      Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5
        Accept          text/css,*/*;q=0.1
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive      300
        Connection      keep-alive
        Referer         http://fedora.test/pictures/
        Cookie          __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685
        Range           bytes=0-
        If-Range        "18-3aec-478c14dbee138"

    I don't know if the browser is sending the wrong request, or if it's the server that is doing this. Requests to the outside (such as Google Analytics) work fine. This is running in Fedora 11 in VirtualBox, with Apache and PHP. The files are being served through the "shared folders" feature of VirtualBox (could that be related?). No error logs could help me.
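
    Two things may be worth trying: reproduce the request outside the browser, and, since the files live on a VirtualBox shared folder, turn off Apache's sendfile(), which is known to serve empty or stale content from vboxsf mounts (the CSS path and conf file name below are assumptions):

        # reproduce the exact Range request from the command line
        curl -v -H 'Range: bytes=0-' http://fedora.test/css/style.css -o /dev/null

        # disable sendfile for Apache on Fedora, then restart
        echo 'EnableSendfile Off' | sudo tee /etc/httpd/conf.d/no-sendfile.conf
        sudo service httpd restart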

  • Apache 2.2: present RSS HTTP 410 pages with an application/rss+xml content type

    - by Mark Bakker
    I have a problem sending HTTP 410 for very old RSS feeds. Functionally this can happen in one of two cases:

    - very old RSS feeds whose content is not updated anymore and whose subject could not be moved to another feed;
    - migration from a third-party site to our site, where the RSS feed is no longer functionally supported.

    I tried several things in my site config, see below:

        <VirtualHost *:80>
            DocumentRoot /opt/tomcat/webapps/ROOT/
            ErrorDocument 500 /error/static/error-500.html
            ErrorDocument 503 /error/static/error-500.html
            ErrorDocument 404 /error/static/rss/error-404.html
            ErrorDocument 410 /error/static/rss/error-410.html

            # When error pages need to be served by apache,
            # exclude the files to serve as below (in comment)
            SetEnvIf Request_URI "/error/static/*" no-jk

            # force all files to be image/gif:
            <Location *.rss>
            #<Location *>
                #ForceType application/rss+xml
            </Location>
            #AddType application/rss+xml .rss
            #AddType application/rss+xml .xml
            #AddType application/rss+xml .html

            JkMount /* rss;use_server_errors=402
            # JkMount /* rss

            RewriteEngine on
            JkMount /news.rss rss
            JkMount /documenten-en-publicaties.rss rss
            RewriteEngine on
            RewriteRule ^/news.rss$ - [NC,T=application/rss+xml,G,L]
            RewriteRule ^/documenten-en-publicaties.rss$ - [NC,G,L]

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            ErrorLog "|/usr/bin/logger -s -p local3.err -t 'Apache'"
            CustomLog "|/usr/bin/logger -s -p local2.info -t 'Apache'" combined
            ServerSignature Off
        </VirtualHost>

    The desired end result is that /news.rss and /documenten-en-publicaties.rss return a 410 whose error-page body is served with the content type 'application/rss+xml'.
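
    Whichever combination of directives ends up winning, the result is easy to verify from the shell (hostname is a placeholder):

        curl -sI http://www.example.org/news.rss | egrep -i '^(HTTP|Content-Type)'
        # want: HTTP/1.1 410 Gone  plus  Content-Type: application/rss+xml

    Note that with the G flag the response body comes from the ErrorDocument, so the application/rss+xml override ultimately has to take effect on /error/static/rss/error-410.html itself, not on the .rss URL.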

  • Ati X1600 driver problem on Mac

    - by Mulot
    Hi all, I currently own an '06 MacBook Pro 1,1, and for some months I have had recurring display bugs and artifacts. A quick search shows that a lot of other Mac users (iMac or MacBook Pro) have the same problem, caused by the X1600 video card. Apparently it's due to overheating; in my case, even without much heat, I get very bad display bugs such as colorful pixel lines, glitches, freezes and crashes, all of this on Tiger, Leopard and Snow Leopard. I found an interesting article talking about this problem and trying to gather people so that Apple takes the serious GPU problem into consideration. In one of the comments, a user said he removed all bundles named "radeon" and then had no more problems under Leopard; it seems to work fine on Snow Leopard too. I did the same thing (removed the driver bundles and restarted) and have no more problems, but also no more 3D acceleration, which is not an acceptable solution. For those interested, here is the list of files to delete to stop having this problem:

        /System/Library/Extensions/ATIRadeonX1000.kext
        /System/Library/Extensions/ATIRadeonX1000GA.plugin
        /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
        /System/Library/Extensions/ATIRadeonX2000.kext
        /System/Library/Extensions/ATIRadeonX2000GA.plugin
        /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by organizing a group to push Apple to put a replacement program in place.

    Edit: to locate those files on your hard drive:

        sudo find / -iname "*radeon*"

  • MySQL binlog seems incomplete?

    - by warl0ck
    I created a database, a table, and inserted some data, and found this binlog.000001 in my log folder. But when I run mysqlbinlog binlog.000001, it only shows the stuff below, which seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)

        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #120924 21:12:56 server id 1  end_log_pos 107  Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
        # Warning: this binlog is either in use or was not closed properly.
        ROLLBACK/*!*/;
        BINLOG '
        GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
        '/*!*/;
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    If the warning "this binlog is either in use or was not closed properly" is the cause, how do I force-close the log?

    EDIT: After a FLUSH LOGS command, I see "0 rows affected" and a few new files: binlog.000001 binlog.000002 binlog.000003 binlog.000004 binlog.index. Their contents are nearly the same as binlog.000001. I then dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered.

    EDIT 2: my my.cnf:

        [mysqld]
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        log-bin         = /var/log/mysql/binlog
        binlog-do-db    = mydb
        bind-address    = 127.0.0.1
        key_buffer         = 16M
        max_allowed_packet = 16M
        thread_stack       = 192K
        thread_cache_size  = 8
        myisam-recover     = BACKUP
        query_cache_limit  = 1M
        query_cache_size   = 16M
        expire_logs_days   = 10
        max_binlog_size    = 100M

    P.S. /var/log/mysql.err and /var/log/mysql.log are both empty.
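
    FLUSH LOGS (or its command-line wrapper) is the normal way to close the active binlog, and mysqlbinlog can filter the replay per database; a sketch:

        mysqladmin -u root -p flush-logs
        # replay only mydb's events from all binlogs, in order
        mysqlbinlog --database=mydb /var/log/mysql/binlog.00000* | mysql -u root -p

    One caveat that would explain the nearly empty logs: with statement-based logging, binlog-do-db filters on the *default* database, so statements issued before (or without) USE mydb are never written to the binlog at all.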

  • Windows scheduled startup task doesn't appear to be fully working. Why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks, and here I am.

    I have tried two different versions of my script, in case the syntax was bad. Neither works. My first .CMD approach uses commands similar to this:

        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"

    My second .CMD approach invokes a shortcut file instead:

        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk" ^

    Both of these scripts work fine if I run them manually, either by running the .CMD file directly or by forcing the Scheduled Tasks MSC console to "Run" the task. The manual process works fine, but automated it does not. My scheduled task is set to run at startup and uses "highest privileges" to execute as admin.

    At the end of my .CMD script, I added a line to write to a text file, just to prove that the script was being run:

        echo foo > C:\foo.txt

    When I reboot my server and Scheduled Tasks kicks in, my ten .exe files never run, but I do get C:\foo.txt on my drive. What gives?

  • Windows 7 - "A disk read error occurred. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: when I switch on my PC, after the BIOS POST, a cursor blinks for about 5 seconds and then I get this error message:

        A disk read error occurred. Press Ctrl + Alt + Del to restart.

    I am able to go into the BIOS, but the Windows loader doesn't even start. The message is shown right after my motherboard logo comes and goes.

    Symptoms: I did notice my system freezing for minutes at a time over the past two days. It also stopped halfway through the Windows boot process a couple of times, and I had to hard-reset to get it working. Since this morning, I only get this error message.

    Configuration:
    - Operating system: Windows 7 Ultimate 32-bit only
    - Hard disk: one physical disk, 80 GB SATA
    - Partitions: two, C: and D:
    - File system: NTFS; no drive encryption or compression is turned on

    Searching the net, I found people mentioning these possible causes: the hard disk is physically failing, a corrupt MBR, or a bad sector.

    I am planning to buy a new hard disk, install Windows on it, and continue. But I need data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think that if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk?

    Update: I bought a new disk, installed Windows on it, and added the defective one as a slave. I was then able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.

  • Is my HDD dead forever?

    - by Roberto
    Yesterday I turned on my computer and it couldn't boot. I found out the HD (a 320 GB SATA Seagate Momentus 7200.3, for notebooks) was broken and couldn't be recognized by the BIOS. I have another drive of the same model, so I exchanged the controller boards. It turns out there is a problem on the broken drive's board, since my good hard drive didn't work with it. But the broken hard drive doesn't work with the good board either: it can be recognized, but when I insert a Windows installation DVD it says the hard drive is 0 GB. I put it in an enclosure and connected it to another computer via USB, but it doesn't show up in "My Computer". I used a file-recovery program called "GetDataBack for NTFS"; it recognized the hard drive, but with the wrong size (2 TB). When it tries to read the drive, it gets an I/O error reading a sector; the drive does spin up while it tries. So, since I'm using a good board on it, the problem seems to be internal. Is there anything someone could do to recover the files from it?

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables, and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps, and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings.

    EDIT: the question "Simplest Automated Analysis and Chart Generation Tool?" references Spotfire and Tableau, which have free 14- and 30-day trials respectively. Each program is nicely streamlined and designed. I'm looking for a program between that quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
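
    For the "simply add inserts into a database" half, SQLite's own CLI gets close; a sketch with made-up file and column names:

        tail -n +2 monthly.csv > monthly-data.csv      # drop the CSV header row
        sqlite3 data.db <<'EOF'
        CREATE TABLE IF NOT EXISTS sales(region TEXT, month TEXT, amount REAL);
        .mode csv
        .import monthly-data.csv sales
        SELECT region, SUM(amount) FROM sales GROUP BY region;
        EOF

    Repeated .import runs append to the existing table, so accumulating new CSVs is one command. Charting still needs a second tool, though, which is exactly the gap described above.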

  • Munin 2 data not showing up on graph

    - by letronje
    I have a fresh installation of Munin 2.0.1 on Ubuntu 12.04, and the first time I tried to view graphs, they showed properly. (After installation, I had to follow http://munin-monitoring.org/wiki/CgiHowto2 to set it up.) Since then, the graphs show up but with just one data point (a single vertical line), as if no data has been collected after the first time I viewed them.

    In Munin 1.4 there was munin-cron, which ran every 5 minutes, and I saw new data being plotted on the graph at least every 5 minutes. If there is no cron job in v2, how does data collection work in Munin 2? Is the data collected when the graphs are requested?

    The file timestamps in /var/lib/munin have not changed since the first time I viewed the graphs, but I do see the munin-node process running (I restarted it several times). I also see no errors in the munin-node log files or the apache2 log files. Any idea what could be wrong? Screenshot: http://i.imgur.com/uzuAK.png

    Also, is there a way to pre-create graphs instead of generating them dynamically, on the fly?
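
    Collection in Munin 2.x is still cron-driven; the CGI part only renders HTML and graphs on demand. So the first things to check are the cron entry and a manual collection pass (Debian/Ubuntu paths assumed):

        cat /etc/cron.d/munin                       # should invoke munin-cron every 5 minutes
        sudo -u munin munin-cron                    # force one collection pass by hand
        tail -n 50 /var/log/munin/munin-update.log  # look for plugin or node errors

    As for pre-creating graphs instead of rendering on the fly: munin.conf accepts "graph_strategy cron" and "html_strategy cron" to switch back to 1.4-style pre-generated output.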

  • Postgresql base backup script

    - by Terry Lorber
    I'm using the following script to do a file-level backup of PostgreSQL. I sometimes see that the last part, the cleanup done after pg_stop_backup is called, hangs while it waits for the last WAL file to be created; the REF_FILE to search for is sometimes wrong. I'm also shipping these files to a different machine every 5 minutes via rsync. What do other people do to safely remove old WAL files?

        #!/bin/bash
        PGDATA=/usr/local/pgsql/data
        WAL_ARCHIVE=/usr/local/pgsql/archives
        PGBACKUP=/usr/local/pgsqlbackup
        PSQL=/usr/local/pgsql/bin/psql

        today=`date +%Y%m%d-%H%M%S`
        label=base_backup_${today}

        echo "Executing pg_start_backup with label $label in server ... "
        CP=`$PSQL -q -Upostgres -d template1 -c "SELECT pg_start_backup('$label');" -P tuples_only -P format=unaligned`
        RVAL=$?
        echo "Begin CheckPoint is $CP"
        if [ ${RVAL} -ne 0 ]
        then
            echo "PSQL pg_start_backup failed"
            exit 1;
        fi
        echo "pg_start_backup executed successfully"

        echo "TAR begins ... "
        pushd $PGBACKUP
        tar -cjf pgdata-$today.tar.bz2 --exclude='pg_xlog' $PGDATA/*
        popd
        echo "TAR completed"

        echo "Executing pg_stop_backup in server ... "
        $PSQL -Upostgres template1 -c "SELECT pg_stop_backup();"
        if [ $? -ne 0 ]
        then
            echo "PSQL pg_stop_backup failed"
            exit 1;
        fi
        echo "pg_stop_backup done successfully"

        TO_SEARCH="*${CP:0:2}000000${CP:3:2}.00${CP:5}"
        echo "Check for ${WAL_ARCHIVE}/${TO_SEARCH}.backup"
        while [ ! -e ${WAL_ARCHIVE}/${TO_SEARCH}.backup ]; do
            echo "Waiting for ${WAL_ARCHIVE}/${TO_SEARCH}.backup"
            sleep 1
        done

        REF_FILE="`echo ${WAL_ARCHIVE}/*${CP:0:2}000000${CP:3:2}`"
        echo "Reference file ${REF_FILE}"
        # "-not -newer" or "\! -newer" will also return REF_FILE,
        # so you have to grep it out and use xargs; otherwise you
        # could also use the -delete action
        find ${WAL_ARCHIVE} -not -newer ${REF_FILE} -type f | grep -v "^${REF_FILE}$" | xargs rm -f

        REF_FILE="`echo ${PGBACKUP}/pgdata-$today.tar.bz2`"
        echo "Reference file ${REF_FILE}"
        find $PGBACKUP -not -newer ${REF_FILE} -type f -name pgdata* | grep -v "^${REF_FILE}$" | xargs rm -f
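
    One way to avoid reconstructing TO_SEARCH by hand: recent PostgreSQL versions (8.2 and later, if memory serves) can name the stop segment for you, in the script's own style:

        # instead of a bare "SELECT pg_stop_backup();", capture the WAL file name
        STOP_WAL=`$PSQL -q -Upostgres -d template1 -P tuples_only -P format=unaligned \
            -c "SELECT pg_xlogfile_name(pg_stop_backup());"`
        echo "WAL segment for this backup is ${WAL_ARCHIVE}/${STOP_WAL}"

    Everything strictly older than that segment (plus its .backup history file) is then safe to prune; from 9.0 onwards the contrib tool pg_archivecleanup does that pruning for you.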

  • VMware virtual machine network devices malfunctioning

    - by sheepz
    I'm running Ubuntu 10.04 LTS and VMware Workstation 7.0.1 build-227600. The virtual machine I'm running in VMware is a custom distribution built on Debian Linux 3.1. I'm still pretty much a beginner at UNIX administration. After I messed around with the VM (I changed only the name of the folder the .vmx was in, renamed the .vmx and the other .v* files accordingly, and updated the configuration in the .vmx file to match), the network devices in the virtual machine stopped working. The virtual machine is used for securely sending messages.

    As far as I know, a perl script called proxy-gen-ifalias is responsible for properly setting up the two virtual network devices eth0 and eth1. The virtual machine comes with a GUI in which I have set up two ethernet devices, one internal, the other external. Now, after my changes, the UI gives me this error output:

        perl proxy-gen-ifalias eth0 /etc/modprobe.d/alias-eth0 /sbin/update-modules
        perl proxy-gen-ifalias eth1 /etc/modprobe.d/alias-eth1 /sbin/update-modules
        ifdown eth0
        ifdown: interface eth0 not configured
        ifdown eth1
        ifdown: interface eth1 not configured
        perl proxy-gen-netcfg /etc/network/interfaces
        ifup eth0
        SICCSIFADDR: No such device
        eth0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        eth0: ERROR while getting interface flags: No such device
        Failed to bring up eth0.
        ifconfig eth0
        eth0: error fetching interface information: Device not found
        make: *** [/etc/network/interfaces] Error 1

    The contents of the two perl scripts referred to in the message (proxy-gen-ifalias and proxy-gen-netcfg) are here: paste.pocoo.org/show/2AMzAYhoCRZqlGY7wUFk/
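
    Moving or renaming a VM often regenerates the virtual NICs' MAC addresses, which quietly breaks guests that pin interface names or module aliases to the old MACs. It may be worth comparing both sides (the .vmx file name below is a placeholder):

        # on the host: what MACs does the .vmx assign now?
        grep -i 'ethernet[01]\.' myvm.vmx | grep -i -e addresstype -e generatedaddress

        # in the guest: does the kernel see the NICs at all, and as what?
        dmesg | grep -i eth
        cat /proc/net/dev

    If the guest sees the hardware but under different MACs or names, regenerating the alias files that proxy-gen-ifalias writes, or answering "I moved it" rather than "I copied it" when VMware asks, would be the direction to look.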

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering, running the dspamd daemon under Ubuntu 9.10 and setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well: all users on my system get all of their email, whether dspam thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the dspam command, I tried this on the files in my inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*;  do cat $file | dspam --class=spam     --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam; it couldn't find any! The problem, when I tracked it down, was that these commands trained the user "brandon", but incoming email is compared against the username "brandon@mydomain", so classification was running against a completely empty training database!

    So, what can I do to make the above commands train my fully qualified email address rather than my bare username? I would like to avoid having to run dspam as root with a --user option. I would have expected the dspam configuration files to have an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing.

    When I used the Berkeley DB backend to dspam, I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL backend and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
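
    One hedged variant of the training loops that targets the fully qualified user the incoming mail is filed under (dspam.conf's Trust directive is what normally lets a non-root account pass --user):

        for file in Mail/Inbox/*; do
            dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
        done
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done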

  • Running CGI With Perl under Apache Permission Problem

    - by neversaint
    I have the following entry in apache2.conf on my Debian box:

        AddHandler cgi-script .cgi .pl
        Options +ExecCGI
        ScriptAlias /cgi-bin/ /var/www/mychosendir/cgi-bin/
        <Directory /var/www/mychosendir/cgi-bin>
            Options +ExecCGI -Indexes
            allow from all
        </Directory>

    Then I have a perl CGI script stored with these permissions:

        nvs@somename:/var/www/mychosendir$ ls -lhR
        .:
        total 12K
        drwxr-xr-x 2 nvs nvs 4.0K 2010-04-21 13:42 cgi-bin

        ./cgi-bin:
        total 4.0K
        -rwxr-xr-x 1 nvs nvs 90 2010-04-21 13:40 test.cgi

    However, when I try to access it in the web browser (http://myhost.com/mychosendir/cgi-bin/test.cgi), I get this error:

        [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] (8)Exec format error: exec of '/var/www/mychosendir/cgi-bin/test.cgi' failed
        [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] Premature end of script headers: test.cgi

    What's wrong with it?

    Update: I also have the following entry in my apache2.conf:

        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
        </Files>

    And the content of test.cgi is this:

        #!/usr/bin/perl -wT
        print "Content-type: text/html\n\n";
        print "Hello, world!\n";
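
    "Exec format error" from exec() usually means the kernel could not parse the #! line, typically because a UTF-8 BOM or Windows line endings sit in front of it; a quick check:

        head -1 /var/www/mychosendir/cgi-bin/test.cgi | od -c | head -2
        # a UTF-8 BOM shows up as 357 273 277 before the '#'; CRLF endings as \r \n
        dos2unix /var/www/mychosendir/cgi-bin/test.cgi    # rewrite the file if either appears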

  • Weird permission issue with POSIX ACLs, NFS v3 on Linux

    - by jon
    I have two Linux systems, both running Debian Squeeze. The versions of (I think) the stuff involved are:

        kernel: 2.6.32-5-xen-amd64
        ii  nfs-kernel-server  1:1.2.2-4squeeze2  support for NFS kernel server
        ii  libnfsidmap2       0.23-2             An nfs idmapping library
        ii  nfs-common         1:1.2.2-4squeeze2  NFS support files common to client and server
        ii  portmap            6.0.0-2            RPC port mapper

    (The client doesn't have nfs-kernel-server involved.) I have a directory with ACLs:

        # file: dirname
        # owner: jon
        # group: foogroup
        # flags: -s-
        user::rwx
        user:www-data:rwx
        group::r-x
        group:foogroup:rwx
        mask::rwx
        other::r-x
        default:...

    There are two users, neither of whom owns the directory:

        uid=3001(jake) gid=3001(jake) groups=3001(jake),104(wheel),3999(foogroup)
        uid=3005(nic)  gid=3005(nic)  groups=3005(nic),3999(foogroup)

    The jake user can create files in the directory without issues. The nic user can't. All UIDs/GIDs are the same on the client and the server. I've verified (by packet sniffing) that the right uids/gids get sent via AUTH_UNIX (uid=gid=3005, auxiliary gids=3005,3999), and that the server replies with NFS3ERR_ACCESS, which the kernel on the client maps to EACCES (Permission denied). Can anyone help me here?
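
    One way to bisect the problem: check whether the same uid/gids pass the ACL locally on the server, which separates an ACL problem from an NFS-layer problem (the export path below is a placeholder):

        # on the NFS server, as root
        sudo -u nic  touch /export/dirname/probe-nic  && echo "nic ok locally"
        sudo -u jake touch /export/dirname/probe-jake && echo "jake ok locally"
        getfacl /export/dirname     # confirm mask:: really is rwx

    If nic can write locally but not over NFS, the fault sits in the server's NFS ACL evaluation rather than in the ACL itself.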

  • win8: access denied to external USB disk; update access rights fails

    - by Gerard
    I used to work with two laptops (Vista and Win7), my work being files on an external USB disk. My oldest laptop broke down, so I bought a new one; I had no option other than Windows 8.

    1. I suspect something changed with access rights, as my external disk started giving "access denied" errors on Win8. I was prompted (by Win8) to fix the access rights, which I tried to do through Properties > Security. The process was very slow, ended up saying "disk is not ready", and after that the USB disk was somehow not recognized anymore.

    2. Back on Win7, I was warned that my disk needed to be checked, which I did. In this process some files were lost (most of them I could recover from the found.00x folders, and I have a backup anyway). Also, I don't know why, but under Win7 all the folders now show with a lock icon.

    3. Then back again to Win8: the same problem, access denied to my disk, and no way to change the access rights, as it gets stuck on "disk is not ready" again.

    Now I am pretty sure there is some kind of bug or inconsistency between Win8 and Win7. I did steps 2 and 3 a few times. At some point I also got "access denied" on Win7. I could restore access rights on the disk for the "SYSTEM" group (Properties > Security > Edit, full control for group "SYSTEM"...), but then I still get the same access-rights problem on Win8, and it still gets stuck while trying to restore full control to the "SYSTEM" and "Administrators" groups.

    After trying for more than 3 days, I am losing my patience with that bloody Win8, which I did not want to buy but had no choice. I have applied all available Windows updates to Win8; it does not help. Can anybody help me?

  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are being stored in /var/lib/php/session. I originally set the permissions of this folder like so:

        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session

    This worked fine for HTTP sessions. Files were being saved in that folder with these permissions:

        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...

    The problem, however, is that PHP scripts accessed via HTTPS do not seem to run as the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were saved when initiated over that protocol:

        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...

    So, for whatever reason, sessions initiated by clients browsing over HTTPS are saved by apache:apache, while sessions from HTTP clients are saved as someclient:psacln.

    What I'd like to ask:

    1. How can I avoid this problem with session permissions? When a session is created via unencrypted HTTP and the client then visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session file created by someclient:psacln. The converse is also true.
    2. Can I change the user which runs the Apache HTTPS server, via Plesk or the command line?
    3. If not, can I have PHP sessions saved with rw-rw---- permissions, and then add apache to the psacln group?

    Any other suggestions on how to fix this issue?
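
    Point 3 above can be sketched concretely; the group change is one command, and PHP's extended save_path syntax ("depth;mode;path") can make it create 0660 session files, though a depth greater than 0 needs the subdirectory tree pre-created (mod_files.sh in the PHP source does this). Where the directive lives varies: php_admin_value for mod_php, the per-domain php.ini for FastCGI. This is a sketch of the idea, not a tested Plesk recipe:

        sudo usermod -a -G psacln apache
        # then, in the domain's PHP configuration (location depends on the handler):
        #   session.save_path = "1;0660;/var/lib/php/session"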

  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS for most of the config files (Subversion) and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I can try out a new potential web host by just signing up for an account, logging in, and automatically pulling my standardised config onto the box.

    I know there is a set of tools that can help: Puppet, Chef, Vagrant. And there is a set of services that sell solutions: Jumpbox (http://www.jumpbox.com/) and BitNami Cloud (http://bitnami.org/cloud).

    I don't mind investing time in learning a new tool, but as a no-budget start-up I'm keen to keep monthly costs down. My biggest concern is that time spent on server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for recommendations on which way to jump on config:

    1. Stick with SSH and manual deploys, at least until we get big.
    2. Bite the bullet and learn, say, Puppet. We may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap.
    3. Don't bother; just pay the ~$100/month for a standard config service. It'll cost $1000+/year, but we should focus on the code.

    Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, Django), but are there standard(ish) setups that include these, or that I could build on more quickly? Are the configs optimised for small, medium and large VPSes (1 GB, 4 GB, 16 GB)? How secure are they?
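
    For option 1, even a dumb idempotent bootstrap script kept in the same VCS as everything else gets most of the way there; a sketch with placeholder URLs and a package list you would trim (several of these packages live in EPEL on CentOS 6):

        #!/bin/sh
        set -e
        yum -y install httpd php php-mysql mysql-server python-django subversion git
        svn export --force https://svn.example.com/config/trunk/httpd /etc/httpd/conf.d
        git clone git://github.com/you/webapp.git /var/www/webapp 2>/dev/null \
            || (cd /var/www/webapp && git pull)
        chkconfig httpd on
        service httpd restart

    Puppet or Chef buy you convergence and an inventory; a script like this buys you ten minutes on a fresh box, which for a two-person, no-budget team is often enough until the box count grows.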

  • Why does RoboCopy create a hidden system folder?

    - by Svish
    I thought I would try out RoboCopy for mirroring the contents of a folder to another hard drive, and it seems to have worked. But for some reason, to see the destination folder I have to both enable "Show hidden files, folders and drives" and disable "Hide protected operating system files". Why is this? Both the source and the destination were initially visible, normal directories. When I open the properties of the destination folder, the Hidden attribute isn't even checked. What is going on here? Is it because I ran it in an administrator command prompt? Or is it an issue with my choice of options? Or does RoboCopy really just work this way?

        robocopy E: I:\E /COPYALL /E /R:0 /MIR /B /ETA

    Update: I tried copying another drive to another folder, and the same thing happened there. But when I copy a folder to a different folder, the destination folder stays normal. Could it be because I'm copying a drive? If so, how can I prevent this from happening? Because I really do want to copy the whole drive...
