Search Results

Search found 45804 results on 1833 pages for 'large files'.

Page 570/1833

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all the computers in our house, and several people use it all the time across those machines. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running. So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? That way every computer could stay in sync whenever it has web access. Can this be done? And, of course, how? [Update] Maybe I should not have taken knowledge of OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic 'just sync the complete file' approach won't be useful at all, because I'd constantly get 'file has changed on server and on client' conflicts. The sync needs to understand OneNote files and be able to sync their content - i.e. OneNote itself needs to sync the files, not some generic sync tool.

    Read the article

  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana 2 application from a shared-hosting environment over to a virtual dedicated server. After the migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced as much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed immediately within a directory in my public folder:

        /var/html/www/info.php (general information about the server)
        /var/html/www/logo.jpg (some flat file)
        /var/html/www/somesite.com/ [Kohana site lives here]

    So my .htaccess file is within that directory, and has the following contents:

        # Turn on URL rewriting
        RewriteEngine On

        # Installation directory
        RewriteBase /somesite.com/

        # Protect application and system files from being viewed
        # This is only necessary when these files are inside the webserver document root
        RewriteRule ^(application|modules|system) - [R=404,L]

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # Rewrite all other URLs to index.php/URL
        RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

        # Alternatively, if the rewrite rule above does not work, try this instead:
        #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load some other non-default controller, the site fails. If I put index.php back in the URL, the calls to other controllers work just fine. I'm really at my wits' end, and would appreciate some direction here.
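
    A common cause of exactly this symptom after moving from shared hosting to a dedicated or virtual server is that the new Apache ships with AllowOverride None, so the .htaccess above is silently ignored and only the implicit index.php route keeps working. A minimal thing to check in the vhost/httpd.conf (the path matches the layout above; everything else is a sketch to adapt, not a confirmed fix):

        <Directory /var/html/www/somesite.com>
            Options +FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    After changing it, reload Apache (e.g. apachectl graceful) and retry a non-default controller without index.php in the URL; it's also worth confirming mod_rewrite is actually loaded on the new box.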

    Read the article

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a USB flash drive larger than 4 GB, using a single partition. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, but they unfortunately break down when I try to scale things up beyond 4 GB. There are several other tools to do this (UNetbootin and usb-creator) which follow a very similar formula. I figured out that a big problem of mine is that all of these tools assume the USB drive is formatted as FAT32, which unfortunately cannot hold a single file larger than 4 GB. This is unfortunate because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me, however; after about 20 attempts verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the "sync" and "umount" commands before rebooting, which didn't affect things at all. I guess I have several possible questions - but let's start with the obvious one: is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16 GB)?
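
    For reference, one sequence that keeps the filesystem intact is to do everything on the mounted partition first, unmount cleanly, and only then write the MBR to the raw device. This is only a rough sketch - device names, mount points and the syslinux mbr.bin path are assumptions that vary by distro:

        mkfs.ext2 -L XUBUNTU /dev/sdb1             # one big ext2 partition on the stick
        parted /dev/sdb set 1 boot on              # mark the partition bootable
        mount /dev/sdb1 /mnt/usb
        cp -a /mnt/iso/. /mnt/usb/                 # ISO contents, loop-mounted at /mnt/iso
        mkdir -p /mnt/usb/syslinux
        cp /mnt/usb/isolinux/* /mnt/usb/syslinux/  # reuse the isolinux config for extlinux
        mv /mnt/usb/syslinux/isolinux.cfg /mnt/usb/syslinux/extlinux.conf
        extlinux --install /mnt/usb/syslinux       # writes the ext2 boot sector while mounted
        sync
        umount /mnt/usb
        dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdb bs=440 count=1   # MBR last, to the whole device

    A dd aimed at the partition device (/dev/sdb1) instead of the whole disk is one typical way to end up with the kind of inode mess described above.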

    Read the article

  • Windows 2003 R2 x86 Mini dump fails to write to disk

    - by Randy K
    I have 3 blade servers that are blue-screening with a 0xC2 error, as far as we can tell at random. When it started happening I found that the servers weren't set to provide a dump, because they each have 16 GB RAM and a 16 GB swap file divided over 4 partitions in 4 GB files. I set them to provide a small dump file (64 KB minidump), but the dump files aren't being written. On start-up the server event log reports both Event ID 45, "The system could not successfully load the crash dump driver.", and Event ID 49, "Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that is large enough to contain all physical memory." My understanding is that the small dump shouldn't need a swap file large enough for all physical memory, but the error seems to point to this not being the case. The issue of course is that the max swap file size is 4 GB, so this seems to be impossible. Can anyone point me where to go from here?
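
    Two things worth checking, offered as a sketch rather than a diagnosis: the crash-dump settings in the registry, and whether the boot partition (C:) actually holds one of those page files at all - Event ID 49 is usually about the page file's location rather than its total size:

        rem Inspect the current crash-dump configuration
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl"

        rem CrashDumpEnabled: 0 = none, 1 = complete, 2 = kernel, 3 = small (64 KB) memory dump
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 3 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v MinidumpDir /t REG_EXPAND_SZ /d "%SystemRoot%\Minidump" /f

    A reboot is typically needed before the crash dump driver picks the new values up.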

    Read the article

  • Install MatroskaProp on Windows 7 x64

    - by Neophytos
    To see more information in Windows Explorer property pages and menus about Matroska Video (.mkv) files, similar to what one sees when selecting native Windows media (.avi, .asf, .wmv or even just plain old .mpg) files, Matroska links (from http://www.matroska.org/downloads/windows.html) to a download of the MatroskaProp shell extension (http://www.jory.info/serendipity/archives/14-MatroskaProp-2.8-Released.html). It used to work for me under Windows XP 32-bit. Now I have Windows 7 x64, and I downloaded, installed and ran it. The configuration and settings page is fine, but it does not seem to actually register any shell extension: nothing is added to Explorer windows, menus or property pages when selecting .mkv or .mks files. I tried calling the register hook manually using regsvr32; that again invoked the configuration window and let me set all the options, and when confirming it even said the registration succeeded, but it seems to have had no effect. In the registry I cannot find any traces of the shell extension being installed. Can this extension be made to work under Windows 7 or x64 systems? Are there known problems with installing this or other old shell extensions on x64, or on Windows 7?
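
    One x64-specific detail that matches these symptoms: a 32-bit shell extension has to be registered with the 32-bit regsvr32 (the copy in SysWOW64), and even then it only loads into 32-bit processes - the 64-bit Explorer that Windows 7 x64 runs by default will never load it, which would explain a "successful" registration with no visible effect. A sketch of the registration step (the DLL name and install path below are placeholders, not the actual file names shipped by MatroskaProp):

        rem On x64 Windows, a 32-bit shell-extension DLL must be registered with the 32-bit regsvr32:
        C:\Windows\SysWOW64\regsvr32.exe "C:\Program Files (x86)\MatroskaProp\MatroskaProp.dll"
        rem (the copy in C:\Windows\System32 is the 64-bit regsvr32)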

    Read the article

  • Re: How can Django/WSGI and PHP share / on Apache?

    - by Bogdan
    In response to: How can Django/WSGI and PHP share / on Apache? Hello, could you please post the complete config file from /sites-available? I am having a problem: it seems the rewrite engine redirects all requests to Django, so static and PHP files are not served and instead I see the Django 404 page. If I get rid of the rewrite rule, then static files and PHP work. Here is my Apache config file from /sites-available:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/www/django
            <Directory />
                Options +FollowSymLinks ExecCGI Indexes
                AllowOverride None
                DirectoryIndex index.php
                AddHandler wsgi-script .wsgi
            </Directory>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L]
        </VirtualHost>

    and my .wsgi file:

        import site
        site.addsitedir('/home/user/.virtualenvs/url.com/lib/python2.6/site-packages')

        import os, sys

        path = '/home/www/django'
        if path not in sys.path:
            sys.path.append(path)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        sys.path.append(path + '/mysite')

        import django.core.handlers.wsgi
        _application = django.core.handlers.wsgi.WSGIHandler()

        import posixpath

        def application(environ, start_response):
            # Wrapper to set SCRIPT_NAME to actual mount point.
            environ['SCRIPT_NAME'] = posixpath.dirname(environ['SCRIPT_NAME'])
            if environ['SCRIPT_NAME'] == '/':
                environ['SCRIPT_NAME'] = ''
            return _application(environ, start_response)

    The document root directory on disk (/home/www/django) contains the PHP files, images, and the mysite.wsgi file. Thanks for your help.
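
    The likely culprit is that in VirtualHost context %{REQUEST_FILENAME} has not been mapped to a real path yet (it holds the same value as %{REQUEST_URI}), so the !-f test never matches an existing file and everything falls through to the wsgi rule. A sketch of the usual workaround - testing against the document root explicitly - using the paths from the config above:

        RewriteEngine On
        # Serve anything that exists on disk (PHP files, images, mysite.wsgi) directly
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L]

    Existing .php files will then be handed to whatever PHP handler is configured, so mod_php (or equivalent) still has to be enabled for that directory.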

    Read the article

  • Preventing Windows version of Vim from destroying other file systems permissions

    - by dborba
    I am currently using the Windows version of gVim to edit source files on a networked drive mapped to a Linux system, as well as local files created in Cygwin. The problem is that the Windows version of gVim destroys the original file permissions on the respective systems. For example, files in Cygwin are created with a 077 umask, but when edited by the Windows version of Vim they are saved as 777. This problem doesn't even occur when using MS Notepad (or any other editor I've tried), so I am not quite sure why gVim does it. A possible solution would be to use Cygwin's gVim for everything, but that's rather cumbersome as it requires running an X11 environment to support it, and it causes some problems when running some commands from within gVim (or Vim, for that matter) when working on the networked drive. Any ideas how I might be able to maintain the existing file permissions? Edit: This morning, while on a different machine, the problem with Cygwin did not occur. Cygwin and gVim were the same version; however, the other machine is running Windows XP while the machine the problem occurs on runs Windows 7.
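
    One setting worth trying before anything else: by default Vim may write a new file and rename it over the original, which recreates the file with fresh (wrong) permissions, whereas forcing it to overwrite the file in place keeps the existing ownership and mode. A minimal _vimrc sketch - an assumption to verify against the mapped drive and the Cygwin files, not a guaranteed fix:

        " Overwrite files in place instead of rename-and-recreate,
        " so the permissions/ownership of the existing file are preserved.
        set backupcopy=yes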

    Read the article

  • Backup program for Windows using non-proprietary format?

    - by Cristi Diaconescu
    I'm looking at the various local backup programs for Windows, and I was wondering which of them use a non-proprietary backup format. By non-proprietary, I mean I want to be able to access at least the latest version of the backed-up files either directly, or via an open-standard format like zip/7z/rdiff... The other thing I'm looking for in a backup program is the ability to create incremental backups. What I have found so far:

        - SyncBack copies files as-is, using separate directories for versioning
        - pretty much the same goes for all the 'roll your own' task scheduler + rsync/xcopy32/robocopy/MS SyncToy/etc. solutions
        - GFI Backup appears to use Zip files, at least in their 'Business' version; not sure about the free 'Home' version. Didn't try it yet, but it's next on my list.
        - Mozy (!) supports local backup starting with v2.0 and basically provides a second local copy on a separate partition. Subjectively, it feels slow and resource-intensive (I think it took more than a week to finish the first local backup of ~300 GB), and does not appear to offer file versioning (arguably, you can get older file versions online). On the positive side, it looks like the local backup is integrated in the restore process, which has traditionally been a masochistic experience (and this goes for any online backup provider).

    Other suggestions? I favor ease of use over tons of options (e.g. SyncBack is very flexible but it offers sooo many ways to shoot yourself in the foot...)

    Read the article

  • How to back up non-standard directories in my user profile with Windows Backup?

    - by James Johnston
    I'm using Windows Backup to back up my Windows 7 Pro laptop. I'd like to use it to back up my complete user profile, but I only see the standard profile directories (e.g. C:\Users\JohnstonJ\Documents) in the list; non-standard ones aren't there (e.g. C:\Users\JohnstonJ\MyCustomDirectory). What's the best way to handle this? The only thing I can think of is to browse under the "Computer" entry, navigate directly to C:\Users\JohnstonJ and check off the entire profile (to get what's in there, plus any new directories that come up). But is that going to back up the profile twice? Will it cause other unforeseen problems, given that I checked it off by navigating through the computer rather than picking it under the "Data Files" category (e.g. backing up temporary-file garbage, files-in-use problems, etc. that the "Data Files" category might be handling better)? I'm looking for solutions that other people use, that are known to work well, and that still use the Windows Backup software - I don't really want to fuss with third-party backup software. Example - as you can see, I have two directories in my profile that Windows Backup is not offering to back up: "Dropbox" and "New folder": (link to an image album, because I don't have enough reputation to embed them directly: http://imgur.com/a/Xyv5u)

    Read the article

  • IIS serving static content gives 503 at random

    - by Steffen
    We're having a few issues with our image server. It's a Windows 2008 box running IIS 7.5 and it only serves static content: images. It ran without issues for quite a while, until recently when we disabled output caching, as we noticed having it enabled meant it sent no-cache headers to the clients (forcing them to fetch the images from the server every time). We've read quite a bit about it, and it seems IIS just works that way - either you use output caching or you get to use cache headers. Anyway, having disabled the output cache, we now experience random five-minute intervals where all requests just get a 503 Service Unavailable. During these periods the "Files Cached" performance counter stalls (it neither increases nor decreases), and afterwards all caches are flushed. You might find it odd that I talk about caching, since we disabled output caching. The thing is, we changed the ObjectTTL parameter in the registry so we cache files for 3 minutes (which has worked very well - our disk I/O dropped significantly). So even with output caching disabled we're still caching plenty of files - if we could just get rid of the random 503s it'd be perfect :-D We don't get any messages in the Windows event log during these 503 intervals, so we're pretty stumped as to what to do. Any ideas are very welcome :-)

    Read the article

  • Mystery undeletable file

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf:

        - it's not readonly, system or hidden
        - it's not in use by another process (according to Process Explorer)
        - the NT security permissions aren't the problem either - I am the owner and have Full Control; as a double-check, the Effective Permissions tab shows that I have permission to delete

    Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can, however, rename it or move it to another folder on the same drive. I can also read it, and VirusTotal says it's clean, which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL, I think). The relevant line from Process Monitor is:

        6:52:14.3726983 PM  112  Explorer.EXE  SetDispositionInformationFile  C:\Config.Msi\75ce84f.rbf  CANNOT DELETE  Delete: True  Write  1232

    Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit (there seems to be no UI to do it otherwise?). So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient, I declined the offer and ran pendmoves to find out what files the installer had scheduled to move / delete. It wanted to delete two files with the .rbf extension (rollback files) located in C:\Config.Msi\ (this applies to both, even though I've been speaking about one). So I tried to delete them manually and couldn't. Does anyone have any ideas what could be preventing deletion? (And I don't think it's malware, even though I'm not running AV at the moment.)

    Read the article

  • Pure-FTPD accounts and permissions for websites

    - by EddyR
    I'm having trouble setting up the appropriate Pure-FTPd accounts and permissions. I have the following sites set up on my Debian server:

        /var/www/site1
        /var/www/site2
        /var/www/wordpress

    The permissions are 775 for folders and 664 for files. The owner is currently admin:ftpgroup. WordPress also requires special permissions for file uploads in /var/www/wordpress/wp-content/uploads. What I need is:

        - a general admin group with access to /var/www
        - a group for each site (site1, site2, wordpress)
        - and a group or user, not www-data (?), with permission to write files to the WordPress upload folder

    I ask because restrictions on Linux groups (you can't have groups in groups) make it a little confusing, and also because many of the tutorial sites have conflicting information - some recommend the use of www-data and some don't. Also, I'm not sure I understand how Pure-FTPd is supposed to work exactly. I create a Pure-FTPd account and assign it a directory (/var/www) and a system user (ftpuser) and group (ftpgroup):

        - Can I assign more than one path? For example, if a user requires access to two sites.
        - Is it better to assign ftpgroup to all FTP locations and let Pure-FTPd manage account access?
        - Why would anyone have more than one ftpuser or ftpgroup? (Doesn't it mean users have access to everyone else's files if they could get there?)

    Sorry for so many questions at once. I've been reading lots of tutorials but I think they've ended up making me more confused!
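
    On the Pure-FTPd side, the usual pattern is to map every virtual FTP login onto the single ftpuser:ftpgroup system identity and chroot each login to its own directory, leaving the finer-grained rights to filesystem groups. A rough sketch with pure-pw (the login names, and the assumption about where your Debian package keeps pureftpd.passwd/pureftpd.pdb, are mine to adapt):

        # one system identity that owns the FTP-written files
        groupadd ftpgroup
        useradd -g ftpgroup -d /dev/null -s /usr/sbin/nologin ftpuser

        # one virtual login per site, all mapped to ftpuser:ftpgroup, chrooted to the site root
        pure-pw useradd site1_ftp -u ftpuser -g ftpgroup -d /var/www/site1 -m
        pure-pw useradd site2_ftp -u ftpuser -g ftpgroup -d /var/www/site2 -m
        pure-pw useradd wp_ftp    -u ftpuser -g ftpgroup -d /var/www/wordpress -m
        pure-pw mkdb

    As far as I know a login gets exactly one home/chroot directory, so someone who needs two sites is normally given a home one level up (e.g. /var/www) rather than two paths.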

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon,

    Clients (10): all Apple machines running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution).

    The situation: I work in a small design company with 8 people; at present we are looking to consolidate all our image files into one location. At present we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally what is needed:

        - Centralised on one server, but searchable via Spotlight (not essential, but would be nice)
        - Includes searchable metadata such as date, location, title
        - Open-source or as low-cost as possible
        - Allows simultaneous users to import files

    So far I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture do. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also putting a library on a drive with no permissions and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to it... Any ideas what could fulfill our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • PHP/Linux File Permissions

    - by user1733435
    May I ask a question about file permissions? I set up an Ubuntu server where Apache is running. I have a simple PHP upload form and am able to upload files to /var/www/site/uploads, as follows:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 1736
        drwxrwxrwx 2 www-data www-data    4096 Oct 18 02:53 ./
        drwxrwxrwx 3 sandbox  sandbox     4096 Oct 18 00:42 ../
        -rw-r--r-- 1 www-data www-data  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 www-data www-data  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 www-data www-data 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    Is there any way to upload files so that they show up as:

        -rw-r--r-- 1 sandbox sandbox  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 sandbox sandbox  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox sandbox 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    so that I could straight away feed them to a waiting/running shell script? Right now the waiting script (move, checksums, rename, resize, etc.) is unable to do anything to uploaded files with the www-data attributes. If I just create a file as the local account, such as:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ touch testfile

    then the script is able to run as I would like. Any suggestion would be appreciated; thanks in advance as well.

    Thanks to everyone giving help - I was able to make progress. Now I am close to getting it solved; here is the current output:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 388
        drwxrwxrwx 2 www-data www-data   4096 Oct 18 04:22 ./
        drwxrwxrwx 3 sandbox  sandbox    4096 Oct 18 04:17 ../
        -rw-r--r-- 1 sandbox  sandbox  166947 Oct 18 04:21 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox  sandbox  219808 Oct 18 04:20 adafruit_pi.png
        -rw-rw-r-- 1 sandbox  sandbox       0 Oct 18 04:22 test

    How may I set the permissions on uploaded files to be like 'test', with the only difference being the middle (group) field - i.e. adafruit_pi.png vs test? Which statement shall I insert into the PHP code, please?
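
    Since the last question is literally which statement to add to the PHP code: if the only remaining difference is the group-write bit (so uploads end up -rw-rw-r-- like 'test'), a chmod straight after the upload is moved into place is usually enough. A minimal sketch, assuming the handler uses move_uploaded_file and $target holds the destination path inside /var/www/site/uploads:

        // right after the upload has been accepted
        if (move_uploaded_file($_FILES['file']['tmp_name'], $target)) {
            chmod($target, 0664);   // rw-rw-r-- : the group can now write to the file
            // chgrp($target, 'sandbox');  // optional; only allowed if the PHP process owns the
            //                             // file and is itself a member of that group (or is root)
        }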

    Read the article

  • Apache to read from /home/user/public_html on CentOS 5.7

    - by C.S.Putra
    This is my first experience using CentOS 5.7 / Linux as my web server OS, and I have just finished installing Apache. I then created a new account using WHM. The account is now created and the domain name can be accessed. I have put the web files under /home/user/public_html/, but when I access the domain assigned to that user (the one I assigned when creating the account in WHM), it doesn't read the files. In /usr/local/apache/conf/httpd.conf:

        <VirtualHost 175.103.48.66:80>
            ServerName domain.com
            ServerAlias www.domain.com
            DocumentRoot /home/user/public_html
            ServerAdmin [email protected]
            User veevou # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup group1 group1
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup group1 group1
            </IfModule>
            CustomLog /usr/local/apache/domlogs/domain.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/domain.com combined
            ScriptAlias /cgi-bin/ /home/user/public_html/cgi-bin/
        </VirtualHost>

    Instead of reading from /home/user/public_html/, Apache reads the /var/www/html/ folder. How do I set up Apache so that when users access www.domain.com, they get the files under /home/user/public_html/? Please advise. Thanks.
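
    When a request falls through to /var/www/html, it usually means Apache never matched it to the <VirtualHost> above - typically because the request arrives on an address/port that doesn't match 175.103.48.66:80, because the vhost file isn't actually included, or because DNS for www.domain.com doesn't point at that IP yet. A quick way to see what Apache itself thinks (the httpd path is a guess based on the /usr/local/apache prefix above):

        # Dump the parsed virtual-host configuration and the default vhost
        /usr/local/apache/bin/httpd -S

        # Confirm the name actually resolves to the address in the <VirtualHost> line
        ping -c1 www.domain.com

    On a cPanel/WHM box it's also worth rebuilding the config through WHM (or /scripts/rebuildhttpdconf) rather than editing httpd.conf directly, since WHM can overwrite manual edits.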

    Read the article

  • Windows DFS - file locking & replication?

    - by Adam Salkin
    I'm in a small company that has offices on the east and west coasts of America, and also various people working from their homes. There are Windows servers already in the offices. I think that Microsoft Windows DFS will do what I want, but despite reading the web site I'm really not sure, so I'm hoping that someone can confirm whether it will do all of the following (for various personnel / political reasons I know that a proposal for a Microsoft Windows system has more chance of being accepted than any *nix system):

        - Creation of a folder so that any files in this folder will automatically be available on the servers in all the offices.
        - When anyone opens one of these shared files on any of the servers, the copies on all the servers will automatically be locked.
        - When they close the file, the updates automatically get copied to the file on all the servers.
        - VPN access to these folders for people working outside the offices.

    Bandwidth at the main offices varies from 6 Mb/s to 20 Mb/s. Files are Excel / Word / AutoCAD, ranging in size from 100 KB to 4 MB. Thank you.

    Read the article

  • MySQL replication - rapidly growing relay bin logs

    - by Rob Forrest
    Morning all. I've got a really strange situation here this morning, much like a reportedly fixed MySQL bug: http://bugs.mysql.com/bug.php?id=28421. My relay bin logs are rapidly filling with an infinite loop of junk made of this sort of thing:

        #121018  5:40:04 server id 101  end_log_pos 15598207  #Append_block: file_id: 2244  block_len: 8192
        # at 15598352
        #121018  5:40:04 server id 101  end_log_pos 15606422  #Append_block: file_id: 2244  block_len: 8192
        # at 15606567
        ...
        # at 7163731
        #121018  5:38:39 server id 101  end_log_pos 7171801  #Append_block: file_id: 2243  block_len: 8192
        WARNING: Ignoring Append_block as there is no Create_file event for file_id: 2243
        # at 7171946
        #121018  5:38:39 server id 101  end_log_pos 7180016  #Append_block: file_id: 2243  block_len: 8192
        WARNING: Ignoring Append_block as there is no Create_file event for file_id: 2243

    These log files grow to 1 GB within about a minute before rotating and starting again. The big files are interspersed with one or two smaller files containing just this:

        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #121023  9:43:05 server id 100  end_log_pos 106  Start: binlog v 4, server v 5.1.61-log created 121023  9:43:05
        BINLOG '
        mViGUA9kAAAAZgAAAGoAAAAAAAQANS4xLjYxLWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAUwAEGggAAAAICAgC
        '/*!*/;
        # at 106
        #121023  9:43:05 server id 100  end_log_pos 156  Rotate to mysqld-relay-bin.000003  pos: 4
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    We're running a master-master replication setup, with the problematic server running MySQL 5.1.61. The other server, which for the moment is stable, is running 5.1.58. Has anyone got any ideas what the solution to this is and, moreover, what might have caused it?

    Read the article

  • File transfer from MP3 player to computer

    - by JP
    I own an old 20 GB Creative Zen Jukebox (all the external screws are gone on it, LOL) and I would like to transfer files back to my PC to get them onto my new iPod. Problem is, no matter what software I use (Winamp, Creative Media Source or Windows Media Player), it just stops transferring files to my HD with an error message saying there is no more space in the destination folder. Problem is, there is still 320 GB free. I tried a lot of things, like installing a newer driver, the latest Zen plug-in for Media Source, and the latest Winamp version. Sometimes it just works, and then it stops working again and I get this nonsense error. Restarting my PC sometimes solves the issue, giving me enough time to transfer 10 or 15 more files before I get the error again and again. Yesterday, though, I managed to transfer up to 3 GB of MP3s to my computer before getting the error. It seems like I'm having a driver issue, or weird behaviour from the player and/or the software I'm using - three different programs are unlikely to all reproduce the exact same issue by themselves, so it must be something related to the driver. I can't find any post about such an issue on old forums. Any idea?

    Read the article

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, one Mac Pro and one MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up the entire computers to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want to do is keep my important files synced between both computers and my NAS (which is running RAID 5); that way I'm not backing up easily replaceable system files, and I've got all my important files in three places, two of them on RAID, so at least five drives would have to crash at the same time before actual data loss occurred. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders, and then I want to keep the user Library folder backed up but not synced. I'm thinking that I'd have to use rsync, but I don't know how. Before suggesting Dropbox and similar services: I don't want to use them for several reasons, some of them being security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, and that will be significantly faster locally and probably even through VPN, as I have a gigabit pipe), space (space on my NAS is cheap and only practically limited by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I'd need to go somewhere on the fly), and price (I already have all the hardware, and for the amount of gigabytes and bandwidth I'd need, I doubt that there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train and English isn't my mother tongue. I gratefully appreciate any answers, even if they only partly solve my problem.
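
    For the "keep these folders synced to the NAS" part, plain rsync over SSH from each Mac covers it; a minimal sketch to adapt (the NAS host name, destination path and folder list are assumptions - and note that --delete mirrors deletions, so the NAS side should also keep snapshots if real backup history is wanted):

        #!/bin/sh
        # Push the important folders from this Mac to the NAS, one subtree per machine.
        NAS="user@nas.local:/srv/backup/$(hostname -s)"
        for dir in Pictures Documents Development Work; do
            rsync -av --delete "$HOME/$dir/" "$NAS/$dir/"
        done

    Apple's bundled rsync also has an -E flag to carry extended attributes/resource forks; whether the Linux side preserves those is worth testing before relying on it.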

    Read the article

  • Conditionally changing MIME type in nginx

    - by Peter
    I'm using nginx as a frontend to Rails. All pages are cached as .html files on disk, and nginx serves these files if they exist. I want to send the correct MIME type for feeds (application/rss+xml), but the way I do it so far is quite ugly, and I'm wondering if there is a cleaner way. Here is my config:

        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            root /var/www/cache/;
            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename.html) {
                rewrite (.*) $1.html break;
            }
            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                proxy_pass http://mongrel;
                break;
            }
        }

        location / {
            root /var/www/cache/;
            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename.html) {
                rewrite (.*) $1.html break;
            }
            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                proxy_pass http://mongrel;
                break;
            }
        }

    My questions:

        - Is there a better way to change the MIME type? All cached files have .html extensions and I cannot change this.
        - Is there a way to factor out the if conditions in /feed/$ and /? I understand that I can use include, but I'm hoping for a better way; putting part of the config in a different file is not that readable.
        - Can you spot any bugs in the if conditions?

    I'm using nginx 0.6.32 (Debian Lenny). I prefer to use the version in APT. Thanks.
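
    For what it's worth, newer nginx (0.7.27 and later, so not the Lenny 0.6.32 package) can replace each if-chain with a single try_files line, which also sidesteps the usual if-is-evil pitfalls. A rough sketch, assuming an upstream named mongrel is defined elsewhere in the config:

        location ~ /feed/$ {
            root /var/www/cache/;
            types {}
            default_type application/rss+xml;
            try_files $uri/index.html $uri.html $uri @mongrel;
        }

        location / {
            root /var/www/cache/;
            try_files $uri/index.html $uri.html $uri @mongrel;
        }

        location @mongrel {
            proxy_pass http://mongrel;
        }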

    Read the article

  • What are the advantages and disadvantages of the various virtual machine image formats?

    - by Matt
    Xen and VirtualBox (among others) both support a range of different virtual machine image formats: vmdk, vdi, qcow and qcow2, hdd and vhd. Without any bias toward a particular product, I want to know the advantages and disadvantages of the various formats, from a features perspective as well as robustness and speed. One piece of information I discovered in a forum post was this: "The major difference is that VDI uses relatively large blocks (1MB) when growing an image, and thus has less overhead for block pointers etc. but isn't ultimately space efficient in the sense that if a single byte is non-zero in such a 1MB block the entire space is used. VMDK in contrast uses 64K blocks, and thus has more management overhead and generally a bit less disk space consumption. What offsets this is that VDI is more efficient when it comes to snapshots." You might be thinking: I want to know this because I want to know which format to choose? Not exactly - I'm developing some software which utilises these formats and want to support one or more of them. Simplicity, large disks and ease of development are my main drivers.

    Read the article

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behaviour for file locks, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behaviour even between two Windows machines. So far I have only tested:

        Windows XP - Windows Vista
        Windows XP - Samba << Windows Vista

    and both give the same behaviour. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance. EDIT: I tested an Excel file again, and it is now working properly in both of the above-mentioned cases; I am also no longer getting the above-mentioned error. I don't know what happened - perhaps a restart fixed it? (It also works with strict locking = no.) However, I still need to find a solution for the CAD/CAM files we use; the software is Vector and it does not seem to use file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)

    Read the article

  • How can I most efficiently batch resize images on a Mac?

    - by Nick Douglas
    I've been batch-resizing images through Preview (OS X) via the menu bar, but I want a simpler workflow, since I do this a dozen times a day. What I want:

        1. Select a group of image files in Finder
        2. Hit a button or two (menu item or keyboard shortcut) to do the following:
           a. Scale all the pictures to 600 pixels wide
           b. Save as JPG files at 75% quality

    What I also want: all of the above, plus step a(1): crop images to 200 pixel height.

    I can do all that manually, to a batch of files, through Preview. I can do it one at a time with some keyboard shortcuts in Photoshop or Pixelmator. Automator (using Preview) can scale to 600 pixels on the longest dimension, but it doesn't let me specify width. (It can scale specifically to width before cropping height.) It can change to JPG, but it can't specify image quality. And I can assign a keyboard shortcut to the whole process. Is that my best option on a Mac? Can I accomplish this more efficiently through another app like Quicksilver?
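
    The built-in sips command can do both steps from a shell script, and an Automator "Run Shell Script" action can wrap it behind a Finder selection and a keyboard shortcut. A rough sketch - the output folder name is an assumption, and whether sips applies the resample before the crop in a single invocation is worth verifying (if not, run two passes):

        #!/bin/sh
        # Scale to 600 px wide, crop to 600x200, save 75%-quality JPEG copies into ./resized
        mkdir -p resized
        for f in "$@"; do
            sips --resampleWidth 600 \
                 --cropToHeightWidth 200 600 \
                 --setProperty format jpeg \
                 --setProperty formatOptions 75 \
                 "$f" --out "resized/$(basename "${f%.*}").jpg"
        done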

    Read the article

  • Write access from a Windows client via a ZFS SMB, to a file created on the host in OpenIndiana

    - by Gerald Kaszuba
    I've got an OpenIndiana server running ZFS that is shared using a nobody user and group. I don't fully understand Solaris ACL permissions, but I do know Linux-style permissions. The client is Windows 8 and the server is OpenIndiana oi_148. I'm failing to work out how to make write permission work correctly for the Windows client. It is able to make new files, but it cannot modify files created by the shell in OpenIndiana. When a file ("local file") is created locally as the user nobody in bash, and another file ("smb file") is created remotely via SMB (also as nobody), they end up with quite different permissions:

        # ls -V
        -rw-r--r--  1 nobody  nobody  0 Dec  2 12:24 local file
                    owner@:rw-p--aARWcCos:-------:allow
                    group@:r-----a-R-c--s:-------:allow
                 everyone@:r-----a-R-c--s:-------:allow
        -rwx------+ 1 nobody  nobody  0 Dec  2 12:24 smb file
               user:nobody:rwxpdDaARWcCos:-------:allow
           group:2147483648:rwxpdDaARWcCos:-------:allow

    In bash I'm able to write to "smb file", but vice versa the Windows client is not able to write to "local file". This is confusing to me, because it appears that it should allow the SMB client to write to "local file": nobody is the owner and it has a w in the ACL. The sharesmb setting is fairly boring, although I'm hoping there is something to set here similar to a umask:

        sharesmb name=shared,guestok=true

    How can I make these two work together and have a symmetrical permission system, where both SMB and the local user produce the same permissions? Is there some sort of ACL that can be set at the root of the file system to allow all files to be created in a similar manner?
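
    One lever for the "set it once at the root of the file system" part is an inheritable ACL on the dataset root combined with aclinherit=passthrough, so files created locally and over SMB pick up the same entries. A sketch under the assumption that the dataset is tank/shared mounted at /tank/shared (the exact ACE letters are lifted from the listing above and may need tuning, and this is not a confirmed fix for the asymmetry described):

        # Let new files/dirs inherit ACEs from the parent instead of having them trimmed by the create mode
        zfs set aclinherit=passthrough tank/shared

        # Inheritable entries at the share root: f = inherit to files, d = inherit to directories
        chmod A=owner@:rwxpdDaARWcCos:fd:allow,group@:rwxpdDaARWcCos:fd:allow,everyone@:r-----a-R-c--s:fd:allow /tank/shared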

    Read the article

  • Installed Paragon HFS+ for Windows 8, now my pc won't recognize the external firewire drive

    - by Steve
    I'm not incredibly knowledgeable about computers and I really need some help. I just got a Seagate external FireWire drive this morning. I downloaded the necessary PC driver (Paragon HFS+ for Windows 8) through their website, per the instructions that came with the drive. After installation I restarted, and the PC recognized the FireWire drive just fine. About three hours into copying files from my PC to the FireWire drive, it gave me an error and told me the files couldn't be copied. When I clicked to dismiss the message, the computer crashed. After an hour of trying to repair itself in safe mode, it restored itself to an earlier version from before the system crash. Here's my current dilemma: Paragon HFS+ is still showing up in my programs as installed, but Device Manager is not recognizing the drive. When I try to uninstall and reinstall Paragon, it interrupts me with a message saying "The setup must update files or services that cannot be updated while the system is running" and basically gives me the finger. I have no idea what to do now, as it won't let me uninstall and reinstall Paragon, and I have no idea why it crashed my computer in the first place. Is there possibly another HFS+ driver for reading Mac-formatted FireWire drives on a PC that I can try downloading instead? I really don't know what I'm doing and any help would be greatly appreciated.

    Read the article
