Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • Compress a folder of PDF files into separate zip archives

    - by Panrubius
    I wanted to take a folder full of PDF files and create a number of separate zip files. After following the advice in this question, everything worked *almost* perfectly. Here's what happened: when I issued this command in Terminal: zip -s 5m -r ~/Desktop/invoices ~/Desktop/Invoices/ everything worked really well, in that I got 11 ZIP files of approximately 5 MB each, placed in the folder specified. However, the files it output were named as follows: invoices.z01 invoices.z02 invoices.z03 invoices.z04 invoices.z05 invoices.z06 invoices.z07 invoices.z08 invoices.z09 invoices.z10 invoices.zip So as you can see, only invoices.zip has been named correctly. I could go through and rename them one by one, but seriously, if we start doing that then what in the name of Evolution are computers for?! Now, I am also aware that I'm relatively new to the Terminal, so I could be making a very silly mistake somewhere. If that's the case, please be patient :-) Any help would be greatly appreciated. One last note: I'm quadriplegic, so I would like to avoid GUI applications as much as possible; I use voice recognition software, you see, so working in the Terminal is much, much easier.
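
    (For what it's worth, the .z01/.z02 names are exactly what zip's split-archive format mandates: a split archive is one archive in pieces, not several independent ones.) A minimal sketch of one way to get genuinely separate archives instead, assuming the PDFs sit directly in ~/Desktop/Invoices with no spaces in their names; the 40-files-per-chunk figure is a guess to tune:

      cd ~/Desktop/Invoices
      ls *.pdf | split -l 40 - chunk_        # one list file per 40 PDFs
      for list in chunk_*; do
          zip -@ "$HOME/Desktop/invoices-$list.zip" < "$list"   # -@ reads names from stdin
          rm "$list"
      done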


  • Write permissions on uploaded files - PHP & Linux

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice to servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once a file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
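
    If the FTP daemon on the Debian box is vsftpd (an assumption; other daemons have an equivalent knob), its default local_umask of 077 produces exactly these 600-permission files, and raising it is the usual fix:

      # /etc/vsftpd.conf
      local_umask=022          # uploads arrive as 644 instead of 600
      # then: /etc/init.d/vsftpd restart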


  • Permanently deleting files on Mac OS

    - by Jonik
    A while back, as a relatively new Mac OS X user, I was surprised to learn that you cannot easily delete files directly, that is, without moving them to the trash first. On Windows and Linux this can obviously be done with ease, but not so on the Mac. I noticed this when trying to clear files off a USB memory stick: removing the files ("move to trash") does not free up space; that happens only after emptying the whole system-wide Trash. Not particularly convenient! (It seems stupid to have to empty the whole trashcan just to make some space on the USB stick. There might be gigabytes of stuff in there, and this sort of defeats its purpose: what if you actually needed to restore something from the trash some day?) So, what's your way of getting around this? Have you bought a 3rd-party application like RAW Trash for $16.95 just to delete files, or do you diligently empty the trashcan whenever needed? Or did I miss something? Also, can you convince me that this is actually the way it should be, that users shouldn't be able to fiddle with the filesystem easily? :)
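
    Since Terminal is preferred here anyway: rm deletes immediately and bypasses the Trash entirely (/Volumes/MYSTICK is a placeholder for the stick's mount point):

      rm /Volumes/MYSTICK/unwanted-file.jpg
      rm -r /Volumes/MYSTICK/unwanted-folder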


  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 machine which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab: 176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0 I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true of NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files do, i.e. generally work asynchronously, but honour fsync and O_SYNC?
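
    For reference, a sketch of the async export under discussion (path and client address taken from the question; no_subtree_check is an assumption). The trade-off is that async lets the server acknowledge writes, and even commit requests, before data reaches disk, so a server crash can silently lose data the client believes is safe:

      # /etc/exports
      /nfs4exports/mydir  176.9.116.102(rw,async,no_subtree_check)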



  • ActiveDirectory user files aren't being shared? [on hold]

    - by Ryan
    I'm taking a class in college and my current project is to set up Active Directory. So I have two VMs: one is Windows Server 2008 R2 and the other is Windows 7. I set up the domain team15.net (this is the FQDN for the forest) on the W2008 server machine. Then I connected the Windows 7 machine to the team15.net domain, and now I can log in to the administrator account on the W2008 machine by using TEAM15\Administrator for the username. However, any files that I add in Windows 7 while logged into TEAM15\Administrator are NOT shown on the Windows 2008 machine. Is this normal? It seems like any files I add for this user in the domain exist only on the machine that I'm currently using. Is it possible to change this so all files for all users are synced to all computers in the domain? If not, what are the alternatives? I noticed that back in high school we also used domains, but there was a shared drive E:\ where you had to store your files, because if you put them on the desktop instead they would suddenly disappear once you logged out and back in. How can I set up a shared drive, and which computer would provide the storage for it?
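
    What is described is normal: by default each machine keeps its own local profile for a domain account, so desktop files stay on the machine where they were created. A sketch of the high-school-style shared drive (SERVER and the paths are placeholders; roaming profiles or folder redirection would be the heavier-weight alternatives):

      rem On the Windows 2008 server: create and share a folder.
      mkdir D:\UserShare
      net share UserShare=D:\UserShare /grant:"TEAM15\Domain Users",FULL
      rem On each client, or in a GPO logon script: map it as drive E:
      net use E: \\SERVER\UserShare /persistent:yes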


  • Why cache static files with Varnish, why not pass?

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this: /* If the request is for pictures, javascript, css, etc */ if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") { /* Remove the cookie and make the request static */ unset req.http.cookie; return (lookup); } I do not understand why this is done. Most of the examples also run nginx as the webserver. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit as much. Am I correct, or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks of different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ nginx is actually faster at serving static content, so it makes more sense to just let those requests pass; nginx works great with static files. Apart from that, most of the time static content is not even on the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3 or something like that. I think the Varnish cache is the last place where you want to have your static content stored.
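
    For comparison, the pass variant of the quoted snippet (same VCL dialect as the example above); it hands static requests straight to the backend instead of caching them:

      if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
          return (pass);
      }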


  • PHP files are downloaded, not executed in UserDir on Apache

    - by Fabian
    We're running a webserver using Debian 6.0.3 with Apache 2; we recently upgraded from Debian 5 to 6. Since then, PHP scripts in the user directories (using mod_userdir) have stopped working: they are downloaded instead of being executed. There is also a website using PHP outside of the user directories, and that one continues to work fine, so PHP seems to work on the server in general. I tested it with several PHP files, among them a simple phpinfo file that works fine on the main site but is just downloaded when copied to one of the user directories. The PHP files and the directory containing them are executable for everyone. The option in the Apache php5.conf that by default disables PHP in the user directories is commented out, so the php5.conf looks like this: <IfModule mod_php5.c> <FilesMatch "\.ph(p3?|tml)$"> SetHandler application/x-httpd-php </FilesMatch> <FilesMatch "\.phps$"> SetHandler application/x-httpd-php-source </FilesMatch> # To re-enable php in user directories comment the following lines # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it # prevents .htaccess files from disabling it. #<IfModule mod_userdir.c> # <Directory /home/*/public_html> # php_admin_value engine Off # </Directory> #</IfModule> </IfModule> We restarted Apache after changing this. I'm running out of ideas as to what the problem could be, and I don't know how I could determine what is preventing those PHP files from being executed. Any ideas on how I can solve this? Update: Strangely, PHP seems to work fine in subfolders of user directories, so if I copy a PHP file from /home/user/public_html/ to /home/user/public_html/test/ it suddenly works.
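
    A few quick checks worth running, assuming Debian's stock Apache 2 layout (module names are the Debian defaults):

      a2enmod userdir php5                              # are both modules enabled?
      apache2ctl -t                                     # does the config parse?
      grep -r "php_admin_value engine" /etc/apache2/    # any stray 'engine Off' left?
      /etc/init.d/apache2 restart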


  • Windows 7: Moving Program Files location during install using unattend.xml

    - by Shevek
    I am planning on using an unattend.xml to create a Windows 7 Ultimate 64-bit setup with Users and ProgramData on a 2nd drive. I have found many samples of how to do this (see below). However I would also like to move Program Files to a 3rd drive as well. i.e.: C:\Windows [SSD] D:\Users [HDD1] D:\ProgramData [HDD1] P:\Program Files [HDD2] P:\Program Files (x86) [HDD2] I have found that this was possible using unattend.txt in XP but all documentation or examples I find about Win 7 only mention Users and ProgramData, not Program Files. Is this possible using an answer file? Sample unattend.xml for Users and ProgramData: <?xml version="1.0" encoding="utf-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="oobeSystem"> <component name="Microsoft-Windows-Shell-Setup" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="amd64"> <FolderLocations> <ProfilesDirectory>D:\Users</ProfilesDirectory> <ProgramData>D:\ProgramData</ProgramData> </FolderLocations> </component> </settings> </unattend>


  • How to back up a database with thousands of files

    - by Neal
    I am working with a Fedora server that runs a customized software package. The server software is quite old, and its database consists of 1,723 files. The database files are constantly changing: they continually grow, and changes are not necessarily appended to the end. So right now we back up every 24 hours, at midnight, when all users are off the system and the database is in an internally consistent state. The problem is that we have the potential to lose an entire day's worth of work, which would be unrecoverable. So I'd like to know if there is a way to take some sort of instantaneous snapshot of these database files that we could back up every 30 minutes or so. I've read about Linux LVM snapshots, and am thinking that I might be able to accomplish the goal by taking a snapshot, rsync'ing the files to a backup server, then dropping the snapshot. But I've never done this before, so I don't know if this is the "right" fix. Any ideas on this? Any better solutions? Thanks!
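
    A sketch of the snapshot-then-rsync idea, assuming the database lives on an LVM volume /dev/vg0/data with free extents left in vg0 (all names are placeholders):

      lvcreate --size 1G --snapshot --name dbsnap /dev/vg0/data
      # 1G only needs to hold the changes made while the backup runs
      mkdir -p /mnt/dbsnap
      mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
      rsync -a /mnt/dbsnap/db/ backup-server:/backups/db/
      umount /mnt/dbsnap
      lvremove -f /dev/vg0/dbsnap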


  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far: 1) Configured a new website with its own AppPool. 2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP). 3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config: <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com" maxInstances="4" idleTimeout="300" activityTimeout="30" requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe" queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10"> <environmentVariables> <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" /> </environmentVariables> </application> 4) I then created a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing: register_globals = on 5) I then ran test.php, which simply outputs everything the call to phpinfo() returns. At this point, I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo output: Configuration File (php.ini) Path C:\Windows Loaded Configuration File C:\Program Files (x86)\PHP\v5.3\php.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) What am I messing up, or is there a different way to go about this?
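
    If the PHPRC route keeps loading the global ini (which is what the phpinfo output above shows), PHP 5.3's per-path sections in the global php.ini are a workable alternative; the path below is the site root from the question:

      ; in C:\Program Files (x86)\PHP\v5.3\php.ini
      [PATH=C:\inetpub\wwwroot\kickasswebsite.com]
      register_globals = on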


  • Tortoise SVN, find all non-added files?

    - by John Isaacks
    I am using TortoiseSVN on Windows. I have a deep directory structure (using CakePHP). After creating several files, I then went through and added them. New files show a ? next to them, and a + once they are set to add, so it is easy to tell which files still need to be added; it is the same with folders. However, if a new file is inside an old folder, there will still be a green checkmark on the folder: you won't have any indicator telling you there is a new file inside. So I forgot to add one before a commit. This wouldn't be so bad, as I could just add it and recommit. However, I had also made this a tag, so I had to recommit to both the tag and the trunk. Anyway, this wouldn't have been an issue if there were an indicator on existing folders that contain new files. Is there a way to set this up?
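
    TortoiseSVN's "Check for modifications" dialog lists unversioned files when "Show unversioned files" is ticked, which covers exactly this case. From a command prompt, assuming a command-line svn client is installed, the never-added files are the lines starting with ?:

      svn status | findstr /b "?"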


  • Windows Search not searching in files

    - by Cylindric
    I am trying to get Windows Search to work on my Windows Server 2008 SP2 fileserver, so I can search for content inside files. I have added the Windows Search Service role to the server and, using the right-click properties in Explorer, set some folders to "Index this location". The problem is that neither on the server nor remotely can I search inside the files. I seem to get some inconsistencies in the GUIs. For example, the "Indexing Options" panel shows me just 6 locations indexed, but if I click "Modify" I see nearly everything ticked. For example, the "SearchTest" folder under "infrastructure" has the "Index this location" option ticked, but the "Projects" folder does not. I assume this is why some are grey and some are not, but they are all ticked. The "SearchTest" folder contains some files that have nothing but the text PurpleOrange in them, so I should be able to find those. So, to summarise: Which locations are indexed? The ones in the "Index these locations" list, the ones ticked, or the ones not greyed out in the list? How do I get to the state where I can click in the search box, type PurpleOrange, and see the files?


  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm now seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs, by contrast, take the files at face value and see no permission problems (zip, for example), but they see the files as zero bytes, which would be game over for recovery. cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted I have tried sudo chown, sudo chmod -R 777, and sudo chflags -R nouchg, none of which change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, although renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is here to help put some of the comments in context.
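
    Given that chown, chmod and chflags have been ruled out, ACLs and extended attributes are the remaining suspects on OS X; a hedged sketch of how to inspect and strip them:

      ls -le@ .git/objects/fe/            # -e lists ACLs, -@ lists extended attributes
      sudo chmod -R -N .git               # remove ACLs recursively
      sudo find .git -exec xattr -c {} +  # clear extended attributes on every entry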


  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a subfolder in a shared folder and get denied with "File in use by another user". I tried checking the Shared Folders snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked "Effective Permissions" for some of the folders/files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas, I'd love to hear them! THANK YOU.
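
    When the message next appears, it may help to list the handles held open over the network on the server (the Shared Folders snap-in shows the same data under Open Files; 123 below stands for an ID from the listing):

      rem List files held open over the network, with IDs and owners:
      net file
      rem Force-close a stuck handle by its ID:
      net file 123 /close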


  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me. Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process. My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that the change-content trigger (whether this is possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can use sed or similar to replace the placeholder value with the intended value? The online Perforce documentation I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage work.
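
    At the change-content stage the revisions are not in a temp directory the trigger can portably touch; they are read through the server with the @=change syntax, which addresses content in a pending changelist. A minimal Python sketch (depot path and placeholder are hypothetical):

      import subprocess

      change = "12345"                      # handed to the trigger as %changelist%
      depot_file = "//depot/docs/notes.txt"
      # 'p4 print -q file@=N' fetches the content submitted in pending change N
      content = subprocess.check_output(
          ["p4", "print", "-q", "%s@=%s" % (depot_file, change)])
      content = content.replace(b"#####PLACEHOLDER#####", change.encode())

    Note that a change-content trigger can inspect but not rewrite the files being submitted; the actual replacement would have to happen afterwards, for example in a change-commit trigger that opens, edits, and submits the file.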


  • How to automate downloading files?

    - by Damon
    I got a book which included a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, all of these are presented as 177 pages of 8 images each, with links to zip files of JPGs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages run from archive_bookname/index.1.htm to archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
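
    A hedged wget sketch: export the browser's cookies to cookies.txt first (the site sits behind a login wall), let the shell expand the 177 index pages, and have wget follow each page's links one level down, keeping only the zips (example.com stands in for the real host):

      wget --load-cookies cookies.txt \
           --recursive --level=1 \
           --accept '*.jpg.zip' \
           'http://example.com/archive_bookname/index.'{1..177}'.htm'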


  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures, with their extensions, from a folder whose subfolders are updated automatically. These files have to be copied into a folder where a PHP-based website will edit them (by renaming them and creating an XML file) to make them downloadable and integrated into an XML feed. Because of the rename step in the script, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files based on an external "history". #!/bin/bash find '/home/name/picture' -name '*.jpg' | while read FILE ; do rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ; done wget --spider 'http://myscript.php' ; #exit 0 PS: As a little addition, I'd like to replace '.' with a space just after the *.jpg copy; my PHP script has problems handling file names with extra dots because of the extension. I'm thinking about a find command (like I did before) combined with sed. Is that a good idea?
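
    One way to copy each picture exactly once, without depending on the names at the destination, is to keep a local log of what has already been sent; sent.log and its location are invented for this sketch, and it also does the dot-to-space renaming from the PS:

      #!/bin/bash
      find /home/name/picture -name '*.jpg' | while read -r FILE; do
          grep -qxF "$FILE" /var/lib/sent.log 2>/dev/null && continue  # already copied
          base=$(basename "$FILE" .jpg)
          cp "$FILE" "/var/www/media/${base//./ }.jpg"   # dots in the name become spaces
          echo "$FILE" >> /var/lib/sent.log
      done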



  • Transferring files from ftp to local system

    - by user1056221
    I want to copy a file from FTP and save it to my local system, and I want to run this through a batch file. I have been trying this for a week, but I couldn't find the solution. Can anyone help me, please? This is my actual task: I want to copy a file named "Friday.bat" from ftp://172.16.3.132 (with username and password), so I use the batch file below: @echo off @ftp -i -s:"%~f0"&GOTO:EOF open 172.16.3.132 mmftp ((((pasword entered here))))) binary get Friday.bat pause Result: ftp> @echo off ftp> @ftp -i -s:"%~f0"&GOTO:EOF Invalid command. ftp> open 172.16.3.132 Connected to 172.16.3.132. 220 Welcome to ABL FTP service. User (172.16.3.132:(none)): 331 Please specify the password. 230 Login successful. ftp> binary 200 Switching to Binary mode. ftp> get Friday.bat 200 PORT command successful. Consider using PASV. 550 Failed to open file. ftp> pause In the end, a file named Friday.bat is created on my local system with 0 bytes, and it will not open.
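
    For reference, a sketch that sidesteps both visible problems: the "Invalid command" line comes from ftp replaying the whole batch file (-s:"%~f0" feeds it every line, including @echo off), and "550 Failed to open file" usually means Friday.bat is not in mmftp's login directory, so a remote cd with the real path (a placeholder below) may be needed:

      @echo off
      rem Write a script for ftp alone, then run it with auto-login off (-n).
      (
          echo open 172.16.3.132
          echo user mmftp your-password-here
          echo cd /path/to/remote/dir
          echo binary
          echo get Friday.bat
          echo bye
      ) > ftpscript.txt
      ftp -n -s:ftpscript.txt
      del ftpscript.txt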



  • Copy-paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test: I created a folder containing 15,000 files of 400 bytes using this batch file: @ECHO off SET times=15000 FOR /L %%i IN (1,1,%times%) DO ( fsutil file createnew filename%%i.txt 400 ) Then I copy it on my Windows computer using this command: robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\ After it completed, I could see that the transfer rate was 915,810 bytes/sec; this is less than 1 MB/s, and it took several seconds to copy 7 MB. Please note that this is very slow. I tried the same with a folder containing a single file of 50 MB, and the transfer rate was 1,219,512,195 bytes/sec (yes, GB/s): instantaneous. Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Please note that I tried the same on a Linux system, running on the same computer in a virtual machine (VMware Player) with an ext3 filesystem; I used the cp command, and the copy was instantaneous! Please also note the following: no antivirus is installed; I've tested this behaviour on multiple Windows computers (always NTFS) and I always get comparable results (transfer rate under 1 MB/s, avg 7-8 seconds to copy 7 MB); I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes). The question is about understanding what makes the Windows filesystem so slow at copying a large number of files compared to a Linux one, for instance.
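
    Much of the gap is per-file overhead (an open/create, metadata update, and close for every file) rather than raw throughput. Not an answer to the "why", but robocopy's multithreaded switch, available from Windows 7 / Server 2008 R2 on, usually hides a good part of it:

      robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\ /MT:32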


  • Memory Usage by Mapped Files (Win7 64Bit)

    - by Dexter
    When copying files from/to my external USB3 HDD, memory usage in Win7 goes up to 100% and remains there. I'm not sure whether this is a problem caused by faulty drivers or not, but I already have the current version of them (Etron USB3 controller on a Gigabyte 990FXA board). Using RAMMap, it becomes obvious that the files to be copied are mapped into memory. Clicking on Empty > Empty System Working Set seems to temporarily fix the problem (without causing any trouble with the file copy process), but it needs to be done every few seconds. Is there any way to schedule this operation to happen every 10 or so seconds on its own? What underlying system command is RAMMap using? Or, alternatively, is there any way to limit how much RAM mapped files may use in Windows 7? I know mapped files would usually be removed from memory if other programs need the memory, but while memory usage is at 100% the system starts freezing up for half a second or so every time I click anything, so the automatic removal of unused memory contents seems to be failing here.
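
    Recent RAMMap releases accept command-line switches for the Empty menu (-Ew corresponds to Empty System Working Set; worth confirming against rammap.exe /? first). Every 10 seconds is not achievable with schtasks, whose floor is one minute, but a scheduled task gets close (the path below is a placeholder):

      rem -Ew performs the same action as the Empty System Working Set menu item
      schtasks /create /tn "EmptySystemWS" /sc minute /mo 1 ^
          /tr "C:\Tools\RAMMap.exe -Ew"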


  • Git: Removing carriage returns from source-controlled files

    - by Blixt
    I've got a Git repository that has some files with DOS format (\r\n line endings). I would like to just run the files through dos2unix (which would change all files to UNIX format, with \n line endings), but how badly would this affect history, and is it recommended at all? I assume that the standard is to always use UNIX line endings for source-controlled files, and optionally switch to OS-specific line endings locally?
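
    A sketch of the one-shot normalisation; it rewrites every file in a single commit, so earlier history keeps the old endings and git blame will show one big whitespace change (this assumes no binary files are tracked, or that they are filtered out first):

      git ls-files -z | xargs -0 dos2unix
      git add -u
      git commit -m "Normalize line endings to LF"
      git config core.autocrlf input   # keep future commits LF-only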


  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen, and after successfully authenticating yourself, you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open-redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2: [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (MembershipService.ValidateUser(model.UserName, model.Password)) { FormsService.SignIn(model.UserName, model.RememberMe); if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); // open redirect vulnerability HERE } else { return RedirectToAction("Index", "Home"); } } else { ModelState.AddModelError("", "User name or password incorrect..."); } } return View(model); } If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability: Attacker, pretending to be victim's bank, sends an email to victim containing a link, like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on. The victim clicks the link, sees their actual familiar banking website and is asked to log on. Victim logs on and is subsequently redirected to http://www.badsite.com, which is made to look exactly like the victim's bank's website, so the victim doesn't know he is now on a different site. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc." Victim, still thinking he is on his banking website, falls for the ploy and provides the attacker with the information. Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward the option of splitting the "returnUrl" parameter into controller/action parts and using "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open redirect may not seem to be a big deal compared to the likes of XSS and CSRF, but we developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
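
    The usual repair keeps the returnUrl convenience but refuses to leave the site. MVC 3 ships Url.IsLocalUrl for exactly this; on MVC 2 an equivalent hand-written check (accept only app-relative URLs that start with a single "/") plays the same role. A sketch:

      if (!String.IsNullOrEmpty(returnUrl) && Url.IsLocalUrl(returnUrl))
      {
          return Redirect(returnUrl);   // safe: the target stays on this site
      }
      return RedirectToAction("Index", "Home");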

