Search Results

Search found 40229 results on 1610 pages for 'deleted files'.

Page 28/1610 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Where can I ask questions where my post will not be deleted

    - by user287745
    http://stackoverflow.com/questions/3028161/project-help-needed-some-basic-concepts-great-confusion-because-of-lack-of-prope
    Where can I ask questions without my post being deleted because "it is difficult to say what is being asked"? I mean general, broad questions like the one I asked in the link above. And please give links to help forums where there are experts like you, before closing this question. Thank you.

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn't contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder. As you may well know, adding static files to an existing Visual Studio project is easy: just add the file to the project and in the property grid set its Build Action to "Content" and the Copy to Output Directory to "Copy if newer." This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you'll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly; if they are not defined, running certain commands in Visual Studio's Build menu will cause errors.
    Once you create the class library project there are a few easy steps to change it into a static file project. The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file, because the project doesn't contain any C# code:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

    The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

        <Target Name="Build">
          <Copy SourceFiles="@(Content)"
                DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
        </Target>
        <Target Name="Clean">
          <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
        </Target>
        <Target Name="Rebuild" DependsOnTargets="Clean;Build">
        </Target>

    The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you're working on a project that contains many static files, I hope this solution helps you out!

    Read the article

  • Virus that makes all files and folders read-only filesystem on a USB drive

    - by ren florento
    Is there any way to remove a virus from Windows that makes the files, the folders, and the USB drive itself a read-only filesystem? This one is annoying because the virus keeps copying itself into every folder it sees and keeps running, which prevents you from creating and deleting files and folders on the USB drive and makes "mount -o remount,rw '/path'" ineffective. By the way, I'm not really sure it is a virus, but what makes me think so is that it creates a .exe file within every folder, named after the folder, and the drive immediately reverts to a read-only filesystem, which locks the files and folders even after executing "mount -o remount,rw '/path'". I also think the virus runs only within the USB drive, as it is not affecting the folders on Ubuntu. I could choose to reformat the USB drive, as it only contains a few important files, but what concerns me is whether such a virus, or whatever you may call it, could get into my backup drives, which contain many important files. Thanks for any help and advice you could give.

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X; these are all powerful multi-user operating systems with similar folder and file permission systems.

    Windows: On Windows, the "Public" user's folders are accessible to all users. You'll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it's a good way to share music, videos, and other types of files between users on the same computer. Windows even adds these folders to each user's libraries by default. For example, a user's Music library contains the user's music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public: just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you'll have to unhide them to do this. These Public folders can also be used to share folders publicly on the local network; you'll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Center. You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties, then use the options on the Security tab to change the folder's permissions and make it accessible to different user accounts. You'll need administrator access to do this.

    Linux: This is a bit more complicated on Linux, as typical Linux distributions don't come with a special user folder all users have read-write access to. (The Public folder on Ubuntu is for sharing files between computers on a network.) You can use Linux's permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager, and similar for other desktop environments, too. Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give "Others" the "Create and delete files" permission. Click the Change Permissions for Enclosed Files button and give "Others" the "Read and write" and "Create and Delete Files" permissions. Other users on the same computer will then have read and write access to your folder. They'll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it.

    Mac OS X: Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It's located at /Users/Shared. To access it, open the Finder and click Go > Computer, then navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac. These tricks are useful if you're sharing a computer with other people and you all have your own user accounts; maybe your kids have their own limited accounts.
You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.
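    For the Linux steps above, the GUI permission changes map to a recursive grant of read/write access to "others"; on the command line the rough equivalent is chmod -R o+rwX /home/YOURNAME/folder, where the capital X adds the execute bit only to directories (and to files that are already executable), which is what makes the folders traversable.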

    Read the article

  • I cannot rename files in bulk using Ubuntu's rename feature

    - by user254174
    I cannot rename files in bulk using Ubuntu's rename feature. The files are on an NTFS partition. I want to rename files that look like this:

        whatever pic george.jpg
        tacoma narrows bridge.jpg
        green bottle.jpg

    to:

        filename (1)
        filename (2)
        filename (3)

    And I cannot do this at all. I don't want to use the command line either. The goal is so I can permanently erase files after I have encrypted them, without exposing their contents to people who use a file recovery tool. I also don't want a method that takes days or months, that is, one that renames one file at a time; if I have hundreds of files to rename, that won't be an option. I want to give each file the same name, numbered in order as shown above. pyRenamer is not an option for me, unless you can find out how to do this in pyRenamer.
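    Even though the asker prefers a GUI, the operation itself is small; a minimal sketch in Python (the folder path is a placeholder, and files already matching the "filename (N)" pattern would need extra care):

        import os

        folder = "/media/ntfs/pictures"   # placeholder: the NTFS folder to rename
        names = sorted(n for n in os.listdir(folder)
                       if os.path.isfile(os.path.join(folder, n)))
        for i, name in enumerate(names, start=1):
            ext = os.path.splitext(name)[1]   # keep each file's extension
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, "filename (%d)%s" % (i, ext)))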

    Read the article

  • Change permission for ALL folders and files

    - by Xweque
    I've been around Ubuntu for not too long now, and I'm getting tired of something I used to accept. When I installed Apache and PHP on Ubuntu, it was done as root, meaning root got the permissions, so I changed the owner to me. Now I've just copied a big number of PHP files into these directories to be viewed and edited. Now my problem: I cannot view the files from /var/www/ because, for some reason, it requires everyone to have access to the files. Not only me, or my group, but everyone. No one else is using the computer but me, so I'm fine with that, though I need a command to change the permissions of ALL files recursively. Among the questions already answered I find, for example, chown -R viktor:viktor /var/www/, possibly run with sudo. This worked on /var/www itself and the folders inside, but not the files inside the folders, and, very oddly, I notice I can't do the same thing on, for example, /var/www/dev/.
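    A hedged note on the symptom: chown -R does recurse into files, so the leftover failures are more likely mode bits than ownership. The usual fix is a recursive chmod such as sudo chmod -R a+rX /var/www, where the capital X adds execute only to directories (and to files already executable). The same walk, sketched in Python for clarity:

        import os

        top = "/var/www"                       # must be run with sufficient rights
        os.chmod(top, 0o755)
        for root, dirs, files in os.walk(top):
            for d in dirs:
                os.chmod(os.path.join(root, d), 0o755)   # rwxr-xr-x: traversable
            for f in files:
                os.chmod(os.path.join(root, f), 0o644)   # rw-r--r--: world-readable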

    Read the article

  • Ubuntu One downloads already existing files

    - by Islam Hassan
    I've uploaded some files to Ubuntu One from my home laptop and began downloading them on my work laptop. Then I got a USB drive and copied these files over directly. My problem now is that Ubuntu One is still downloading these files, although I've already copied them into the Ubuntu One folder. I need it to consider the already existing files as synced and not download them again. And I need Ubuntu One for further use, so I can't simply quit it. How can I mark the already existing files as synced?

    Read the article

  • Rhythmbox won't import or play flac files

    - by Dan Drake
    I have a new installation of 12.04 and I just copied all my music over to the ~/Music folder. Rhythmbox found all the mp3 and ogg files, but it refuses to import flac files; they simply do not appear in my music library. If I start Rhythmbox from the command line and try to import a folder that contains flac files, absolutely nothing happens: nothing is imported and there are no error messages. I have all the dependencies for Rhythmbox installed, along with all the suggested and recommended packages. I can play a flac file with gst-launch-0.10, and gst-typefind-0.10 correctly identifies flac files as audio/x-flac. Why does Rhythmbox refuse to see flac files? What can I do to find out what is happening?

    Read the article

  • How To Delete, Move, or Rename Locked Files in Windows

    - by Chris Hoffman
    Windows won’t allow you to modify files that open programs have locked. If you try to delete a file and see a message that it’s open in a program, you’ll have to unlock the file (or close the program). In some cases, it may not be clear which program has locked a file, or a background process may have locked a file and not terminated correctly. You must unlock the stubborn file or folder to modify it. Note: Unlocking certain files and deleting them may cause problems with open programs. Don’t unlock and delete files that should remain locked, including Windows system files.

    Read the article

  • Copy only folders, not files?

    - by Shannon
    Is there a way to copy an entire directory, but only the folders? I have a corrupt file somewhere in my directory which is causing my hard disks to fail. So instead of copying the corrupt file to another hard disk, I wanted to copy just the folders, because I have scripts that search for hundreds of folders, and I don't want to have to manually create them all. I did search the cp manual, but couldn't see anything (I may have missed it). Say I have this structure on my failed HDD:

        dir1
            files
        dir2
            files
            files
            dir4
        dir3
            files

    All I want is the directory structure, not any files at all, so I'd end up with this on the new HDD:

        dir1
        dir2
            dir4
        dir3

    Hoping someone knows some tricks!
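    cp has no folders-only switch that I know of; the tree can be replicated with find /failed -type d plus mkdir -p on each result, or with a short Python walk, sketched here with placeholder mount points. Only directory entries are read, never file contents, which matters on a failing disk:

        import os

        src = "/media/failing-disk"   # placeholder: the failed HDD's mount point
        dst = "/media/new-disk"       # placeholder: the new HDD's mount point
        for root, dirs, _files in os.walk(src):
            for d in dirs:
                rel = os.path.relpath(os.path.join(root, d), src)
                os.makedirs(os.path.join(dst, rel), exist_ok=True)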

    Read the article

  • Mounting NFS directory causes creating Zero byte files

    - by Alaa
    I have two servers, Server X (IP 192.168.1.1) and Server Y (IP 192.168.1.2), both running Ubuntu 9.1. I have created a Varnish load balancer on them for my Drupal website (Pressflow 6.22), and I have mounted an imagecache directory from Server X on Y as below.

    On X, /etc/exports:

        /var/www/proj/htdocs/sites/default/files/images 192.168.1.2(rw,async,no_subtree_check)

    On Y, /etc/fstab:

        192.168.1.1:/var/www/proj/htdocs/sites/default/files/images var/www/proj/htdocs/sites/default/files/images nfs defaults 0 0

    I also ran this on Server X:

        X:/var/www/proj/htdocs/sites/default/files$ chmod -R 777 images

    I tried to touch, rm, vim, and cat files in the images directory mounted on Y, and everything went fine. Now, ALWAYS when Server Y's imagecache tries to create an image in the images directory, the image is created with a ZERO byte file size. Has anyone faced the same before? Any idea how to fix this problem or what might cause it? Thanks for your help.

    Read the article

  • RabbitVCS displaying unchanged files on commit

    - by misterjinx
    I have a strange problem with RabbitVCS. I'm inside a working copy directory and I want to commit some files. When I click the commit button, the commit window shows up, but there is a strange situation: even though I have modified just a few files, the commit window displays all the files and directories inside the working copy, with the checkbox ticked for each of them, as if all those files need to be committed. But those files were not changed and already exist in the repo. (The only file that is unversioned or was changed is .htaccess, so it should have been the only file listed there.) Has this happened to anyone? Is it a bug with RabbitVCS (and does a solution exist), or am I doing something wrong?

    Read the article

  • .bash_history and .cache

    - by John Isaacks
    I have a user whose home directory is a Mercurial repository. Mercurial notified me that there were 2 new unversioned files in my repository: .bash_history and .cache/motd.legal-displayed. I assume .bash_history is the history of bash commands for my user; I have no idea what the other is. I don't want these files to be versioned by Mercurial. Are they safe to just delete, or will they come back, or mess something up? Can they be moved somewhere else? Or do I have to add them to my .hgignore file?
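    They will come back if deleted: bash rewrites .bash_history on logout, and Ubuntu's login machinery recreates the .cache entry. A hedged suggestion is to leave them in place and list them in .hgignore, e.g. in glob syntax:

        syntax: glob
        .bash_history
        .cache/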

    Read the article

  • How To Extract Individual Files From a Windows 7 System Image Backup

    - by Chris Hoffman
    Windows 7’s backup control panel has the ability to create full system image backups. While Windows says you can’t restore individual files from these backups, there’s a way to browse the contents of a system image and extract individual files. System image backups are meant for restoring an entire system. If you want to easily restore individual files, you should use another type of backup, but you don’t have to restore an entire system image to get a few important files back.

    Read the article

  • Other people's files showing up in Rhythmbox

    - by Avery Boyer
    I have my computer connected to a college network, and right now files that belong to other individuals on campus are showing up under Shared in Rhythmbox. This is driving me up the wall. I absolutely despise the idea that files are being thrown around on the network and that other people's s*** is showing up on my computer, and that they may be able to see my files as well. This is a very, very serious problem as far as I am concerned, and I want to know how I can ensure that I am sharing nothing with the network and that no one else's files show up on my computer.

    Read the article

  • React to a modified directory

    - by Ghanshyam Rathod
    In Linux everything is considered a file. Now, if I want to find only folders/directories, not files, how can I do that? I am getting all the modified files with the following command:

        find /Users/ghanshyam -type f -mmin -5 -print

    My goal is to generate a log file of all the modified/accessed folders. Two options are available here: create a module that is called every time a folder is modified (this one is a bit difficult because I need to watch for the particular event), or create a cron task that runs every 5 minutes, executes a shell script, and generates log entries for the modified folders. Do you have any other option for this task?
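    For the question as asked, find already has a directory test: find /Users/ghanshyam -type d -mmin -5 -print. For the cron-driven log, a minimal sketch of the same scan in Python (the log path is a placeholder):

        import os, time

        top = "/Users/ghanshyam"
        cutoff = time.time() - 5 * 60                      # the last five minutes
        with open("/tmp/modified-dirs.log", "a") as log:   # placeholder log path
            for root, dirs, _files in os.walk(top):
                for d in dirs:
                    path = os.path.join(root, d)
                    if os.path.getmtime(path) > cutoff:
                        log.write("%s %s\n" % (time.ctime(), path))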

    Read the article

  • How do I know which file a program is trying to access?

    - by user9069
    I have a program which I am trying to run; however, when I run it, it just complains that it can't find a particular file, and I have no idea which folder it is trying to find this particular file in. I have a copy of the required file; I just need to know which folder to copy it to. Is there any way to show in real time which files are being accessed, or which files are trying and failing to be accessed? I am using the ext4 filesystem, if that helps. Thanks
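    One standard pointer, offered as a suggestion rather than from the thread: run the program under strace, e.g. strace -e trace=file ./yourprogram, which prints every path-taking system call as it happens, so the failing lookup shows up with ENOENT next to the exact directory being searched.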

    Read the article

  • Make Apache encode or replace quotes instead of escaping them?

    - by mplungjan
    In the documentation I read:

        Format Notes
        For security reasons, starting with version 2.0.46, non-printable and other special characters in %r, %i and %o are escaped using \xhh sequences, where hh stands for the hexadecimal representation of the raw byte. Exceptions from this rule are " and \, which are escaped by prepending a backslash, and all whitespace characters, which are written in their C-style notation (\n, \t, etc). In versions prior to 2.0.46, no escaping was performed on these strings so you had to be quite careful when dealing with raw log files.

    This is a problem for Analog, which is still the handiest analyser I use. I get

        "GET /somerequest?q=\"quoted string\"&someparm=bla"

    in the log file, and it is of course flagged as corrupt, since Analog expects

        "GET /somerequest?q=%22quoted string%22&someparm=bla"

    or similar. I realise I can pre-process using something like

        perl -p -i.bak -e 's/\\"/%22/g' logfile

    but I'd rather not have to add this step to these files, which are 50-90 MB zipped per day. Thanks for any pointers.
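    A sketch of one way to run that rewrite over the zipped logs without unzipping by hand, in Python (file names are placeholders; the substitution mirrors the perl one-liner above):

        import gzip

        with gzip.open("access.log.1.gz", "rt") as src, \
                open("access.log.1.analog", "w") as dst:
            for line in src:
                # turn the backslash-escaped quote into Analog-friendly %22
                dst.write(line.replace('\\"', "%22"))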

    Read the article

  • Where should I store and verify files manipulated by an app

    - by Alan W. Smith
    I'm working on a little Ruby script to move screenshots while renaming them based on a specific convention, and I'll be writing tests to confirm the behavior. Ruby has lots of conventions for where to store files (e.g. the "spec" and "features" directories for RSpec and Cucumber, respectively), but I'm not finding best practices for storing files that will be acted upon by the tests. The same goes for a destination for the final copies of the files. So the question, in two parts, is: Where should I store files that the test cases will use as source input? Where should tests that need to write output files send them?

    Read the article

  • How are file permissions stored in an inode?

    - by Debadyuti Maiti
    Suppose there are two PCs, "A" and "B". If A downloads a file from B, what will the file permissions of that downloaded file be? Is it possible that the downloaded file on A will have an inode entry with all its permissions from B, and store B's user account as the owner? If that's the case, is it impossible to change that file's permissions on A if "others" [as in user-group-others] doesn't have the right to write to that file? E.g. if the file on B looks like this:

        __x __x __x file.txt [on B]

    then what would the file permissions on A be for that same file downloaded from B [e.g. through vsftpd]?

        __x __x __x file.txt [on A]

    or

        rw_x rw_x rw_x file.txt [on A]   [i.e. defined by A's default umask value]
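    A hedged note on the scenario: an FTP download copies file contents only, not inode metadata, so the file on A is a brand-new inode owned by the downloading user on A, with its mode derived from the creating process's umask; and since that user owns the new file, chmod on A always works. The umask arithmetic, sketched in Python:

        import os

        os.umask(0o022)                        # a typical default umask
        fd = os.open("downloaded.txt", os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
        mode = os.stat("downloaded.txt").st_mode & 0o777
        print(oct(mode))                       # 0o644, i.e. 666 masked by 022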

    Read the article

  • How to add a daemon to a Quickly project

    - by darkrex1986
    Currently I'm developing an application with Quickly which is divided in two parts: a graphical UI where the user can configure some things, and a daemon which does most of the work in the background. I started with the UI, creating some windows for settings and so on. Now I want to start coding the daemon, but I have no clue how to implement the daemon in my Quickly project. Could I simply paste the files of the daemon into the project folder, or is there a method for adding new files to a Quickly project? Or do I have to create a new project and merge them together?

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationships; I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternatively, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
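    The streaming route mentioned above is the standard answer; in .NET it means XmlReader rather than loading the whole DOM. As an illustration of the pattern only (sketched in Python like the other examples here, with a hypothetical element name, not the poster's schema):

        import xml.etree.ElementTree as ET

        count = 0
        for _event, elem in ET.iterparse("huge.xml"):   # fires on each closing tag
            if elem.tag == "record":                    # hypothetical element name
                if elem.findtext("status") == "error":  # a simple, subtree-local test
                    count += 1
                elem.clear()                            # drop children to cap memory

        print(count)

    The limitation matches the poster's worry: each test can only look inside the subtree currently in hand, so XPath queries that span several levels of unrelated ancestors don't fit this model without extra bookkeeping.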

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
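    The hashmap-of-streams idea in the question is sound; the usual catch is the operating system's open-file limit. A minimal sketch of the pattern (in Python rather than Java, and assuming newline-delimited messages, which the question does not specify):

        import os

        # assumes each record is one line whose first 6 bytes are the type key;
        # a length-prefixed binary format would need a different read loop
        handles = {}
        try:
            with open("messages.dat", "rb") as src:
                for msg in src:
                    key = msg[:6].decode("ascii")
                    out = handles.get(key)
                    if out is None:
                        # ~6,000 open files can exceed the default ulimit -n;
                        # raise the limit or process the keys in batches if so
                        out = open(key + ".dat", "ab")
                        handles[key] = out
                    out.write(msg[6:])
        finally:
            for out in handles.values():
                out.close()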

    Read the article

  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or 0 depending on whether the contents are identical or not, whitespace and all. Given that text-processing is Perl's forté, it should be quite easy to compare two files and determine whether they are identical or not (code below):

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $fh1, '<', $fileA or die "Can't open $fileA: $!";
            open my $fh2, '<', $fileB or die "Can't open $fileB: $!";
            while (1) {
                my $lineA = <$fh1>;
                my $lineB = <$fh2>;
                # both filehandles at end-of-file together: contents match
                return 1 if !defined $lineA && !defined $lineB;
                # one file ended early, or the current lines differ
                return 0 if !defined $lineA || !defined $lineB || $lineA ne $lineB;
            }
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line-by-line until a difference is found; if no difference is found, the files must be identical. But a naive version of this is limited and clumsy: what if the total lines differ in the two files? (The sub above guards against that by treating end-of-file on either handle as a mismatch unless both end together.) I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
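    Worth noting for the constraint given: File::Compare has shipped with the Perl core since 5.004, so compare($fileA, $fileB) == 0 (or File::Compare::compare_text for line-oriented comparison) answers this without leaving the stock 5.6.1 distribution.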

    Read the article
