Search Results

Search found 40294 results on 1612 pages for 'dot files'.


  • Kill all the project files!

    - by jamiet
    Like many folks I'm a keen podcast listener, and yesterday my commute was filled by listening to Scott Hunter being interviewed on .NET Rocks about the next version of ASP.NET. One thing Scott said really struck a chord with me. I don't remember the full quote, but he was talking about how the ASP.NET project file (i.e. the .csproj file) is going away. The rationale is that the main purpose of that file is to list all the other files in the project, and that's something that the file system is pretty good at. In Scott's own words (that someone helpfully put in the comments): "A file that lists files is really redundant when the OS already does this."

    Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file; however, there will no longer be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary, rather than the current approach where the project file is a list of inclusions.

    On the face of it this seems like a pretty good idea. I've long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it's in the directory, it's part of the project. Simple.

    Now I'm not an ASP.NET developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I've long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below, the project has Miscellaneous and Connection Managers folders, but no such folders exist on the file system. This may seem like a minor thing, but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don't work, and quite frankly it makes me feel like a second-class citizen in the Microsoft ecosystem. I'm a developer, treat me like one. Don't try and hide the detail of how a project works under the covers, show it to me. I'm a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that's pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order, something else that wildly irritates me), so why not just get rid of the .dtproj file?

    In the case of SSDT the .sqlproj actually does a whole lot more than simply list files, because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc.), but I see no reason why the convention over configuration approach can't help us there either. Want to know which is the post-deployment script? Well, it's the one called Post-DeploymentScript.sql! Simple!

    So that's my new crusade. Let's kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:

    - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
    - A fresh file server, which stores newly uploaded content that's probably (and usually) rapidly accessed; it has 500GB of RAIDed SSD storage and can push over 3GBit of traffic.
    - 3 cheap node servers, containing 2 x 750GB SATA drives in RAID1, to which files older than 2 weeks are archived from the SSD server (mentioned above).

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc). Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script, with a different pass phrase. That all works fine and dandy... except the cheap node servers are all IO bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drive fast enough. What would be a good way to make files on the site accessible via 2 different network routes, 1 of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }
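    Boiled down, what both versions above implement is a sort-merge join: advance whichever side has the smaller key, and when the keys match, buffer the equal-key group from one side and cross it with the matching rows from the other. Below is a minimal, in-memory sketch of just the INNER join step, assuming both inputs are already sorted by an integer key in the first tab-separated column; the line format and helper names are illustrative, not the original code.

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;

        public class SortMergeInnerJoin {

            // A line is "key<TAB>data"; both inputs must already be sorted by key.
            private static String[] read(BufferedReader reader) throws IOException {
                String line = reader.readLine();
                return line == null ? null : line.split("\t", 2);
            }

            public static void join(BufferedReader left, BufferedReader right, BufferedWriter out)
                    throws IOException {
                String[] l = read(left);
                String[] r = read(right);
                while (l != null && r != null) {
                    int cmp = Integer.compare(Integer.parseInt(l[0]), Integer.parseInt(r[0]));
                    if (cmp < 0) {
                        l = read(left);                  // unmatched left row: skipped by an INNER join
                    } else if (cmp > 0) {
                        r = read(right);                 // unmatched right row: skipped by an INNER join
                    } else {
                        int key = Integer.parseInt(r[0]);
                        // Buffer the whole equal-key group from the right side once...
                        List<String[]> group = new ArrayList<>();
                        while (r != null && Integer.parseInt(r[0]) == key) {
                            group.add(r);
                            r = read(right);
                        }
                        // ...then emit the cross product against every matching left row.
                        while (l != null && Integer.parseInt(l[0]) == key) {
                            for (String[] g : group) {
                                out.write(l[0] + "\t" + l[1] + "\t" + g[1] + "\n");
                            }
                            l = read(left);
                        }
                    }
                }
            }
        }

    Only one right-hand key group is held in memory at a time; as the question's final update shows, a group too large for memory additionally has to be spilled to temporary files and re-read for each matching left row.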

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn't contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder.

    As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to "Content" and the Copy to Output Directory to "Copy if newer." This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project contains only static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you'll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder.

    In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio's Build menu will cause errors.
    Once you create the class library project, there are a few easy steps to change it into a static file project.

    The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file, because the project doesn't contain any C# code:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

    The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

        <Target Name="Build">
          <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
        </Target>
        <Target Name="Clean">
          <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
        </Target>
        <Target Name="Rebuild" DependsOnTargets="Clean;Build">
        </Target>

    The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you're working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • Virus that makes all files and folders read-only filesystem on a usb drive

    - by ren florento
    Is there any way to remove a virus from Windows that makes the files, the folders, and the USB drive itself a read-only filesystem? It's an annoying one: the virus keeps copying itself into every folder it sees and keeps running, which prevents you from creating and deleting files and folders on the USB drive and makes "mount -o remount,rw '/path'" ineffective. By the way, I'm not completely sure it is a virus, but what makes me think so is that it creates a .exe file inside every folder, named after the folder, and the drive immediately reverts to a read-only filesystem that locks the files and folders even after running "mount -o remount,rw '/path'". I also think the virus only runs from the USB drive, since it is not affecting the folders on Ubuntu. I could just reformat the USB drive, as it only contains a few important files, but what concerns me is whether this virus, or whatever you may call it, could get onto my backup drives, which contain many important files. Thanks for any help and advice you can give.

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X. These are all powerful multi-user operating systems with similar folder and file permission systems.

    Windows

    On Windows, the "Public" user's folders are accessible to all users. You'll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it's a good way to share music, videos, and other types of files between users on the same computer. Windows even adds these folders to each user's libraries by default. For example, a user's Music library contains the user's music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public: just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you'll have to unhide them to do this. These Public folders can also be used to share folders publicly on the local network. You'll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel. You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder's permissions and make it accessible to different user accounts. You'll need administrator access to do this.

    Linux

    This is a bit more complicated on Linux, as typical Linux distributions don't come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network. You can use Linux's permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too. Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give "Others" the "Create and delete files" permission. Click the Change Permissions for Enclosed Files button and give "Others" the "Read and write" and "Create and Delete Files" permissions. Other users on the same computer will then have read and write access to your folder. They'll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it.

    Mac OS X

    Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It's located at /Users/Shared. To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac.

    These tricks are useful if you're sharing a computer with other people and you all have your own user accounts, perhaps your kids have their own limited accounts. You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.

    Read the article

  • I cannot rename files in bulk using ubuntu's rename feature

    - by user254174
    I cannot rename files in bulk using Ubuntu's rename feature. The files are on an NTFS partition. I want to rename files that look like this:

        whatever pic george.jpg
        tacoma narrows bridge.jpg
        green bottle.jpg

    to:

        filename (1)
        filename (2)
        filename (3)

    And I cannot do this at all. I don't want to use the command line either. The reason is so I can permanently erase files after I have encrypted them, without exposing their contents to people who use a file recovery tool. I also don't want a method that takes days or months to rename the files, that is, renaming one file at a time; if I have hundreds of files to rename, that won't be an option. I want to give each file the same base name, numbered in order as shown above. PyRenamer is not an option for me, unless you can show how to do that in PyRenamer.

    Read the article

  • Change permission for ALL folders and files

    - by Xweque
    I haven't been around Ubuntu for very long, and I'm getting tired of something I used to accept. When I installed Apache and PHP on Ubuntu it was done as root, meaning root got the permissions, so I changed the ownership to me. Now I've just copied a large number of PHP files to be viewed and edited in these directories. My problem: I cannot view the files from /var/www/ because, for some reason, it requires everyone to have access to the files; not only me or my group, but everyone. No one else is using the computer but me, so I'm fine with that, but I need a command to change the permissions of ALL files recursively. Browsing the questions that have already been answered, I find, for example, chown -R viktor:viktor /var/www/, possibly with sudo. This worked on /var/www itself and the folders inside it, but not on the files inside those folders, and oddly I notice I can't do the same thing on, for example, /var/www/dev/.

    Read the article

  • Ubuntu One downloads already existing files

    - by Islam Hassan
    I've uploaded some files to Ubuntu One from my home laptop and began downloading them on my work laptop. Then I got a USB stick and copied these files over directly. My problem now is that Ubuntu One is still downloading these files even though I've already copied them into the Ubuntu One folder. I need it to treat the already existing files as synced and not download them again. And I need Ubuntu One for further use, so I can't simply quit it. How can I mark the already existing files as synced?

    Read the article

  • Rhythmbox won't import or play flac files

    - by Dan Drake
    I have a new installation of 12.04 and I just copied all my music over to the ~/Music folder. Rhythmbox found all the mp3 and ogg files, but it refuses to import flac files. They simply do not appear in my music library. If I start Rhythmbox on the command line and try to import a folder that contains flac files, absolutely nothing happens. Nothing is imported; no error messages. I have all the dependencies for Rhythmbox installed, along with all the suggested and recommended packages. I can play a flac file with gst-launch-0.10, and gst-typefind-0.10 correctly identifies flac files as audio/x-flac. Why does Rhythmbox refuse to see flac files? What can I do to find out what is happening?

    Read the article

  • How To Delete, Move, or Rename Locked Files in Windows

    - by Chris Hoffman
    Windows won't allow you to modify files that open programs have locked. If you try to delete a file and see a message that it's open in a program, you'll have to unlock the file (or close the program). In some cases, it may not be clear which program has locked a file, or a background process may have locked a file and not terminated correctly. You must unlock the stubborn file or folder to modify it. Note: Unlocking certain files and deleting them may cause problems with open programs. Don't unlock and delete files that should remain locked, including Windows system files.

    Read the article

  • Copy only folders not files?

    - by Shannon
    Is there a way to copy an entire directory, but only the folders? I have a corrupt file somewhere in my directory which is causing my hard disks to fail. So instead of copying the corrupt file to another hard disk, I wanted to just copy the folders, because I have scripts that search for hundreds of folders, and I don't want to have to manually create them all. I did search the cp manual, but couldn't see anything (I may have missed it). Say I have this structure on my failed HDD: dir1 files dir2 files files dir4 dir3 files. All I want is the directory structure, not any files at all, so on the new HDD I'd end up with: dir1 dir2 dir4 dir3. Hoping someone knows some tricks!
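    If scripting the structure copy yourself is acceptable, here is a minimal sketch in Java (shell tools can do the same job, but a small program may be easier to fold into the existing scripts). The /mnt/old and /mnt/new mount points are hypothetical stand-ins for the failed and the new disk; the program walks the old tree and recreates only the directories:

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.stream.Stream;

        public class CopyDirsOnly {
            public static void main(String[] args) throws IOException {
                Path source = Paths.get("/mnt/old");   // hypothetical: root of the failing disk
                Path target = Paths.get("/mnt/new");   // hypothetical: root of the new disk

                // Walk the source tree, keep only directories, and recreate each one
                // (including intermediate parents) under the target root.
                try (Stream<Path> entries = Files.walk(source)) {
                    entries.filter(Files::isDirectory).forEach(dir -> {
                        try {
                            Files.createDirectories(target.resolve(source.relativize(dir)));
                        } catch (IOException e) {
                            // A bad sector or permission problem should not stop the whole copy.
                            System.err.println("Skipping " + dir + ": " + e);
                        }
                    });
                }
            }
        }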

    Read the article

  • Mounting NFS directory causes creating Zero byte files

    - by Alaa
    I have two servers, Server X (IP 192.168.1.1) and Server Y (IP 192.168.1.2), both running Ubuntu 9.1. I have set up a Varnish load balancer on them for my Drupal website (Pressflow 6.22), and I have mounted the imagecache directory from server X on Y as below:

        @X:/etc/exports == /var/www/proj/htdocs/sites/default/files/images 192.168.1.2(rw,async,no_subtree_check)
        @Y:/etc/fstab == 192.168.1.1:/var/www/proj/htdocs/sites/default/files/images var/www/proj/htdocs/sites/default/files/images nfs defaults 0 0

    I also ran this on server X:

        X:/var/www/proj/htdocs/sites/default/files$ chmod -R 777 images

    I tried to touch, rm, vim, and cat files in the images directory mounted on Y and everything went fine. But whenever server Y's imagecache tries to create an image in the images directory, the image is created with a ZERO byte file size. Has anyone faced this before? Any idea how to fix this problem or what might cause it? Thanks for your help.

    Read the article

  • RabbitVCS displaying unchanged files on commit

    - by misterjinx
    I have a strange problem with RabbitVCS. I'm inside a working copy directory and I want to commit some files. When I click the commit button, the commit window shows up, but there is a strange situation. Even though I have modified just a few files, the commit window is displaying all the files and directories inside the working copy, and the checkbox is ticked for each of them, as if those files need to be committed. But those files were not changed and already exist in the repo. Please see the image below to understand what I'm saying (the only file that is unversioned/was changed is .htaccess, so it should have been the only file listed there). Has this happened to anyone? Is it a bug in RabbitVCS (and is there perhaps a known solution), or am I doing something wrong?

    Read the article

  • .bash_history and .cache

    - by John Isaacks
    I have a user whose home directory is a Mercurial repository. Mercurial notified me that there were 2 new unversioned files in my repository: .bash_history and .cache/motd.legal-displayed. I assume .bash_history is the history of bash commands for my user; I have no idea what the other is. I don't want these files to be versioned by Mercurial. Are they safe to just delete, or will they come back or mess something up? Can they be moved somewhere else? Or do I have to add them to my .hgignore file?
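    If ignoring them turns out to be the preferred route, a minimal .hgignore along these lines would cover both paths (regexp syntax, anchored to the repository root; the exact patterns are just a sketch):

        syntax: regexp
        ^\.bash_history
        ^\.cache/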

    Read the article

  • How To Extract Individual Files From a Windows 7 System Image Backup

    - by Chris Hoffman
    Windows 7's backup control panel has the ability to create full system image backups. While Windows says you can't restore individual files from these backups, there's a way to browse the contents of a system image and extract individual files. System image backups are meant for restoring an entire system. If you want to easily restore individual files, you should use another type of backup, but you don't have to restore an entire system image to get a few important files back.

    Read the article

  • Other people's files showing up in rhythmbox

    - by Avery Boyer
    I have my computer connected to a college network, and right now files that belong to other individuals on campus are showing up under Shared in Rhythmbox. This is driving me up the wall; I absolutely despise the idea that files are being thrown around on the network, that other people's s*** is showing up on my computer, and that they may be able to see my files as well. This is a very, very serious problem as far as I am concerned, and I want to know how I can ensure that I am sharing nothing with the network in the way of files on my computer and that no one else's files show up on my computer.

    Read the article

  • React to a modified directory

    - by Ghanshyam Rathod
    In Linux everything is considered a file. Now, if I want to find only folders/directories, not files, how can I do that? I am getting all the modified files with the following command:

        find /Users/ghanshyam -type f -mmin -5 -print

    My goal is to generate a log file with all the modified/accessed folders. Two options are available:

    - create a module and call it every time a folder is modified (this one is a bit difficult because I need to watch for a particular event)
    - create a cron task that runs every 5 minutes; the cron task will execute a shell script and generate the log entries with the modified folders.

    Do you have any other option for this task?
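    (The direct equivalent of the command above is simply to use -type d instead of -type f.) If a small program is easier to hook into the cron job than a shell one-liner, here is a rough Java sketch of the same check; the root path and the 5-minute window just mirror the find command above:

        import java.io.IOException;
        import java.nio.file.*;
        import java.time.Instant;
        import java.time.temporal.ChronoUnit;
        import java.util.stream.Stream;

        public class RecentlyModifiedDirs {
            public static void main(String[] args) throws IOException {
                Path root = Paths.get("/Users/ghanshyam");                 // root to scan, as in the question
                Instant cutoff = Instant.now().minus(5, ChronoUnit.MINUTES);

                try (Stream<Path> paths = Files.walk(root)) {
                    paths.filter(Files::isDirectory)                       // directories only (find -type d)
                         .filter(dir -> {
                             try {
                                 return Files.getLastModifiedTime(dir).toInstant().isAfter(cutoff);
                             } catch (IOException e) {
                                 return false;                             // skip unreadable entries
                             }
                         })
                         .forEach(System.out::println);                    // or append to the log file
                }
            }
        }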

    Read the article

  • Make Apache encode or replace quotes instead of escaping them?

    - by mplungjan
    In the documentation I read these format notes:

        For security reasons, starting with version 2.0.46, non-printable and other special characters in %r, %i and %o are escaped using \xhh sequences, where hh stands for the hexadecimal representation of the raw byte. Exceptions from this rule are " and \, which are escaped by prepending a backslash, and all whitespace characters, which are written in their C-style notation (\n, \t, etc). In versions prior to 2.0.46, no escaping was performed on these strings so you had to be quite careful when dealing with raw log files.

    This is a problem for Analog, which is still the handiest analyser I use. I get

        "GET /somerequest?q=\"quoted string\"&someparm=bla"

    in the logfile, and it is of course flagged as corrupt, since Analog expects

        "GET /somerequest?q=%22quoted string%22&someparm=bla"

    or similar. I realise I can pre-process using something like

        perl -p -i.bak -e 's/\\"/%22/g' logfile

    but I'd rather not have to add this step for these files, which are 50-90MB zipped per day. Thanks for any pointers.

    Read the article

  • How do I know which file a program is trying to access?

    - by user9069
    I have a program which I am trying to run; however, when I run it, it just complains that it can't find a particular file, and I have no idea which folder it is trying to find this particular file in. I have a copy of the required file, I just need to know which folder to copy it to. Is there any way to show in real time which files are being accessed, or which file accesses are being attempted and failing? I am using the Ext4 filesystem, if that helps. Thanks

    Read the article

  • Where should I store and verify files manipulated by an app

    - by Alan W. Smith
    I'm working on a little Ruby script to move screenshots while renaming them based on a specific convention. I'll be writing tests to confirm the behavior. Ruby has lots of conventions for where to store files (e.g. the "spec" and "features" directories for RSpec and Cucumber, respectively), but I'm not finding best practices for storing files that will be acted upon by the tests. The same goes for a destination for the final copies of the files. So, the question in two parts is: (1) Where should I store the files that the test cases will use as source input? (2) Where should tests that need to write output files send them?

    Read the article

  • How file permissions are stored in inode?

    - by Debadyuti Maiti
    Suppose there are two PCs, "A" and "B". If A downloads a file from B, what would be the file permissions of the downloaded file? Is it possible that the downloaded file on A will have an inode entry with all its permissions from B, and store B's user account as the owner? If that's the case, is it then impossible to change that file's permissions on A if "others" [as in user-group-others] doesn't have the right to write to the file? E.g. if this is the case on B:

        __x __x __x file.txt [On B]

    then what would be the permissions on A of that same file downloaded from B [e.g. through vsftpd]?

        __x __x __x file.txt [On A]

    or

        rw_x rw_x rw_x file.txt [On A] [i.e. defined by A's default umask value]

    Read the article

  • How to add a daemon to a quickly project

    - by darkrex1986
    Currently I'm developing an application with quickly which is divided into two parts: a graphical UI where the user can configure some things, and a daemon which does most of the work in the background. I started with the UI, creating some windows for settings and so on. Now I want to start coding the daemon, but I have no clue how to implement the daemon in my quickly project. Could I simply paste the daemon's files into the project folder, or is there a proper method for adding new files to a quickly project? Or do I have to create a new project and merge them together?

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationships; I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
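    A minimal sketch of the "hashmap of output streams" idea mentioned above. The 6-byte ASCII type key and the per-type file naming follow the description; the input file name, the assumption that it is already decompressed, and the single length byte before each payload are guesses about the wire format, and a real run may need to cap or pool the open streams, since ~6,000 simultaneous file handles can exceed OS limits:

        import java.io.*;
        import java.nio.charset.StandardCharsets;
        import java.util.HashMap;
        import java.util.Map;

        public class MessageSplitter {
            public static void main(String[] args) throws IOException {
                Map<String, BufferedOutputStream> outputs = new HashMap<>();

                try (DataInputStream in = new DataInputStream(
                        new BufferedInputStream(new FileInputStream("messages.dat")))) {
                    byte[] key = new byte[6];
                    while (true) {
                        try {
                            in.readFully(key);                       // fixed-size 6-byte type field
                        } catch (EOFException eof) {
                            break;                                   // clean end of input
                        }
                        int length = in.readUnsignedByte();          // ASSUMED: 1-byte payload length
                        byte[] payload = new byte[length];
                        in.readFully(payload);

                        String type = new String(key, StandardCharsets.US_ASCII);
                        // Lazily open one appending stream per message type, e.g. 000001.dat
                        BufferedOutputStream out = outputs.computeIfAbsent(type, t -> {
                            try {
                                return new BufferedOutputStream(new FileOutputStream(t + ".dat", true));
                            } catch (FileNotFoundException e) {
                                throw new UncheckedIOException(e);
                            }
                        });
                        out.write(payload);
                    }
                } finally {
                    for (BufferedOutputStream out : outputs.values()) {
                        out.close();                                 // close() also flushes the buffer
                    }
                }
            }
        }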

    Read the article
