Search Results

Search found 78796 results on 3152 pages for 'find in files'.

Page 55/3152

  • Ways to deduplicate files

    - by User1
    I want to simply back up and archive the files on several machines. Unfortunately, the collections include some large files that are duplicates of one another, stored in different places on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through and recognize duplicate files and give me a list, or even delete one of the duplicates?
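
    A sketch of the usual approach, hashing every file and grouping paths by digest (dedicated tools such as fdupes work essentially this way); the path below is a placeholder:

        import hashlib, os
        from collections import defaultdict

        def find_duplicates(root):
            # Group files by content hash; any group with >1 path is a duplicate set.
            by_digest = defaultdict(list)
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    h = hashlib.sha1()
                    with open(path, 'rb') as f:
                        for chunk in iter(lambda: f.read(1 << 20), b''):
                            h.update(chunk)
                    by_digest[h.hexdigest()].append(path)
            return [paths for paths in by_digest.values() if len(paths) > 1]

        for group in find_duplicates('/path/to/repository'):  # placeholder path
            print(group)  # keep group[0]; review or delete the rest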

    Read the article

  • Migrate 3 terabytes of files to a new server windows 2003

    - by smackaysmith
    We have a new file server to handle the obscene amount of files generated by the company (PDFs, XLSs, DOCs and JPGs). The files being moved to the new server total about 3 TB. The problem is we can't take the company down for days to move the files. The other problem is that the applications creating all these files have to reference previous files, so we can't simply point them to the new server. Also, there isn't an option to have the applications create files on the new server but reference the old server for existing files. The servers are x64 Win2003 R2, and both are on the same subnet. DFS doesn't work. Is there an application that can handle this amount of data, copy the files over, throttle bandwidth, and do a 'merge'? By merge I mean constantly copying over newly created files until the two servers are in sync.
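
    The 'merge' described is an incremental mirror: copy a file only when the destination copy is missing or older, and repeat until the remaining delta is small enough for a short cutover (on Windows, robocopy covers similar ground, with /MIR for mirroring and /IPG for bandwidth throttling). A minimal sketch of the loop, with placeholder UNC paths:

        import os, shutil

        SRC = r'\\oldserver\share'   # placeholder paths
        DST = r'\\newserver\share'

        def sync(src, dst):
            # Copy files that are missing or stale on dst; return how many were copied.
            copied = 0
            for dirpath, _, filenames in os.walk(src):
                rel = os.path.relpath(dirpath, src)
                target_dir = os.path.join(dst, rel)
                os.makedirs(target_dir, exist_ok=True)
                for name in filenames:
                    s = os.path.join(dirpath, name)
                    d = os.path.join(target_dir, name)
                    if (not os.path.exists(d)
                            or os.path.getmtime(s) > os.path.getmtime(d)):
                        shutil.copy2(s, d)  # copy2 preserves timestamps
                        copied += 1
            return copied

        while sync(SRC, DST):
            pass  # each pass picks up files created during the previous one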

    Read the article

  • Source File not updating Destination Files in Excel

    - by user127105
    I have one source file that holds all my input costs, and 30 to 40 destination files (costing sheets) that link to data in this source file for their various formulae. I was sure when I started this system that any change I made to the source file, including the insertion of new rows and columns, was picked up automatically by the destination files, so that the formulae always pulled the correct input costs. Now, all of a sudden, if my destination files are closed and I change the structure of the source file by adding rows, the destination files go haywire: they pick up changes to their linked cells, but don't pick up changes to the source sheet that have shifted those cells' relative positions in the sheet. Do I really need to open all 40 destination files every time I alter the source file structure? Further info: all the destination files are protected, and I am working on Dropbox.

    Read the article

  • Extracting a .zip file into Program Files (x86)

    - by Evan
    I just got a 64-bit Vista system after being on Windows XP. I'm trying to get all my useful programs up to date, and I've recently had a problem extracting files into the 32-bit program files directory (Program Files (x86)). I'm using 7-Zip to extract the eclipse-SDK-3.5-win32.zip archive into C:\Program Files (x86). Unfortunately, every time I've tried to do this, 7-Zip reports "can not open output file C:\Program Files (x86)\eclipse\...". I've been able to extract it to C:\ and then move it, so I'm assuming there's some protection on the Program Files directory that is causing problems. Any suggestions?

    Read the article

  • Find the model of my motherboard without opening the computer [closed]

    - by Code
    Possible Duplicate: Find out what the motherboard on my computer is. I need to find the model of my motherboard so I can find what sound card/chip it uses and get drivers for it. Is there any way to get this from inside XP? I looked through Device Manager but haven't seen anything that would tell me. I built the system over a year ago and don't have any receipts to check what it was.
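
    For what it's worth, XP ships with the WMIC command-line tool, which can usually report the board model without opening the case; a small sketch (wrapped in Python only for consistency with the other examples here, the bare wmic command also works directly in cmd.exe):

        import subprocess

        # Equivalent to typing `wmic baseboard get Manufacturer,Product` at a prompt.
        out = subprocess.check_output(
            ['wmic', 'baseboard', 'get', 'Manufacturer,Product'])
        print(out.decode('ascii', 'ignore'))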

    Read the article

  • Copy any file with a specific file extension in subfolders into a folder

    - by Onyxius
    I found a script on here that uses 7-Zip to extract all the archives in all the sub-folders of a specific folder, putting each archive's contents in its own folder (script below). What I need is to add to it, or maybe use another script if I have to, so I can specify where I want the extracted files to go instead of putting them in their own folder within the source folder. I don't know how to do this and hope someone can help. Thanks for the help.

        @echo on
        FOR /D /r %%F in ("*") DO (
            pushd %CD%
            cd %%F
            FOR %%X in (*.rar *.zip *.tar) DO (
                "C:\Program Files\7-zip\7z.exe" x -o"%%~nX" "%%X"
            )
            popd
        )

    Read the article

  • Loading Obj Files in Soya3d engine

    - by John Riselvato
    I recently found Soya3D, and from what I have seen in the tutorials I will be able to make exactly what I want with my Python skills. I have built a map generator, but I cannot work out from any of the documents how to load .obj files. At first I figured that I had to convert the model to a .data file, but I don't understand how to do that. I just want to load a simple model of a house. I tried using soya_editor, but I cannot figure out how to do anything with it. Here's my script so far:

        import sys, os, os.path, soya, soya.sdlconst

        width, height = 760, 375
        soya.init("Generator 0.1", width, height)
        soya.path.append(os.path.join(os.path.dirname(sys.argv[0]), "data"))

        scene = soya.World()
        model = soya.Model.get("house")
        light = soya.Light(scene)
        light.set_xyz(0.5, 0.0, 2.0)
        camera = soya.Camera(scene)
        camera.z = 2.0
        soya.set_root_widget(camera)
        soya.MainLoop(scene).main_loop()

    "house" is in .obj form in the folder data/models. The error I get is:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    So I am figuring that because I don't understand how to turn my files into .data files, I will need to learn that. My question is: how do I use my own models?

    Read the article

  • XAMPP - Unable to serve files larger than ~30MB [on hold]

    - by Sparx401
    I'm developing a site locally with XAMPP on Windows 7, and as far as media is concerned, I'm unable to play media files larger than 30 MB or so. Both video and audio files (MP4 and MP3 respectively) generate this error in Chrome (and similar errors in other browsers such as IE9 and Opera):

        No data received
        Unable to load the webpage because the server sent no data.
        Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    The exact cutoff in MB seems to vary somewhat between browsers; one video in question is 34 MB and actually plays in Opera and IE9, but gives the aforementioned error in Chrome. I've checked that the file paths are typed correctly and ensured that the .htaccess directive to serve MP4s is there:

        AddType video/mp4 mp4

    I also have these directives set in the same .htaccess file:

        php_value upload_max_filesize "80M"
        php_value post_max_size "80M"
        php_value max_input_time 60
        php_value max_execution_time 60

    And memory_limit is set in php.ini to "128M". So I'm left wondering: what is causing my files not to play, and what directives, if any, do I have to change on the server side? Perhaps something to do with limitations of the GET method (the method I'm seeing on Chrome's network tab among other header request/response info)?

    Read the article

  • Data Aggregation of CSV files java

    - by royB
    I have k CSV files (5, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, speed mainly; I don't think we will have memory issues. Also, I would like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Approaches we have thought of so far: hashing (using a concurrent hash table), merge-sorting the files, or a database (MySQL, Hadoop, or Redis). The solution needs to handle huge amounts of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key here is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - a total of 14 columns. We would like the solution to be the fastest in terms of data processing.
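
    If the distinct keys fit in memory, a plain dictionary keyed on the tuple of key columns sidesteps the collision question entirely (the hash table resolves collisions internally, so no 64-bit digest is needed). A sketch under that assumption, matching the first example's 3 key columns and summed value columns:

        import csv
        from collections import defaultdict

        KEY_COLS = 3  # first 3 columns form the key; the rest are summed

        def aggregate(paths, out_path):
            totals = defaultdict(lambda: None)
            header = None
            for path in paths:
                with open(path, newline='') as f:
                    reader = csv.reader(f)
                    header = next(reader)
                    for row in reader:
                        key = tuple(row[:KEY_COLS])
                        values = [int(v) for v in row[KEY_COLS:]]
                        if totals[key] is None:
                            totals[key] = values
                        else:
                            totals[key] = [a + b for a, b in zip(totals[key], values)]
            with open(out_path, 'w', newline='') as f:
                writer = csv.writer(f)
                writer.writerow(header)
                for key, values in totals.items():
                    writer.writerow(list(key) + values)

        aggregate(['file1.csv', 'file2.csv'], 'merged.csv')  # placeholder file names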

    Read the article

  • Files backup utility with incremental backups that would keep backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them has two options:

    - full backup - not an option to use frequently
    - incremental backup - seems right, but there's one thing about it: an incremental backup builds on the base of a full backup, backing up only those files that were created or changed. The catch is that after some time you've got a lot of unwanted files from old backups bloating your backup device. Also, if you accidentally delete your full (first) backup file, the incremental backups are corrupted (you can't restore from them).

    The thing I'm looking for is a program that backs up files simply by copying them. For each file it would check whether the backup device already contains it (unchanged):

    - if yes, it proceeds to the next file (the current version is already backed up)
    - if no, it copies the file to the backup device
    - if the device contains a file that is no longer on our disk, it deletes it from the backup device

    Is there any such utility that works this way? If not, do you have any hints on how to back up fairly big amounts of data (around 20 GB) quite frequently with incremental backups and not be exposed to those unwanted effects of the backup size puffing up?
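
    What's described is a mirror-style sync (rsync's --delete mode behaves exactly this way). A minimal Python sketch of the three rules, with placeholder paths:

        import os, shutil

        def mirror(src, dst):
            src_files = set()
            for dirpath, _, filenames in os.walk(src):
                rel = os.path.relpath(dirpath, src)
                os.makedirs(os.path.join(dst, rel), exist_ok=True)
                for name in filenames:
                    rel_path = os.path.normpath(os.path.join(rel, name))
                    src_files.add(rel_path)
                    s = os.path.join(src, rel_path)
                    d = os.path.join(dst, rel_path)
                    # Rules 1 and 2: copy only if missing or changed on the device.
                    if (not os.path.exists(d)
                            or os.path.getmtime(s) != os.path.getmtime(d)
                            or os.path.getsize(s) != os.path.getsize(d)):
                        shutil.copy2(s, d)  # copy2 preserves timestamps
            # Rule 3: delete backup files that no longer exist in the source.
            for dirpath, _, filenames in os.walk(dst):
                rel = os.path.relpath(dirpath, dst)
                for name in filenames:
                    rel_path = os.path.normpath(os.path.join(rel, name))
                    if rel_path not in src_files:
                        os.remove(os.path.join(dst, rel_path))

        mirror('/home/user/data', '/media/backup/data')  # placeholder paths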

    Read the article

  • Handling Indirection and keeping layers of method calls, objects, and even xml files straight

    - by Cervo
    How do you keep everything straight as you trace deeply into a piece of software through multiple method calls, object constructors, object factories, and even Spring wiring? I find that 4 or 5 method calls are easy to keep in my head, but once you are going 8 or 9 calls deep it gets hard to keep track of everything. Are there strategies for keeping everything straight? In particular, I might be looking for how to do task X, but then as I trace down (or up) I lose track of that goal; or I find multiple layers need changes, but then I lose track of which changes as I trace all the way down; or I have tentative plans that turn out not to be valid, but during the tracing I forget that a plan is invalid and consider the same plan all over again, killing time. Is there software that might be able to help? grep and even Eclipse can help me do the actual tracing from a call to its definition, but I'm more worried about keeping track of everything, including the de facto plan for what has to change (which might vary as you go down/up and realize the prior plan was poor). In the past I have dealt with a few big methods that you could trace and pretty much figure out what was going on within a few calls. But now there are dozens of really tiny methods, many just a single call to another method or constructor, and it is hard to keep track of them all.

    Read the article

  • text extraction from video game dialogue files [on hold]

    - by wdwvt1
    As part of an academic project, I am trying to access the dialogue files (whether audio or text) from a variety of sports video games (Madden or NBA 2kX would be fantastic). I have searched extensively on other sites (scholarly text-mining publications, r/gaming, r/madden, modding sites, etc.) for guidance on how to extract dialogue files, but have been unsuccessful. Given that I don't even have the domain-specific language to ask the right question (i.e. the resources I am seeking are out there, I just can't find them), I am asking the SE game dev community for help with the 2 following questions:

    - Is there a canonical resource I should study that would get me started with extracting text or audio files from games? I am very fluent in Python, which usually excels at mining information from sources, but I struggle with knowing where to start with a video game (as opposed to a more familiar database with a defined API).
    - Is this even feasible, or will protections included with newer games (e.g. NBA 2k13) make extraction of these resources in a programmatic way impossible?

    Thank you for your help!

    Read the article

  • Include Binary Files in DEB package

    - by user22611
    I need to build a DEB package from mainly Node.js JavaScript files, but it should include some binary files as well. They are listed inside debian/source/include-binaries; otherwise I get the error message

        dpkg-source: error: unrepresentable changes to source

    The command in question is:

        bzr builddeb -- -us -uc

    After adding the include-binaries file and running bzr builddeb -- -us -uc again, I now get a different error:

        dpkg-source: error: aborting due to unexpected upstream changes, see /tmp/mailadmin_0.0-1.diff.n6m5_6

    I have no idea how to get rid of this. In the next line of output it tells me

        dpkg-source: info: you can integrate the local changes with dpkg-source --commit

    But if I run this command in the build area of my package, it gives me the "unrepresentable changes to source" error message again, even though debian/source/include-binaries is present in the build area as well. I am missing the way out of this. I tried deleting all files produced by the build process, still no success. Further details: the target directory is /opt/mailadmin. Since this directory is unusual, I listed it in the file debian/mailadmin.install (which contains one line):

        opt/mailadmin opt/

    The bzr builddeb process uses this file as expected.
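
    For reference, debian/source/include-binaries is just a list of paths relative to the source tree root, one per line; something like the following (these file names are made up for illustration):

        opt/mailadmin/bin/helper
        opt/mailadmin/assets/logo.png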

    Read the article

  • accessing live usb files from new hd ubuntu install

    - by Robin Bailey
    After my live USB (Ubuntu 12.04 LTS) refused to boot, I installed the same Ubuntu version on the laptop's hard drive (a dual boot next to Windows XP). This all went without a hitch. Before this, I had spent several weeks enjoying and exploring Ubuntu from the USB pendrive, and during that time I changed lots of settings and customized Firefox and more. Now I'd like to import the home folder from the USB drive, which to my knowledge holds all those special settings, into the new install's home folder on the hard disk. Unfortunately, being familiar only with Windows file systems, I find the view of the USB file system from the new install totally perplexing. I can't find anything that looks anywhere close to the original file system. Worse, I can't find any of the files I had created and stored there, like the LibreOffice Calc file that has all my passwords (this one is really discouraging), which was stored on the Ubuntu desktop. Help me find this file alone and I'll bow down with full apologies to any and all computer gods. Being able to import all those settings into the new install would be a major bonus, but hey, I'm not greedy. I'll take the passwords file and be happy! And humble! I would be very grateful for some clear, understandable help on this. Thanks.

    Read the article

  • Adding files and folders to a Root Folder (inode/directory)

    - by xBaldwin
    OK, so I'm fairly new to Ubuntu and wasn't even the one who put it on this computer (my friend did while I was storing it at his house, because I was in the middle of transitioning between houses), but it's on here, so I need to learn what I can to use it more effectively. My question at the moment is: would it be safe to add files/folders to a folder (inode/directory) that requires root access? The system keeps informing me that the directory I am using is running low on space, which I found odd seeing how I should have a lot more room on this computer. That's when I started looking at the directories and found two with a bunch of unused space on them: one says it has 46.9 GB of free space and the other has 24.9 GB. It seems a complete waste not to use that space, and yet they both say they require root access to add to them. I know that root folders and files are normally all system folders and files. I also know that changing or deleting them can mess up the computer, which right now I can't afford to do. I just don't know if it would mess anything up to add something to those folders. Thank you in advance to anyone who takes the time to reply and try to teach me how all that works. I really do appreciate it, and will do the same if by some crazy (completely unlikely) reason I have an answer to your question. :-)

    Read the article

  • All files gone after running fsck. How can I recover my files?

    - by cinlung
    I am a newbie in Linux, so this is my story. I installed Ubuntu Server 10.04 LTS. It worked great for many months, until today, when I decided to run fsck on the system partition; although it warned me, I kept pressing yes, and now it will only boot into a grub prompt. So I read some articles and tried a grub reinstall, but before performing it, I decided to run fsck again from the Ubuntu 10.04 LTS desktop live CD. The fsck painfully passes; now my drive is recognized as an ext4 system and I am able to mount it again. However, all I can see are the boot directory and lost+found. I tried to reinstall grub by doing the grub-install stuff; now my grub is still not loading right, my files are missing, and the weird thing is that the space used by boot and lost+found is only 5 GB while the amount used on the hdd is 8 GB. So my files must be somewhere on the hdd. Is there any simple way, maybe a Windows tool or something, to recover my files? I only need to retrieve my database backup; everything else can go. I am freaking out here. Please help.

    Read the article

  • Do you keep intermediate files under version control?

    - by Subb
    Here's an example with a Flash project, but I'm sure a lot of projects are like this. Suppose I create an image with Photoshop. I then export this image as a JPEG for integration in Flash. I compile the .fla as an asset library, which is then used in my Flash Builder project to produce the final .swf. So it goes like:

        psd => jpg -> fla => swc -> Flash Builder project => swf

        =>  : produces
        ->  : is used in

    The psd, fla, and Flash Builder project are source files: they are not the result of some process. The jpg and swc are what I would call "intermediate" files: they are the product of one (or more) source file(s). The swf is the final result. So, would you keep those intermediate files under version control? How do you deal with them?

    Read the article

  • Ubuntu One, compressed files

    - by user8179
    I have uploaded some files to my Ubuntu One account and it seems to work great most of the time. I usually upload them directly from Nautilus by right-clicking the folder, using the "Synchronize this folder" option, and then making sure that the file I want to upload is published. Then I usually test the whole thing by trying to download it: I right-click the file again to get its URL and paste it into my web browser. This usually works fine. But yesterday I uploaded two compressed files (.tar.bz2). When I tried to open them after downloading them with my web browser (Opera), it failed. I found that the downloaded file was bigger than the original (2358 B instead of 2335 B: 15 B added at the beginning of the file and 8 B at the end), and someone on the Opera channel (IRC) at OperaNet (Europe) figured out that the reason is that the server compresses the file again, "without telling Opera". So to be able to extract the file I need to add .gz to the file name and then extract it twice. If I download it with Firefox, however, I don't need to do that, so maybe Firefox figures this out somehow in a way that Opera does not. Someone also tried to download the file with wget and some other browser, and got the same result as I did with Opera: the file is compressed a second time by the server. I guess "the server" is the Ubuntu One server, right? So why is this? Could it be done better somehow? Or did I do something wrong when uploading the files? It also seems this extra compression does not always happen, because when I tried again a few minutes ago, the file came down at its right size (2335 B) without the extra compression. But the other file (114 MiB) was still compressed twice.
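
    A quick way to confirm the extra wrapping, for anyone hitting the same thing, is to look for the gzip magic bytes (0x1f 0x8b) at the start of the download and strip the outer layer once; a small sketch with a placeholder file name:

        import gzip, shutil

        name = 'download.tar.bz2'  # placeholder file name
        with open(name, 'rb') as f:
            wrapped = f.read(2) == b'\x1f\x8b'  # gzip magic number

        if wrapped:
            # The server re-compressed the file: remove the outer gzip layer.
            with gzip.open(name, 'rb') as src, open(name + '.fixed', 'wb') as dst:
                shutil.copyfileobj(src, dst)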

    Read the article

  • Unix [Homework]: Get a list of /home/user/ directories in /etc/passwd

    - by KChaloux
    I'm very new to Unix, and currently taking a class learning the basics of the system and its commands. I'm looking for a single command line to list all of the user home directories, in alphabetical order, from the /etc/passwd file. This applies only to the home directories, and not the contents within them, and there should be no duplicate entries. I've tried many permutations of commands such as the following:

        sort -d | find /etc/passwd /home/* -type -d | uniq | less

    I've tried using -path, -name, removing -type, using -prune, and changing the search pattern to things like /home/*/$, but haven't gotten good results once. At best I can get a list of my own directory (complete with every directory inside it, which is bad) and the directories of the other students on the server (without the contained directories, which is good). I just can't get it to display the /home/user directories and nothing else for my own account. Many thanks in advance.

    Read the article

  • Finding and changing currencies using Greasemonkey

    - by Noam Smadja
    It doesn't find or replace the strings; maybe the regex is wrong?

        // ==UserScript==
        // @name        CurConvertor
        // @namespace   CurConvertor
        // @description noam smadja
        // @include     http://www.zavvi.com/*
        // ==/UserScript==
        var textNodes = document.evaluate(
            "//text()", document, null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        // Backslashes must be doubled when building a RegExp from a string
        // (the original '\d\d.\d\d' loses its backslashes before compilation).
        var searchRE = new RegExp('\\d\\d\\.\\d\\d', 'gi');
        for (var i = 0; i < textNodes.snapshotLength; i++) {
            var node = textNodes.snapshotItem(i);
            // Convert each match; the original assigned searchRE*5.67, which
            // multiplies the RegExp object itself and yields NaN.
            node.data = node.data.replace(searchRE, function (m) {
                return (parseFloat(m) * 5.67).toFixed(2);
            });
        }

    I wrote this, but it's not doing a thing; even when I change the pattern to a literal string from the web page, it still does nothing. What am I missing? :)

    Read the article

  • How can I make named_scope in Rails return one value instead of an array?

    - by sameera207
    I want to write a named scope to get a record from its id. For example, I have a model called Event, and I want to simulate Event.find(id) with a named_scope for future flexibility. I used this code in my model:

        named_scope :from_id, lambda { |id| { :conditions => ['id = ?', id] } }

    and I call it from my controller like Event.from_id(id). My problem is that it returns an array of Event objects instead of just one object. Thus if I want the event name, I have to write

        event = Event.from_id(id)
        event[0].name

    while what I want is

        event = Event.from_id(id)
        event.name

    Am I doing something wrong here?

    Read the article

  • Help with hash tables and quadratic probing in Java

    - by user313458
    I really need help with inserting into a hash table; I'm just not getting it right now. Could someone explain quadratic and linear probing in layman's terms?

        public void insert(String key)
        {
            int homeLocation = 0;
            int location = 0;
            int count = 0;

            if (find(key).getLocation() == -1) // make sure key is not already in the table
            {
                //****** ADD YOUR CODE HERE FOR QUADRATIC PROBING ********
            }
        }

    This is the code I'm working on. I'm not asking anyone to do it, I just really need help with learning the whole concept. Any help would be greatly appreciated.
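
    In layman's terms: both schemes start at the key's home slot and, on a collision, keep trying other slots. Linear probing tries home+1, home+2, home+3, ... while quadratic probing tries home+1, home+4, home+9, ... (all modulo the table size), which breaks up the clustering that linear probing tends to create. A language-neutral sketch of the quadratic loop, in Python; the names mirror the Java skeleton above:

        def insert(table, key, home_location):
            # Probe (home + count^2) % size until an empty slot is found.
            size = len(table)
            count = 0
            while count < size:
                location = (home_location + count * count) % size
                if table[location] is None:   # empty slot: claim it
                    table[location] = key
                    return location
                count += 1
            raise RuntimeError('probe sequence exhausted (table too full)')

        table = [None] * 11                   # prime table sizes probe more evenly
        insert(table, 'apple', hash('apple') % len(table))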

    Read the article

  • Set a script to automatically detect character encoding in a plain-text-file in Python?

    - by Haidon
    I've set up a script that basically does a large-scale find-and-replace on a plain text document. At the moment it works fine with ASCII, UTF-8, and UTF-16 encoded documents (and possibly others, but I've only tested these three), so long as the encoding is specified inside the script (the example code below specifies UTF-16). Is there a way to make the script automatically detect which of these character encodings is being used in the input file, and automatically write the output file in that same encoding?

        findreplace = [
            ('term1', 'term2'),
        ]

        inF = open(infile, 'rb')
        s = unicode(inF.read(), 'utf-16')
        inF.close()

        for couple in findreplace:
            outtext = s.replace(couple[0], couple[1])
            s = outtext

        outF = open(outFile, 'wb')
        outF.write(outtext.encode('utf-16'))
        outF.close()

    Thanks!
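
    One common approach, assuming the third-party chardet package is acceptable, is to let it guess the encoding from the raw bytes and reuse that guess when writing the output; a minimal sketch in the same Python 2 style as the question:

        import chardet  # third-party: pip install chardet

        inF = open(infile, 'rb')
        raw = inF.read()
        inF.close()

        guess = chardet.detect(raw)  # e.g. {'encoding': 'UTF-16', 'confidence': 0.99}
        s = unicode(raw, guess['encoding'])

        # ... find-and-replace as before ...

        outF = open(outFile, 'wb')
        outF.write(s.encode(guess['encoding']))
        outF.close()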

    Read the article

  • [C#] Finding the index of a queue that holds a member of a containing object for a given value

    - by Luke Mcneice
    I have a Queue that contains a collection of objects. One of these is a class called GlobalMarker, which has a member called GlobalIndex. What I want to be able to do is find the index of the queue element whose GlobalIndex holds a given value (this will always be unique). Simply using the .Contains function shown below returns a bool:

        RealTimeBuffer.OfType<GlobalMarker>().Select(o => o.GlobalIndex).Contains(INT_VALUE);

    How can I obtain the queue index of this match?

    Read the article

  • $where in mongodb web shell not working

    - by Bravo
    I have the below set of test documents, which I inserted into MongoDB. When I query the db using $where, I get the exception below:

        Error: database error: $where query, but no script engine

    Any idea why the $where clause is not working? Test data:

        db.things.save({ "_id" : 1, "domainName" : "test11.com", "hosting" : "hostgator.com" })
        db.things.save({ "_id" : 2, "domainName" : "test2.com", "hosting" : "aws.amazon.com" })
        db.things.save({ "_id" : 3, "domainName" : "test3.com", "hosting" : "aws.amazon.com" })
        db.things.save({ "_id" : 4, "domainName" : "test4.com", "hosting" : "hostgator.com" })
        db.things.save({ "_id" : 5, "domainName" : "test5.com", "hosting" : "aws.amazon.com" })
        db.things.save({ "_id" : 6, "domainName" : "test6.com", "hosting" : "cloud.google.com" })
        db.things.save({ "_id" : 7, "domainName" : "test7.com", "hosting" : "aws.amazon.com" })
        db.things.save({ "_id" : 8, "domainName" : "test8.com", "hosting" : "hostgator.com" })
        db.things.save({ "_id" : 9, "domainName" : "test9.com", "hosting" : "cloud.google.com" })
        db.things.save({ "_id" : 10, "domainName" : "test10.com", "hosting" : "godaddy.com" })

    Query used:

        db.things.find( { $where: "this.domainName == 'test11.com'" } );

    Read the article
