Search Results

Search found 40435 results on 1618 pages for 'organize files and folders'.


  • Creating a patch file from a diff of 2 folders

    - by karatchov
    I made some changes to an open source project without taking the time to create proper patch files. Now the maintainer of the project has released a new version, and among the new and edited files there are a bunch of renamed files. What is the best way to apply my changes to the new version? I'm completely new to diff/patch use, and if I can get it done with git, all the better.
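
    One way to let git handle the renames for you is to recreate the history in a throwaway repository, rebase your work onto the new release, and export patch files from there. A minimal sketch, assuming ../upstream-1.0 is the pristine source you started from, ../mywork is your modified copy, and ../upstream-1.1 is the new release (all three directory names are placeholders):

        mkdir work && cd work && git init          # older git names the default branch master; newer git may use main
        cp -a ../upstream-1.0/. . && git add -A && git commit -m "upstream 1.0"
        git checkout -b mywork
        rsync -a --delete --exclude=.git ../mywork/ . && git add -A && git commit -m "my changes"
        git checkout master
        rsync -a --delete --exclude=.git ../upstream-1.1/ . && git add -A && git commit -m "upstream 1.1"
        git checkout mywork && git rebase master   # git detects the renames while replaying your changes
        git format-patch master                    # writes clean patch files, one per commit

    The one-shot alternative is diff -ruN old/ new/ > my.patch (or git diff --no-index), but a plain patch will not follow files the maintainer renamed, which is exactly the problem described above.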

    Read the article

  • How to organize RMI Client-Server eBanking architecture

    - by xenom
    I am developing a secured eBanking service in RMI with a GUI for both the Server and the Client. The Server must be able to log every operation (new User, deleted User, Withdrawal, Lodgement...). The Client performs these operations. As everything is secured, the Client must first create an account with a name and a password in the GUI. After that, the GUI adds the User to the Bank UserList (an ArrayList) as a new Customer, and the User can perform several operations. It seems straightforward at first, but I think my design is not correct. Is it correct to send the whole Bank by RMI? At first I thought the Bank would be the server, but I cannot find another way to do it. Currently, the Client GUI asks for a login and a password, and receives the Bank by RMI. A User is characterized by a name and a hash of the password: private String name; private byte[] passwordDigest; In fact the GUI is doing all the security checking, and I don't know if that is appropriate. When you type login//password, it searches for the login in the Bank and compares the hash of the password. I have the impression that the Client knows too much, because when you have the Bank you have everything. Does this seem correct, or do I need to change my implementation?
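
    A common way to avoid shipping the whole Bank to the client is to keep the Bank object on the server and expose only a narrow remote interface that the GUI calls. A minimal Java sketch (the interface and method names here are hypothetical, not part of the question):

        import java.rmi.Remote;
        import java.rmi.RemoteException;

        // The Bank, its UserList and the password digests all stay on the server.
        // The client only ever holds this stub plus an opaque session token.
        public interface BankService extends Remote {
            String login(String name, String password) throws RemoteException;   // returns a session token
            void createAccount(String name, String password) throws RemoteException;
            void withdraw(String session, double amount) throws RemoteException;
            void lodge(String session, double amount) throws RemoteException;
            double getBalance(String session) throws RemoteException;
        }

    The server-side implementation (typically extending UnicastRemoteObject) would do the hash comparison itself, keep the UserList private, and log each operation; the client GUI then only validates input and displays results, so a compromised client never holds the whole Bank.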

    Read the article

  • Renaming Files & Folders in Windows 7

    - by Mart
    This has been happening to me for some time now. I'm currently running Windows 7 RTM. When I try to rename a newly created folder (usually created by copying), I usually get a "Folder is in use" error. I have to wait a while before I am able to modify the folder name. Files behave the same way. I am thinking this is because I am sharing the folder, or library, via HomeGroup sharing. Is anyone else running Windows 7 RTM experiencing this as well? EDIT 1: I'm using Windows 7 Ultimate.

    Read the article

  • FLEX, Tile container: how to better organize the children

    - by Patrick
    Hi, I'm using a Tile container for my LinkButtons. I would like to know: 1) how can I remove the space between the items in my Tile container; 2) how can I set a dynamic width for my items (at the moment they all have the same width, regardless of the width of the included component); 3) how can I avoid displaying scrollbars when the items do not fit inside the container? Thanks

    Read the article

  • How to organize classes in Ruby if they are literal subclasses

    - by RetroNoodle
    I know the title doesn't make sense, I'm sorry! It's hard to word what I am trying to ask; I had trouble googling it for the same reason. This isn't even Ruby specific, but I am working in Ruby and I am new to it, so bear with me. Say you have a class that is a document. Inside each document you have sentences, and each sentence has words. Words will have properties, like "noun" or a count of how many times they are used in the document, etc. I would like each of these elements (document, sentence, word) to be an object. Now, if you think literally, sentences are in documents, and words are in sentences. Should this be organized literally like that as well? That is, inside the document class you would define and instantiate the sentence objects, and inside the sentence class you would define and instantiate the words? Or should everything be separate and reference each other, so the word class would sit outside the sentence class, but the sentence class would be able to instantiate and work with words? This is a basic OOP question, I guess, and I suppose you could argue it either way. What do you think? Each sentence in the document could be stored in a hash of sentence objects inside the document object, and each word in the sentence could be stored in a hash of word objects inside the sentence. I don't want to code myself into a corner here, that's why I am asking; plus I have wondered this before in other situations. Thank you!
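
    For what it's worth, the usual Ruby arrangement is the second option: three separate top-level classes that compose each other at runtime rather than nesting the class definitions. A rough sketch (the names and the naive splitting are purely for illustration):

        class Word
          attr_reader :text
          attr_accessor :part_of_speech, :count

          def initialize(text)
            @text  = text
            @count = 0
          end
        end

        class Sentence
          attr_reader :words   # hash of word text => Word object

          def initialize(text)
            @words = {}
            text.split.each { |w| (@words[w] ||= Word.new(w)).count += 1 }
          end
        end

        class Document
          attr_reader :sentences

          def initialize(text)
            # naive sentence split, just for illustration
            @sentences = text.split(/(?<=[.!?])\s+/).map { |s| Sentence.new(s) }
          end
        end

        doc = Document.new("Words live in sentences. Sentences live in documents.")
        p doc.sentences.first.words.keys

    Nesting the definitions (class Sentence inside class Document) only namespaces the constant as Document::Sentence; it does not express containment, so keeping the classes flat and letting the objects hold references is generally easier to test and reuse.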

    Read the article

  • How to insert large files in mysql database using php? [closed]

    - by anjan
    Hi! I want to upload a large file, 10 MB max, to my MySQL database. Using .htaccess I changed PHP's own file upload limit to "10485760" (10 MB), and I am able to upload files up to 10 MB in size without any problem. But I cannot insert the file into the database if it is more than 1 MB in size. I am using file_get_contents to read all the file data and pass it to the insert query as a string, to be inserted into a LONGBLOB field. Files larger than 1 MB are not added to the database, though I can use print_r($_FILES) to confirm that the file uploaded correctly. Any help will be appreciated, and I will need it within the next 6 hours. So, please help! best regards, Anjan * This is a duplicate of http://stackoverflow.com/questions/492549/how-can-i-insert-large-files-in-mysql-db-using-php *
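
    A 1 MB ceiling like this is usually MySQL's max_allowed_packet (which defaults to 1 MB on older servers) rejecting the oversized INSERT, not PHP. Two hedged options: raise max_allowed_packet in my.cnf, or stream the blob in chunks with a prepared statement so no single packet exceeds the limit. A sketch using mysqli (the table and column names are made up):

        <?php
        // hypothetical table: CREATE TABLE uploads (id INT AUTO_INCREMENT PRIMARY KEY, data LONGBLOB)
        $db   = new mysqli('localhost', 'user', 'pass', 'mydb');
        $stmt = $db->prepare('INSERT INTO uploads (data) VALUES (?)');

        $null = null;
        $stmt->bind_param('b', $null);              // 'b' = blob, value supplied later in pieces

        $fp = fopen($_FILES['userfile']['tmp_name'], 'rb');
        while (!feof($fp)) {
            // keep each chunk well below the server's max_allowed_packet
            $stmt->send_long_data(0, fread($fp, 512 * 1024));
        }
        fclose($fp);

        $stmt->execute();
        $stmt->close();

    Even with chunked sends, the server may still need max_allowed_packet raised to at least the full blob size, so checking SHOW VARIABLES LIKE 'max_allowed_packet' first is worth doing.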

    Read the article

  • Entire folders deleted from My Documents periodically with remote logins

    - by darron
    I've got a customer who thinks our application is constantly deleting all its data. It's really becoming a major problem for them. The problem is, there's no way it's us. They are losing not only our entire data folder (which we locate in the user's "My Documents" folder to make it easy to find), but also some local settings files which are in entirely different places within the general user profile. It REALLY looks like the entire user profile is either getting reset, or is somehow synchronizing with a more... blank profile somewhere else. They're running this on some kind of virtualized Citrix guest OS. I see references to "group policy folder redirection" that could do this... maybe roaming profiles? Any ideas? Help!

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I feel that his comment is worth a big article in itself. For everybody to read his wonderful explanation, I am posting it here. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say that shrinking a database is bad and evil, here I am saying it first and loud. Now, Imran's comment is written keeping in mind only the process of how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    Comments from Imran - Before I explain the concept of Shrink Database, let us understand the concept of database files. When we create a new database inside SQL Server, SQL Server typically creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same .LDF extension. The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?" Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains the Database Catalogs. Catalogs mean Meta Data. Meta Data is "data about data". Examples of Meta Data include the system objects that store information about other objects, except the data stored by the users: sysobjects stores information about all objects in that database; sysindexes stores information about all indexes and rows of every table in that database; syscolumns stores information about all columns that each table has in that database; sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it is system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create the object under the Primary File Group.

    Any additional data file that you add to the database will have only transactional data but no Meta Data, which is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-life scenario where we use files could be this one: let's say you have created a database called MYDB on the D drive, which has 50 GB of space. You also have 1 database file (.MDF) and 1 log file on the D drive, and suppose that all of that 50 GB has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise.

    Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology of naming this feature "Shrinking". Shrinking, in regular terms, means reducing the size of a file by compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from the database files and releasing that empty space either to the Operating System or to SQL Server. Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then MIND YOU, you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: 1. There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. 2. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database.

    Now answering your questions: (1) Q: What are the USEDPAGES and ESTIMATEDPAGES that appear in the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)? A: According to Books Online (for SQL Server 2000): UsedPages is the number of 8-KB pages currently used by the file; EstimatedPages is the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important note: before asking any question, make sure you go through Books Online or search on Google first. Doing so has many advantages: 1. If someone else has already had this question, the chances that it is already answered are more than 50%. 2. It reduces your waiting time for the answer.

    (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager Console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of using the command from Query Analyzer is that your console won't be frozen; you can carry on with your regular activities in Enterprise Manager.

    (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end users, DBAs or the SERVER/SYSTEM itself? A: The .NDF file is a secondary data file. You have never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database with the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store data saved by the users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com)
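
    For readers who want to try the commands the comment walks through, a short T-SQL sketch (NorthWind is the usual sample database; the logical file name NorthWind_Data is an assumption, check sysfiles or sys.database_files for the real one):

        -- current size and free space of the database
        USE NorthWind;
        EXEC sp_spaceused;

        -- shrink the whole database, leaving 10 percent free space
        DBCC SHRINKDATABASE (NorthWind, 10);

        -- or shrink a single data/log file down to roughly 100 MB
        DBCC SHRINKFILE (NorthWind_Data, 100);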

    Read the article

  • Moving screenshots from 1 folder to another instantly

    - by Frank
    I am hosting a gaming site, and once in a while the server automatically creates a screenshot in the server's folder. Unfortunately, where it puts these screenshots is NOT configurable in the server settings; it just always dumps the PNG file in the server config folder. My folder structure is, for example, as follows: /home/Game/Server1/ Now, what I would like to achieve is that once the server creates a screenshot in one of these server folders (I have multiple), the operating system moves the screenshot IMMEDIATELY (it is ALWAYS a *.png file) to the webserver folder, for example: /var/www/Server1/filename.png so that players can see the screenshot on the website. Anyone have any idea how I can tackle this problem the smartest way? Please note that my ideal situation would be for the PNG file to be moved immediately after creation. Thanks for your help. Frank
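
    One hedged way to get a near-instant move is inotifywait from the inotify-tools package, watching each server folder and moving every PNG as soon as it has been fully written. The paths come from the question; the script itself is just a sketch:

        #!/bin/bash
        # requires inotify-tools (provides inotifywait)
        SRC=/home/Game/Server1
        DST=/var/www/Server1

        inotifywait -m -e close_write --format '%f' "$SRC" | while read -r file; do
            case "$file" in
                *.png) mv "$SRC/$file" "$DST/" ;;
            esac
        done

    Run one copy per server folder (or watch a parent directory with -r) under screen, supervisor or an init script; incron is an alternative if you prefer a crontab-like syntax.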

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700), Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates an "sdcard/DCIM" folder but only stores some thumbnails there. Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises the photos and starts uploading them. Possible solution: could there be an option in the settings to set the preferred photo folder? Problem nr 2: out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size: 1.1 MB files have been uploaded fine whilst some that failed are 0.8 MB. Problem nr 3: the photos from DCIM are, in my case, uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they were synced? After a while of usage, syncing and uploading files from different clients, it becomes a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all. Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily swapped while U1 is running, does U1 consider that as files deleted and also delete them from the cloud?

    Read the article

  • Not able to add PC and synchronise files with Ubuntu One

    - by Ryan Hawthorne
    I have tried to add my PC to my Ubuntu One account 3 or 4 times: while I log in successfully in the Ubuntu One interface (after deleting my Ubuntu One password in 'passwords'), my folders don't give me an option to synchronise. I don't get the pink bar saying 'these folders cannot be synchronised'; I get nothing at all, no option. The second time I tried it seemed to work – but then it stopped working again.

    Read the article

  • "0x80004005: Unspecified error" when renaming folders in Windows 7

    - by Michael Steele
    Setup: I've just installed Windows 7 Ultimate 64-bit. The operating system is installed on an 80 GB Intel X-25M. I have a secondary 500 GB Barracuda 7200.12 to be used for storage. This second drive is mounted as 'G'. Problem: I'm getting an error whenever I try to rename a folder within this drive. The error says: An unexpected error is keeping you from renaming the folder. If you continue to receive this error, you can use the error code to search for help with this problem. Error 0x80004005: Unspecified error Clicking the "Try Again" button gets past the error and correctly renames the folder. If I create a folder and leave its name as "New Folder", then I don't see an error. I'm also able to manipulate files however I want without seeing an error. Things I've tried: changing permissions on the drive; reformatting the drive (I've tried using both a "Basic" and a "Dynamic" partition table); checking the drive for errors using Microsoft tools. Things I've not yet tried: when I first installed Windows I was given a choice between three types of partitions to use. I no longer remember what the choices were or what I selected. One was a GUID partition table, one was a traditional MBR, and I can't remember the third option.

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5 MB each, and I copy them by selecting all the files that I need in a Finder window and then dropping them onto the destination directory. Almost always some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place, the Finder window for the destination directory is updated to show 200 files while it was empty before), but after the copying ends the destination directory shows fewer than 200 files (let's say 190). If I copy the same 200 files to the NAS again, without replacing already copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. Note that the Finder never gives any warning that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder, and/or if there is something I can do to solve it.
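
    As a workaround while the Finder misbehaves, copying from Terminal with rsync both transfers and verifies in one step, and re-running the same command picks up anything that was skipped. A sketch with made-up paths (the NAS share is assumed to be mounted under /Volumes):

        # -a preserves attributes, -v lists what is copied, --checksum compares file contents
        rsync -av --checksum ~/Pictures/export/ "/Volumes/ReadyNAS/photos/"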

    Read the article

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB. I want to duplicate this directory. Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking the checksums after copying important files. In this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with different checksums, it turns out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed the date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to be changed, exactly when it was changed, or why it happened for these particular files, but at least it seems to explain the checksum discrepancy. Method 2: As I want the files duplicated exactly, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test in addition to a thorough file comparison test in FreeFileSync led to two similar yet different results: 31 files fail the checksum test, 23 files fail the file comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered but also some AVIs, MPGs and a large 7-zip file. Closer inspection of a JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-zip file, I have not been able to pin down that discrepancy. Note that in both methods, every file has its correct file size after being copied. Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
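
    To pin down whether a given copy is byte-identical without third-party tools, Windows 7 ships with fc and certutil; a quick sketch for one suspect pair (the paths are examples only):

        :: byte-for-byte comparison of the original and the copy
        fc /b "D:\Photos\IMG_0001.JPG" "E:\Copy\IMG_0001.JPG"

        :: MD5 of each file, for spot checks
        certutil -hashfile "D:\Photos\IMG_0001.JPG" MD5
        certutil -hashfile "E:\Copy\IMG_0001.JPG" MD5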

    Read the article

  • VirtualBox problems writing to shared folders (Guest Additions installed)

    - by vincent
    I am trying to set up a shared folder from the host (Ubuntu 10.10) to mount on a virtualized CentOS 5.5 with Guest Additions (4.0.0) installed (Guest Additions features are working, i.e. seamless mode etc.). I am able to successfully mount the share with: mount -t vboxsf -o rw,exec,uid=48,gid=48 sf_html /var/www/html/ (uid and gid belong to the apache user/group). The only problem is that once it is mounted and I try to write/create directories and files, I get the following: mkdir: cannot create directory `/var/www/html/test': Protocol error I am using the proprietary version of VirtualBox, version 4.0.0 r69151. Has anyone had the same problem and been able to fix it, or does anyone have any idea how to fix this? Another question: the reason for setting this up is this. Our production servers are on CentOS 5.5, however I am a great fan of Ubuntu and would like to develop on Ubuntu rather than CentOS. To stay as close to the production environment as possible, I would like to virtualize CentOS to run the web server and use the shared folder as the web root. Does anyone know whether this is a bad idea, or has anyone successfully been able to set this up? Thanks guys, your help is always much appreciated, and if you need any more information please let me know.

    Read the article

  • Access denied when trying to open files/folders after reinstall [closed]

    - by user711532
    Possible Duplicate: Access Denied when saving a file in Windows 7 I installed Windows 7 fresh on a new machine. Now when I unarchive (with WinRAR or 7z, etc.) to Program Files (x86) (for example), I get access denied. Even if I copy a file to a folder I installed an app to, it is still access denied. I checked the security; it looks like full control is given to the creator. This is weird, as I never ran across it before (same version of Windows 7; it's just a fresh install after some new hardware). It is the same effect as when you edit the hosts file without "run as admin": you will not be able to save it, you will have to save it somewhere else. This "file copy" issue I ran into is the same. I could change all these permissions, however this is something I never had to do before. I am the admin, so why did the install not give me "full control"? How can this be globally fixed? I cannot change the permissions (they are greyed out), which is weird as well. If I were a standard user, it would make sense; however, again, I am the admin.

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files, one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are: Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is: what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
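
    A hedged Java sketch of the whitelist idea (the second option): normalize the title, keep a conservative set of characters, and collapse the rest. The exact character set and the length cap here are assumptions, not a standard:

        import java.text.Normalizer;

        public final class FileNames {
            /** Turn an arbitrary title into a reasonably safe file name (sketch only). */
            public static String toFileName(String title) {
                String s = Normalizer.normalize(title, Normalizer.Form.NFKD)
                                     .replaceAll("\\p{M}+", "");        // drop combining accents
                s = s.replaceAll("[^A-Za-z0-9._ -]+", " ")              // whitelist
                     .trim()
                     .replaceAll("\\s+", "_");                          // spaces to underscores
                if (s.length() > 100) s = s.substring(0, 100);          // keep paths short
                return s.isEmpty() ? "untitled" : s;
            }
        }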

    Read the article

  • Beginner Geek: Scan Files for Viruses Before Using Them

    - by Mysticgeek
    To help avoid getting your computer infected by malicious software, it's a good idea to scan files before executing them. Today we take a look at a couple of options that will let you scan files easily from your desktop. Scan a File with Your Antivirus Software Most Antivirus software will put an option in the context menu so you can scan individual files. After downloading a file or email attachment, simply right-click the file and select the option to scan with your Antivirus software. If you want to scan more than one file at a time, hold down the Ctrl key while you click each file you want to scan, then right-click and select to scan with your Antivirus software. Here is our favorite Antivirus app, Microsoft Security Essentials, scanning a couple of files. If a virus is found, your Antivirus app will delete it or put it in Quarantine so it cannot infect your system. Using VirusTotal Uploader If you want to be very thorough and get a second opinion (actually 41), you might want to check out the VirusTotal Uploader. This handy app will scan your files with 41 different Antivirus apps online. After installing VirusTotal Uploader, right-click the file, go to Send To, then VirusTotal. Alternately, you can launch VirusTotal Uploader and use it to get and upload the file. It will send the file to VirusTotal.com, scan it with 41 different Antivirus apps and show you the results. If you don't want to install the Uploader, you can go to the VirusTotal site and upload a file from there to scan. We've noticed that occasionally there will be a false positive detected on files we know are clean. Sometimes the definition database of an Anti-malware app isn't current, or an obscure Antivirus app will find something questionable. If that is the case, use your best judgment when viewing the results. Conclusion Most Antivirus apps today have real-time scanning and should be able to detect possible infections before you're able to execute them. However, if they don't, or when in doubt, following these tips can save you a lot of headaches in the long run. If you use a lot of different flash drives throughout the day, check out our article on how to scan a thumb drive for viruses from the AutoPlay dialog.

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually passing them as parameters to functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name plus a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually implies I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).

    Read the article

  • How to count differences between two files on Linux?

    - by Zsolt Botykai
    Hi all, I need to work with large files and must find the differences between two of them. I don't need the differing bits, just the number of differences. For the differing rows I came up with diff --suppress-common-lines --speed-large-files -y File1 File2 | wc -l And it works, but is there a better way to do it? And how do I count the exact number of differences (with standard tools like bash, diff, awk, sed, or some old version of perl)? Thanks in advance
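
    Two quick variants, depending on what counts as "a difference"; both are sketches against the same File1/File2 from the question:

        # lines that differ, counted from diff's normal output
        # (a changed line is counted twice, once as '<' and once as '>')
        diff File1 File2 | grep -c '^[<>]'

        # number of differing bytes, for files of equal length
        cmp -l File1 File2 | wc -l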

    Read the article

  • Problem with file names containing spaces and single quotes?

    - by Vijay
    I'm using the following code to create thumbnails with ffmpeg, and it works fine for files which have no spaces or quotes. But when the file has a space (like 'sachin knock.flv') or a quote (like sachin's_double_cent.mp4), it doesn't work. What can I do to get those files to work correctly? One restriction is that I can't rename the files, as there are a lot of them. My code is:

        <?php
        error_reporting(E_ALL);
        extension_loaded('ffmpeg') or die('Error in loading ffmpeg');
        $link = mysql_connect('localhost', 'root', '');
        if (!$link) {
            die('Not connected : ' . mysql_error());
        }
        $db_selected = mysql_select_db('db', $link);
        $max_width  = 120;
        $max_height = 72;
        $path = "/home/rootuser/public_html/temp/";
        $qry  = "select id, input_file, output_file from videos where thumbnail='' or thumbnail is null;";
        $res  = mysql_query($qry);
        while ($row = mysql_fetch_array($res, MYSQL_ASSOC)) {
            $orig_str = array(" ");
            $rep_str  = array("\ ");
            $outfile  = $row['output_file'];   // $infile = $row['input_file'];
            $infile1  = str_replace($orig_str, $rep_str, $outfile);
            $tmp      = explode(".", $infile1);
            $tmp_name = $tmp[0];
            $imgname  = $tmp_name . ".png";
            $srcfile  = "/home/rootuser/public_html/uploaded_vids/" . $outfile;
            echo exec("ffmpeg -i " . $srcfile . " -r 1 -ss 00:00:05 -f image2 -s 120x72 " . $path . $imgname);
            $nname = "./temp/" . $imgname;
            $fileo = fopen($nname, "rb");
            if ($fileo) {
                $imgData = addslashes(file_get_contents($nname));
                echo $imgData;
                $qryy = "update videos set thumbnail='{$imgData}' where input_file='$outfile'";
                $ress = mysql_query($qryy);
            } else {
                echo "Could not open<br><br>";
            }
            unlink($nname);
        } ?>
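
    The usual fix is to let PHP quote the shell arguments with escapeshellarg() instead of escaping spaces by hand; single quotes inside names are handled too. A sketch of just the command-building part, reusing the variables above:

        <?php
        // escapeshellarg() wraps the value in single quotes and escapes embedded
        // quotes, so "sachin knock.flv" and "sachin's_double_cent.mp4" both survive
        $srcfile = "/home/rootuser/public_html/uploaded_vids/" . $outfile;
        $dest    = $path . $imgname;

        $cmd = "ffmpeg -i " . escapeshellarg($srcfile)
             . " -r 1 -ss 00:00:05 -f image2 -s 120x72 "
             . escapeshellarg($dest);
        exec($cmd);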

    Read the article

  • How to combine ASCII text files, then encrypt, then decrypt, and put into a 'File' Class? C++

    - by Joel
    For example, if I have three ASCII files: file1.txt file2.txt file3.txt ...and I wanted to combine them into one encrypted file, database.txt. Then, in the application, I would decrypt database.txt and put each of the original files into a 'File' class on the heap:

        class File {
        public:
            string getContents();
            void setContents(string data);
        private:
            string m_data;
        };

    Is there some way to do this? Thanks
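
    A minimal C++11 sketch of one way to do it, assuming a simple length-prefixed container format and a toy XOR pass standing in for real encryption (for anything serious, use a proper library such as OpenSSL or Crypto++):

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <utility>
        #include <vector>

        class File {                                    // same shape as the class in the question
        public:
            std::string getContents() const { return m_data; }
            void setContents(std::string data) { m_data = std::move(data); }
        private:
            std::string m_data;
        };

        static void xorBuffer(std::string &data, char key) {   // placeholder "encryption"
            for (char &c : data) c ^= key;
        }

        // Pack each input file as "<size>\n<bytes>", concatenate, obfuscate, write out.
        void pack(const std::vector<std::string> &names, const std::string &out, char key) {
            std::ostringstream all;
            for (const auto &name : names) {
                std::ifstream in(name, std::ios::binary);
                std::ostringstream buf;
                buf << in.rdbuf();
                const std::string bytes = buf.str();
                all << bytes.size() << '\n' << bytes;
            }
            std::string blob = all.str();
            xorBuffer(blob, key);
            std::ofstream(out, std::ios::binary) << blob;
        }

        // Read database.txt back, decrypt, and split it into File objects.
        std::vector<File> unpack(const std::string &in, char key) {
            std::ifstream f(in, std::ios::binary);
            std::ostringstream buf;
            buf << f.rdbuf();
            std::string blob = buf.str();
            xorBuffer(blob, key);

            std::vector<File> files;
            std::istringstream s(blob);
            std::size_t size = 0;
            while (s >> size && s.get() == '\n') {
                std::string data(size, '\0');
                s.read(&data[0], static_cast<std::streamsize>(size));
                File item;
                item.setContents(data);
                files.push_back(item);
            }
            return files;
        }

    If the File objects really need to live on the heap, push std::unique_ptr<File> into the vector instead; and note that XOR only obscures the data, it is not encryption.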

    Read the article
