Search Results

Search found 39577 results on 1584 pages for 'temp files'.


  • SQLCe local db in temp- path in connectionstring?

    - by Petr
    Hi, I have a SQL CE database in my app, which is included in the app directory. While debugging it's OK, but when the app is published and run via setup.exe, it reports "file not found" in the temporary directory the app is run from. I would like it to load the database from a standard location, but I don't know how to change it. I am using this connection string: SqlCeConnection connection = new SqlCeConnection("Data Source=database.sdf;Persist Security Info=False;"); When I run setup.exe, the app never starts, stating that the db file was not found in its temporary directory. When I run app.exe directly, it works. I do not understand it... :( EDIT: I can see that in the VS project settings there is a connection string that reads "Data Source=|DataDirectory|\Database.sdf". Should the path be something like the local directory? Thanks!
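    One common approach (a sketch only, not necessarily the fix for this particular setup.exe scenario) is to keep the |DataDirectory| placeholder in the connection string and point it explicitly at the folder the executable actually runs from:

        using System.Data.SqlServerCe;   // assumes the SQL CE provider assembly is referenced

        // Map |DataDirectory| to the application's own folder, then let the
        // provider substitute it in the connection string.
        AppDomain.CurrentDomain.SetData("DataDirectory",
            AppDomain.CurrentDomain.BaseDirectory);
        var connection = new SqlCeConnection(
            @"Data Source=|DataDirectory|\Database.sdf;Persist Security Info=False;");

    The BaseDirectory value here is an assumption; any folder where the deployed .sdf actually lives would do.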

    Read the article

  • trying to prune down a list of files

    - by romunov
    I have a list of files and I'm trying to extract all layer1_*.grd files. Is there a way of doing this in one grep expression? lof <- c("layer1_1.grd", "layer1_1.gri", "layer1_2.grd", "layer1_2.gri", "layer1_3.grd", "layer1_3.gri", "layer1_4.grd", "layer1_4.gri", "layer1_5.grd", "layer1_5.gri", "layer2_1.grd", "layer2_1.gri", "layer2_2.grd", "layer2_2.gri", "layer2_3.grd", "layer2_3.gri", "layer2_4.grd", "layer2_4.gri", "layer2_5.grd", "layer2_5.gri", "layer3_1.grd", "layer3_1.gri", "layer3_2.grd", "layer3_2.gri", "layer3_3.grd", "layer3_3.gri", "layer3_4.grd", "layer3_4.gri", "layer3_5.grd", "layer3_5.gri", "layer4_1.grd", "layer4_1.gri", "layer4_2.grd", "layer4_2.gri", "layer4_3.grd", "layer4_3.gri", "layer4_4.grd", "layer4_4.gri", "layer4_5.grd", "layer4_5.gri") I tried doing this in two steps: list.of.files <- list.files(pattern = c("1_")) list.of.files <- list.of.files[grep(".grd", list.of.files)] Can someone enlighten me how to do this with grep in one step? I naively tried passing list() and c() to grep but, as you can imagine, it doesn't work. list.of.files <- list.files() list.of.files <- list.of.files[grep(list("1_", ".grd"), list.of.files)]
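    Both conditions can be expressed as a single regular expression, so one call is enough. A minimal R sketch (the anchors assume the names look exactly like the ones above):

        # match names that start with "layer1_" and end in ".grd"
        list.of.files <- list.files(pattern = "^layer1_.*\\.grd$")

        # the same pattern applied to an existing character vector
        grep("^layer1_.*\\.grd$", lof, value = TRUE)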

    Read the article

  • how to copy database files from the network access server to Client PC in c#.net?

    - by zoya
    I'm using the code below to copy files from a server PC's database folder. I access that server PC through its IP address, but I get an error and the files are not copied into the folder on my PC (the client PC). This is the code I'm using... can you tell me where I'm wrong? The file path comes from a ListView in my WinForms app. public string RecordingFileCopy(string recordpath,string ipadd) { string strFinalPath; strFinalPath = String.Format("\\{0}'{1}'",ipadd,recordpath); return strFinalPath; } on the button click event.... { try { foreach (ListViewItem item in listView1.Items) { string sourceFile = item.SubItems[5].Text; RecordingFileCopy(sourceFile,"10.0.4.123"); File.Copy(sourceFile, Path.Combine(@"E:\name\MyDir", Path.GetFileName(sourceFile))); } } catch { MessageBox.Show("Files are not copied to folder", _strMsg, MessageBoxButtons.OK, MessageBoxIcon.Error); } }
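    Two things usually bite in code shaped like this: the helper's return value is never used (File.Copy still receives the original local path), and the format string does not produce a valid UNC path. A hedged C# sketch of the usual shape; the share name "records" is made up, since the real share layout on the server is not given in the question:

        using System.IO;

        // Illustrative only: build \\server\share\relative\path and actually use it.
        static string BuildUncPath(string ipAddress, string relativePath)
        {
            return string.Format(@"\\{0}\records\{1}", ipAddress, relativePath.TrimStart('\\'));
        }

        // inside the button click:
        string remotePath = BuildUncPath("10.0.4.123", item.SubItems[5].Text);
        File.Copy(remotePath, Path.Combine(@"E:\name\MyDir", Path.GetFileName(remotePath)), true);

    The server folder also has to be shared and readable by the account running the client app.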

    Read the article

  • SQL with HAVING and temp table not working in Rails

    - by chrisrbailey
    I can't get the following SQL query to work quite right in Rails. It runs, but it fails to do the "HAVING row_number = 1" part, so I'm getting all the records, instead of just the first record from each group. A quick description of the query: it is finding hotel deals with various criteria and, in particular, prioritizing deals that are paid, then picking the one with the highest dealrank. So, if there are paid deal(s), it'll take the highest one of those (by dealrank) first; if there are no paid deals, it takes the highest-dealrank unpaid deal for each hotel. Using MAX(dealrank) or something similar does not work as a way to pick off the first row of each hotel group, which is why I have the enclosing temptable and the creation of the row_number column. Here's the query: SELECT *, @num := if(@hid = hotel_id, @num + 1, 1) as row_number, @hid := hotel_id as dummy FROM ( SELECT hotel_deals.*, affiliates.cpc, (CASE when affiliates.cpc 0 then 1 else 0 end) AS paid FROM hotel_deals INNER JOIN hotels ON hotels.id = hotel_deals.hotel_id LEFT OUTER JOIN affiliates ON affiliates.id = hotel_deals.affiliate_id WHERE ((hotel_deals.percent_savings = 0) AND (hotel_deals.booking_deadline = ?)) GROUP BY hotel_deals.hotel_id, paid DESC, hotel_deals.dealrank ASC) temptable HAVING row_number = 1 I'm currently using Rails' find_by_sql to do this, although I've also tried putting it into a regular find using the :select, :from, and :having parts (but :having won't get used unless you have a :group as well). If there is a different way to write this query, that'd be good to know too. I am using Rails 2.3.5, MySQL 5.0.x.
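    One detail that often makes this exact pattern misbehave is that @num and @hid are never initialised, so they start with whatever the connection last left in them. A hedged sketch of the usual workaround (initialising the variables in a joined derived table; the inner SELECT is reduced to just the columns needed for illustration):

        SELECT d.*,
               @num := IF(@hid = d.hotel_id, @num + 1, 1) AS row_number,
               @hid := d.hotel_id AS dummy
        FROM   (SELECT * FROM hotel_deals ORDER BY hotel_id, dealrank) d
        CROSS JOIN (SELECT @num := 0, @hid := NULL) vars
        HAVING row_number = 1;

    Whether the HAVING clause is applied also depends on the query reaching MySQL verbatim, so it is worth checking the generated SQL in the Rails log.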

    Read the article

  • Temp modification of NHibernate Entities

    - by Marty Trenouth
    Is there a way I can tell NHibernate to ignore any future changes to a set of objects retrieved using it? public ReturnedObject DoIt() { List<MySuperDuperObject> awesomes = repository.GetMyAwesomenesObjects(); var sp = new SuperParent(); BusinessObjectWithoutNHibernateAccess.ProcessThese(i, awesomes,sp) repository.save(sp); return i; } public ReturnedObject FakeIt() { List<MySuperDuperObject> awesomes = repository.GetMyAwesomenesObjects(); var sp = new SuperParent(); // should something go here to tell NHibernate to ignore changes to awesomes and sp? return BusinessObjectWithoutNHibernateAccess.ProcessThese(awesomes,sp) }
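    A hedged sketch of one option: evicting the loaded entities detaches them from the NHibernate session, so later changes to them are no longer tracked or flushed. This assumes the repository exposes (or can expose) its ISession, which is not shown in the question:

        // Illustrative only: "session" is the ISession behind the repository.
        foreach (var awesome in awesomes)
            session.Evict(awesome);   // changes to evicted objects are not persisted on Flush
        session.Evict(sp);

    Whether eviction is appropriate depends on whether anything later in the unit of work still needs those objects attached.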

    Read the article

  • how to use oracle package to get rid of Global Temp table

    - by john
    I have a sample query like below: INSERT INTO my_gtt_1 (fname, lname) (select fname, lname from users) In my effort to get rid of temporary tables I created a package: create or replace package fname_lname AS Type fname_lname_rec_type is record ( fname varchar(10), lname varchar(10) ); fname_lname_rec fname_lname_rec_type Type fname_lname_tbl_type is table of fname_lname_rec_type; function fname_lname_func ( v_fnam in varchar2, v_lname in varchar2 )return fname_lname_tbl_type pipelined; Being new to Oracle, creating this package took a long time, but now I cannot figure out how to get rid of my_gtt_1. How can I say... INSERT INTO <newly created package> (select fnma, name from users)
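    For what it's worth, a pipelined function is not something you INSERT INTO; it is queried as a row source with the TABLE() operator, so the global temporary table simply drops out. A minimal sketch (the two literal arguments are placeholders; a package body that does PIPE ROW and RETURN is assumed to be in place):

        SELECT fname, lname
        FROM   TABLE(fname_lname.fname_lname_func('John', 'Smith'));

    Anywhere the code previously selected from my_gtt_1, it can select from the TABLE(...) expression instead.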

    Read the article

  • How to insert large files in mysql database using php? [closed]

    - by anjan
    Hi! I want to upload a large file of up to 10 MB to my MySQL database. Using .htaccess I changed PHP's own file upload limit to "10485760" = 10 MB, and I am able to upload files up to 10 MB in size without any problem. But I cannot insert the file into the database if it is more than 1 MB in size. I am using file_get_contents to read all the file data and pass it to the insert query as a string to be inserted into a LONGBLOB field. Files larger than 1 MB are not being added to the database, though I can use print_r($_FILES) to verify that the file uploaded correctly. Any help will be appreciated, and I will need it within the next 6 hours. So, please help! Best regards, Anjan * This is a duplicate of http://stackoverflow.com/questions/492549/how-can-i-insert-large-files-in-mysql-db-using-php *
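    A 1 MB cut-off like this usually points at MySQL's max_allowed_packet limit (which defaulted to 1 MB in older server versions) rather than at PHP. A hedged sketch of checking and raising it on the server (requires the appropriate privilege; the 16 MB figure is just an example):

        SHOW VARIABLES LIKE 'max_allowed_packet';
        SET GLOBAL max_allowed_packet = 16777216;   -- 16 MB, applies to new connections

    The same setting can be made permanent with max_allowed_packet=16M in my.cnf / my.ini.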

    Read the article

  • Strange behaviour of CUDA kernel

    - by username_4567
    I'm writing code for calculating prefix sum. Here is my kernel __global__ void prescan(int *indata,int *outdata,int n,long int *sums) { extern __shared__ int temp[]; int tid=threadIdx.x; int offset=1,start_id,end_id; int *global_sum=&temp[n+2]; if(tid==0) { temp[n]=blockDim.x*blockIdx.x; temp[n+1]=blockDim.x*(blockIdx.x+1)-1; start_id=temp[n]; end_id=temp[n+1]; //cuPrintf("Value of start %d and end %d\n",start_id,end_id); } __syncthreads(); start_id=temp[n]; end_id=temp[n+1]; temp[tid]=indata[start_id+tid]; temp[tid+1]=indata[start_id+tid+1]; for(int d=n>>1;d>0;d>>=1) { __syncthreads(); if(tid<d) { int ai=offset*(2*tid+1)-1; int bi=offset*(2*tid+2)-1; temp[bi]+=temp[ai]; } offset*=2; } if(tid==0) { sums[blockIdx.x]=temp[n-1]; temp[n-1]=0; cuPrintf("sums %d\n",sums[blockIdx.x]); } for(int d=1;d<n;d*=2) { offset>>=1; __syncthreads(); if(tid<d) { int ai=offset*(2*tid+1)-1; int bi=offset*(2*tid+2)-1; int t=temp[ai]; temp[ai]=temp[bi]; temp[bi]+=t; } } __syncthreads(); if(tid==0) { outdata[start_id]=0; } __threadfence_block(); __syncthreads(); outdata[start_id+tid]=temp[tid]; outdata[start_id+tid+1]=temp[tid+1]; __syncthreads(); if(tid==0) { temp[0]=0; outdata[start_id]=0; } __threadfence_block(); __syncthreads(); if(blockIdx.x==0 && threadIdx.x==0) { for(int i=1;i<gridDim.x;i++) { sums[i]=sums[i]+sums[i-1]; } } __syncthreads(); __threadfence(); if(blockIdx.x==0 && threadIdx.x==0) { for(int i=0;i<gridDim.x;i++) { cuPrintf("****sums[%d]=%d ",i,sums[i]); } } __syncthreads(); __threadfence(); if(blockIdx.x!=gridDim.x-1) { int tid=(blockIdx.x+1)*blockDim.x+threadIdx.x; if(threadIdx.x==0) cuPrintf("Adding %d \n",sums[blockIdx.x]); outdata[tid]+=sums[blockIdx.x]; } __syncthreads(); } In above kernel, sums array will accumulate prefix sum per block and and then first thread will calculate prefix sum of this sum array. Now if I print this sum array from device side it'll show correct results while in cuPrintf("Adding %d \n",sums[blockIdx.x]); this line it prints that it is taking old value. What could be the reason?
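    For what it's worth, one likely explanation is that __syncthreads() and __threadfence() only synchronise or order memory within the running grid's blocks; they do not make other blocks wait for block 0 to finish scanning sums[], so those blocks can read stale values. The usual structure (a sketch only; names are illustrative, not from the original code) splits the work into separate kernel launches, because a kernel launch boundary is the only grid-wide synchronisation point:

        // 1) prescan<<<blocks, threads, shmem>>>(indata, outdata, n, sums);  // per-block scan, per-block totals in sums
        // 2) scan the "sums" array (tiny single-block kernel, or on the host)
        // 3) add each block's offset in a third launch:
        __global__ void add_block_offsets(int *outdata, const long int *sums)
        {
            if (blockIdx.x == 0) return;                      // block 0 needs no offset
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            outdata[i] += (int)sums[blockIdx.x - 1];          // sums holds inclusive per-block prefix totals
        }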

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I have written the following blog post that talks about the advantages and disadvantages of Shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article in itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking a Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking a Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with an .MDF Extension, and another with an .LDF Extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called the Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called a secondary data file?” and “Why is the .MDF file called the primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. Examples of Meta Data include the system objects that store information about other objects (as opposed to the data stored by users). sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (this file is present in the Primary File Group), by mentioning that you want to create this object under the Primary File Group.
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files that are under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-time scenario where we use Files could be this one: Let’s say you have created a database called MYDB on the D-Drive which has 50 GB of space. You also have 1 Database File (.MDF) and 1 Log File on the D-Drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove empty space from database files and release the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has a free space of about 20 GB, which means 30 GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then, MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (you don’t have extra disk space to set the Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using the DBCC SHRINKDATABASE (NorthWind, 10)?
A: According to Books Online (For SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or do a quick search on Google first. Doing so has many advantages: 1. If someone else has already had this question, the chances that it is already answered are more than 50%. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between Shrinking the Database using a DBCC command like the one above & shrinking it from the Enterprise Manager Console by Right-Clicking the database, going to TASKS & then selecting the SHRINK Option, on a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using the command from Query Analyzer is that your console won’t freeze, so you can carry on with your regular activities in Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You may never have heard of it because when a database is created, SQL Server creates it by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store data saved by users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
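For readers who want the commands referred to above in one place, a minimal T-SQL illustration (MYDB and the logical file name MYDB_Data are placeholders; SHRINKFILE takes a target size in MB, SHRINKDATABASE a target per cent of free space to leave):

        USE MYDB;
        DBCC SHRINKDATABASE (MYDB, 10);          -- leave 10% free space after shrinking
        DBCC SHRINKFILE (MYDB_Data, 30000);      -- shrink one data file towards 30,000 MB

As the post stresses, shrinking fragments indexes and usually degrades performance, so this is a diagnostic or one-off tool rather than routine maintenance.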

    Read the article

  • Customize the Five Windows Folder Templates

    - by Mark Virtue
    Are you particular about the way Windows Explorer presents each folder’s contents? Here we show you how to take advantage of Explorer’s built-in templates, which cuts down the time it takes to do customizations. Note: The techniques in this article apply to Windows XP, Vista, and Windows 7. When opening a folder for the first time in Windows Explorer, we are presented with a standard default view of the files and folders in that folder. It may be that the way the items are presented is perfectly fine, but on the other hand, we may want to customize the view.  The aspects of it that we can customize are the following: The display type (list view, details, tiles, thumbnails, etc) Which columns are displayed, and in which order The widths of the visible columns The order in which the files and folders are sorted Any file groupings Thankfully, Windows offers us a shortcut.  A particular folder’s settings can be used as a “template” for other, similar folders.  In fact, we can store up to five separate sets of folder presentation configurations.  Once we save the settings for a particular template, that template can then be applied to other folders. Customize Your First Folder We’ll start by setting up the first of our templates – the default one.  Once we create this template and apply it, the vast majority of the folders in our file system will change to match it, so it’s important that we set it up very carefully.  The first step in creating and applying the template is to customize one folder with the settings that all the rest will have. Choose a folder that is typical of the folders that you wish to have this default template.  Select it in Windows Explorer.  To ensure that it is a suitable candidate, right-click the folder name and select Properties, then go to the Customize tab.  Ensure that this folder is marked as General Items.  If it is not, either choose a different folder or select General Items from the list. Click OK.  Now we’re ready to customize our first folder. Changing the way one single folder is presented is straightforward.  We start with the folder’s display type.  Click the Change your view button in the top-right corner of every Explorer window. Each time you click the button, the folder’s view cycles to the next view type.  Alternatively you can click the little down-arrow next to the button to see all the display types at once, and select the one you want. Click the view you want, or drag the slider next to the one you want. If you have chosen Details, then the next thing you may wish to change is which columns are displayed, and the order of these.  To choose which columns are displayed, simply right-click on any column heading.  A list of the columns currently being displayed appears. Simply uncheck a column if you don’t want it displayed, and check the columns that you want displayed.  If you want some information displayed about your files that is not listed here, then click the More… button for a full list of file attributes. There’s a lot of them! To change the order of the columns that are currently being displayed, simply click on a column heading and drag it to where you think it should be.  To change the width of a column, click the line that represents the right-hand edge of the column and drag it left or right. To sort by a column, click once on that column.  To reverse the sort-order, click that same column again. 
To change the groupings of the files in the folder, right-click in a blank area of the folder, select Group by, and select the appropriate column. Apply This Default Template to All Similar Folders Once you have the folder exactly the way you want it, we now use this folder as our default template for most of the folders in our file system.  To do this, ensure that you are still in the folder you just customized, and then, from the Organize menu in Explorer, click on Folder and search options. Then select the View tab and click the Apply to Folders button. After you’ve clicked OK, visit some of the other folders in your file system.  You should see that most have taken on these new settings. What we’ve just done, in effect, is we have customized the General Items template.  This is one of five templates that Windows Explorer uses to display folder contents.  The five templates are called (in Windows 7): General Items Documents Pictures Music Videos When a folder is opened, Windows Explorer examines the contents to see if it can automatically determine which folder template to use to display the folder contents.  If it is not obvious that the folder contents fall into any of the last four templates, then Windows Explorer chooses the General Items template.  That’s why most of the folders in your file system are shown using the General Items template. Changing the Other Four Templates If you want to adjust the other four templates, the process is very similar to what we’ve just done.  If you wanted to change the “Music” template, for example, the steps would be as follows: Select a folder that contains music items Apply the existing Music template to the folder (even if it doesn’t look like you want it to) Customize the folder to your personal preferences Apply the new template to all “Music” folders A fifth step would be:  When you open a folder that contains music items but is not automatically displayed using the Music template, you manually select the Music template for that folder. First, select a folder that contains music items.  It will probably be displayed using the existing Music template: Next, ensure that it is using the Music template.  If it’s not, then manually select the Music template. Next, customize the folder to suit your personal preferences (here we’ve added a couple of columns, and sorted by Artist). Now we can set this view to be our Music template.  Choose Organize, then the View tab, and click the Apply to Folders button. Note: The only folders that will inherit these settings are the ones that are currently (or will soon be) using the Music template. Now, if you have any folder that contains music items, and you want it to inherit all of these settings, then right-click the folder name, choose Properties, and select that this folder should use the Music template.  You can also check the box entitled Also apply this template to all subfolders if you want to save yourself even more time with all the sub-folders. Conclusion It’s neat to be able to set up templates for your folder views like this.  It’s a shame that Microsoft didn’t take the concept just a little further and allow you to create as many templates as you want. 

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700) Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates an "sdcard/DCIM" folder but only stores some thumbnails there. Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the "DCIM" folder, U1 recognises the photos and starts uploading them. Possible solution: Could there be an option in the settings to set a preferred photo folder? Problem nr 2: Out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Pressing the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size: 1.1 MB files have been uploaded fine whilst some that failed are 0.8 MB. Problem nr 3: The photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they are synced? After a while of usage, syncing and uploading files from different clients, it would be a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all. Another question: If my SD card in some way breaks down/some folders cannot be read/the card is temporarily changed while U1 is running, does U1 consider that as files being deleted and also delete them from the cloud?

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my Macbook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5MB each and I copy them by selecting in a Finder window all the files that I need and then dropping them onto the destination directory. Almost always some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place the Finder window for the destination directory is updated to show 200 files while it was empty before), but after the copying ends the destination directory shows fewer than 200 files (let's say 190). If I copy the same 200 files to the NAS again, without replacing already copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. Note that at no stage does the Finder give any warning that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder and/or if there is something that I can do to solve this problem.

    Read the article

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB. I want to duplicate this directory. Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking the checksums after copying important files. In this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with different checksums, it turns out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed the date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to be changed, exactly when it was changed, and why for these particular files, but at least it seems to explain the checksum discrepancy. Method 2: As I want the exact files to be duplicated, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test in addition to a thorough file comparison test in FreeFileSync led to two similar yet different results: 31 files fail the checksum test, 23 files fail the file comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered but also some AVIs, MPGs and a large 7-zip file. Closer inspection of a JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-zip file, I have not been able to pin down the discrepancy. Note that in both methods, every file has its correct file size after being copied. Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.

    Read the article

  • Organizing automatically Windows Files and Folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very "hard". For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Some files, on the other hand, are there to stay, and I used to move them out of the download folder manually in the past. Other files in folders on my computer: lots of source code, tests, programs, tools, ... I need technology to organize billions of files. Which are the best tools to automatically organize, sort, etc. your files and folders? Digital Janitor http://davidevitelaru.com/software/digital-janitor/ Belverede http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders Download Mover http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188 File/Folder Date Organizer http://seedling.dcmembers.com/other/ffdorg.zip DropIt http://www.lupopensuite.com/db/dropit.htm Other questions about organizing files, the desktop, etc.: How to automate the process of organizing audio files on Windows Organizing My Windows Desktop What's a good way for organizing PDF documents on Windows? Folksonomy tagging for files What is your method of “folksonomy” tagging for files on your local machine?

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are: Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is, what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
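    For the whitelist idea, a minimal Java sketch of what that usually looks like (the character class, the underscore replacement and the length cap are arbitrary choices for illustration, not a standard):

        // Keep letters, digits, dot, dash and underscore; collapse everything else to "_".
        static String toSafeFileName(String title, int maxLength) {
            String cleaned = title.replaceAll("[^A-Za-z0-9._-]+", "_");
            return cleaned.length() > maxLength ? cleaned.substring(0, maxLength) : cleaned;
        }

    A common variant appends a short hash of the original title so that sanitised names that collide stay unique.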

    Read the article

  • Beginner Geek: Scan Files for Viruses Before Using Them

    - by Mysticgeek
    To help avoid getting your computer infected by malicious software, it’s a good idea to scan files before executing them. Today we take a look at a couple of options that will let you scan files easily from your desktop. Scan File with Your Antivirus Software Most Antivirus software will put an option in the context menu so you can scan individual files. After downloading a file or email attachment, simply right-click the file and select the option to scan with your Antivirus software. If you want to scan more than one at a time, hold down the Ctrl key while clicking each file you want to scan. Then right-click and select to scan with your Antivirus software. Here is our favorite Antivirus app, Microsoft Security Essentials, scanning a couple of files. If a virus is found, your Antivirus app will delete it or put it in Quarantine so it cannot infect your system. Using VirusTotal Uploader If you want to be very thorough and get a second opinion (actually 41 of them), check out the VirusTotal Uploader. This handy app will scan your files with 41 different Antivirus apps online. After installing VirusTotal Uploader, right-click the file, go to Send To, then VirusTotal. Alternatively, you can launch VirusTotal Uploader and get and upload the file. It will send the file to VirusTotal.com, scan it with 41 different Antivirus apps and show you the results. If you don’t want to install the Uploader, you can go to the VirusTotal site and upload a file from there to scan. We’ve noticed that occasionally there will be a false positive detected on files we know are clean. Sometimes the definition database of an Anti-malware app isn’t current, or an obscure Antivirus App will find something questionable. If that is the case, use your best judgment when viewing the results. Conclusion Most Antivirus apps today have real-time scanning and should be able to detect possible infections before you’re able to execute them. However, if they don’t, or when in doubt, following these tips can save you a lot of headaches in the long run. If you use a lot of different flash drives throughout the day, check out our article on how to scan a thumb drive for viruses from the AutoPlay Dialog. Download Microsoft Security Essentials Download VirusTotal Uploader VirusTotal Website

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc... I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually using them as parameters for functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name and a file path together? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI because that usually means I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).

    Read the article

  • How to count differences between two files on linux?

    - by Zsolt Botykai
    Hi all, I need to work with large files and must find the differences between two of them. I don't need the differing bits themselves, just the number of differences. For the differing rows I came up with diff --suppress-common-lines --speed-large-files -y File1 File2 | wc -l And it works, but is there a better way to do it? And how do I count the exact number of differences (with standard tools like bash, diff, awk, sed, or some old version of perl)? Thanks in advance
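    A couple of hedged variants, depending on what "number of differences" should mean here (differing lines versus differing bytes):

        # count lines that differ (each added, removed or changed line, per side)
        diff File1 File2 | grep -c '^[<>]'

        # count differing bytes: cmp -l prints one line per byte that differs
        cmp -l File1 File2 | wc -l

    For very large files, cmp is usually the cheaper of the two since it does no line matching.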

    Read the article

  • Concatenation of files using ffmpeg does not work as I expected. Why?

    - by Peter Olson
    I execute the following command line: ffmpeg.exe -i C:\Beema\video-source\DO_U_BEEMA176x144short.avi -i C:\Beema\video-source\DO_U_BEEMA176x144short.avi -i C:\Beema\temp\9016730-51056331-stitcheds.avi -i C:\Beema\video-source\GOTTA_BEEMA176x144short.avi -y -ac 1 -r 24 -b 25K C:\Beema\video-out\9a062fb6-d448-48fe-b006-a85d51adf8a1.mpg The output file in video-out ends up containing only a single copy of DO_U_BEEMA. I do not understand why ffmpeg is not concatenating. Any help is greatly appreciated.
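    For what it's worth, passing several -i inputs does not concatenate them; without an explicit -map or a concat step, ffmpeg just picks streams from one input, which matches the single copy seen here. A hedged sketch of two usual approaches (file names shortened; exact options depend on the ffmpeg build):

        REM newer builds: concat demuxer; list.txt contains lines like  file 'DO_U_BEEMA176x144short.avi'
        ffmpeg -f concat -i list.txt -ac 1 -r 24 -b 25K out.mpg

        REM MPEG-PS specific: transcode each piece to .mpg first, then join with the concat: protocol
        ffmpeg -i "concat:part1.mpg|part2.mpg|part3.mpg" -c copy joined.mpg

    The concat filter is another option when the inputs need re-encoding.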

    Read the article

  • Problem with file names containing spaces and single quotes?

    - by Vijay
    I'm using the following code to create thumbnails using ffmpeg, and it works fine for files that have no spaces or quotes in their names. But when a file has a space (like 'sachin knock.flv') or a quote (like sachin's_double_cent.mp4) it doesn't work. What can I do to get those files to work correctly? One restriction is that I can't rename the files, as there are a large number of them. My code is <?php error_reporting(E_ALL); extension_loaded('ffmpeg') or die('Error in loading ffmpeg'); $link = mysql_connect('localhost', 'root', ''); if (!$link) { die('Not connected : ' . mysql_error()); } $db_selected = mysql_select_db('db', $link); $max_width = 120; $max_height = 72; $path ="/home/rootuser/public_html/temp/"; $qry="select id, input_file, output_file from videos where thumbnail='' or thumbnail is null;"; $res=mysql_query($qry); while($row = mysql_fetch_array($res,MYSQL_ASSOC)) { $orig_str = array(" "); $rep_str = array("\ "); $outfile = $row[output_file]; // $infile = $row[input_file]; $infile1 = str_replace($orig_str, $rep_str, $outfile); $tmp = explode(".",$infile1); $tmp_name = $tmp[0]; $imgname = $tmp_name.".png"; $srcfile = "/home/rootuser/public_html/uploaded_vids/".$outfile; echo exec("ffmpeg -i ".$srcfile." -r 1 -ss 00:00:05 -f image2 -s 120x72 ".$path.$imgname); $nname = "./temp/".$imgname; $fileo = fopen($nname,"rb"); if($fileo) { $imgData = addslashes(file_get_contents($nname)); echo $imgdata; $qryy="update videos set thumbnail='{$imgData}' where input_file='$outfile'"; $ress=mysql_query($qryy); } else echo "Could not open<br><br>"; unlink('$nname'); } ?>
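    The usual approach is to let PHP quote the shell arguments instead of hand-replacing spaces: escapeshellarg() wraps a string in single quotes and escapes any embedded quotes, so paths with spaces or apostrophes survive the shell. A hedged sketch of just the command construction (variable names as in the code above):

        <?php
        $srcfile = "/home/rootuser/public_html/uploaded_vids/" . $outfile;
        $target  = $path . $imgname;
        $cmd = "ffmpeg -i " . escapeshellarg($srcfile)
             . " -r 1 -ss 00:00:05 -f image2 -s 120x72 "
             . escapeshellarg($target);
        echo exec($cmd);
        ?>

    The same unmodified values (no backslash substitution) should then be used when building the local $nname path for fopen()/unlink().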

    Read the article

  • How to combine ASCII text files, then encrypt, then decrypt, and put into a 'File' Class? C++

    - by Joel
    For example, if I have three ASCII files: file1.txt file2.txt file3.txt ...and I wanted to combine them into one encrypted file: database.txt Then in the application I would decrypt the database.txt and put each of the original files into a 'File' class on the heap: class File{ public: string getContents(); void setContents(string data); private: string m_data; }; Is there some way to do this? Thanks
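    One way this is often done, sketched under a couple of loud assumptions: the container layout (a 4-byte length prefix per file) is made up for illustration, the XOR step is a stand-in for real encryption (use a vetted crypto library in practice), and the File class is assumed to be the one declared above with setContents() implemented.

        #include <cstdint>
        #include <cstring>
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        static void xorBytes(std::string& data, char key = 0x5A) {   // toy "encryption" only
            for (char& c : data) c ^= key;
        }

        // Pack files as [uint32 length][bytes] records, then XOR the whole buffer.
        void pack(const std::vector<std::string>& names, const std::string& outPath) {
            std::ostringstream buf;
            for (const auto& name : names) {
                std::ifstream in(name, std::ios::binary);
                std::ostringstream one;
                one << in.rdbuf();
                std::string data = one.str();
                uint32_t len = static_cast<uint32_t>(data.size());
                buf.write(reinterpret_cast<const char*>(&len), sizeof len);
                buf.write(data.data(), data.size());
            }
            std::string all = buf.str();
            xorBytes(all);
            std::ofstream(outPath, std::ios::binary).write(all.data(), all.size());
        }

        // Reverse the process and load each record into a File object on the heap.
        std::vector<File*> unpack(const std::string& inPath) {
            std::ifstream in(inPath, std::ios::binary);
            std::ostringstream buf;
            buf << in.rdbuf();
            std::string all = buf.str();
            xorBytes(all);                                   // XOR is its own inverse
            std::vector<File*> files;
            size_t pos = 0;
            while (pos + sizeof(uint32_t) <= all.size()) {
                uint32_t len = 0;
                std::memcpy(&len, all.data() + pos, sizeof len);
                pos += sizeof len;
                File* f = new File();
                f->setContents(all.substr(pos, len));
                pos += len;
                files.push_back(f);
            }
            return files;
        }

    Usage would be pack({"file1.txt", "file2.txt", "file3.txt"}, "database.txt") at build time and unpack("database.txt") in the application.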

    Read the article
