Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

Page 50 of 1640

  • How can I manage my personal notes and code snippet files in one place online? [closed]

    - by user1758043
    Whenever I work on a project, I accumulate notes, diagrams, files, images, and brainstorming ideas that I want to keep. I want to put them in one place so that I can see the history of my work. Is there any tool where I can store all this online? My company is using Confluence, but that's too costly for me. I want something for a single user, but online in the cloud, where I can store notes, code snippets, diagrams and flowcharts, attached files and images, and bookmarked sites.

    Read the article

  • How to Restore Your Files From the Windows.old Folder After Upgrading

    - by Taylor Gibb
    If you have ever upgraded your Windows installation without formatting, you have probably come across the Windows.old folder, which houses all the files from your previous installation. Here's how to use it to restore your files.

    Read the article

  • terminal failed to fetch and some index files failed to download

    - by firstson
    My terminal failed to fetch, and some index files failed to download:

        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/Release.gpg  Something wicked happened resolving 'security.ubuntu.com:http' (-5 - No address associated with hostname)
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Please help me solve this problem with my terminal. I'd really appreciate a solution.
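
    The "(-5 - No address associated with hostname)" part points at DNS resolution failing, rather than anything wrong with the archive itself. A minimal diagnostic sketch (the nameserver address below is just an example value):

        # does the hostname resolve at all? getent uses the same resolver libc does
        getent hosts security.ubuntu.com

        # inspect the configured resolvers
        cat /etc/resolv.conf

        # if no nameserver is listed, add a public one (example address) and retry
        echo 'nameserver 8.8.8.8' | sudo tee -a /etc/resolv.conf
        sudo apt-get update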

    Read the article

  • Nautilus copies files but not directories from samba server on OS X

    - by camsolo
    I need to copy files from OS X on a Mac to Ubuntu on a PC. I enabled File Sharing on the Mac and selected the samba option. I was then able to connect to the samba server in Nautilus. Copying files was no problem, but copying directories failed with the error message "is a directory". I can accomplish what I want using samba in the terminal, but it would be more convenient to do this in Nautilus.
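
    For reference, the kind of terminal transfer the poster mentions could look like this with smbclient (the host, share, directory, and user names here are made up; recurse and prompt are toggle commands):

        # pull a directory tree from the Mac's share recursively, without prompting per file
        smbclient //mac-hostname/Share -U username -c 'recurse; prompt; cd SomeDir; mget *'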

    Read the article

  • "t-sql" tool for csv files?

    - by adolf garlic
    Is there a tool that allows you to query files based on the following criteria, and change them? Criteria: creation date; filename (e.g. filetypeA_YYYYMMDD_data2.blah); file content. For example, the workflow could be: get all files created between date x and date y, where the filename contains z and the file itself contains value x under column zzz, then update it to a different value or another column.
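
    There may be no off-the-shelf "T-SQL for files" tool, but PowerShell can express that exact workflow. A rough sketch, assuming CSV files with headers; the dates, names, and values below are placeholders:

        # files whose name contains 'z', created inside the date window
        Get-ChildItem C:\data -Filter '*z*.csv' |
            Where-Object { $_.CreationTime -ge [datetime]'2010-01-01' -and
                           $_.CreationTime -le [datetime]'2010-06-30' } |
            ForEach-Object {
                $rows = Import-Csv $_.FullName
                # update matching rows in the 'zzz' column
                foreach ($row in $rows) {
                    if ($row.zzz -eq 'x') { $row.zzz = 'newValue' }
                }
                $rows | Export-Csv $_.FullName -NoTypeInformation
            }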

    Read the article

  • Using a PowerShell Script to delete old files for SQL Server

    Many clients are using custom stored procedures or third-party tools to back up databases in production environments instead of using database maintenance plans. One of the things you need to do is maintain the number of backup files that exist on disk, so you don't run out of disk space. There are several techniques for deleting old files, but in this tip I show how this can be done using PowerShell.
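
    The tip's full script isn't reproduced in this excerpt, but the core retention logic usually boils down to a few lines. A minimal sketch (the backup path and the seven-day window are assumptions):

        # delete .bak files older than 7 days from the backup folder
        $limit = (Get-Date).AddDays(-7)
        Get-ChildItem -Path 'E:\SQLBackups' -Filter *.bak -Recurse |
            Where-Object { $_.LastWriteTime -lt $limit } |
            Remove-Item -Force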

    Read the article

  • List and Read Text Files

    This sample takes files from a folder (in this case, text files) and lists them in a listbox. The user can then click a particular file and display its text in the textbox to the right of the listbox.
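
    The sample's language isn't stated in this excerpt; a minimal C# WinForms equivalent of the described behaviour might look like this (the folder path is a placeholder):

        // inside the Form constructor, after InitializeComponent(); requires using System.IO;
        // fill the listbox with the folder's text files (full paths)
        listBox1.DataSource = Directory.GetFiles(@"C:\notes", "*.txt");

        // when the user picks a file, show its contents in the textbox
        listBox1.SelectedIndexChanged += (s, e) =>
            textBox1.Text = File.ReadAllText((string)listBox1.SelectedItem);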

    Read the article

  • dun goofed the files in /usr/lib/x86_64-linux-gnu/

    - by tipu
    There was some weird package issue with (in my limited understanding) 32/64-bit libraries, so I went around making symlinks from the filenames my LAMP installation expected to the ones that actually existed. I did this for a number of files in /usr/lib/x86_64-linux-gnu/. However, PHP still ended up not working (separate issue), and now I believe I have a screwed-up lib directory. Is there a way to revert those library files?
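
    One way back, sketched below, is to ask dpkg which package owns each clobbered library and reinstall that package so the real files are laid back down (the library and package names are placeholders):

        # find the package that owns a given library file
        dpkg -S /usr/lib/x86_64-linux-gnu/libexample.so.1

        # reinstall that package to restore its files
        sudo apt-get install --reinstall libexample1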

    Read the article

  • Google Drive SDK: Sharing files and managing permissions

    During this session we'll explain how to use the Google Drive SDK to manage permissions and sharing settings of files. We'll go through the various permission types, roles, and values, and show how to easily embed the Google Drive sharing dialog in your app. From: GoogleDevelopers.
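
    As a taste of what the session covers, inserting a permission with the v2-era Drive API might look roughly like this in Python (the authorized service object, file ID, and email address are all assumptions):

        # assumes an authorized Drive v2 service built with google-api-python-client
        new_permission = {
            'value': 'someone@example.com',  # who to share with
            'type': 'user',                  # user, group, domain, or anyone
            'role': 'writer',                # owner, writer, or reader
        }
        service.permissions().insert(fileId=file_id, body=new_permission).execute()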

    Read the article

  • apt-get -f install removed software center and several other files

    - by user287858
    I ran sudo apt-get -f install, and several files and programs were removed, including Software Center. Is there a way to re-download everything as if Ubuntu were new again, without a CD? This computer does not have a CD-ROM drive. I'd be fine with losing all the data on this computer. Also, when I run sudo apt-get install (almost anything) I get errors about dependencies and files not being available. Thanks to anyone who can help.
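
    Short of reinstalling from a USB stick, a commonly suggested first step is to pull the desktop metapackage back in, which drags the removed pieces with it. A sketch, assuming a stock Ubuntu desktop:

        sudo apt-get update
        # metapackage that depends on the default desktop applications
        sudo apt-get install ubuntu-desktop
        # or just Software Center, if only that is missing
        sudo apt-get install software-center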

    Read the article

  • Check SQL Server Virtual Log Files Using PowerShell

    In a previous tip, Monitor Your SQL Server Virtual Log Files with Policy Based Management, we saw how we can use Policy Based Management to monitor the number of virtual log files (VLFs) in our SQL Server databases. However, even with that, most of the solutions I see online involve creating temporary tables and/or using cursors to get the total number of VLFs in a transaction log file. Is there a much easier solution?
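
    The tip's own answer isn't shown in this excerpt, but the underlying idea can be sketched: DBCC LOGINFO returns one row per VLF, so counting rows gives the VLF count without temp tables or cursors. A rough sketch, assuming the SqlServer (or older SQLPS) module's Invoke-Sqlcmd and a default local instance:

        # DBCC LOGINFO emits one row per virtual log file
        $rows = @(Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'MyDatabase' -Query 'DBCC LOGINFO')
        "MyDatabase has $($rows.Count) virtual log files"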

    Read the article

  • .AVI Files randomly cease to open, other strange errors too

    - by Ben Franchuk
    I recently (a couple of weeks ago) downloaded the complete series of Seinfeld, in varying file types. I watched them in sequence by season and airing date, and all was well. All of the files played fine with my media player of choice (BS Player), and once I had finished, I went on to watch some other TV I had previously downloaded (the U.S. series of The Office), and then some other film and music over the following weeks (keep in mind all of these files are on the same hard drive).

    More recently, I went back to watching Seinfeld. The episodes played as well as they did before, with the exception of a few in Season 7. I have not tested every episode in the season, but on inspection the majority of them have this problem, which is simply that they don't open! BS Player says that the files are either damaged or that the codecs needed to play them are not on my computer; however, I am certain that the codecs ARE installed, and I am pretty sure the files are NOT damaged either. I have played the files with other players too (VLC, Media Player Classic, and Windows Media Player), only to get the same result: they don't open. Seemingly the only way I can tell a damaged file from a non-damaged one is how its icon and details appear in Windows Explorer; the attached screenshots show how Explorer displays a non-damaged file versus a damaged one.

    The most disturbing and confusing part, though, is the last episode in the season. It opens, but not as a video; instead, it opens as a 1 hour, 16 minute, 35 second audio file! The file plays a song for the first 4 or so minutes and is then pretty much silent (except for some extremely quiet noise) until the last minute or so, when a random array of chopped-up sounds and beeping noises plays. I do not recognise the song at the beginning of the file, but by the sound of it, it is a song by the artist "Mr. Oizo", whose complete works I downloaded a couple of weeks ago; a bit before then I had finished downloading Season 9 (not affected by these problems) of Seinfeld. I'd also like to note that the file in question reads as the same size as the other files in the season (around 175 MB) and is also identified as a video clip.

    I have NEVER experienced any of these problems in the past, and they seem to affect only the one season of my downloaded TV. The problems have not arisen with any of the other files on my hard drive, or any of the files downloaded around or after the time I downloaded Season 7 of Seinfeld, at least as far as I've noticed. I use the hard drive these files are on almost every day, so could that be the cause of these problems? Is this a sign that my HDD is soon going to die? If it helps, the HDD is a Western Digital MyBook 1.5 TB, 7500 RPM, connected to the computer via USB 2.0. EDIT: I noticed that this problem is now occurring with Season 9 of Seinfeld as well, and presumably with other files on the drive I have yet to check. Please, if you have ANY idea at all what may be causing this or how to fix it, do tell me!

    Read the article

  • Is there a way to know what the Windows Disk Cleanup utility will delete?

    - by Cam Jackson
    When I run the Disk Cleanup utility that's built into Windows 8, it tells me that it can free up 53 GB by deleting 'Temporary Files'. However, a CCleaner analysis on default settings only finds about 300 MB worth of space to free up, so I'm wondering what Disk Cleanup has found that CCleaner does not. Note that this question appears to be similar to what I'm asking, but the accepted answer says that 'Temporary Files' refers to %TEMP%. I've already cleared out most of C:\Users\Cam\AppData\Local\Temp, and it now has only 230 MB of stuff in it, even with system files showing. So where is this 53 GB located? Is there a way to find out what it is? Edit: I should note that this is on a 110 GB SSD, so it's almost half the drive. In fact I'm only using 86 GB, so if it's really going to clear out 53 GB, that would be more than 60% of the stuff on my C drive. I'm starting to think that Disk Cleanup caches its analysis and hasn't updated since I started cleaning up the drive earlier today, although when I run it, it says that it's 'Calculating' how much space can be saved, and takes about 5-10 seconds to do so. Hmmm... Edit 2: Here is what my hard drive looks like according to SpaceMonger (screenshot attached). You can see why I was starting to think the 53 GB figure is actually wrong: even if 'Temporary Files' includes my hiberfil and everything in WinSxS (about 13 GB total), that would be 26 GB, which is only halfway there. It's hard to see where there's 53 GB of stuff to delete.

    Read the article

  • arrays in puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing the files:

        dir1/
          fileA
          fileB
        dir2/
          fileA
          fileB
          fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to postfix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
          path   => 'dir1',
          ensure => present,
        }

        file { $file_names_2:
          path   => 'dir2',
          ensure => present,
        }

    This won't work because the file resource titles clash. So I need to append e.g. the dir name to the file title; however, this causes the array of file names to be concatenated and not treated as multiple files... arghh:

        file { "${file_names}-dir1":
          path   => 'dir1',
          ensure => present,
        }

        file { "${file_names_2}-dir2":
          path   => 'dir1',
          ensure => present,
        }

    How can I solve this problem without repeating the file resource itself? Thanks
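
    One common pattern (a sketch, not from the question; it assumes the puppetlabs-stdlib module for prefix()) is to wrap the file resource in a defined type, so each directory produces uniquely titled file resources:

        define dir_with_files ($files) {
          # $title is the directory path; manage the directory itself
          file { $title: ensure => directory }

          # build fully qualified paths so the file titles stay unique across dirs
          $paths = prefix($files, "${title}/")
          file { $paths: ensure => present }
        }

        dir_with_files { '/tmp/dir1': files => ['fileA', 'fileB'] }
        dir_with_files { '/tmp/dir2': files => ['fileA', 'fileB', 'fileC'] }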

    Read the article

  • Orca: extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I looked up Orca (version 3.1), whose documentation states: "Many merge module options can be specified from the command line... Extracting Files from a Merge Module: Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree, and extract the files into a source image once it has been merged into a target database... Extracting Files: To extract the individual files from a merge module, use the -x <path> option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module." It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the msm I specified) and then does nothing. I've tried a variety of command lines:

        orca -x theDirectory theModule.msm
        orca theModule.msm -x theDirectory

    ...and others. I get nowhere. The closest I've gotten was this:

        orca -q -x theDirectory -m theModule.msm

    ...but then it complains that I didn't specify a database to merge into. But I'm not trying to merge anything, much less into a database. I just want the files extracted. Can someone explain what I'm doing wrong with the command-line options?

    Read the article

  • Delete duplicate records from a SQL table without a primary key

    - by Shyju
    I have the table below, with the following records in it:

        create table employee
        (
          EmpId   number,
          EmpName varchar2(10),
          EmpSSN  varchar2(11)
        );

        insert into employee values (1, 'Jack',  '555-55-5555');
        insert into employee values (2, 'Joe',   '555-56-5555');
        insert into employee values (3, 'Fred',  '555-57-5555');
        insert into employee values (4, 'Mike',  '555-58-5555');
        insert into employee values (5, 'Cathy', '555-59-5555');
        insert into employee values (6, 'Lisa',  '555-70-5555');
        insert into employee values (1, 'Jack',  '555-55-5555');
        insert into employee values (4, 'Mike',  '555-58-5555');
        insert into employee values (5, 'Cathy', '555-59-5555');
        insert into employee values (6, 'Lisa',  '555-70-5555');
        insert into employee values (5, 'Cathy', '555-59-5555');
        insert into employee values (6, 'Lisa',  '555-70-5555');

    I don't have any primary key on this table, but the above records are already in it. I want to remove the duplicate records that have the same values in the EmpId and EmpSSN fields (e.g. EmpId 5). Can anyone help me frame a query to delete those duplicate records? Thanks in advance
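
    The number and varchar2 types suggest Oracle, where one common approach (a sketch, not from the question) keys on ROWID to keep exactly one row from each duplicate group:

        -- delete every row whose ROWID is not the smallest in its (EmpId, EmpSSN) group
        delete from employee e
         where e.rowid > (select min(e2.rowid)
                            from employee e2
                           where e2.EmpId  = e.EmpId
                             and e2.EmpSSN = e.EmpSSN);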

    Read the article

  • Duplicate a collection of entities and persist in Hibernate/JPA

    - by Michael Bavin
    Hi, I want to duplicate a collection of entities in my database. I retrieve the collection with:

        CategoryHistory chNew = new CategoryHistory();
        CategoryHistory chLast = (CategoryHistory) em.createQuery(
            "SELECT ch FROM CategoryHistory ch WHERE ch.date = MAX(date)").getSingleResult();
        List<Category> categories = chLast.getCategories();
        chNew.addCategories(categories); // should be a copy of the categories: OneToMany

    Now I want to duplicate the list of categories and persist it with EntityManager. I'm using JPA/Hibernate.

    UPDATE: After learning how to detach my entities, I need to know what to detach. Current code:

        CategoryHistory chLast = (CategoryHistory) em.createQuery(
            "SELECT ch FROM CategoryHistory ch WHERE ch.date = (SELECT MAX(date) FROM CategoryHistory)").getSingleResult();
        Set<Category> categories = chLast.getCategories();

        // detach
        org.hibernate.Session session = ((org.hibernate.ejb.EntityManagerImpl) em.getDelegate()).getSession();
        session.evict(chLast); // does this also detach its child entities?

        // set the relations
        chNew.setCategories(categories);
        for (Category category : categories) {
            category.setCategoryHistory(chNew);
        }

        // set the new create date
        chNew.setDate(Calendar.getInstance().getTime());

        // persist
        em.persist(chNew);

    This throws a "failed to lazily initialize a collection of role: entities.CategoryHistory.categories, no session or session was closed" exception. I think it wants to lazy-load the categories again, as I have detached them. What should I do now?
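
    One hedged guess at a fix (not from the thread): force the lazy collection to load before evicting its parent, so the detached object already holds its children in memory. Hibernate.initialize() is the stock way to do that:

        // while the session is still open, populate the lazy collection
        org.hibernate.Hibernate.initialize(chLast.getCategories());

        // only then detach the parent; the categories are now plain objects in memory
        session.evict(chLast);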

    Read the article

  • Exporting from SSRS 2008 ReportViewer to Excel Causes Duplicate Columns

    - by Daniel Coffman
    I have a report that groups months by quarters, so each quarter has three months, and the display of the months under each quarter is toggled by the quarter header. It looks just fine in the ReportViewer, but when exporting to Excel, the first month in the quarter with data is duplicated and appended to the end of the quarter group. Here is what it looks like in the ReportViewer (with Quarters 2 and 4 expanded; note May and June do not have any data and show blank columns by design): http://i.imgur.com/MykZE.png This is how it looks when exported to Excel: http://i.imgur.com/zfLuk.png A collapsed quarter should only show the LAST month in the quarter. You can see that in the Excel export, July is inserted into Q1 even though it should be hidden entirely since that quarter is collapsed, December is appended to Q2, January is inserted into Q3, and April is duplicated and appended to Q4. Exporting to any format OTHER than Excel works correctly and does not insert these columns. A similar bug for rows was filed and marked as "by design": http://connect.microsoft.com/SQLServer/feedback/details/508823/reporting-services-2008-group-by-export-to-excel-duplicate-rows-csv-ok-pdf-ok How do I stop the export-to-Excel feature from inserting these duplicate columns?

    Read the article

  • PHP count total files in directory AND subdirectory function

    - by Neoweiter
    I need to get a total count of JPG files within a specified directory, including ALL of its subdirectories (but no sub-subdirectories). The structure looks like this:

        dir1/
          2 files
          subdir 1/
            8 files
        total dir1 = 10 files

        dir2/
          5 files
          subdir 1/
            2 files
          subdir 2/
            8 files
        total dir2 = 15 files

    I have this function, which doesn't work correctly: it only counts the files in the last subdirectory, and the total is twice the actual number of files (it will output 80 if I have 40 files in the last subdir):

        public function count_files($path) {
            global $file_count;
            $file_count = 0;
            $dir = opendir($path);
            if (!$dir) return -1;
            while ($file = readdir($dir)) :
                if ($file == '.' || $file == '..') continue;
                if (is_dir($path . $file)) :
                    $file_count += $this->count_files($path . "/" . $file);
                else :
                    $file_count++;
                endif;
            endwhile;
            closedir($dir);
            return $file_count;
        }
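
    The double counting comes from $file_count being a global that every recursive call resets and shares; a local counter avoids it. A corrected sketch (a JPG filter is added, since the question counts only JPG files; adjust as needed):

        public function count_files($path)
        {
            $count = 0;                       // local, so recursion can't clobber it
            $dir = opendir($path);
            if (!$dir) {
                return -1;
            }
            while (($file = readdir($dir)) !== false) {
                if ($file === '.' || $file === '..') {
                    continue;
                }
                $full = $path . '/' . $file;  // consistent path building
                if (is_dir($full)) {
                    $count += $this->count_files($full);
                } elseif (strtolower(pathinfo($file, PATHINFO_EXTENSION)) === 'jpg') {
                    $count++;
                }
            }
            closedir($dir);
            return $count;
        }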

    Read the article

  • bash: listing files in date order, with spaces in filenames

    - by Jason Judge
    I am starting with a file containing a list of hundreds of files (full paths) in a random order. I would like to list the details of the ten latest files in that list. This is my naive attempt:

        ls -las -t `cat list-of-files.txt` | head -10

    That works, so long as none of the files have spaces in them, but fails if they do, as those filenames are split at the spaces and treated as separate files. I have tried quoting the files in the original list-of-files file, but the command substitution still splits the filenames at the spaces. The only way I can think of doing this is to ls each file individually (using xargs perhaps) and create an intermediate file of the listings, with the date in a sortable form as the first field on each line, then sort that intermediate file. However, that feels cumbersome and inefficient (hundreds of ls commands rather than one or two), but it may be the only way? Is there any way to pass ls a list of files to process where those files could contain spaces? It seems like it should be simple, but I'm stumped.
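
    One null-delimited route is sketched below, using GNU stat so the sort key is explicit (because the list itself is newline-delimited, this still assumes no newlines inside the filenames):

        # print "mtime-in-epoch-seconds  name" per file, newest first, top ten
        tr '\n' '\0' < list-of-files.txt \
            | xargs -0 stat --format '%Y %n' \
            | sort -rn \
            | head -10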

    Read the article

  • Upgraded to Xcode 4 -- Endless stream of duplicate symbol errors causing build errors

    - by D-Nice
    Everything was working perfectly in Xcode 3 yesterday before I upgraded. I completed the upgrade, restarted my computer, and opened my old project. I had to reconfigure a few settings, like the header paths, so that I could begin to compile. I'm using AdWhirl for ad mediation, and at this point my errors begin to read something like:

        duplicate symbol _OBJC_METACLASS_$_SBJSON in
        /Users/Admin/Desktop/TMapLiteAdwhirl/AdWhirl/MMSDK/libMMSDK.a(SBJSON.o) and
        /Users/Admin/Library/Developer/Xcode/DerivedData/TruxMapLite-bgpylibztethnlhkfkdumpvrjvgy/Build/Intermediates/TruxMapLite.build/Debug-iphoneos/TruxMapLite.build/Objects-normal/armv6/SBJSON.o
        for architecture armv6

    The library it refers to is the SDK for one of the ad networks I'm including in AdWhirl. Both of the 'duplicate symbols' refer to the SAME FILE, but they use different paths. If I still had Xcode 3, I would simply try excluding these libraries from the build path, but I have no idea how that can be done in Xcode 4. I've tried everything, including deleting the library and all associated files from my project, but when I do this I simply get the same type of error for a different library in the AdWhirl directory. This is incredibly frustrating because before the upgrade everything was working smoothly and I was prepared to submit my binary. If anyone has any advice, I'd be more than happy to give it a try. Thanks!

    Read the article

  • Is there a best practice for concatenating MP3 files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello Overflow community! Does anyone know if there is a "best practice" for concatenating MP3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line, and will eventually call the command line from a PHP script. I have been doing research and have come up with a process that I think could work; it combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know; I didn't know "raw" files existed until recently, so I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder, with the temporary raw files inside.
    5. Convert the final file from "raw" to MP3 and enjoy the finished result.

    I'm thinking this course of action might be best because I can't necessarily control the quality of the original "source" MP3s. The only other option I could think of would be a script that performs a similar process on files as they are added to the system, leaving only files in the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path I could take? By concatenate, I mean to join the files together in sequence to create a new audio file from the concatenated files.
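
    One way to realize that pipeline with common tools (a sketch, assuming ffmpeg and sox are installed; the filenames are placeholders) is to decode each MP3 to WAV at a common sample rate, concatenate the WAVs, and re-encode:

        mkdir -p tmp
        # decode each source MP3 to 44.1 kHz stereo WAV, leaving the originals untouched
        for f in a.mp3 b.mp3 c.mp3; do
            ffmpeg -i "$f" -ar 44100 -ac 2 "tmp/${f%.mp3}.wav"
        done
        # sox concatenates when given several inputs and one output
        sox tmp/a.wav tmp/b.wav tmp/c.wav tmp/merged.wav
        # re-encode the merged WAV to MP3
        ffmpeg -i tmp/merged.wav -codec:a libmp3lame -q:a 2 merged.mp3
        rm -r tmp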

    Read the article

  • Use MySQL trigger to update another table when duplicate key found

    - by Jason
    I've been scratching my head on this one, hoping one of you kind people can direct me towards solving this problem. I have a MySQL table of customers. It contains a lot of data, but for the purpose of this question we only need to worry about four columns: ID, Firstname, Lastname, Postcode. The problem is that the table contains a lot of duplicated customers. A new table is being created where each customer is unique; for us, a unique customer is based on Firstname, Lastname, and Postcode. However (and this is the important bit), we need to ensure each new "unique" customer record can also be matched to the original multiple entries for that customer in the original table. I believe the best way to do this is to have a third table with columns NewUniqueID and OldCustomerID, so we can search that table for NewUniqueID = 123 and get back multiple OldCustomerID values where appropriate. I am hoping to make this work using a trigger and the ON DUPLICATE KEY syntax. What would happen is as follows: a standard INSERT ... SELECT query takes the old customer table and inserts it into the new unique table; on duplicate key it continues adding records, but adds one entry to the third table noting the NewUniqueID that duped, along with the OldCustomerID of the record we were trying to insert. Hope this makes sense; my apologies if it isn't clear. I welcome and appreciate any thoughts on this one! Many thanks, Jason
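
    For what it's worth, the trigger may not even be necessary: once the new table has its unique key, the mapping can be built with two set-based statements. A sketch with made-up table and column names:

        -- one row per unique customer
        INSERT INTO customers_unique (Firstname, Lastname, Postcode)
        SELECT DISTINCT Firstname, Lastname, Postcode
        FROM customers_old;

        -- map every old id to its new unique id
        INSERT INTO customer_map (NewUniqueID, OldCustomerID)
        SELECT u.ID, o.ID
        FROM customers_old AS o
        JOIN customers_unique AS u
          ON u.Firstname = o.Firstname
         AND u.Lastname  = o.Lastname
         AND u.Postcode  = o.Postcode;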

    Read the article

  • Error 'duplicate definition' when compiling 2 C files that reference 1 header file

    - by super newbie
    I have two C files and one header file, as follows:

        /* header.h */
        char c = 0;

        /* file1.c */
        #include "header.h"

        /* file2.c */
        #include "header.h"

    I was warned about a 'duplicate definition' when compiling. I understand the cause, as the variable c is defined twice, once each in file1.c and file2.c; however, I do need to reference header.h in both C files. How should I overcome this issue?
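
    The usual fix is to keep only a declaration in the header and put exactly one definition in a single .c file. A minimal sketch:

        /* header.h */
        #ifndef HEADER_H
        #define HEADER_H
        extern char c;   /* declaration: no storage allocated here */
        #endif

        /* file1.c */
        #include "header.h"
        char c = 0;      /* the one and only definition */

        /* file2.c */
        #include "header.h"  /* can use c freely */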

    Read the article

  • Configuring RAID1 on HP Proliant Microserver N54L / Ubuntu 14.04.1 LTS [duplicate]

    - by Chris Beach
    I've bought an N54L and fitted two 3 TB drives, and I'm keen to get set up with RAID1. BIOS: SATA controller mode set to "RAID". RAID Option ROM utility: both physical drives set up as one logical drive. When I came to install Ubuntu (14.04.1), both drives appeared during the setup process. I was only expecting to see the logical drive, although I'm a complete novice with RAID. I've read that the HP ProLiant MicroServers don't have "proper" RAID support and require some kind of driver to be installed. I've tried a few HP utilities from the following apt repo:

        deb http://downloads.linux.hp.com/SDR/repo/mcp wheezy/current non-free

    On installation, most say "server not supported". Would appreciate your advice.
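
    This matches the common finding that the N54L's onboard controller is firmware ("fake") RAID, so the usual route on Ubuntu is to set the controller back to AHCI and let Linux mirror the disks with mdadm. A sketch (the device names are assumptions, and creating the array destroys existing data on both disks):

        # create a two-disk RAID1 array from the bare drives
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

        # watch the initial sync
        cat /proc/mdstat

        # persist the array configuration so it assembles at boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u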

    Read the article
