Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

Page 44/1640

  • Windows Live SkyDrive: How To Move or Copy Files Between Folders

    - by Gopinath
    Microsoft has a very simple, easy-to-use interface for moving files between folders in the Windows operating system. But their own cloud storage service, Windows Live SkyDrive, complicates these simple, everyday operations, and we need a guide just to figure out how to perform basic copy/move operations. A couple of years ago we wrote about moving files between folders in an older version of SkyDrive, but that guide no longer holds good as SkyDrive's user interface has changed considerably in the recent past. Today one of our readers asked us how to move/copy files in the latest version of SkyDrive, and here are the steps to follow:
    1. Log in to your Windows Live SkyDrive.
    2. Select the file you want to move or copy by clicking on its information icon (see 2 in the image below).
    3. After selecting the information icon, expand the Information section displayed in the right-side panel to access the Move and Copy options (see 3 in the image below).
    4. To move the selected file to another folder, select the Move option and SkyDrive will guide you through a folder-selection interface for choosing the target folder.
    5. Once you navigate to the target folder where you want to move the file, click on “Move this file into <<Target Folder>>”.
    6. You are done.
    Dear Microsoft, SkyDrive provides us tonnes of free storage, but please make its user interface a bit better so that we don’t need to write guides to perform basic operations. Hope you listen to your customers.

    Read the article

  • Save Files Directly from Your Browser to the Cloud in Chrome and Iron

    - by Asian Angel
    Are you looking for a quicker, easier way to upload files you find while browsing to your favorite cloud services? Skip saving files to your hard-drive and transfer them directly from your browser to your accounts using the Cloud Save extension. You can see the cloud services currently supported in the screenshot above and more are being added all the time. So if your favorite is not listed yet just keep checking in at the extension’s homepage. Cloud Save [Google Chrome Extensions]

    Read the article

  • Using OData to get Mix10 files

    - by Jon Dalberg
    There has been a lot of talk around OData lately (go to odata.org for more information) and I wanted to get all the videos from Mix ‘10: two great tastes that taste great together. Luckily, Mix has exposed the ‘10 sessions via OData at http://api.visitmix.com/OData.svc, so now all I have to do is slap together a bit of code to fetch the videos.
    Step 1 (cut a hole in the box): Create a new console application and add a new service reference pointing at the OData service.
    Step 2 (put your junk in the box): Write a smidgen of code:

        static void Main(string[] args)
        {
            // Needs: using System; using System.IO; using System.Linq; using System.Net;
            // plus the generated "Mix" service reference.
            var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));

            // Query the OData feed for the WMV files only.
            var files = from f in mix.Files
                        where f.TypeName == "WMV"
                        select f;

            var web = new WebClient();

            var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");
            Directory.CreateDirectory(myVideos);

            // Download each video into the Mix10 folder, keeping the original file name.
            files.ToList().ForEach(f =>
            {
                var fileName = new Uri(f.Url).Segments.Last();
                Console.WriteLine(f.Url);
                web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
            });
        }

    Step 3 (have her open the box): Compile and run. As you can see, the client reference created for the OData service handles almost everything for me. Yeah, I know there is some batch file to download the files, but it relies on cUrl being on the machine – and I wanted an excuse to work with an OData service. Enjoy!

    Read the article

  • String patterns that can be used to filter and group files

    - by Louis Rhys
    One of our applications filters files in a certain directory, extracts some data from each file and exports a document built from the extracted data. The algorithm used to extract the data depends on the file, and so far we use a regex to select the algorithm, for example .*\.txt will be processed by algorithm A, foo[0-5]\.xml will be processed by algorithm B, etc. However, we now need some files to be processed together. For example, in one case we need two files, foo.*\.xml and bar.*\.xml. Part of the information to be extracted exists in the foo file, and the other part in the bar file. Moreover, we need to make sure the wildcard parts are compatible. For example, if there are 6 files foo1.xml foo23.xml bar1.xml bar9.xml bar23.xml foo4.xml I would expect foo1 and bar1 to be identified as a group, and foo23 and bar23 as another group. bar9 and foo4 have no pair, so they will not be processed. Now, since the filter is configured by the user, we need a pattern that can express the above requirement. I don't think you can express this kind of meaning in standard regex: (foo|bar).*\.xml will match all 6 files above, but we can't identify which file is paired with which. Is there any standard pattern syntax that can express it? Or any idea how to extend regex to support this in a way that can be implemented easily?
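
    One possible direction, offered only as an illustrative sketch rather than something from the question: let the user supply one pattern per role that shares a capture group for the pairing key, then group the files by the captured value and keep only complete groups. The pattern names, file list and the named group "key" below are assumptions for the example (C#):

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class PairingSketch
        {
            static void Main()
            {
                // Hypothetical user-configured patterns; (?<key>\d+) is the shared pairing key.
                var fooPattern = new Regex(@"^foo(?<key>\d+)\.xml$");
                var barPattern = new Regex(@"^bar(?<key>\d+)\.xml$");

                var files = new[] { "foo1.xml", "foo23.xml", "bar1.xml", "bar9.xml", "bar23.xml", "foo4.xml" };

                // Group files by the captured key and keep only keys that have both a foo and a bar file.
                var groups = files
                    .Select(f => new { File = f, Foo = fooPattern.Match(f), Bar = barPattern.Match(f) })
                    .Where(x => x.Foo.Success || x.Bar.Success)
                    .GroupBy(x => (x.Foo.Success ? x.Foo : x.Bar).Groups["key"].Value)
                    .Where(g => g.Any(x => x.Foo.Success) && g.Any(x => x.Bar.Success));

                foreach (var g in groups)
                    Console.WriteLine("group {0}: {1}", g.Key, string.Join(", ", g.Select(x => x.File)));
                // Prints: group 1: foo1.xml, bar1.xml  /  group 23: foo23.xml, bar23.xml
            }
        }

    The unmatched files (bar9.xml and foo4.xml) simply fall out of the grouping, which matches the "will not be processed" behaviour described above.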

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then wondered: why don’t they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience – just build the mapping and the appropriate KM is used – and improves out-of-the-box performance for file-to-file data movement. This improvement for out-of-the-box handling of File to File data integration cases (available from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration; plus, and it's a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!
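
    To make the idea behind the KM more concrete (several source files processed at once, each read and written through large buffers), here is a rough conceptual sketch. This is not the KM's actual implementation, which is Java and ships with ODI; the paths, buffer size and degree of parallelism below are assumptions for illustration (C#):

        using System;
        using System.IO;
        using System.Threading.Tasks;

        class ParallelFileCopySketch
        {
            static void Main()
            {
                var sources = Directory.GetFiles(@"C:\data\in", "*.dat");   // hypothetical source files
                var targetDir = @"C:\data\out";
                Directory.CreateDirectory(targetDir);

                // Process up to two files concurrently, mirroring the "thread count" setting.
                Parallel.ForEach(sources, new ParallelOptions { MaxDegreeOfParallelism = 2 }, src =>
                {
                    var dst = Path.Combine(targetDir, Path.GetFileName(src));
                    using (var input = new FileStream(src, FileMode.Open, FileAccess.Read, FileShare.Read, 1 << 20))
                    using (var output = new FileStream(dst, FileMode.Create, FileAccess.Write, FileShare.None, 1 << 20))
                    {
                        input.CopyTo(output); // a transformation or filter could be applied here instead of a plain copy
                    }
                    Console.WriteLine("copied {0}", src);
                });
            }
        }

    As with the KM, adding workers only helps while the disks and CPUs can keep up, which is consistent with the 220-second to 140-second improvement reported above rather than a straight halving.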

    Read the article

  • Very large log files, what should I do?

    - by Masroor
    (This question deals with a similar issue, but it talks about a rotated log file.) Today I got a system message regarding very low /var space. As usual I executed commands along the lines of sudo apt-get clean, which improved the situation only slightly. Then I deleted the rotated log files, which again provided very little improvement. Upon examination I found that some log files in /var/log have grown to be very large. To be specific, ls -lSh /var/log gives:

        total 28G
        -rw-r----- 1 syslog adm  14G Aug 23 21:56 kern.log
        -rw-r----- 1 syslog adm  14G Aug 23 21:56 syslog
        -rw-rw-r-- 1 root   utmp 390K Aug 23 21:47 wtmp
        -rw-r--r-- 1 root   root 287K Aug 23 21:42 dpkg.log
        -rw-rw-r-- 1 root   utmp 287K Aug 23 20:43 lastlog

    As we can see, the first two are the offending ones. I am mildly surprised that such large files have not been rotated. So, what should I do? Simply delete these files and then reboot? Or take some more prudent steps? I am using Ubuntu 14.04.

    Read the article

  • Upgrade to 0.25, files served to uPNP devices cannot play

    - by David Buttrick
    I have a Sony BDP-S390 Blu-ray and network player. I upgraded my Myth server to 0.25. When I browse to the Myth server and try to play a recording, I get an error message about the file not being playable in the player. Interestingly, the files that I have recorded, and the videos that I have loaded into my Video volume group, are .mpg or .mp4. The player shows the filetype that it thinks the file is in its list, and it claims that these files are AVI files; however, none of them are. They are all .mp4 or .mpg files. Thinking that this was just an optical illusion, I went ahead and tried to play a file, but I get an error about the file not being playable. First of all, is there something I need to do to make the uPNP server know about the different filetypes? Is it reporting AVI because it hasn't been told about MPG or MP4? Second, I'd like to help out some more here and collect some logging about the uPNP server in the Myth server. I can't seem to find information on how to turn on logging, and there is no mythbackend settings file in /etc/default. Thanks very much.

    Read the article

  • All in a Day's Work: Unblocking Multiple Downloaded Files with a Single Command

    - by Sam Abraham
    Files downloaded using Internet Explorer retain the Internet Zone permission level and hence are “Blocked” by default on Windows 7 machines. Honestly, while an added overhead for developers, I really appreciate this feature as it provides a good protection layer for casual web users. My usual workaround is to simply unblock the downloaded zip file (if the download was a zip file) which, in turn, unblocks the files stored within. Today however, I was left with a situation where I had to “Open” and “Copy” the content rather than “Save” a zip file. That of course left me with a few dozen files I had to manually unblock. A few minutes of internet search led me to the steps below, which worked like a charm:
    1. Download streams.exe from Sysinternals - http://technet.microsoft.com/en-us/sysinternals/bb897440.aspx
    2. Go to a command prompt (cmd.exe)
    3. Navigate to where you have streams.exe installed
    4. Use the command line switches: streams.exe -s -d “<folder path>”
    This removed the Internet Zone restrictions from all files under “<folder path>” and its subfolders as well. [Deleted :Zone.Identifier:$DATA] References: http://social.technet.microsoft.com/Forums/en-US/itproxpsp/thread/806f0104-1caa-4a66-b504-7a681d1ccb33/

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// preceding a line of code causes Visual Studio to insert "summary" tags.  It also results in additional tags being generated if you are commenting a method with parameters and a return type. I already knew that IntelliSense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class.  I also knew that you could set Visual Studio to generate an XML file containing said comments.  Only recently did I begin to wonder if I could generate some kind of readable help files based on these comments I so diligently added. After searching the web I came across NDoc, an open source project which creates documentation for you based on the XML files generated by Visual Studio.  Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately there is a little-known tool from Microsoft themselves called "Sandcastle Help File Builder".  This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML Help file for your own projects! You can check it out here: http://shfb.codeplex.com/ If you are curious how to set Visual Studio to generate the above-referenced XML documentation files, simply go to your project's property page and edit as shown below (my paths are specific, you can leave yours at the default values):
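
    For illustration, this is roughly what a filled-in comment block looks like once the /// skeleton has been completed; the class, method and parameter names here are made up (C#):

        public class OrderCalculator
        {
            /// <summary>
            /// Calculates the total price of an order, including tax.
            /// </summary>
            /// <param name="subtotal">The pre-tax order amount.</param>
            /// <param name="taxRate">The tax rate expressed as a fraction, e.g. 0.07 for 7%.</param>
            /// <returns>The order total with tax applied.</returns>
            public decimal CalculateTotal(decimal subtotal, decimal taxRate)
            {
                return subtotal * (1 + taxRate);
            }
        }

    When the project is built with XML documentation output enabled, these comments end up in the generated .xml file, and it is that file together with the compiled DLL that Sandcastle Help File Builder consumes.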

    Read the article

  • How to Avoid Duplicate Key Exception

    - by LifeH2O
    I am using a TableAdapter to insert records in a table within a loop:

        foreach(....)
        {
            ....
            teamsTableAdapter.Insert(_teamid, _teamname);
            ....
        }

    where TeamID is the primary key in the table. After the first run of this loop, Insert throws a "Duplicate Primary Key found" exception. To handle this, I have done the following:

        foreach(....)
        {
            ....
            try
            {
                _teamsTableAdapter.Insert(_teamid, _teamname);
            }
            catch (System.Data.SqlClient.SqlException e)
            {
                if (e.Number != 2627)
                    MessageBox.Show(e.Message);
            }
            ....
        }

    But using a try/catch statement is costly. How can I avoid this exception? I am working in VS2010, and INSERT ... ON DUPLICATE KEY UPDATE does not work.
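
    One common way to avoid the exception, sketched here rather than taken from the question, is to look up the keys that already exist once and then skip (or update) duplicates instead of letting the insert fail. The typed table name, the driving collection and its TeamId/TeamName properties below are assumptions (C#):

        // Assumes: using System.Collections.Generic; using System.Data; using System.Linq;
        // 'teamsTable' is a DataTable filled by the same adapter, 'teams' is whatever collection drives the original loop.
        var existingIds = new HashSet<int>(
            teamsTable.AsEnumerable().Select(r => r.Field<int>("TeamID")));

        foreach (var team in teams)
        {
            // Add returns false when the id is already in the set, i.e. a duplicate.
            if (existingIds.Add(team.TeamId))
            {
                _teamsTableAdapter.Insert(team.TeamId, team.TeamName);
            }
            else
            {
                // Duplicate key: skip it, or call an Update query on the adapter instead.
            }
        }

    This trades one up-front read (or a Fill that has usually happened anyway) for avoiding the exception path on every duplicate row.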

    Read the article

  • Scheme: Detecting duplicate elements in a list

    - by Kyle Krull
    Does R6RS or Chez Scheme v7.9.4 have a library function to check whether a list contains duplicate elements? Alternatively, does either have any built-in functionality for sets (which disallow duplicate elements)? So far, I've only been able to find an example here. The problem with that is that it doesn't appear to actually be part of the Chez Scheme library. Although I could write my own version of this, I'd much rather use a well-known, tested, and maintained library function - especially given how basic an operation this is. So a simple "use these built-in functions" or a "no built-in library implements this" will suffice. Thanks!

    Read the article

  • Controlling Drupal's active/active-trail with duplicate menu items

    - by Mark
    I'm developing a site that requires some duplication of links within the menu:

        Section A
        -- Introduction
        -- Testimonials
        Section B
        -- Introduction
        -- Testimonials
        Testimonials
        -- Section A
        -- Section B

    So 'Section A Testimonials' and 'Testimonials Section A' point to the same node. But regardless of which menu link people use, I want the person to be in Section A. The problem is that D6 doesn't like duplicate menu items, and it assigns the active and active-trail classes rather unpredictably. So my thought was to create a placeholder node for each item in the Testimonials menu, and then set the URL to something like "testimonials/redirect/section-a", and then use mod_rewrite to redirect over to "section-a/testimonials". With this solution, I will have no duplicate paths in the menu. I'm just hoping this doesn't somehow hurt my SEO. Does anyone know a better solution?

    Read the article

  • jQuery.load() Retrieving partial page content causes duplicate ID in DOM

    - by Warren Buckley
    Hello all, I currently have a JS function that allows me to load partial content from another page into the current page using jQuery.load(). However, I noticed when using this that I get a duplicate ID in the DOM (when inspecting with Firebug). This function is used in conjunction with a Flash building viewer, so it does not contain any code to retrieve the URL from the anchor tag.

        function displayApartment(apartmentID) {
            $("#apartmentInfo").load(siteURL + apartmentID + ".aspx #apartmentInfo", function () {
                // re-do links loaded in
                $("a.gallery").colorbox({ opacity: "0.5" });
            });
        }

    The code above works just fine in retrieving the content; however, it bugs me that when inspecting with Firebug I get something like this:

        <div id="apartmentInfo">
            <div id="apartmentInfo">
                <!-- Remote HTML here..... -->
            </div>
        </div>

    Can anyone please suggest anything on how to remove the duplicate div? Thanks, Warren

    Read the article

  • How to copy or duplicate gtk widgets?

    - by PP
    Hi, how can I copy or duplicate GTK widgets? In my application I have one huge GtkComboBox created with one long for loop, which eats up a lot of time, and I use this combo in two places on a single screen. So what I want to do is create this combo once and duplicate/copy it for the other place, which will save that time. If I try to add the same combo box pointer two times, GTK gives me a "child->parent != NULL" error, because in GTK a widget can have only a single parent. So what should I do? Thanks, PP.

    Read the article

  • Prevent duplicate values using jQuery Validation

    - by Yashwant Chavan
    I have a form whose text fields are generated dynamically using JSP. I am using jQuery Validation, but want to add functionality to prevent duplicate entries in the form. E.g.:

        <form name="myForm" id="myForm">
            <input type="text" name="text1" id="text1">
            <input type="text" name="text2" id="text2">
            <input type="text" name="text3" id="text3">
            ...
            <!-- N fields in total -->
            <input type="text" name="textn" id="textn">
        </form>

    I want to check whether any duplicate values have been entered in the text fields, using jQuery validation. Thanks

    Read the article

  • keep duplicate number records only - perl

    - by manu
    Hello, I have one text string which contains some duplicate characters (FFGGHHJKL); these can be made unique by using a positive lookahead (Perl one-liner: perl -pe 's/(.)(?=.*?\1)//g'; FFEEDDCCGG gives the output FEDCG). My question is: how do I make it work on numbers (e.g. 212 212 43 43 5689 6689 5689 71 81, where the output should be 212 43 5689 6689 71 81)? Also, if we want only the duplicate records to be given as the output from a file having n rows (212 212 43 43 5689 6689 5689 71 81 \n 66 66 67 68 69 69 69 71 71 52 ..\n .. .. \n... output == 212 212 43 43 5689 5689 \n 66 66 69 69 69 71 71), what should be done? Thanks and regards -manu

    Read the article

  • Grails Duplicate Exception handling

    - by Srinath
    Hi, how do I catch duplicate key exceptions in Grails? When trying to save an existing integer into a column with a unique constraint, the error is generated while saving/updating a record. I have also tried:

        try {
            object.save(flush:true)
        } catch(org.springframework.dao.DataIntegrityViolationException e) {
            println e.message
        } catch(org.hibernate.exception.ConstraintViolationException e) {
            println e.message
        } catch(Exception e) {
            println e.message
        }

    but I am unable to catch this issue:

        23:41:13,265 ERROR [JDBCExceptionReporter:101] Duplicate entry '1' for key 2
        23:41:13,281 ERROR [AbstractFlushingEventListener:324] Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:168)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)

    Could you please share the solution to this?

    Read the article

  • Duplicate partitioning key performance impact

    - by Anshul
    I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

        CREATE TABLE "Test1" (
            key text,
            column1 text,
            value text,
            PRIMARY KEY (key, column1)
        )

        CREATE TABLE "Test2" (
            key text,
            name text,
            age text,
            ...
            PRIMARY KEY (key, name, age)
        )

    In Test1, column1 will contain a column name and value will contain its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs to it without altering the table, by just providing the same partitioning key each time. Now my question is: how will each of these table schemas impact read/write performance if I have millions of rows and the number of columns can be up to 50 in each row? How will it impact the compaction/repair time if I'm writing duplicate entries frequently?

    Read the article

  • Service Contracts with Message causes duplicate proxy classes

    - by jaklucky
    Hi, I have a service contract with Message, as shown below:

        [OperationContract]
        Message MyMethodWithMessage(Message myMsgParam);

    Everything works fine and I can host my services. But when I try to create proxies through "Add Service Reference", I get duplicate proxy classes. If I take out the above OperationContract, re-run my services and try to create the proxies, then "Add Service Reference" does not produce duplicate proxies. I am really confused about this! Any help is greatly appreciated. Thank you, Suresh

    Read the article

  • Removing duplicate SQL records to permit a unique key

    - by j pimmel
    I have a table ('sales') in a MySQL DB which should rightfully have had a unique constraint enforced to prevent duplicates. First removing the dupes and then setting the constraint is proving a bit tricky. Table structure (simplified):

        id (unique, autoinc)
        product_id

    The goal is to enforce uniqueness for product_id. The de-duping policy I want to apply is to remove all duplicate records except the most recently created, e.g. the one with the highest id. Or to put it another way, I would like to delete the duplicate records, excluding the ids matched by the following query:

        select s.id
        from sales s
        inner join (
            select product_id, max(id) as maxId
            from sales
            group by product_id
            having count(product_id) > 1
        ) groupedByProdId
            on s.product_id = groupedByProdId.product_id
            and s.id = groupedByProdId.maxId

    I've struggled with this on two fronts: writing the query that selects the correct records to delete, and also the MySQL constraint whereby a subselect in the FROM clause of a DELETE cannot reference the same table from which data is being removed.

    Read the article

  • Duplicate Prefix Error in JSP page with Struts

    - by Cricandcric.com
    Hi, I am creating and configuring Struts for the first time. When I place the following code in my JSP page:

        <%@ taglib uri="http://struts.apache.org/tags-bean" prefix="bean"%>
        <%@ taglib uri="http://struts.apache.org/tags-html" prefix="html"%>
        <%@ taglib uri="http://struts.apache.org/tags-logic" prefix="logic"%>
        <%@ taglib uri="http://struts.apache.org/tags-tiles" prefix="tiles"%>

    I get an error when I move the mouse over the 1st line: Duplicate Prefix "html". When I move the mouse over the 2nd line I also get Duplicate Prefix "html", and similarly for the 3rd and 4th lines. Can anyone tell me what this error is all about? Thanks in advance.

    Read the article
