Search Results

Search found 45987 results on 1840 pages for 'copy files'.

  • "t-sql" tool for csv files?

    - by adolf garlic
    Is there a tool that lets you query files based on the following criteria, and change them? Creation date, filename (e.g. filetypeA_YYYYMMDD_data2.blah), and file content. For example, the workflow could be: get all files created between date x and date y, where the filename contains z and the file itself contains value x under column zzz, then update that value to a different value/another column.
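
    A rough sketch of that workflow in Python, assuming the files are plain CSV (the folder, dates, name fragment, column and values below are hypothetical placeholders):

        import csv
        from datetime import datetime
        from pathlib import Path

        def update_matching_files(root, start, end, name_part, column, old_value, new_value):
            """Find CSVs by creation window, filename fragment and column value, then rewrite them."""
            for path in Path(root).glob("*.csv"):
                # st_ctime is creation time on Windows, metadata-change time on Unix
                created = datetime.fromtimestamp(path.stat().st_ctime)
                if not (start <= created <= end and name_part in path.name):
                    continue
                with path.open(newline="") as f:
                    rows = list(csv.DictReader(f))
                if not rows or not any(r.get(column) == old_value for r in rows):
                    continue
                for r in rows:
                    if r[column] == old_value:
                        r[column] = new_value
                with path.open("w", newline="") as f:
                    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
                    writer.writeheader()
                    writer.writerows(rows)

        # Example call:
        # update_matching_files("/data", datetime(2010, 1, 1), datetime(2010, 2, 1),
        #                       "filetypeA", "zzz", "x", "y")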

  • Using a PowerShell Script to delete old files for SQL Server

    Many clients use custom stored procedures or third-party tools to back up databases in production environments instead of using database maintenance plans. One of the things you then need to do is limit the number of backup files kept on disk, so you don't run out of disk space. There are several techniques for deleting old files; in this tip I show how this can be done using PowerShell.
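
    The tip's PowerShell itself is not reproduced in this excerpt; as a rough illustration of the same idea, here is a minimal Python sketch that removes backups older than a retention window (the folder, extension and retention period are hypothetical):

        import time
        from pathlib import Path

        BACKUP_DIR = Path(r"D:\SQLBackups")   # hypothetical backup folder
        RETENTION_DAYS = 7                    # keep the last week of backups

        cutoff = time.time() - RETENTION_DAYS * 24 * 3600
        for bak in BACKUP_DIR.glob("*.bak"):
            # st_mtime is the last-modified time, which for a backup file is
            # effectively when the backup was written
            if bak.stat().st_mtime < cutoff:
                print(f"Deleting {bak}")
                bak.unlink()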

  • List and Read Text Files

    This sample takes the files from a folder (in this case, text files) and lists them in a listbox. The user can then click a particular file and display its text in the textbox to the right of the listbox.
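
    The sample's own code is not included in this excerpt; a minimal equivalent in Python/Tkinter (the folder name is a hypothetical placeholder) looks roughly like this:

        import tkinter as tk
        from pathlib import Path

        FOLDER = Path("texts")  # hypothetical folder containing .txt files

        root = tk.Tk()
        listbox = tk.Listbox(root, width=30)
        listbox.pack(side=tk.LEFT, fill=tk.Y)
        textbox = tk.Text(root, wrap=tk.WORD)
        textbox.pack(side=tk.RIGHT, fill=tk.BOTH, expand=True)

        files = sorted(FOLDER.glob("*.txt"))
        for f in files:
            listbox.insert(tk.END, f.name)

        def show_file(event):
            selection = listbox.curselection()
            if not selection:
                return
            # Replace the textbox contents with the selected file's text
            textbox.delete("1.0", tk.END)
            textbox.insert(tk.END, files[int(selection[0])].read_text(errors="replace"))

        listbox.bind("<<ListboxSelect>>", show_file)
        root.mainloop()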

  • dun goofed the files in /usr/lib/x86_64-linux-gnu/

    - by tipu
    There was some weird package issue with (in my limited understanding) 32/64-bit libraries, so I went around making symlinks from the filenames my LAMP installation expected to the ones that actually existed. I did this for a number of files in /usr/lib/x86_64-linux-gnu/. However, PHP still ended up not working (separate issue), and now I believe I have a screwed-up lib directory. Is there a way to revert those library files?

  • Google Drive SDK: Sharing files and managing permissions

    During this session we'll explain how to use the Google Drive SDK to manage permissions and sharing settings of files. We'll go through the various permission types, roles and values, and show how to easily embed the Google Drive sharing dialog in your app. From: GoogleDevelopers.
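
    The session itself is not transcribed here. For reference, granting a user access to a file with the current Drive v3 Python client looks roughly like the sketch below; the credentials object, file ID and e-mail address are placeholders, and older versions of the API (v2) used permissions().insert() instead:

        from googleapiclient.discovery import build

        # 'creds' is assumed to be an already-authorized OAuth2 credentials object
        service = build("drive", "v3", credentials=creds)

        permission = {
            "type": "user",            # or 'group', 'domain', 'anyone'
            "role": "writer",          # e.g. 'reader', 'commenter', 'writer'
            "emailAddress": "someone@example.com",
        }
        service.permissions().create(fileId="FILE_ID", body=permission).execute()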

  • apt-get -f install removed software center and several other files

    - by user287858
    I ran sudo apt-get -f install and several files and programs were removed, including Software Center. Is there a way to re-download everything as if Ubuntu were new again, without a CD? This computer does not have a CD-ROM drive. I'd be fine with losing all the data on this computer. Also, when I run sudo apt-get install (almost anything) I get errors about dependencies and files not being available. Thanks to anyone who can help.

  • Check SQL Server Virtual Log Files Using PowerShell

    In a previous tip, Monitor Your SQL Server Virtual Log Files with Policy Based Management, we saw how we can use Policy Based Management to monitor the number of virtual log files (VLFs) in our SQL Server databases. However, even with that, most of the solutions I see online involve creating temporary tables and/or using cursors to get the total number of VLFs in a transaction log file. Is there a much easier solution?
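
    The tip's PowerShell answer is not quoted in this excerpt; the underlying idea is simply that DBCC LOGINFO returns one row per VLF, so counting its rows gives the VLF count. A rough sketch using Python and pyodbc, with a hypothetical server and database name and a trusted connection:

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDatabase;Trusted_Connection=yes"
        )
        cursor = conn.cursor()
        # DBCC LOGINFO returns one row per virtual log file in the current database's log
        cursor.execute("DBCC LOGINFO")
        vlf_count = len(cursor.fetchall())
        print(f"Virtual log files: {vlf_count}")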

  • .AVI Files randomly cease to open, other strange errors too

    - by Ben Franchuk
    I recently (a couple of weeks ago) downloaded the complete series of Seinfeld, in varying file types. I watched them in sequence by season and airing date, and all was well. All of the files played fine with my media player of choice (BS Player), and once I had finished, I went on to watch some other TV I had previously downloaded (the U.S. series of The Office), and after that some other films and some music, over the following weeks (keep in mind all of these files are on the same hard drive). More recently, I went back to watching Seinfeld. The episodes played fine, as they did before, with the exception of a few in Season 7. I have not tested all of the episodes in the season, but upon inspection the majority of them are experiencing this problem; the problem being, simply, that they don't open! BS Player says that the files are either damaged or that the codecs to play the files are not on my computer; however, I am certain that the files DO have the codecs, and I am pretty sure that they are NOT DAMAGED either. I have played the files with other players too (such as VLC, Media Player Classic, and Windows Media Player), only to the same result of them not opening. Seemingly the only way I can differentiate between a damaged file and a non-damaged file is the way the icon shows in Windows Explorer (the original question included screenshots of a non-damaged and a damaged file).

    The most disturbing and confusing part of this, though, is the last episode in the season: it opens, but not as a video; instead, as a 1 hour, 16 minute, and 35 second audio file! The file plays a song for the first 4 or so minutes, and then is pretty much silent (except for some extremely quiet noise) until the last minute or so, when a random array of chopped-up sounds and beeping noises plays. I do not recognise the song at the beginning of the file, but by the sound of it, it is a song by the artist "Mr. Oizo," whose complete works I downloaded a couple of weeks before now; a bit before that I had finished downloading Season 9 (not affected by these problems) of Seinfeld. I'd also like to note that the file I told of earlier (which played audio instead of video) reads as the same size as the other files in the season (around 175 MB) and also opens as a video clip.

    I have NEVER experienced any of these problems in the past, and they seem to be affecting only the one season of my downloaded TV. The problems have not arisen with any of the other files on my hard drive, or any of the files downloaded around or after the time I downloaded Season 7 of Seinfeld, at least not that I've noticed. I use the hard drive these files are located on almost every day, so could that be the cause of these problems? Is this a sign that my HDD is soon going to die? If it helps, the HDD is a Western Digital MyBook 1.5 TB, 7500 RPM, connected to the computer via USB 2.0.

    EDIT: I noticed that this problem is now occurring with Season 9 of Seinfeld, and presumably other files on the drive I have yet to check. Please, if you have ANY IDEA AT ALL what may be causing this or how to fix it, do tell me!

  • Is there a way to know what the Windows Disk Cleanup utility will delete?

    - by Cam Jackson
    When I run the Disk Cleanup utility that's built into Windows 8, it tells me that it can free up 53 GB by deleting 'Temporary Files'. However, a CCleaner analysis on default settings only finds about 300 MB worth of space to free up, so I'm wondering what Disk Cleanup has found that CCleaner does not. Note that this question appears to be similar to what I'm asking, but the accepted answer there says that 'Temporary Files' refers to %TEMP%. I've already cleared out most of C:\Users\Cam\AppData\Local\Temp, and it now has only 230 MB of stuff in it, even with system files showing. So where is this 53 GB located? Is there a way to find out what it is?

    Edit: I should note that this is on a 110 GB SSD, so it's almost half the drive. In fact I'm only using 86 GB, so if it's really going to clear out 53 GB, that would be more than 60% of the stuff on my C drive. I'm starting to think that Disk Cleanup caches its analysis and hasn't updated since I started cleaning up the drive earlier today, although when I run it, it says that it's 'Calculating' how much space can be saved, and that takes about 5-10 seconds. Hmmm...

    Edit 2: Here is what my hard drive looks like according to SpaceMonger (the original question included a screenshot). You can see why I was starting to think that the 53 GB figure is actually wrong. Even if 'Temporary Files' includes my hiberfil and everything in WinSxS (about 13 GB total), that would be 26 GB, which is only halfway there. It's hard to see where there's 53 GB of stuff to delete.

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: we have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops, and they are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: people are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because so many of our business operations are done without an internet connection, and because computers come on- and offline so frequently, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while offline and hadn't backed up in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there were a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

  • arrays in puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing the files:

        dir1/
          fileA
          fileB
        dir2/
          fileA
          fileB
          fileC

    The problem is that the file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to postfix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
          path   => 'dir1',
          ensure => present,
        }

        file { $file_names_2:
          path   => 'dir2',
          ensure => present,
        }

    This won't work because the file resource titles clash. So I need to append e.g. the dir name to the file title; however, this causes the array of files to be concatenated and not treated as multiple files... arghh..

        file { "${file_names}-dir1":
          path   => 'dir1',
          ensure => present,
        }

        file { "${file_names_2}-dir2":
          path   => 'dir1',
          ensure => present,
        }

    How can I solve this problem without repeating the file resource itself? Thanks

  • Orca: extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I looked up Orca (version 3.1), whose documentation states:

        Many merge module options can be specified from the command line...

        Extracting Files from a Merge Module

        Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree and extract the files into a source image once it has been merged into a target database...

        Extracting Files

        To extract the individual files from a merge module, use the ... -x <path> ... option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module.

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines:

        orca -x theDirectory theModule.msm
        orca theModule.msm -x theDirectory

    ...and others. I get nowhere. The closest I've gotten was this:

        orca -q -x theDirectory -m theModule.msm

    ...but then it complains that I didn't specify a database to merge into. But I'm not trying to merge anything, let alone into a database. I just want the files extracted. Can someone explain what I'm doing wrong with the command-line options?

  • PHP count total files in directory AND subdirectory function

    - by Neoweiter
    I need to get a total count of JPG files within a specified directory, including ALL its subdirectories. No sub-sub-directories. The structure looks like this:

        dir1/
          2 files
          subdir 1/
            8 files
        total dir1 = 10 files

        dir2/
          5 files
          subdir 1/
            2 files
          subdir 2/
            8 files
        total dir2 = 15 files

    I have this function, which doesn't work correctly, as it only counts files in the last subdirectory, and the total is 2x more than the actual number of files (it will output 80 if I have 40 files in the last subdir):

        public function count_files($path) {
            global $file_count;
            $file_count = 0;
            $dir = opendir($path);
            if (!$dir) return -1;
            while ($file = readdir($dir)) :
                if ($file == '.' || $file == '..') continue;
                if (is_dir($path . $file)) :
                    $file_count += $this->count_files($path . "/" . $file);
                else :
                    $file_count++;
                endif;
            endwhile;
            closedir($dir);
            return $file_count;
        }
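
    For comparison, here is the same counting logic sketched in Python with a per-call local accumulator instead of a global counter (the directory name is hypothetical); keeping the count local to each recursive call is the key difference from the function above:

        import os

        def count_jpg_files(path):
            """Count JPG files in `path` and all of its subdirectories."""
            count = 0                              # local accumulator, not a global
            for entry in os.listdir(path):
                full = os.path.join(path, entry)   # always join with a separator
                if os.path.isdir(full):
                    count += count_jpg_files(full)
                elif entry.lower().endswith((".jpg", ".jpeg")):
                    count += 1
            return count

        print(count_jpg_files("dir1"))   # 10 with the layout above, assuming the files are JPGs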

  • bash: listing files in date order, with spaces in filenames

    - by Jason Judge
    I am starting with a file containing a list of hundreds of files (full paths) in a random order. I would like to list the details of the ten latest files in that list. This is my naive attempt:

        ls -las -t `cat list-of-files.txt` | head -10

    That works, so long as none of the files have spaces in them, but fails if they do, as those files are split up at the spaces and treated as separate files. I have tried quoting the files in the original list-of-files file, but the command substitution still splits the files up at the spaces in the filenames. The only way I can think of doing this is to ls each file individually (using xargs perhaps) and create an intermediate file with the file listings and the date in a sortable order as the first field in each line, then sort that intermediate file. However, that feels a bit cumbersome and inefficient (hundreds of ls commands rather than one or two). But that may be the only way to do it? Is there any way to pass "ls" a list of files to process, where those files could contain spaces? It seems like it should be simple, but I'm stumped.
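
    One way to sidestep the word-splitting entirely is to do the sorting outside the shell. A rough Python sketch (the list file name is taken from the question; every listed path is assumed to exist):

        from datetime import datetime
        from pathlib import Path

        with open("list-of-files.txt") as f:
            paths = [line.rstrip("\n") for line in f if line.strip()]

        # Sort by modification time, newest first; spaces in the names are harmless here
        latest = sorted(paths, key=lambda p: Path(p).stat().st_mtime, reverse=True)[:10]
        for p in latest:
            st = Path(p).stat()
            print(datetime.fromtimestamp(st.st_mtime), st.st_size, p)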

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating MP3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line. I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know; I didn't know "raw" files existed until not too long ago. I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder with the temporary raw files inside.
    5. Convert the final file from "raw" to MP3 and enjoy the finished result.

    I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" MP3s. The only other option I could think of would be to create a script that would perform a similar process upon files being added to the system, leaving only the files with the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join together in sequence to create a new audio file from the "concatenated files."
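
    For what it's worth, a minimal sketch of that pipeline in Python, assuming ffmpeg is installed (decode each MP3 to WAV at a common sample rate, concatenate, re-encode; the file names are hypothetical and the originals are left untouched):

        import subprocess
        import tempfile
        from pathlib import Path

        sources = ["a.mp3", "b.mp3"]        # hypothetical input files
        output = "combined.mp3"

        with tempfile.TemporaryDirectory() as tmp:
            tmp = Path(tmp)
            wavs = []
            for i, src in enumerate(sources):
                wav = tmp / f"{i}.wav"
                # Decode to uncompressed PCM with a common sample rate and channel count
                subprocess.check_call(["ffmpeg", "-y", "-i", src,
                                       "-ar", "44100", "-ac", "2", str(wav)])
                wavs.append(wav)

            # Concatenate the intermediate WAVs and re-encode the result as MP3
            concat_list = tmp / "list.txt"
            concat_list.write_text("".join(f"file '{w}'\n" for w in wavs))
            subprocess.check_call(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                                   "-i", str(concat_list),
                                   "-codec:a", "libmp3lame", "-q:a", "2", output])
        # The temporary folder and the intermediate raw files are removed automatically here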

  • SVNKit: Commit files that were manually deleted from the filesystem (working copy)

    - by Jam
    I cannot solve a problem with collecting CommitItems (the changes to commit). More accurately, I have no problem with changed and added files, BUT files that I manually deleted from the file system (working copy) do not appear in the CommitItem list... and those changes cannot be committed to the SVN server. If I delete a file using the API, then the problem does not exist... but with manual deletion it does. Has anyone had a similar problem?

  • Different behavior of functors (copies, assignments) in VS2010 (compared with VS2005)

    - by Patrick
    When moving from VS2005 to VS2010 we noticed a performance decrease, which seemed to be caused by additional copies of a functor. The following code illustrates the problem. It is essential to have a map where the value itself is a set. On both the map and the set we defined a comparison functor (which is templated in the example).

        #include <iostream>
        #include <map>
        #include <set>

        class A
        {
        public:
            A(int i, char c) : m_i(i), m_c(c)
            { std::cout << "Construct object " << m_c << m_i << std::endl; }
            A(const A &a) : m_i(a.m_i), m_c(a.m_c)
            { std::cout << "Copy object " << m_c << m_i << std::endl; }
            ~A()
            { std::cout << "Destruct object " << m_c << m_i << std::endl; }
            void operator= (const A &a)
            {
                m_i = a.m_i;
                m_c = a.m_c;
                std::cout << "Assign object " << m_c << m_i << std::endl;
            }
            int m_i;
            char m_c;
        };

        class B : public A
        {
        public:
            B(int i) : A(i, 'B') { }
            static const char s_c = 'B';
        };

        class C : public A
        {
        public:
            C(int i) : A(i, 'C') { }
            static const char s_c = 'C';
        };

        template <class X>
        class compareA
        {
        public:
            compareA() : m_i(999)
            { std::cout << "Construct functor " << X::s_c << m_i << std::endl; }
            compareA(const compareA &a) : m_i(a.m_i)
            { std::cout << "Copy functor " << X::s_c << m_i << std::endl; }
            ~compareA()
            { std::cout << "Destruct functor " << X::s_c << m_i << std::endl; }
            void operator= (const compareA &a)
            {
                m_i = a.m_i;
                std::cout << "Assign functor " << X::s_c << m_i << std::endl;
            }
            bool operator() (const X &x1, const X &x2) const
            {
                std::cout << "Comparing object " << x1.m_i << " with " << x2.m_i << std::endl;
                return x1.m_i < x2.m_i;
            }
        private:
            int m_i;
        };

        typedef std::set<C, compareA<C> > SetTest;
        typedef std::map<B, SetTest, compareA<B> > MapTest;

        int main()
        {
            int i = 0;
            std::cout << "--- " << i++ << std::endl;
            MapTest mapTest;
            std::cout << "--- " << i++ << std::endl;
            SetTest &setTest = mapTest[0];
            std::cout << "--- " << i++ << std::endl;
        }

    If I compile this code with VS2005, I get the following output:

        --- 0
        Construct functor B999
        Copy functor B999
        Copy functor B999
        Destruct functor B999
        Destruct functor B999
        --- 1
        Construct object B0
        Construct functor C999
        Copy functor C999
        Copy functor C999
        Destruct functor C999
        Destruct functor C999
        Copy object B0
        Copy functor C999
        Copy functor C999
        Copy functor C999
        Destruct functor C999
        Destruct functor C999
        Copy object B0
        Copy functor C999
        Copy functor C999
        Copy functor C999
        Destruct functor C999
        Destruct functor C999
        Destruct functor C999
        Destruct object B0
        Destruct functor C999
        Destruct object B0
        --- 2

    If I compile this with VS2010, I get the following output:

        --- 0
        Construct functor B999
        Copy functor B999
        Copy functor B999
        Destruct functor B999
        Destruct functor B999
        --- 1
        Construct object B0
        Construct functor C999
        Copy functor C999
        Copy functor C999
        Destruct functor C999
        Destruct functor C999
        Copy object B0
        Copy functor C999
        Copy functor C999
        Copy functor C999
        Destruct functor C999
        Destruct functor C999
        Copy functor C999
        Assign functor C999
        Assign functor C999
        Destruct functor C999
        Copy object B0
        Copy functor C999
        Copy functor C999
        Copy functor C999
        Destruct functor C999
        Destruct functor C999
        Copy functor C999
        Assign functor C999
        Assign functor C999
        Destruct functor C999
        Destruct functor C999
        Destruct object B0
        Destruct functor C999
        Destruct object B0
        --- 2

    The output for the first statement (constructing the map) is identical.
    The output for the second statement (creating the first element in the map and getting a reference to it) is much bigger in the VS2010 case:

    - Copy constructor of functor: 10 times vs. 8 times
    - Assignment of functor: 2 times vs. 0 times
    - Destructor of functor: 10 times vs. 8 times

    My questions are: Why does the STL copy a functor? Isn't it enough to construct it once for every instantiation of the set? Why is the functor constructed more in the VS2010 case than in the VS2005 case? (I didn't check VS2008.) And why is it assigned two times in VS2010 and not in VS2005? Are there any tricks to avoid the copy of functors? I saw a similar question at http://stackoverflow.com/questions/2216041/prevent-unnecessary-copies-of-c-functor-objects but I'm not sure that's the same question. Thanks in advance, Patrick

  • Easy way to convert and copy a solution to VS 2010

    - by TheSean
    We have a VS2005 solution and I want to convert it to VS2010. I figured an easy way to keep the old solution around (for other developers) is to create new sln and proj files specific to 2010. I hoped that the conversion wizard would do this easily but it doesn't seem to. Anyone know an easy way to copy all .sln and .csproj files, then convert to VS2010?

  • Efficient copy of entire directory

    - by Hui Jin
    I want to copy one directory and the two files under it to another shared location on shared storage. Is it possible to combine the three (one directory and two files) into one continuous file write and decompose it on the other side, to save the cost? I am limited to the C language and Unix/Linux. I am considering creating a structure with the inode info and reading the data back at the receiver. Thanks!
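
    What is being described is essentially an archive format: pack the directory and its files into one sequential stream, copy that single stream, and unpack it on the other side. The asker is limited to C (where libtar or a hand-rolled header-plus-data layout would play the same role), but the round trip is easy to illustrate with Python's tarfile module (the directory and destination names are hypothetical):

        import tarfile

        # Pack: write the directory and the files under it as one continuous stream
        with tarfile.open("bundle.tar", "w") as tar:
            tar.add("mydir")

        # ...copy bundle.tar to the shared storage as a single sequential write...

        # Unpack: decompose the stream back into the directory and its files
        with tarfile.open("bundle.tar", "r") as tar:
            tar.extractall("/shared/destination")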

  • Download link for trial/evaluation copy of CA Siteminder

    - by velusbits
    Is there a trial/evaluation version available of CA Siteminder? What is CA SiteMinder? It is a centralized Internet access management system that enables user authentication and single sign-on, authentication management, policy-based authorization, identity federation and auditing of access to Web applications and portals. Where would I go for that link if one exists?


  • Copy Content from Sharepoint2007 to another Sharepoint2007

    - by Beuy
    Hi there, I have a SharePoint 2007 web farm installation with two site collections: one is a blank site and the other is a migrated SharePoint 2003 site collection. I want to move some specific content from the migrated 2003 collection to the 2007 collection; however, I also want to change the path where it lives. For example, in the migrated 2003 collection HR is under Admin (Admin/HR), whereas in the 2007 collection I want HR to appear before Admin. I've looked around but haven't found a lot of information on how to move specific content between sites; any advice or help is greatly appreciated.
