Search Results

Search found 4984 results on 200 pages for 'robots txt'.

Page 21 of 200

  • How to group files by date using PowerShell?

    - by Shane Cusson
    I have a folder with 2000+ files. I want to count the files by date, so with:

        Mode    LastWriteTime          Length    Name
        ----    -------------          ------    ----
        -a---   2010-03-15 12:54 AM    10364953  file1.txt
        -a---   2010-03-15  1:07 AM    10650503  file2.txt
        -a---   2010-03-16  1:20 AM    10118657  file3.txt
        -a---   2010-03-16  1:33 AM     9735542  file4.txt
        -a---   2010-03-18  1:46 AM    10666979  file5.txt

    I'd like to see:

        Date        Count
        ----------  -----
        2010-03-15      2
        2010-03-16      2
        2010-03-18      1

    Thanks!
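
    One way to get that summary (a sketch, not from the original question) is to group the files on the date portion of LastWriteTime:

        Get-ChildItem C:\Folder |
            Where-Object { -not $_.PSIsContainer } |
            Group-Object { $_.LastWriteTime.ToString("yyyy-MM-dd") } |
            Sort-Object Name |
            Select-Object @{Name="Date"; Expression={$_.Name}}, Count

    Group-Object does the counting; the calculated property simply renames the group key to Date.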

    Read the article

  • Using a dictionary file with sed

    - by Winston
    I have a blacklist.txt file that contains keywords I want to remove using sed. Here's what the blacklist.txt file contains:

        winston@linux ] $ cat blacklist.txt
        obscure
        keywords
        here
        ...

    And here's what I have so far, but it currently doesn't work:

        blacklist=$(cat blacklist.txt);
        output="filtered_file.txt"
        for i in $blacklist; do
            cat $input | sed 's/$i//g' >> $output
        done
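
    Two things stand out (a sketch of a fix, not from the original question): single quotes keep $i from being expanded inside the sed expression, and appending to $output on every iteration re-adds the whole input each time. Assuming $input names the file to filter and the keywords contain no sed metacharacters:

        output="filtered_file.txt"
        cp "$input" "$output"
        while read -r word; do
            # double quotes so $word is expanded; edit the output file in place
            sed -i "s/$word//g" "$output"
        done < blacklist.txt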

    Read the article

  • Strange git case...

    - by khelll
    I have a file, let's say file.txt. I did git mv file.txt file1.txt, then I created a new file called file.txt and worked on it; unfortunately I didn't add that file to git yet. Anyway, the problem is that I did git stash, then git stash apply, but the new file.txt disappeared... Is there any way to get it back?

    Read the article

  • How to combine ASCII text files, then encrypt, then decrypt, and put into a 'File' Class? C++

    - by Joel
    For example, if I have three ASCII files:

        file1.txt
        file2.txt
        file3.txt

    ...and I wanted to combine them into one encrypted file: database.txt. Then in the application I would decrypt the database.txt and put each of the original files into a 'File' class on the heap:

        class File{
        public:
            string getContents();
            void setContents(string data);
        private:
            string m_data;
        };

    Is there some way to do this? Thanks
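
    A minimal sketch of the packing side (helper names like packFiles are made up here, and the encryption step is deliberately left as a placeholder; a real application should use an established crypto library rather than anything hand-rolled):

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>
        using namespace std;

        // Read a whole file into a string.
        string slurp(const string& name) {
            ifstream in(name.c_str(), ios::binary);
            stringstream ss;
            ss << in.rdbuf();
            return ss.str();
        }

        // Concatenate files into one blob, prefixing each with its length
        // so the pieces can be split apart again after decryption.
        string packFiles(const vector<string>& names) {
            ostringstream out;
            for (size_t i = 0; i < names.size(); ++i) {
                string data = slurp(names[i]);
                out << data.size() << '\n' << data;
            }
            return out.str();
        }

        int main() {
            vector<string> names;
            names.push_back("file1.txt");
            names.push_back("file2.txt");
            names.push_back("file3.txt");
            // Encrypt packFiles(names) here before writing it out.
            ofstream out("database.txt", ios::binary);
            out << packFiles(names);
            return 0;
        }

    Reading database.txt back is the reverse: decrypt the blob, then repeatedly read a length line and that many bytes, handing each chunk to a File object via setContents.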

    Read the article

  • File Output in Powershell without Extension

    - by CaptHowdy
    Here is what I have so far:

        Get-ChildItem "C:\Folder" | Foreach-Object {$_.Name} > C:\Folder\File.txt

    When you open the output from above, File.txt, you see this:

        file1.txt
        file2.mpg
        file3.avi
        file4.txt

    How do I get the output so it drops the extension and only shows this:

        file1
        file2
        file3
        file4

    Thanks in advance!

    EDIT
    Figured it out with the help of the fellows below me. I ended up using:

        Get-ChildItem "C:\Folder" | Foreach-Object {$_.BaseName} > C:\Folder\File.txt

    Read the article

  • Parse contents of directory to bash command in a script

    - by ECII
    I have the directory ~/fooscripts/ and inside there are foo1.txt, foo2.txt, etc. I have a command that takes the file foo1.txt as input and does some calculation. The output location etc. is handled internally in fooprog:

        fooprog -user-data=foo1.txt

    I would like to automate the whole thing in a bash script so that the script will parse all txt files in ~/fooscripts/ sequentially. I am a newbie in bash. Could anyone give me a hint?
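
    A minimal sketch (assuming fooprog takes each file the same way the example shows):

        #!/bin/bash
        for f in ~/fooscripts/*.txt; do
            fooprog -user-data="$f"
        done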

    Read the article

  • Tagging the differences in both files

    - by Hakim
    I know that comparing 2 files is a typical problem and there are many discussions on it, but I have a rather different problem while working with text files: I have two text files which may differ in the number of lines. Now I want to compare the two files and find the lines which differ. After that I want to tag all the differences in both files. For example, here are the contents of my files:

    File1.txt:

        This is the first line.
        This line is just appeared in File1.txt.
        you can see this line in both files.
        this line is also appeared in both files.
        this line and,
        this one are mereged in File2.txt.

    File2.txt:

        This is the first line.
        you can see this line in both files.
        this line is also appeared in both files.
        this line and, this one are mereged in File2.txt.

    After processing I want both files to be like this:

    File1.txt:

        This is the first line.
        <Diff>This line is just appeared in File1.txt.</Diff>
        you can see this line in both files.
        this line is also appeared in both files.
        <Diff>this line and,</Diff>
        <Diff>this one are merged in File2.txt.</Diff>

    File2.txt:

        This is the first line.
        <Diff></Diff>
        you can see this line in both files.
        this line is also appeared in both files.
        <Diff>this line and, this one are mereged in File2.txt.</Diff>
        <Diff></Diff>

    How can I do this? I know that tools such as diff could help me, but how can I convert their results into this format? Thank you in advance.
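
    One possible starting point (a sketch, not from the original question; it does not emit the empty <Diff></Diff> placeholders, but it shows the general shape) is to drive the tagging from a line-level diff, for example with Python's difflib:

        import difflib

        def tag_diffs(lines_a, lines_b):
            """Return lines_a with the lines missing from lines_b wrapped in <Diff> tags."""
            tagged = []
            for line in difflib.ndiff(lines_a, lines_b):
                if line.startswith("  "):            # line common to both files
                    tagged.append(line[2:])
                elif line.startswith("- "):          # line only in the first file
                    tagged.append("<Diff>" + line[2:] + "</Diff>")
            return tagged

        a = open("File1.txt").read().splitlines()
        b = open("File2.txt").read().splitlines()
        open("File1.tagged.txt", "w").write("\n".join(tag_diffs(a, b)))
        open("File2.tagged.txt", "w").write("\n".join(tag_diffs(b, a)))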

    Read the article

  • Regular expression to match text that doesn't start with substring?

    - by Steven
    I have text with file names scattered throughout. The filenames appear in the text like this:

        |test.txt|
        |usr01.txt|
        |usr02.txt|
        |foo.txt|

    I want to match the filenames that don't start with usr. I came up with (?<=\|).*\.txt(?=\|) to match the filenames, but it doesn't exclude the ones starting with usr. Is this possible with regular expressions?
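
    A negative lookahead placed right after the opening pipe is one way to do it (a sketch, shown here with Python's re module; the [^|]* also keeps a match from running across delimiters):

        import re

        text = "|test.txt| |usr01.txt| |usr02.txt| |foo.txt|"
        matches = re.findall(r"(?<=\|)(?!usr)[^|]*\.txt(?=\|)", text)
        print(matches)   # ['test.txt', 'foo.txt']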

    Read the article

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here’s a nice and simple path utility that I’ve needed in a number of applications: I need to find a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt, I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words, always return a non-hardcoded path based on some other known directory. I’ve had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method:

        /// <summary>
        /// Returns a relative path string from a full path based on a base path
        /// provided.
        /// </summary>
        /// <param name="fullPath">The path to convert. Can be either a file or a directory</param>
        /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
        /// <returns>
        /// String of the relative path.
        ///
        /// Examples of returned values:
        ///  test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
        /// </returns>
        public static string GetRelativePath(string fullPath, string basePath )
        {
            // ForceBasePath to a path
            if (!basePath.EndsWith("\\"))
                basePath += "\\";

            Uri baseUri = new Uri(basePath);
            Uri fullUri = new Uri(fullPath);

            Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

            // Uri's use forward slashes so convert back to backward slashes
            return relativeUri.ToString().Replace("/", "\\");
        }

    You can then call it like this:

        string relPath = FileUtils.GetRelativePath("c:\temp\templates","c:\temp\templates\subdir\test.txt")

    It’s not exactly rocket science but it’s useful in many scenarios where you’re working with files based on an application base directory. Right now I’m working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for existence of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.

    © Rick Strahl, West Wind Technologies, 2005-2010

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what, initially, appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: Username 'Me', password 'NowEverybodyKnowsMyPassword'."

    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, and the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like:

        Server1,UserName,Password
        Server2,UserName,Password

    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me, I have to do my own marketing, after all) and the solution was finally ready.

    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and the other for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions.

    Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    ...and in the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your Username and Password, all neatly encrypted; in case you're wondering, server names, usernames and passwords are all separated by commas.

    Decryption is actually much more simple:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you decrypt with must be exactly the same as the one used to encrypt.)

    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a checkdb on a system database: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.

    Read the article

  • fatal error C1075: end of file found before the left brace and read and write files not working

    - by user320950
    could someone also tell me if the actions i am attempting to do which are stated in the comments are correct or not. i am new at c++ and i think its correct but i have doubts #include<iostream> #include<fstream> #include<cstdlib> #include<iomanip> using namespace std; int main() { ifstream in_stream; // reads itemlist.txt ofstream out_stream1; // writes in items.txt ifstream in_stream2; // reads pricelist.txt ofstream out_stream3;// writes in plist.txt ifstream in_stream4;// read recipt.txt ofstream out_stream5;// write display.txt float price=' ',curr_total=0.0; int wrong=0; int itemnum=' '; char next; in_stream.open("ITEMLIST.txt", ios::in); // list of avaliable items if( in_stream.fail() )// check to see if itemlist.txt is open { wrong++; cout << " the error occured here0, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n" << endl; exit(1); } else{ cout << " System ran correctly " << endl; out_stream1.open("listWititems.txt", ios::out); // list of avaliable items if(out_stream1.fail() )// check to see if itemlist.txt is open { wrong++; cout << " the error occured here1, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit(1); } else{ cout << " System ran correctly " << endl; } in_stream2.open("PRICELIST.txt", ios::in); if( in_stream2.fail() ) { wrong++; cout << " the error occured here2, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } out_stream3.open("listWitdollars.txt", ios::out); if(out_stream3.fail() ) { wrong++; cout << " the error occured here3, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } in_stream4.open("display.txt", ios::in); if( in_stream4.fail() ) { wrong++; cout << " the error occured here4, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } out_stream5.open("showitems.txt", ios::out); if( out_stream5.fail() ) { wrong++; cout << " the error occured here5, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } in_stream.close(); // closing files. 
out_stream1.close(); in_stream2.close(); out_stream3.close(); in_stream4.close(); out_stream5.close(); system("pause"); in_stream.setf(ios::fixed); while(in_stream.eof()) { in_stream >> itemnum; cin.clear(); cin >> next; } out_stream1.setf(ios::fixed); while (out_stream1.eof()) { out_stream1 << itemnum; cin.clear(); cin >> next; } in_stream2.setf(ios::fixed); in_stream2.setf(ios::showpoint); in_stream2.precision(2); while((price== (price*1.00)) && (itemnum == (itemnum*1))) { while (in_stream2 >> itemnum >> price) // gets itemnum and price { while (in_stream2.eof()) // reads file to end of file { in_stream2 >> itemnum; in_stream2 >> price; price++; curr_total= price++; in_stream2 >> curr_total; cin.clear(); // allows more reading cin >> next; } } } out_stream3.setf(ios::fixed); out_stream3.setf(ios::showpoint); out_stream3.precision(2); while((price== (price*1.00)) && (itemnum == (itemnum*1))) { while (out_stream3 << itemnum << price) { while (out_stream3.eof()) // reads file to end of file { out_stream3 << itemnum; out_stream3 << price; price++; curr_total= price++; out_stream3 << curr_total; cin.clear(); // allows more reading cin >> next; } return itemnum, price; } } in_stream4.setf(ios::fixed); in_stream4.setf(ios::showpoint); in_stream4.precision(2); while ( in_stream4.eof()) { in_stream4 >> itemnum >> price >> curr_total; cin.clear(); cin >> next; } out_stream5.setf(ios::fixed); out_stream5.setf(ios::showpoint); out_stream5.precision(2); out_stream5 <<setw(5)<< " itemnum " <<setw(5)<<" price "<<setw(5)<<" curr_total " <<endl; // sends items and prices to receipt.txt out_stream5 << setw(5) << itemnum << setw(5) <<price << setw(5)<< curr_total; // sends items and prices to receipt.txt out_stream5 << " You have a total of " << wrong++ << " errors " << endl; }

    Read the article

  • Create Folders from text file and place dummy file in them using a CustomAction

    - by Birkoff
    I want my msi installer to generate a set of folders in a particular location and put a dummy file in each directory. Currently I have the following CustomActions:

        <CustomAction Id="SMC_SetPathToCmd" Property="Cmd" Value="[SystemFolder]cmd.exe"/>
        <CustomAction Id="SMC_GenerateMovieFolders" Property="Cmd" ExeCommand="for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (echo %a)" />
        <CustomAction Id="SMC_CopyDummyMedia" Property="Cmd" ExeCommand="for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (copy [MBSAMPLECOLLECTIONS]dummy.avi &quot;%a&quot;\&quot;%a&quot;.avi)" />

    These are called in the InstallExecuteSequence:

        <Custom Action="SMC_SetPathToCmd" After="InstallFinalize"/>
        <Custom Action="SMC_GenerateMovieFolders" After="SMC_SetPathToCmd"/>
        <Custom Action="SMC_CopyDummyMedia" After="SMC_GenerateMovieFolders"/>

    The custom actions seem to start, but only a blank command prompt window is shown and the directories are not generated. The files needed for the custom action are copied to the correct directory:

        <Directory Id="WIX_DIR_COMMON_VIDEO">
            <Directory Id="MBSAMPLECOLLECTIONS" Name="MB Sample Collections" />
        </Directory>
        <DirectoryRef Id="MBSAMPLECOLLECTIONS">
            <Component Id="SampleCollections" Guid="C481566D-4CA8-4b10-B08D-EF29ACDC10B5" DiskId="1">
                <File Id="movies.txt" Name="movies.txt" Source="SampleCollections\movies.txt" Checksum="no" />
                <File Id="series.txt" Name="series.txt" Source="SampleCollections\series.txt" Checksum="no" />
                <File Id="dummy.avi" Name="dummy.avi" Source="SampleCollections\dummy.avi" Checksum="no" />
            </Component>
        </DirectoryRef>

    What's wrong with these Custom Actions, or is there a simpler way to do this?
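
    One likely culprit (an educated guess, not a confirmed fix from the original question) is that cmd.exe only executes the command passed to it when given the /c switch; started without it, it just opens an empty prompt, which matches the blank window described above. Note also that echo %a only prints each line, so creating the folders would need md. A sketch of an adjusted action (the target location under [MBSAMPLECOLLECTIONS] is an assumption):

        <CustomAction Id="SMC_GenerateMovieFolders" Property="Cmd"
                      ExeCommand="/c for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (md &quot;[MBSAMPLECOLLECTIONS]%a&quot;)" />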

    Read the article

  • Python3 and ftplib uploading files

    - by Teifion
    My python2 script uploads files nicely using this method but python3 is presenting problems and I'm stuck as to where to go next (googling hasn't helped).

        from ftplib import FTP
        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        ftp.storbinary('STOR myfile.txt', open('myfile.txt'))

    The error I get is:

        Traceback (most recent call last):
          File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
            ftp.storlines('STOR myfile.txt', open('myfile.txt'))
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 454, in storbinary
            conn.sendall(buf)
        TypeError: must be bytes or buffer, not str

    I tried altering the code to:

        from ftplib import FTP
        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))

    But instead I got this:

        Traceback (most recent call last):
          File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
            ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 450, in storbinary
            conn = self.transfercmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 358, in transfercmd
            return self.ntransfercmd(cmd, rest)[0]
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 329, in ntransfercmd
            resp = self.sendcmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 244, in sendcmd
            self.putcmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 179, in putcmd
            self.putline(line)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 172, in putline
            line = line + CRLF
        TypeError: can't concat bytes to str

    Can anybody point me in the right direction?
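
    The usual fix (a sketch based on how ftplib behaves on Python 3, using the same connection details as above) is to keep the command string as str and open the local file in binary mode, because storbinary reads bytes from the file object:

        from ftplib import FTP

        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        with open('myfile.txt', 'rb') as f:    # 'rb' so read() returns bytes, not str
            ftp.storbinary('STOR myfile.txt', f)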

    Read the article

  • Using diff and patch to force one local code base to look like another

    - by Dave Aaron Smith
    I've noticed this strange behavior of diff and patch when I've used them to force one code base to be identical to another. Let's say I want to update update_me to look identical to leave_unchanged. I go to update_me. I run a diff from leave_unchanged to update_me. Then I patch the diff into update_me. If there are new files in leave_unchanged, patch asks me if my patch was reversed! If I answer yes, it deletes the new files in leave_unchanged. Then, if I simply re-run the patch, it correctly patches update_me. Why does patch try to modify both leave_unchanged and update_me? What's the proper way to do this? I found a hacky way which is to replace all +++ lines with nonsense paths so patch can't find leave_unchanged. Then it works fine. It's such an ugly solution though.

        $ mkdir copyfrom
        $ mkdir copyto
        $ echo "Hello world" > copyfrom/myFile.txt
        $ cd copyto
        $ diff -Naur . ../copyfrom > my.diff
        $ less my.diff
        diff -Naur ./myFile.txt ../copyfrom/myFile.txt
        --- ./myFile.txt        1969-12-31 19:00:00.000000000 -0500
        +++ ../copyfrom/myFile.txt      2010-03-15 17:21:22.000000000 -0400
        @@ -0,0 +1 @@
        +Hello world
        $ patch -p0 < my.diff
        The next patch would create the file ../copyfrom/myFile.txt, which already exists!  Assume -R? [n] yes
        patching file ../copyfrom/myFile.txt
        $ patch -p0 < my.diff
        patching file ./myFile.txt
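
    A common way around this (a sketch, not from the original question) is to generate the diff from a directory that is a parent of both trees and strip the first path component when applying, so patch only ever resolves paths inside the tree being updated:

        $ cd /parent/of/both
        $ diff -Naur update_me leave_unchanged > my.diff
        $ cd update_me
        $ patch -p1 < ../my.diff

    With -p1, both the --- and +++ names lose their leading update_me/ or leave_unchanged/ component, so patch never finds (and never touches) the other tree.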

    Read the article

  • Japanese input displays outside of TextField created in ScrollPane

    - by soetheingilynn
    Hello... I'm new to ActionScript 2.0, so please kindly help me. I have created the MainMovieClip and Scrollbar as follows. My problem is that when I input Japanese characters, the characters display at the top corner of the swf until I confirm the input. How can I fix it? If I install Flash Player "flashplayer10_1_rc2_plugin_041910", then the Japanese characters display in the textfield normally... why is that? Please help me. With Flash Player 10.0, I can't input the Japanese characters in the textfield.

        var mcMain:MovieClip = this.createEmptyMovieClip("mcMain", this.getNextHighestDepth());
        scrp.contentPath = "scrollMovieClip";
        mcMain = scrp.content;
        var textholder:TextField = mcMain.createTextField("txt", mcMain.getNextHighestDepth(), 50, 50, 100, 50);
        mcMain.txt.setFocus();
        mcMain.txt.type = "input";
        mcMain.txt.wordWrap = true;
        mcMain.txt.multiline = true;
        mcMain.txt.background = true;
        mcMain.txt.border = true;
        mcMain.txt.selectable = true;

    Thanks in advance... anyone.

    Read the article

  • How to read.table with "Hebrew" column names (in R)?

    - by Tal Galili
    Hi all,
    I am trying to read a .txt file with Hebrew column names, but without success. I uploaded an example file to http://www.talgalili.com/files/aa.txt and am trying the command:

        read.table("http://www.talgalili.com/files/aa.txt", header = T, sep = "\t")

    This returns me with:

        X.....ª X...ª...... X...œ....
        1      12          97         6
        2     123         354        44
        3       6           1         3

    Instead of:

        ??? ????? ????
         12    97    6
        123   354   44
          6     1    3

    My output for:

        l10n_info()

    Is:

        $MBCS
        [1] FALSE
        $`UTF-8`
        [1] FALSE
        $`Latin-1`
        [1] TRUE
        $codepage
        [1] 1252

    And for:

        Sys.getlocale()

    Is:

        [1] "LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;LC_MONETARY=English_United States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252"

    Can you suggest to me what to try and change to allow me to load the file correctly?

    Update: Trying to use:

        read.table("http://www.talgalili.com/files/aa.txt", fileEncoding = "iso8859-8")

    Has resulted in:

        V1
        1  ?
        Warning messages:
        1: In read.table("http://www.talgalili.com/files/aa.txt", fileEncoding = "iso8859-8") :
          invalid input found on input connection 'http://www.talgalili.com/files/aa.txt'
        2: In read.table("http://www.talgalili.com/files/aa.txt", fileEncoding = "iso8859-8") :
          incomplete final line found by readTableHeader on 'http://www.talgalili.com/files/aa.txt'

    While also trying this:

        Sys.setlocale("LC_ALL", "en_US.UTF-8")

    Or this:

        Sys.setlocale("LC_ALL", "en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8")

    Gets me this:

        [1] ""
        Warning message:
        In Sys.setlocale("LC_ALL", "en_US.UTF-8") :
          OS reports request to set locale to "en_US.UTF-8" cannot be honored

    Any suggestion or clarification will be appreciated.
    Best,
    Tal
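
    If the file really is saved as UTF-8, one thing worth trying (a sketch, not a guaranteed fix on a Windows-1252 locale) is to declare that encoding explicitly when reading:

        dat <- read.table("http://www.talgalili.com/files/aa.txt",
                          header = TRUE, sep = "\t", fileEncoding = "UTF-8")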

    Read the article

  • rpm file conflict after alien conversion

    - by Zitrax
    I have a program for which I generate a .deb file. The .deb file works fine on the systems I have tried it on (also tested with lintian). Previously it has worked to use alien to convert this to .rpm and install it on Suse. However, it is now about a year since I tried it the last time, and now I get an error when trying to install the alien-made rpm on Fedora 11:

        file /usr/share/icons/default.kde from install of testpkg-0.2-2.i386 conflicts with file from package kdelibs3-3.5.10-13.fc11.1.i586

    Listing the content of the rpm file:

        $ rpm -qlp testpkg-0.2-2.i386.rpm
        /
        /usr
        /usr/games
        /usr/games/testpkg
        /usr/lib
        /usr/lib/libfmod-3.75.so
        /usr/share
        /usr/share/app-install
        /usr/share/app-install/icons
        /usr/share/app-install/icons/testpkg.png
        /usr/share/applications
        /usr/share/applications/testpkg.desktop
        /usr/share/doc
        /usr/share/doc/testpkg
        /usr/share/doc/testpkg/changelog.gz
        /usr/share/doc/testpkg/copyright
        /usr/share/games
        /usr/share/games/testpkg
        /usr/share/games/testpkg/images
        /usr/share/games/testpkg/images/bb.dat
        /usr/share/games/testpkg/images/bb_bg.dat
        /usr/share/games/testpkg/images/bubblemad_8x8.png
        /usr/share/games/testpkg/images/goldfont.png
        /usr/share/games/testpkg/lvl
        /usr/share/games/testpkg/lvl/lvl001.txt
        /usr/share/games/testpkg/lvl/lvl002.txt
        /usr/share/games/testpkg/lvl/lvl003.txt
        /usr/share/games/testpkg/lvl/lvl004.txt
        /usr/share/games/testpkg/lvl/lvl005.txt
        /usr/share/games/testpkg/lvl/lvl006.txt
        /usr/share/games/testpkg/lvl/lvl007.txt
        /usr/share/games/testpkg/music
        /usr/share/games/testpkg/music/alfa.it
        /usr/share/games/testpkg/music/beta.it
        /usr/share/games/testpkg/sounds
        /usr/share/games/testpkg/sounds/bounce.wav
        /usr/share/games/testpkg/sounds/click.wav
        /usr/share/games/testpkg/sounds/warning.wav
        /usr/share/icons
        /usr/share/icons/default.kde
        /usr/share/icons/default.kde/16x16
        /usr/share/icons/default.kde/16x16/apps
        /usr/share/icons/default.kde/16x16/apps/testpkg.png
        /usr/share/man
        /usr/share/man/man6
        /usr/share/man/man6/testpkg.6.gz

    Am I wrong in putting the kde icons in /usr/share/icons/default.kde, which seems to be a symbolic link? It's a symbolic link on both Kubuntu 9.10 and Fedora 11 though. It sounds like a common situation that the same directory is needed by different packages, so why is it a conflict?

    Read the article

  • Unzip individual files from multiple zip files and extract those individual files to home directory

    - by James P.
    I would like to unzip individual files. These files have a .txt extension. These files also live within multiple zipped files. Here is the command I'm trying to use:

        unzip -jn /path/to/zipped/files/zipArchiveFile2011\*.zip /path/to/specific/individual/files/myfiles2011*.txt -d /path/to/home/directory/for/extract/

    From my understanding, the -j option excludes directories and will extract only the txt files. The -n option will not overwrite a file if it has already been extracted. I've also learned that the backslash in /path/to/zipped/files/zipArchiveFile2011\*.zip is necessary to escape the wildcard (*) character. Here is a sample of the errors I'm coming across:

        Archive:  /path/to/zipped/files/zipArchiveFile20110808.zip
        caution: filename not matched:  /path/to/specific/individual/files/myfiles20110807.txt
        caution: filename not matched:  /path/to/specific/individual/files/myfiles20110808.txt
        Archive:  /path/to/zipped/files/zipArchiveFile20110809.zip
        caution: filename not matched:  /path/to/specific/individual/files/myfiles20110810.txt
        caution: filename not matched:  /path/to/specific/individual/files/myfiles20110809.txt

    I feel that I'm missing something very simple. I've tried using single quotes (') and double quotes (") around directory paths. But no luck.
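
    One likely explanation (a guess based on how unzip matches member names, not from the original question) is that the second argument is matched against the paths stored inside each archive rather than filesystem paths, and that the unquoted *.txt pattern gets expanded by the shell before unzip sees it. A sketch:

        # check how the names are actually stored inside one archive
        unzip -l /path/to/zipped/files/zipArchiveFile20110808.zip

        # quote both patterns so unzip does the matching itself
        unzip -jn "/path/to/zipped/files/zipArchiveFile2011*.zip" "*myfiles2011*.txt" -d /path/to/home/directory/for/extract/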

    Read the article

  • What do I need in order to extract and combine text files from multiple ZIP files, via command line?

    - by Iszi
    I've got an interesting scripting challenge in front of me. I'm fairly certain there's a way to do it, but I feel like I'm probably lacking some particular tools and/or functional knowledge. There are some fifty-plus ZIP files that each contain, among other things, text files that need to be merged with one another. The structure is something like this:

        C:\Reports\FirstJob-1.zip
          |-MyName
            |-FirstJob
              |-1
                |-[Some other folders]
                |-TXTReports
                  |-English
                    |-[Some other files]
                    |-Report.txt

        C:\Reports\FirstJob-2.zip
          |-MyName
            |-FirstJob
              |-1
                |-[Some other folders]
                |-TXTReports
                  |-English
                    |-[Some other files]
                    |-Report.txt

        C:\Reports\SecondJob-1.zip
          |-MyName
            |-SecondJob
              |-1
                |-[Some other folders]
                |-TXTReports
                  |-English
                    |-[Some other files]
                    |-Report.txt

    If I had all the Report.txt files in one regular folder, and uniquely named, I could probably just write a FOR statement that targets *.txt and runs something like type filename.txt >> Consolidated.txt on each. However, these all have the same file name and are embedded deep within separate ZIP files. The potentially useful tools I currently have at my disposal are Windows XP Professional SP3, PowerShell, and WinZip. I'd rather not download or install anything else, but I do understand that third-party tools (or additional tools from Microsoft or WinZip) may be necessary. Whatever tools I use should run natively in Windows. I really don't want to have to mess with Cygwin or other emulators on this system. At the very least, I need a tool that will allow me to analyze and manipulate ZIP files from the command line. Also, are there any other particular complications to this that I've not yet thought of?
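
    One route that stays within stock Windows (a sketch, untested against this exact layout; the folder names inside the archives are assumed from the structure above) is to use the Shell.Application COM object from PowerShell to reach inside each ZIP, copy its Report.txt out under a unique name, and then concatenate:

        $shell  = New-Object -ComObject Shell.Application
        $outDir = "C:\Reports\Extracted"
        New-Item -ItemType Directory -Path $outDir -Force | Out-Null

        Get-ChildItem "C:\Reports\*.zip" | ForEach-Object {
            $job   = $_.BaseName -replace '-\d+$', ''      # FirstJob-1.zip -> FirstJob
            $inner = $shell.NameSpace("$($_.FullName)\MyName\$job\1\TXTReports\English")
            if ($inner) {
                $item = $inner.ParseName("Report.txt")
                if ($item) {
                    $shell.NameSpace($outDir).CopyHere($item)
                    Start-Sleep -Seconds 2                 # CopyHere runs asynchronously
                    Rename-Item (Join-Path $outDir "Report.txt") "$($_.BaseName)-Report.txt"
                }
            }
        }

        Get-Content (Join-Path $outDir "*-Report.txt") |
            Set-Content (Join-Path $outDir "Consolidated.txt")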

    Read the article

  • Batch file reads in text file and replaces quotes with variables into new file?

    - by John
    I have customer pizza lists that I have saved into 5 separate txt files from my database in this format:

        Filename = 25Percent.TXT
        "555-1211"
        "555-1212"
        "555-1223"
        ... etc.

    Each list is a phone number in quotes, and each list varies in length.

    Part I: I have two sets of variables that I would like to swap in for the quotes in the 5 text files. The two variables would be like:

        Var A = < Discount Pizza price for phone number is "
        Var B = " 25 %

    So I would like to run a batch file that reads each line in the text file and writes into another text file the following, replacing the quotes with the variables:

        New Filename = 25Percentfinished.TXT
        < Discount Pizza price for phone number is "555-1211" 25 %
        < Discount Pizza price for phone number is "555-1212" 25 %
        < Discount Pizza price for phone number is "555-1223" 25 %

    Then I would repeat for 30percent.txt, 35percent.txt, 40percent.txt, and finally 50percent.txt.

    Part II: I would also like to append the 5 new files together, with the append command? I am assuming that the SET Var command would also be used? Not sure what to do here.
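
    A sketch of Part I as a batch file (file names are taken from the example above; note that inside a .bat/.cmd file percent signs must be doubled and the leading < must be escaped with ^):

        @echo off
        set "outfile=25Percentfinished.txt"
        if exist "%outfile%" del "%outfile%"
        for /f "usebackq delims=" %%p in ("25Percent.txt") do (
            >> "%outfile%" echo ^< Discount Pizza price for phone number is %%p 25 %%
        )

    Since each line of 25Percent.txt already carries its quotes, %%p drops straight into the middle of the sentence. For Part II, once the five finished files exist, something like this appends them into one file:

        copy /b 25Percentfinished.txt+30Percentfinished.txt+35Percentfinished.txt+40Percentfinished.txt+50Percentfinished.txt AllLists.txt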

    Read the article

  • Windows Server wbadmin recover with commas

    - by dlp
    I want to do a recovery of files with commas in their names from the command line, along the lines of:

        wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwite:Overwrite -quiet "-Items:C:\Path\To\File, With Comma.txt,C:\Path\To\File 2, With Comma.txt"

    So there are two files:

        C:\Path\To\File, With Comma.txt
        C:\Path\To\File 2, With Comma.txt

    The problem is wbadmin assumes commas separate each file, so it sees 4 files specified instead of 2. I've tried putting a \ in front of the commas that are part of the file names, like so:

        wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwite:Overwrite -quiet "-Items:C:\Path\To\File\, With Comma.txt,C:\Path\To\File 2\, With Comma.txt"

    but it doesn't work; it just says there's a syntax error. The documentation on Technet doesn't seem to mention anything that'll help either. The OS is Windows Server 2008 R2. A clarifying comment: I've changed the file names to be different than the actual names to be less revealing, but I also see I dumbed it down too much. The comma can occur either in the file name itself, like C:\Path\To\File, With Comma.txt, or in the path to the file, like C:\Path, To\Other\File.txt.

    Read the article

  • Escaping %’s in file-/folder-names at the command-line

    - by Synetech
    Does anybody know of a way to access files and directories that have a % in their name (which is valid) from the command-line? Specifically, if there are two %'s and the text between them happens to correspond to an environment variable. For example, if there is a file called C:\blah\%temp%.txt or a folder called C:\Program Files\%temp%\, none of the following will work because the variable gets expanded:

        > dir "c:\blah\%temp%.txt"
        > dir "c:\blah\^%temp^%.txt"
        > dir "c:\blah\%%temp%%.txt"
        > dir "c:\blah\\%temp\%.txt"
        > dir "c:\program files\%temp%"
        > dir "c:\program files\^%temp^%"
        > dir "c:\program files\%%temp%%"
        > dir "c:\program files\\%temp\%"

    Using wildcards will work, but does not uniquely select the file/folder and may include others:

        > dir "c:\blah\?temp?.txt"        (also shows ztempz.temp, 1tempa.txt, etc.)
        > dir "c:\program files\?temp?"   (likewise)

    (This is frustrating because every now and then, usually when Explorer is restarted for whatever reason, the environment variables stop expanding and some places where they are used end up creating files or directories with the environment variable in it. For example, because I configured Chromium to store its cache in a subdirectory of %temp%, if the variable expands, it is fine, but when it doesn't, Chromium creates a directory called %temp% under its own directory and stores the cache (which can get large) there. I want to add a line to my temp-/junk-file cleaning script to automatically delete that folder if it exists, but I cannot figure out how to access it from the command-line without resorting to wildcards.)
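
    For the cleaning-script case specifically, it may help that the doubled-percent form behaves differently inside a batch file than at the interactive prompt shown above: in a .bat/.cmd script, %% is parsed as a literal percent sign, so a line like the following (a sketch, with a made-up path) targets a directory literally named %temp% without any expansion:

        rd /s /q "C:\Path\To\Chromium\%%temp%%"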

    Read the article

  • Writing the tests for FluentPath

    - by Bertrand Le Roy
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn’t designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that is not an option as nothing in System.IO enables us to plug a mock file system in. As a consequence, we are left with few options. A few people have suggested that I abstract my calls to System.IO away so that I could tell FluentPath – not System.IO – to use a mock instead of the real thing. That in turn is getting a little silly: FluentPath already is a thin abstraction around System.IO, so layering another abstraction between them would double the test surface while bringing little or no value. I would have to test that new abstraction layer, and that would bring us back to square one. Unless I’m missing something, the only option I have here is to bite the bullet and test against the real file system. Of course, the tests that do that can hardly be called unit tests. They are more integration tests as they don’t only test bits of my code. They really test the successful integration of my code with the underlying System.IO. In order to write such tests, the techniques of BDD work particularly well as they enable you to express scenarios in natural language, from which test code is generated. Integration tests are better expressed as scenarios orchestrating a few basic behaviors, so this is a nice fit. The Orchard team has been successfully using SpecFlow for integration tests for a while and I thought it was pretty cool so that’s what I decided to use. Consider for example the following scenario:

        Scenario: Change extension
            Given a clean test directory
            When I change the extension of bar\notes.txt to foo
            Then bar\notes.txt should not exist
            And bar\notes.foo should exist

    This is human readable and tells you everything you need to know about what you’re testing, but it is also executable code. What happens when SpecFlow compiles this scenario is that it executes a bunch of regular expressions that identify the known Given (set-up phases), When (actions) and Then (result assertions) to identify the code to run, which is then translated into calls into the appropriate methods. Nothing magical.

    Here is the code generated by SpecFlow:

        [NUnit.Framework.TestAttribute()]
        [NUnit.Framework.DescriptionAttribute("Change extension")]
        public virtual void ChangeExtension()
        {
            TechTalk.SpecFlow.ScenarioInfo scenarioInfo =
                new TechTalk.SpecFlow.ScenarioInfo("Change extension", ((string[])(null)));
        #line 6
            this.ScenarioSetup(scenarioInfo);
        #line 7
            testRunner.Given("a clean test directory");
        #line 8
            testRunner.When("I change the extension of " + "bar\\notes.txt to foo");
        #line 9
            testRunner.Then("bar\\notes.txt should not exist");
        #line 10
            testRunner.And("bar\\notes.foo should exist");
        #line hidden
            testRunner.CollectScenarioErrors();
        }

    The #line directives are there to give clues to the debugger, because yes, you can put breakpoints into a scenario.

    The way you usually write tests with SpecFlow is that you write the scenario first, let it fail, then write the translation of your Given, When and Then into code if they don’t already exist, which results in running but failing tests, and then you write the code to make your tests pass (you implement the scenario). In the case of FluentPath, I built a simple Given method that builds a simple file hierarchy in a temporary directory that all scenarios are going to work with:

        [Given("a clean test directory")]
        public void GivenACleanDirectory()
        {
            _path = new Path(SystemIO.Path.GetTempPath())
                .CreateSubDirectory("FluentPathSpecs")
                .MakeCurrent();
            _path.GetFileSystemEntries()
                .Delete(true);
            _path.CreateFile("foo.txt", "This is a text file named foo.");
            var bar = _path.CreateSubDirectory("bar");
            bar.CreateFile("baz.txt", "bar baz")
                .SetLastWriteTime(DateTime.Now.AddSeconds(-2));
            bar.CreateFile("notes.txt", "This is a text file containing notes.");
            var barbar = bar.CreateSubDirectory("bar");
            barbar.CreateFile("deep.txt", "Deep thoughts");
            var sub = _path.CreateSubDirectory("sub");
            sub.CreateSubDirectory("subsub");
            sub.CreateFile("baz.txt", "sub baz")
                .SetLastWriteTime(DateTime.Now);
            sub.CreateFile("binary.bin",
                new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0xFF});
        }

    Then, to implement the scenario that you can read above, I had to write the following When:

        [When("I change the extension of (.*) to (.*)")]
        public void WhenIChangeTheExtension(string path, string newExtension)
        {
            var oldPath = Path.Current.Combine(path.Split('\\'));
            oldPath.Move(p => p.ChangeExtension(newExtension));
        }

    As you can see, the When attribute is specifying the regular expression that will enable the SpecFlow engine to recognize what When method to call and also how to map its parameters. For our scenario, “bar\notes.txt” will get mapped to the path parameter, and “foo” to the newExtension parameter. And of course, the code that verifies the assumptions of the scenario:

        [Then("(.*) should exist")]
        public void ThenEntryShouldExist(string path)
        {
            Assert.IsTrue(_path.Combine(path.Split('\\')).Exists);
        }

        [Then("(.*) should not exist")]
        public void ThenEntryShouldNotExist(string path)
        {
            Assert.IsFalse(_path.Combine(path.Split('\\')).Exists);
        }

    These steps should be written with reusability in mind. They are building blocks for your scenarios, not implementation of a specific scenario. Think small and fine-grained. In the case of the above steps, I could reuse each of those steps in other scenarios. Those tests are easy to write and easier to read, which means that they also constitute a form of documentation. Oh, and SpecFlow is just one way to do this.
Rob wrote a long time ago about this sort of thing (but using a different framework) and I highly recommend this post if I somehow managed to pique your interest: http://blog.wekeroad.com/blog/make-bdd-your-bff-2/ And this screencast (Rob always makes excellent screencasts): http://blog.wekeroad.com/mvc-storefront/kona-3/ (click the “Download it here” link)

    Read the article
