Search Results

Search found 28760 results on 1151 pages for 'search folder'.


  • How do I search & replace all occurrences of a string in a ms word doc with python?

    - by Mark
    Hello there, I am pretty stumped at the moment. Based on http://stackoverflow.com/questions/1045628/can-i-use-win32-com-to-replace-text-inside-a-word-document I was able to code a simple template system that generates Word docs out of a template Word doc (in Python). My problem is that text in "Text Fields" is not found that way. Even in Word itself there is no option to search everything - you actually have to choose between "Main Document" and "Text Fields". Being new to the Windows world, I tried to browse the VBA docs for it but found no help (probably because "text field" is a very common term).

        word.Documents.Open(f)
        wdFindContinue = 1
        wdReplaceAll = 2
        find_str = '\{\{(*)\}\}'
        find = word.Selection.Find
        find.Execute(find_str, False, False, True, False, False, \
            True, wdFindContinue, False, False, False)
        while find.Found:
            t = word.Selection.Text.__str__()
            r = process_placeholder(t, answer_data, question_data)
            if type(r) == dict:
                errors.append(r)
            else:
                find.Execute(t, False, True, False, False, False, \
                    True, False, False, r, wdReplaceAll)

    This is the relevant portion of my code. I have been able to work around every other problem by myself so far (hint: if you want to replace strings with more than 256 chars, you have to do it via the clipboard, etc.). I hope someone can help me.
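
    A minimal sketch (not the asker's template system) of one way to reach text that lives outside the main story, assuming the standard Word COM object model: walking doc.StoryRanges visits every story type, including text frames, and NextStoryRange follows linked stories. The file path and placeholder text below are hypothetical.

        import win32com.client

        wdReplaceAll = 2

        word = win32com.client.gencache.EnsureDispatch("Word.Application")
        doc = word.Documents.Open(r"C:\templates\example.doc")   # hypothetical path

        for story in doc.StoryRanges:        # main text, text frames, headers, footnotes, ...
            rng = story
            while rng is not None:
                rng.Find.Execute(FindText="{{placeholder}}",
                                 ReplaceWith="replacement text",
                                 Replace=wdReplaceAll)
                rng = rng.NextStoryRange     # follow linked stories, e.g. chained text boxes

        doc.Save()
        word.Quit()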


  • How do you search a document for a string in c++?

    - by Jeff
    Here's my code so far:

        #include<iostream>
        #include<string>
        #include<fstream>
        using namespace std;

        int main()
        {
            int count = 0;
            string fileName;
            string keyWord;
            string word;
            cout << "Please make sure the document is in the same file as the program, thank you!" << endl
                 << "Please input document name: ";
            getline(cin, fileName);
            cout << endl;
            cout << "Please input the word you'd like to search for: " << endl;
            cin >> keyWord;
            cout << endl;
            ifstream infile(fileName.c_str());
            while(infile.is_open())
            {
                getline(cin, word);
                if(word == keyWord)
                {
                    cout << word << endl;
                    count++;
                }
                if(infile.eof())
                {
                    infile.close();
                }
            }
            cout << count;
        }

    I'm not sure how to advance to the next word; currently this infinite-loops... any recommendations? Also, how do I tell it to print out the line that the word was on? Thanks in advance!


  • Python opening a file and putting list of names on separate lines

    - by Jeremy Borton
    I am trying to write a Python program using Python 3. I have to open a text file and read a list of names, print the list, sort it in alphabetical order and then finally re-print it. There's a little more to it than that, BUT the problem I am having is that I'm supposed to print the list of names with each name on a separate line. Instead of printing each name on a separate line, it prints the list all on one line. How can I fix this?

        def main():
            #create control loop
            keep_going = 'y'
            #Open name file
            name_file = open('names.txt', 'r')
            names = name_file.readlines()
            name_file.close()
            #Open outfile
            outfile = open('sorted_names.txt', 'w')
            index = 0
            while index < len(names):
                names[index] = names[index].rstrip('\n')
                index += 1
            #sort names
            print('original order:', names)
            names.sort()
            print('sorted order:', names)
            #write names to outfile
            for item in names:
                outfile.write(item + '\n')
            #close outfile
            outfile.close()
            #search names
            while keep_going == 'y' or keep_going == 'Y':
                search = input('Enter a name to search: ')
                if search in names:
                    print(search, 'was found in the list.')
                    keep_going = input('Would you like to do another search Y for yes: ')
                else:
                    print(search, 'was not found.')
                    keep_going = input('Would you like to do another search Y for yes: ')

        main()
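
    A minimal sketch of the behaviour being asked about: print() on a list object shows the whole list on one line, so print each element (or join the elements with newlines) instead. The file name follows the question; the contents are whatever is in names.txt.

        # Read the names, strip the trailing newlines, sort, then print one name per line.
        with open('names.txt', 'r') as name_file:
            names = [line.rstrip('\n') for line in name_file]

        names.sort()
        for name in names:
            print(name)                     # one name per line
        # equivalently: print('\n'.join(names))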


  • media.set_xx giving me grief!

    - by Firas
    New guy here. I asked a while back about a sprite recolouring program that I was having difficulty with and got some great responses. Basically, I tried to write a program that recolours the pixels of all the pictures in a given folder from one given colour to another. I believe I have it down, but now the program is telling me that I have an invalid value specified for the red component of my colour (ValueError: Invalid red value specified.), even though it's only being changed from 64 to 56. Any help on the matter would be appreciated! Here's the code, in case I messed up somewhere else; it's in Python:

        import os
        import media
        import sys

        def recolour(old, new, folder):
            old_list = old.split(' ')
            new_list = new.split(' ')
            folder_location = os.path.join('C:\\', 'Users', 'Owner', 'Spriting', folder)
            for filename in os.listdir(folder):
                current_file = media.load_picture(folder_location + '\\' + filename)
                for pix in current_file:
                    if (media.get_red(pix) == int(old_list[0])) and \
                       (media.get_green(pix) == int(old_list[1])) and \
                       (media.get_blue(pix) == int(old_list[2])):
                        media.set_red(pix, new_list[0])
                        media.set_green(pix, new_list[1])
                        media.set_blue(pix, new_list[2])
                media.save(pic)

        if __name__ == '__main__':
            while 1:
                old = str(raw_input('Please insert the original RGB component, separated by a single space: '))
                if old == 'quit':
                    sys.exit(0)
                new = str(raw_input('Please insert the new RGB component, separated by a single space: '))
                if new == 'quit':
                    sys.exit(0)
                folder = str(raw_input('Please insert the name of the folder you wish to modify: '))
                if folder == 'quit':
                    sys.exit(0)
                else:
                    recolour(old, new, folder)
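
    A minimal sketch of the likely fix, assuming the PyGraphics-style media module used in the question: split() returns strings, and the media.set_* functions expect integers, so convert both colour triples before comparing or assigning. The function below is a hypothetical reworking, not the asker's final code.

        import media  # assumed module from the question

        def recolour_picture(path, old, new):
            # "64 64 64" -> [64, 64, 64]
            old_rgb = [int(part) for part in old.split(' ')]
            new_rgb = [int(part) for part in new.split(' ')]
            pic = media.load_picture(path)
            for pix in pic:
                if (media.get_red(pix) == old_rgb[0] and
                        media.get_green(pix) == old_rgb[1] and
                        media.get_blue(pix) == old_rgb[2]):
                    media.set_red(pix, new_rgb[0])
                    media.set_green(pix, new_rgb[1])
                    media.set_blue(pix, new_rgb[2])
            media.save(pic)  # save the picture object that was actually loaded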


  • Web spidering/crawling: can I do it, or just search engines?

    - by bboyreason
    I already had a question answered about web scraping with wget, but as I read a little more, I realize I may be looking for a web-crawling program - particularly because web crawlers can pull out specific data like links or, in my case, products. All of the products on my site follow the naming convention website.com/uniqueAlphaNumericID.html. As far as I know, no dynamic content generation is being used, and there is exactly one page per item in the above format. Should I just be thinking about something like:

        wget website.com | grep *.html

    or should I be looking into spiders/crawlers?
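
    As a sketch of the single-page case (hypothetical URL, Python 3): fetch the index page once and pull out every link ending in .html, which matches the uniqueAlphaNumericID.html convention described above. Anything that has to follow links across many pages is better left to a real crawler (wget -r, Scrapy, and the like).

        import re
        import urllib.request

        html = urllib.request.urlopen("http://website.com/").read().decode("utf-8", "ignore")

        # Grab href values that point at .html pages, e.g. uniqueAlphaNumericID.html
        product_links = re.findall(r'href="([^"]*\.html)"', html)

        for link in product_links:
            print(link)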


  • Very slow context menu in Windows 8

    - by burzum
    I've installed Windows 8 Pro on a blank new SSD; the system is on C:\. I don't think this problem existed when I started using Windows 8; it seems to have started after I symlinked (mklink /D) a folder from another drive, a SATA drive, to C:\xampp\htdocs. When I right-click a file or folder inside the symlinked folder, it always takes at least ~5-10 seconds until the context menu comes up. This also happens sometimes, but not all the time, for files and folders outside of the symlinked folder. Also, when I delete folders, the delete dialog seems to get stuck and does not continue, while deleting the same folder with rmdir from the command line works fine and is pretty fast. It appears to me that the file explorer in Windows 8 is pretty bad compared to any other Windows I've used before. Any idea how to get these problems solved? I've already removed a lot of context menu entries; the only ones left are the TortoiseGit context menus, but I'm sure that's not the problem.


  • ASP FileSystemObject

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <%
        option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize
        %>
        <%
        sRoot = "D:\Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "" & sDir & "" & vbCRLF

        Set objFSO = CreateObject("Scripting.FileSystemObject")

        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0

        sParent = objFSO.GetParentFolderName(objFolder.Path)

        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder,
        ' e.g. if the parent folder is "c:\webfiles\subfolder1\subfolder2" then we just want "\subfolder1\subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)

        Response.Write ""

        ' Give a link to the parent folder. This is just a link to this page, only passing in
        ' the new folder as a parameter
        Response.Write "Parent folder" & vbCRLF

        ' Now we want to loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "" & objSubFolder.Name & "" & vbCRLF
        Next

        ' Now we want to loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp passing in the directory and the file
            ' as parameters
            Response.Write "" & objFile.Name & "" & sSize & "" & objFile.Type & "" & vbCRLF
        Next

        Response.Write ""
        %>

    It works fine, but when I try to access something on a shared path like "\cvrdd0110:share" it gives an error. How do I access those files?


  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all, I have been searching for a good website hosting company that can offer me all the services that I need for hosting my PHP & MySQL based website. This is a community-based website and users will be able to upload pictures, etc. The hosting company that I have in mind currently lets me do everything: use mail(), run CRON jobs, and so on. Of course, they are charging about $6/month. The only problem with this company is that they have a limit of 50,000 files that can exist within the hosting account at any time. This kind of contradicts the "UNLIMITED SPACE" ad on the front page of their website. Apart from this, I know of no other reason why I should not go with this hosting company. But the 50,000-file limit is something I cannot live with once the number of users increases significantly and the files they upload exceed 50,000.

    Now, since this is a dynamic website and also involves sensitive matters like payments, I am not sure if I should go ahead with this company just because I am starting out, and then later switch over to a better hosting company that does not limit me to 50,000 files. If I need to switch over after hosting with this company, I will need to take backups of all the files located in my account (jpg, zip, etc.) and then upload them to the new host. I am not aware of any tools that can help me in this process. Can you please mention any that you know of?

    I could go ahead with the other companies right now, but their cost is double or triple the current price and they all offer fewer features than my current choice. If I pay more, they are ready to accommodate my higher demands. Unfortunately, the company that I am willing to go with now does NOT have any higher or better plans that I can switch to, so that's the really, really bad part.

    So my question(s): Since I am starting out with my website and since the scope of users is initially going to be small, should I go ahead with the current choice and then, once demand increases, switch over to a better provider? If yes, how can I transfer my database, and especially the jpg files, to the new provider? I don't even know the tools required to back up and restore to another host. Or (I don't like this idea, but still) should I go ahead and pay more right now and go with a better provider (without knowing if the website is going to do that well), just to save myself the trouble of backing up the 50,000 files, uploading them from the old host to the new host, and paying double or triple the price without even knowing if I would get the returns I expected?

    Backing up and restoring files in such bulky numbers is something that I have never done before, and hence I am stuck trying to decide what to do. The price per month is also a considerable factor in my decision. All these web hosting companies say one common thing: it is the customer's responsibility to back up and restore data, and they are not liable for any loss. So no matter which hosting company I go with, they ask me to take backups via FTP so that I can restore them whenever I want (and it seems safer to have the files locally with me anyway). Some provide tools for backup and some do not, and I am not sure how much their backup tools can be trusted, considering the disclaimers they have.
    I have never backed up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and let me know your suggestions so that I can decide. I have spent two days fighting with myself trying to decide what to do and finally concluded that this is a double-edged sword and I can't arrive at a satisfactory final decision without involving others' suggestions. I believe there must be someone out there who has had such a troublesome decision to make. All your suggestions to help me make my decision are appreciated. Thank you all.


  • How to exclude tags folder from triggering build in Teamcity?

    - by Jaya mareedu
    Hello, I recently installed TeamCity 5.0.3. I am trying to set up an automated build for a .NET 2.0 VS2005 project. I use NAnt and the MSBuild task to perform the build. The project structure is a typical SVN layout: svn://localhost/ITools is my repository, and the project structure is VisualTrack/trunk, VisualTrack/branches and VisualTrack/tags. I created a new project in TeamCity and then created a build configuration for that project. I asked it to kick off a build every time a change is detected in the VisualTrack VCS. I also configured it to create a label in VisualTrack/tags for every successful build. The problem I am running into is that the build is triggered every time TeamCity creates a new label under tags. I only want the build to be triggered when a developer commits his or her changes into trunk. The next step I took was to create a build trigger rule to exclude the tags path by specifying a trigger pattern of -:VisualTrack/tags/**, but it looks like it's not working. I believe the pattern I specified is not correct. Can someone please help me resolve this issue? Thanks, Jaya.


  • How to remove one folder from C:\Windows\winsxs?

    - by aF
    Hello, I've installed the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86) and got the following folders: x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91 and x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.30729.4926_none_508ed732bcbc0e5a. I have already uninstalled the redistributable package, but they are still there. I want to remove them because I want to test my program without installing anything (I included those DLLs when building it on another computer). So, how can I remove those folders from C:\Windows\winsxs? Thanks in advance :D


  • How to "drag and drop" folders or multiple HTML files into a browser and have them open in multiple tabs

    - by PoorLuzer
    I save pages that I browse on the net and find interesting into a folder called C:\PageSaves. Later, during the commute, I open these pages to see what they are and move them into a neatly categorized folder tree. For example, Perl-related pages go to C:\Pages\Perl, MySQL-related pages go to C:\Pages\MySQL, and so on. I was wondering if there is any way I could open any number of HTML files on disc / inside a folder (C:\PageSaves in my case) in Mozilla/FF/K-Meleon etc. For example, I would like to just "drag and drop" the folder C:\PageSaves into Firefox and have it open all the .html pages in the folder, each in a separate tab. Right now, if I "drag and drop" multiple HTML files, it just opens the last file in the selection. I would also like a set of toolbar buttons, basically a plugin, that allows me to nuke a page from disc (if I don't want to keep it anymore) or move the file (and its corresponding folder) into a predefined / new folder. I am familiar with coding full-blown Firefox plugins, so even if something very basic or almost similar exists, I can take it forward. Hints/clues/other methods of achieving the same result are all welcome!


  • SEO - Do Google and other search engines index links within <noscript> tags?

    - by Joe
    I have set up some dropdown menus allowing users to find pages on my website by selecting options across multiple dropdowns, e.g. colour of car and year. This generates a link like mysite.xyz/blue/2010/. The only problem is, because this link is dynamically assembled with JavaScript, I've also had to assemble every possible combination from the dropdowns into a list like:

        <noscript>
          No javascript enabled? Here are all the links:
          <a href='mysite.xyz/blue/2009/'>mysite.xyz/blue/2009/</a>
          <a href='mysite.xyz/blue/2010/'>mysite.xyz/blue/2010/</a>
          <a href='mysite.xyz/red/2009/'>mysite.xyz/red/2009/</a>
          <a href='mysite.xyz/red/2010/'>mysite.xyz/red/2010/</a>
        </noscript>

    My question is: if I put these links in a <noscript> tag like this, will I be penalized in any way by search engines such as Google? I've already been doing this for some navigational elements that required offsets etc. However, now I would be listing a whole list of links here too. I want to provide them here mostly so that Google can actually index my pages, but for those without JavaScript, they can still navigate too. Your thoughts? Also, even though some links appear to have been indexed, I AM NOT 100% SURE, which is why I'm asking :P
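
    A minimal sketch (Python used purely for illustration; the colours and years are the ones from the question) of generating every dropdown combination server-side, so the same list can be written into the <noscript> block without maintaining it by hand:

        from itertools import product

        colours = ["blue", "red"]
        years = ["2009", "2010"]

        # One URL per colour/year combination, matching mysite.xyz/<colour>/<year>/
        links = ["mysite.xyz/{}/{}/".format(colour, year)
                 for colour, year in product(colours, years)]

        for url in links:
            print("<a href='{0}'>{0}</a>".format(url))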


  • I want to copy all the files available in my TFS source server to a folder in a directory. I tried th

    - by deep
        PS C:\Windows\System32> Get-TfsItemProperty $/MyFirstTFSProj -r `
            -server xyzc011b | Where {$_.CheckinDate -gt (Get-Date).AddDays(-150)} | Copy-Item D:\john\application1 -Destination C:\Test -whatif

        Copy-Item : The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.
        At line:2 char:14
        + Copy-Item <<<< D:\Deepu\SilverlightApplication5 -Destination C:\Test -whatif


  • Is there a portable version of Spybot Search & Destroy?

    - by NoCatharsis
    I'm trying to make everything portable, as my office IT has cracked down on native app installation. One of my favorites (and a good CYA app in this case) is Spybot. All I can find are directions on how to alter the installation to make it portable. Is there a single download out there that would put the app right onto my USB drive?


  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?

    Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. In monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that the files in this path stopped changing. Some key configuration points of the job: it is tied to a specific slave node, it builds in a 'custom workspace', and it triggers a downstream job that builds in the same custom workspace on the same slave node.

