Search Results

Search found 4984 results on 200 pages for 'robots txt'.


  • Reading strings and integers from .txt file and printing output as strings only

    - by screename71
    Hello, I'm new to C++, and I'm trying to write a short C++ program that reads lines of text from a file, with each line containing one integer key and one alphanumeric string value (no embedded whitespace). The number of lines is not known in advance (i.e., keep reading lines until end of file is reached). The program needs to use the std::map data structure to store the integers and strings read from input (and to associate each integer with its string). The program then needs to output the string values (but not the integer values) to standard output, one per line, sorted by integer key value (smallest to largest). So, for example, suppose I have a text file called "data.txt" which contains the following lines:

        10 dog
        -50 horse
        0 cat
        -12 zebra
        14 walrus

    The output should then be:

        horse
        zebra
        cat
        dog
        walrus

    I've pasted below the progress I've made so far on my C++ program:

        #include <fstream>
        #include <iostream>
        #include <map>
        using namespace std;
        using std::map;

        int main () {
            string name;
            signed int value;
            ifstream myfile ("data.txt");
            while (! myfile.eof() ) {
                getline(myfile,name,'\n');
                myfile >> value >> name;
                cout << name << endl;
            }
            return 0;
            myfile.close();
        }

    Unfortunately, this produces the following incorrect output:

        horse
        cat
        zebra
        walrus

    If anyone has any tips, hints, or suggestions on the changes I need to make to get the program working as described, please let me know. Thanks!
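
    A minimal sketch of one way to do this (not the poster's original code): read key/value pairs with >> until extraction fails, insert them into a std::map<int, std::string>, which keeps its keys sorted, then print only the values in key order.

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <string>

        int main() {
            std::ifstream in("data.txt");
            std::map<int, std::string> entries;   // iterates in ascending key order
            int key;
            std::string value;
            while (in >> key >> value) {          // stops cleanly at end of file
                entries[key] = value;
            }
            for (const auto& entry : entries) {   // print values only, sorted by key
                std::cout << entry.second << '\n';
            }
            return 0;
        }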

    Read the article

  • save input from text file to array

    - by Jessy
    How do I save the contents of a txt file into different arrays? My txt file content is like this:

        12 14 16 18 13 17 14 18 10 23
        pic1 pic2 pic3 pic4 pic5 pic6 pic7 pic8 pic9 pic10
        left right top left right right top top left right
        100 200 300 400 500 600 700 800 900 1000

    How can I save each line into a different array? E.g. line 1 will be saved in array1, line 2 will be saved in array2, line 3 will be saved in array3, and line 4 will be saved in array4. Thank you
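
    The question doesn't name a language, so this is a hedged sketch assuming Java (the filename "input.txt" is a placeholder): read all lines, then split each line on whitespace into its own array.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        public class LineArrays {
            public static void main(String[] args) throws IOException {
                List<String> lines = Files.readAllLines(Paths.get("input.txt"));
                // each line becomes its own array, split on whitespace
                String[] array1 = lines.get(0).split("\\s+");
                String[] array2 = lines.get(1).split("\\s+");
                String[] array3 = lines.get(2).split("\\s+");
                String[] array4 = lines.get(3).split("\\s+");
                System.out.println(array2[0]);   // prints "pic1"
            }
        }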

    Read the article

  • emulate ENTER in .txt

    - by gary
    Can someone please help me add a command for "ENTER" to a .txt file, to emulate pressing "ENTER"? Example: 12345 "ENTER" 548793 "ENTER" ... where each entry is a number followed by ENTER to move to the next field, where the next number will be inserted, and so on, so it will look like this:

        12345
        548793
        etc...

    Read the article

  • Export list as .txt (Python)

    - by Nimbuz
    My Python module has a list that contains all the data I want to save as a .txt file somewhere. The list contains several tuples, like so:

        list = [ ('one', 'two', 'three'), ('four', 'five', 'six')]

    How do I print the list so each tuple item is separated by a tab and each tuple is separated by a newline? Thanks
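
    A minimal sketch of one way to write it out (the output filename is a placeholder, and the variable is renamed because list shadows the built-in):

        data = [('one', 'two', 'three'), ('four', 'five', 'six')]

        with open('output.txt', 'w') as f:
            for row in data:
                f.write('\t'.join(row) + '\n')   # tabs between items, newline after each tuple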

    Read the article

  • SEO Help with Pages Indexed by Google

    - by Joe Majewski
    I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all.

    Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3

    I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google, although it shouldn't be. I understand that it takes some time before Google's results accurately reflect my site's content, but this has been improperly indexed for nearly six months now.

    Here are the precautions that I have taken. My robots.txt file has a line like this:

        Disallow: /wow/profile.php*

    When running the URL through Google Webmaster Tools, it indicates that I did, indeed, correctly create the disallow rule. It did state, however, that a page that doesn't get crawled may still get displayed in the search results if it's being linked to. Thus, I took one more precaution: in the source code I included the following meta data:

        <meta name="robots" content="noindex,follow" />

    I am assuming that follow means to use the page when calculating PageRank, etc., and the noindex tells Google not to display the page in the search results.

    This page, profile.php, is used to take the $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant a display in the search results, so that is why I am trying to stop Google from indexing it.

    This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed, and I am doing the same procedures to attempt to remove them.

    Can someone explain how to get pages removed from Google's search results, and possibly some criteria that should help determine what types of pages I don't want indexed? In terms of my WordPress blog, the only pages that I truly want indexed are my articles. Everything else I have tried to block, with little luck from Google.

    Can someone also explain why it's bad to have pages indexed that don't provide any new or relevant content, such as pages for WordPress tags or categories, which are clearly never going to receive traffic from Google? Thanks!
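
    A hedged note and sketch (based on how robots.txt and the robots meta tag generally interact, not on anything specific to this site): a Disallow rule stops Googlebot from fetching the page at all, so it never gets to see the noindex meta tag, and the URL can still appear in results as a bare entry if something links to it. One common approach is to remove the Disallow for those URLs, let them be crawled, and rely on noindex, optionally also requesting removal in Google Webmaster Tools:

        # robots.txt -- leave the profile pages crawlable so the noindex tag can be read
        User-agent: *
        # (no Disallow line for /wow/profile.php)

        <!-- on every page that should stay out of the index -->
        <meta name="robots" content="noindex,follow" />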

    Read the article

  • How to convert .doc or .docx files to .txt

    - by styx777
    I'm wondering how you can convert Word .doc/.docx files to text files through Java. I understand that there's an option where I can do this through Word itself but I would like to be able to do something like this: java DocConvert somedocfile.doc converted.txt Thanks.
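
    One common route is a third-party library such as Apache POI; a hedged sketch (untested, and the exact extractor classes can vary between POI versions) might look like this:

        import java.io.FileInputStream;
        import java.io.FileWriter;
        import org.apache.poi.hwpf.extractor.WordExtractor;        // .doc
        import org.apache.poi.xwpf.extractor.XWPFWordExtractor;    // .docx
        import org.apache.poi.xwpf.usermodel.XWPFDocument;

        public class DocConvert {
            public static void main(String[] args) throws Exception {
                String in = args[0], out = args[1];
                String text;
                try (FileInputStream fis = new FileInputStream(in)) {
                    text = in.toLowerCase().endsWith(".docx")
                            ? new XWPFWordExtractor(new XWPFDocument(fis)).getText()
                            : new WordExtractor(fis).getText();
                }
                try (FileWriter writer = new FileWriter(out)) {
                    writer.write(text);
                }
            }
        }

    With the POI jars on the classpath, java DocConvert somedocfile.doc converted.txt would then behave roughly as described in the question.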

    Read the article

  • Opening txt files in Internet Explorer 8 parses some html

    - by Rob
    I'm not sure if it's just me, but whenever I open .txt files in Internet Explorer, it always parses the HTML, so forms, buttons, and fields all show up. It does this on multiple computers, and I'm fairly sure it hasn't always done this. I know Firefox doesn't do it; Firefox loads the file as plain text. Does anyone else have this problem? If so, have you solved it, and how?

    Read the article

  • Can I tell sitecrawlers to visit a certain page?

    - by Ace
    Hi there! I have this drupal website that revolves around a document database. By design you can only find these documents by searching the site. But I want all the results to be indexed by Googlebot and other crawlers, so I was thinking, what if I make a page that lists all the documents, and then tell the robots to visit the page to index all my documents..? Is this possible, or is there a better way to do it?
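
    One possibility (a hedged sketch; the URLs are placeholders, and Drupal has contributed modules that can generate this automatically) is to skip the listing page and publish an XML sitemap instead, referenced from robots.txt so crawlers can find it:

        # robots.txt
        Sitemap: http://example.com/sitemap.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://example.com/document/1</loc></url>
          <url><loc>http://example.com/document/2</loc></url>
        </urlset>

    A plain HTML page that links to every document also works, as long as crawlers can reach that page from somewhere on the site.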

    Read the article

  • Replace a whole line in a txt file

    - by user302935
    I'm new to Python 3 and could really use a little help. I have a txt file containing:

        InstallPrompt=
        DisplayLicense=
        FinishMessage=
        TargetName=D:\somewhere
        FriendlyName=something

    I have a Python script that, in the end, should change just two lines to:

        TargetName=D:\new
        FriendlyName=Big

    Could anyone help me, please? I have tried to search for it, but I didn't find anything I could use. The text that should be replaced can have a different length.
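
    A minimal sketch of one way to do it, assuming the file is small enough to read into memory (the filename is a placeholder): read all lines, replace the two whole lines by matching their key prefix, and write everything back.

        with open('settings.txt', 'r') as f:        # 'settings.txt' is a placeholder name
            lines = f.readlines()

        with open('settings.txt', 'w') as f:
            for line in lines:
                if line.startswith('TargetName='):
                    line = 'TargetName=D:\\new\n'
                elif line.startswith('FriendlyName='):
                    line = 'FriendlyName=Big\n'
                f.write(line)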

    Read the article

  • asp:TextBox write date in txt field

    - by senzacionale
    <asp:TextBox AutoPostBack="true" ID="txtDate" OnTextChanged="txtDate_TextChanged" runat="server" Value="<%= DateTime.Today.ToShortDateString() %>"></asp:TextBox>

    Value="<%= DateTime.Today.ToShortDateString() %>" does not write the date into the text field; it writes the literal string instead. What am I doing wrong?
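
    A hedged note: <%= %> expressions are generally not evaluated inside the attributes of a server control, which is why the literal text shows up. One common workaround is to set the property from the code-behind instead, e.g. (C# sketch, untested):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // TextBox exposes a Text property; set it here rather than via <%= %> in markup
                txtDate.Text = DateTime.Today.ToShortDateString();
            }
        }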

    Read the article

  • How can I allow robots access to my sitemap, but prevent casual users from accessing it?

    - by morpheous
    I am storing my sitemaps in my web folder. I want web crawlers (Googlebot etc.) to be able to access the file, but I don't necessarily want all and sundry to have access to it. For example, this site (superuser.com) has a site index, as specified by its robots.txt file (http://superuser.com/robots.txt). However, when you type http://superuser.com/sitemap.xml, you are directed to a 404 page. How can I implement the same thing on my website? I am running a LAMP site, and I am using a sitemap index file (so I have multiple sitemaps for the site). I would like to use the same mechanism to make them unavailable via a browser, as described above.
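
    A hedged sketch of one option on Apache (2.2-style directives; assumes mod_setenvif is enabled, and note that User-Agent strings can be spoofed, so this only keeps out casual visitors):

        <Files "sitemap.xml">
            SetEnvIfNoCase User-Agent "Googlebot|Bingbot|Slurp" allow_crawler
            Order Deny,Allow
            Deny from all
            Allow from env=allow_crawler
        </Files>

    Another option is simply to give the sitemap index an unguessable filename and submit that URL directly through the search engines' webmaster tools instead of advertising it in robots.txt.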

    Read the article

  • Ubuntu only boots with USB plugged in

    - by Ben
    I'm new to the Linux world so please bear with me! :-) I installed Ubuntu via USB drive onto my hard drive. If I boot the PC without the USB drive I used, Ubuntu will not load. After booting I can unplug it without any consequences. I looked on the hard drive and there is a boot folder. On the USB drive, this is the tree contents:

        /media/disk$ tree
        .
        |-- adtext.cfg
        |-- boot.cat
        |-- f10.txt
        |-- f1.txt
        |-- f2.txt
        |-- f3.txt
        |-- f4.txt
        |-- f5.txt
        |-- f6.txt
        |-- f7.txt
        |-- f8.txt
        |-- f9.txt
        |-- initrd.gz
        |-- isolinux.bin
        |-- isolinux.cfg
        |-- ldlinux.sys
        |-- linux
        |-- menu.c32
        |-- menu.cfg
        |-- po4a.cfg
        |-- prompt.cfg
        |-- splash.png
        |-- stdmenu.cfg
        |-- syslinux.cfg
        |-- text.cfg
        |-- ubnfilel.txt
        |-- ubnpathl.txt
        `-- vesamenu.c32

    Am I correct in my assumption that the boot aspect is associated with the USB drive? If so, how do I get it to boot without the USB? I'm guessing copying something into some location and modifying GRUB?
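
    It does sound like the boot loader ended up tied to the USB stick rather than the internal drive. A hedged sketch of the usual fix, assuming the internal disk is /dev/sda (check with sudo fdisk -l first; these commands are for GRUB 2 and would be run from the installed system, or from a live session with the installed root mounted and chrooted):

        sudo fdisk -l                  # identify the internal disk
        sudo grub-install /dev/sda     # install GRUB to the internal disk's MBR
        sudo update-grub               # regenerate the boot menu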

    Read the article

  • Vim: Show the index of tabs in the tabline

    - by bitmask
    Let's say I opened file1.txt, file2.txt, file3a.txt and file3b.txt such that the tabline (the thing at the top) looks like this:

        file1.txt file2.txt 2 file3a.txt

    (Note how file3b.txt is missing, because it is shown in a split in the same tab as file3a.txt.) To move more quickly between tabs (with <Number>gt), I would like each tab to display its index alongside the filename, like so:

        1:<file1.txt> 2:<file2.txt> 3:<2 file3a.txt>

    The formatting (the angle brackets in particular) is optional; I just want the index to appear there (the 1:, 2: and so on). No clues in :h tab-page-commands or on Google whatsoever.
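
    A hedged sketch of a custom 'tabline' (terminal Vim; gVim uses 'guitablabel' instead), adapted from the pattern in :h setting-tabline and untested as written — it prefixes each label with its tab number and, like the default, shows a window count for split tabs:

        function! MyTabLine()
          let s = ''
          for i in range(1, tabpagenr('$'))
            " highlight the current tab differently
            let s .= (i == tabpagenr() ? '%#TabLineSel#' : '%#TabLine#')
            " start the clickable tab label and prepend the index
            let s .= '%' . i . 'T ' . i . ':'
            " name of the active window's buffer in that tab
            let buflist = tabpagebuflist(i)
            let winnr = tabpagewinnr(i)
            let name = fnamemodify(bufname(buflist[winnr - 1]), ':t')
            let wincount = tabpagewinnr(i, '$')
            let s .= '<' . (wincount > 1 ? wincount . ' ' : '') . (name == '' ? '[No Name]' : name) . '> '
          endfor
          let s .= '%#TabLineFill#%T'
          return s
        endfunction
        set tabline=%!MyTabLine()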

    Read the article

  • How much HDD space would I need to cache the web while respecting robot.txts?

    - by Koning Baard XIV
    I want to experiment with creating a web crawler. I'll start by indexing a few medium-sized websites like Stack Overflow or Smashing Magazine. If it works, I'd like to start crawling the entire web. I'll respect robots.txt. I'll save all html, pdf, word, excel, powerpoint, keynote, etc. documents (not exes, dmgs, etc., just documents) in a MySQL DB. Next to that, I'll have a second table containing all results and descriptions, and a table with words and on which page to find those words (i.e. an index). How much HDD space do you think I need to save all the pages? Is it as low as 1 TB or is it about 10 TB, 20? Maybe 30? 1000? Thanks
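
    A back-of-envelope estimate (every number below is an assumption, not a measurement): at roughly 100 KB of stored content per document, you get on the order of 100 GB per million documents, so a billion documents is already around 100 TB before the word index is counted.

        # rough storage estimate; avg_doc_kb is an assumed figure
        avg_doc_kb = 100
        for n in (1_000_000, 100_000_000, 1_000_000_000):
            tb = n * avg_doc_kb / 1024 / 1024 / 1024   # KB -> TB
            print(f"{n:>13,} docs  ~ {tb:8,.1f} TB")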

    Read the article

  • how to rewrite or redirect old or missing or invalid url to 404 page

    - by kath
    I recently upgraded a site and almost all URLs have changed. I have redirected all of them (or so I hope), but it may be possible that some of them have slipped by me. Is there a way to somehow catch all invalid URLs and send the user to a certain page? I am using PHP. Thanks so much! The error file is already set in .htaccess but nothing seems to change; you can see the .htaccess below:

        AddHandler application/x-httpd-php5s .php
        ErrorDocument 404 /content/404.php
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /

    Here are two different URLs; the first one is the old one (which is no longer on the server) and the second one is the edited one that is on the server:

        #1 http://adsbuz.com/vehicles-cars/toyoya/2009-toyota-land-cruiser-gxr-4686.htm
        #2 http://adsbuz.com/vehicles-cars-for-sale/toyoya/2009-toyota-land-cruiser-gxr-4686.htm

    I need only the second one, with vehicles-cars-for-sale, because the other directory was renamed and is no longer on the server, but as you can see, after the site name (adsbuz) both vehicles-cars and vehicles-cars-for-sale open the same location. I hope I made myself clear.
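
    A hedged sketch with mod_rewrite, assuming every old /vehicles-cars/ URL maps to the same path under /vehicles-cars-for-sale/ (placed inside the existing <IfModule mod_rewrite.c> block); anything that matches no rule and no real file then falls through to the ErrorDocument 404 page:

        RewriteEngine on
        RewriteBase /
        # permanently redirect the renamed directory to its new name
        RewriteRule ^vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1 [R=301,L]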

    Read the article

  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:

        ls -recurse d:\dir | export-clixml dir-20100129.xml

    Then, later, I get a second snapshot and load both of them:

        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)

    Next, I'm trying to compare with Compare-Object, like so:

        diff $a $b

    What I get is, in some places, files that were added to $b since $a, but in others, files that were in both snapshots; and some files that were added to $b are not in the Compare-Object output at all. Puzzling, but $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -Property parameter. I try to use that:

        diff -property fullname $a $b

    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:

        A\1.txt
        A\2.txt
        A\3.txt

    And $b contains:

        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt

    The diff output is something like this:

        X\2.mp3 =>
        A\1.txt <=
        X\3.mp3 =>
        A\2.txt <=
        X\4.mp3 =>
        A\3.txt <=
        A\1.txt =>
        A\2.txt =>
        A\3.txt =>

    Weird. I think I don't understand something crucial about Compare-Object usage, and the manuals are scarce... Please help me get the DIFFERENCE between two directory snapshots. Thanks in advance.

    UPDATE: I've saved the data as plain strings like this:

        import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt

    And the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip
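
    A hedged sketch of one way to sidestep the object-comparison quirks: project each snapshot down to plain FullName strings before comparing, and pass a large -SyncWindow explicitly so matches aren't missed in long, differently ordered lists (the paths and filenames below are placeholders):

        # snapshot as plain strings rather than serialized FileInfo objects
        Get-ChildItem -Recurse D:\dir | Select-Object -ExpandProperty FullName |
            Out-File -Encoding utf8 dir-20100129.txt

        # ...later, compare the two string lists...
        $a = Get-Content dir-20100129.txt
        $b = Get-Content dir-20100130.txt
        Compare-Object -ReferenceObject $a -DifferenceObject $b -SyncWindow ([int]::MaxValue)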

    Read the article

  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. There are no links at all to this single PHP page, and, theoretically, only I know of its existence. I check it many times per day to see what new hits I have.

    However, about once a week, I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself). The strange thing is this: the mysterious person/bot does not visit any other page on my site. Not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The other data is always some variation of Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b) ... but sometimes MSIE 6.0 instead of 7, and various other plug-ins. The browser is different every time, as are the lowest-order bits of the address. And it's just that: one hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor.

    Doing a whois on that IP address showed it's from the New York area, and from the "Websense" ISP. The lowest-order 8 bits of their address are always different, but always from 208.80.194.*/8. From most of the computers from which I access my website, a traceroute to my server does not pass through any router with an IP of 208.80.*, so I might think that rules out any kind of HTTP sniffing. I have NO idea how or why this is happening. Does anyone have a clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!

    Read the article

  • save as .txt format

    - by user1180492
    I made a Notepad program. The problem is that it doesn't save in .txt format; it saves a file with no extension. But it can open .txt files. How can I fix it? Here is my work.

        import javax.swing.*;
        import java.awt.*;
        import java.awt.event.*;
        import java.util.Scanner;
        import java.io.*;

        public class NotePad extends JFrame {

            private JTextArea noteArea;

            public static void main(String[] args) {
                NotePad p = new NotePad();
                p.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                p.setSize(500,300);
                p.setVisible(true);
            }

            public NotePad() {
                super("Java Notepad");
                setLayout(new BorderLayout());
                noteArea = new JTextArea("",20,20);
                noteArea.setWrapStyleWord(true);
                noteArea.setLineWrap(true);
                Font font = new Font("sanserif", Font.BOLD,14);
                noteArea.setFont(font);
                JScrollPane scroller = new JScrollPane(noteArea);
                scroller.setVerticalScrollBarPolicy(ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS);
                scroller.setHorizontalScrollBarPolicy(ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
                add(scroller,BorderLayout.CENTER);

                JMenuBar menuBar = new JMenuBar();
                JMenu fileMenu = new JMenu("File");

                JMenuItem openMenu = new JMenuItem("Open");
                openMenu.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent ae) {
                        JFileChooser openFile = new JFileChooser();
                        openFile.showOpenDialog(new NotePad());
                        loadFile(openFile.getSelectedFile());
                    }
                });

                JMenuItem saveMenu = new JMenuItem("Save");
                saveMenu.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent ae) {
                        JFileChooser saveFile = new JFileChooser();
                        saveFile.showSaveDialog(new NotePad());
                        fileSaved(saveFile.getSelectedFile());
                    }
                });

                JMenuItem exitMenu = new JMenuItem("Close");
                exitMenu.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent ae) {
                        System.exit(0);
                    }
                });

                fileMenu.add(openMenu);
                fileMenu.add(saveMenu);
                fileMenu.add(exitMenu);
                menuBar.add(fileMenu);
                this.setJMenuBar(menuBar);
            }

            public void loadFile(File file) {
                noteArea.setText("");
                try {
                    BufferedReader read = new BufferedReader(new FileReader(file));
                    String line = null;
                    while ((line = read.readLine()) != null) {
                        noteArea.append(line + "\n");
                    }
                    read.close();
                } catch (Exception e) {
                    System.out.println("Error " + e.toString());
                }
            }

            public void fileSaved(File file) {
                try {
                    PrintWriter writer = new PrintWriter(file);
                    String[] lines = noteArea.getText().split("\\n");
                    for (String words : lines) {
                        writer.println(words);
                    }
                    writer.close();
                } catch (Exception e) {
                    System.out.println("Error " + e.toString());
                }
            }
        }

    BTW, I couldn't post my question at first because I hadn't explained the scenario according to the site's rules. So there. Thanks for the help.
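
    JFileChooser just returns whatever name the user typed; it doesn't append an extension for you. A small hedged sketch of one way to handle this in the Save action (untested):

        File file = saveFile.getSelectedFile();
        // append ".txt" if the user didn't type an extension themselves
        if (!file.getName().toLowerCase().endsWith(".txt")) {
            file = new File(file.getParentFile(), file.getName() + ".txt");
        }
        fileSaved(file);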

    Read the article

  • Android - Help with ANR and traces.txt

    - by Tori
    My app crashes with an ANR while scrolling in a spinner. I've implemented many spinners in different apps, and this is the first time I've gotten this ANR. I would appreciate any help in deciphering the traces.txt:

        DALVIK THREADS:
        "main" prio=5 tid=3 NATIVE
          | group="main" sCount=1 dsCount=0 s=0 obj=0x40018e70
          | sysTid=896 nice=0 sched=0/0 handle=-1097417572
          at android.os.BinderProxy.transact(Native Method)
          at android.app.ActivityManagerProxy.handleApplicationError(ActivityManagerNative.java:2103)
          at com.android.internal.os.RuntimeInit.crash(RuntimeInit.java:302)
          at com.android.internal.os.RuntimeInit$UncaughtHandler.uncaughtException(RuntimeInit.java:75)
          at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:887)
          at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:884)
          at dalvik.system.NativeStart.main(Native Method)

    Read the article

  • Ping script with loop and save in a txt

    - by matthias
    Hello, I'm trying to make a ping script in VBS. I need a script that pings a computer name on the network every 2 seconds (no ping limit; the program will run all the time) and saves the results in a txt file. For example:

        06/08/2010 - 13:53:22 | The Computer "..." is online
        06/08/2010 - 13:53:24 | The Computer "..." is offline

    This is what I have tried so far:

        strComputer = "TestPC"
        Set objPing = GetObject("winmgmts:{impersonationLevel=impersonate}")._
            ExecQuery("select * from Win32_PingStatus where address = '"_
            & strComputer & "'")
        For Each objStatus in objPing
            If IsNull(objStatus.StatusCode) Or objStatus.StatusCode <> 0 Then ..........
        Next

    And then I don't know how to continue. (I'm new to VBS :-)) I hope someone can help me. Greetings, matthias
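
    A hedged sketch of one way to complete it (untested; the log path is a placeholder), using a FileSystemObject for the log and WScript.Sleep for the 2-second interval:

        strComputer = "TestPC"
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        Set objLog = objFSO.OpenTextFile("C:\ping_log.txt", 8, True)   ' 8 = ForAppending
        Do While True
            Set objPing = GetObject("winmgmts:{impersonationLevel=impersonate}").ExecQuery( _
                "select * from Win32_PingStatus where address = '" & strComputer & "'")
            For Each objStatus In objPing
                If IsNull(objStatus.StatusCode) Or objStatus.StatusCode <> 0 Then
                    objLog.WriteLine Now & " | The Computer """ & strComputer & """ is offline"
                Else
                    objLog.WriteLine Now & " | The Computer """ & strComputer & """ is online"
                End If
            Next
            WScript.Sleep 2000   ' wait 2 seconds between pings
        Loop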

    Read the article
