Search Results

Search found 4783 results on 192 pages for 'a txt'.

Page 4/192 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Robots.txt help

    - by Kyle R
    Google has just thrown up thousands of duplicate-content errors for the link tracker I am using. I want Google and any other search engine to stay away from the link tracker's pages. The pages I want to disallow are http://www.site.com/page1.html and http://www.site.com/page2.html. How would I write my robots.txt so that all robots skip these links when they appear in my pages?
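
    A minimal robots.txt sketch along these lines should do it (the file lives at the site root, paths are matched as prefixes, and compliant crawlers simply stop fetching the listed URLs; it does not by itself force already-indexed pages out of the index):

        User-agent: *
        Disallow: /page1.html
        Disallow: /page2.html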

    Read the article

  • How to compare two TXT files before sending them to SQL

    - by adopilot
    I have to handle TXT .dat files coming from an embedded device. The device always sends all of the data it has captured, but I only want the differences between two transfers so I can run calculations on them. After the calculation I send the result to SQL using a bulk insert. In other words, I want to extract only the data that was not in the first file I got from the device. Say the device first sends data like this in a some.dat (ASCII) file:

        0000199991
        0000199321
        0000132913
        0000232318
        0000312898

    On the second call the device returns everything again (the previous records plus the newly captured ones), something like this:

        0000199991
        0000199321
        0000132913
        0000232318
        0000312898
        9992129990
        8782999022
        2323423456

    This time I only want to calculate and pass through the data added after the first insert. I am building a WinForms app with C# and Visual Studio 2008.
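
    The question targets C#, but the core idea is language-neutral: keep the previously imported lines (or the previous dump) and only pass through lines that were not in it. A rough sketch of that set-difference approach, shown here in Java; the same HashSet-based diff translates directly to a HashSet<string> in C# before the bulk insert:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class NewRecordsOnly {
            // Returns the lines of the new dump that were not present in the previous dump.
            static List<String> newRecords(Path previousDump, Path currentDump) throws IOException {
                Set<String> alreadySeen =
                        new HashSet<>(Files.readAllLines(previousDump, StandardCharsets.US_ASCII));
                List<String> fresh = new ArrayList<>();
                for (String record : Files.readAllLines(currentDump, StandardCharsets.US_ASCII)) {
                    if (!record.isEmpty() && !alreadySeen.contains(record)) {
                        fresh.add(record); // only the records added since the last transfer
                    }
                }
                return fresh;
            }
        }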

    Read the article

  • PHP robots.txt parsing

    - by omfgroflmao
    Is there an easier way to do this?

        function parse_robots_txt($URL){
            $parsed = parse_url($URL);
            $robots = file_get_contents('http://'.$parsed['host'].'/robots.txt', FILE_TEXT);
            $exploded = explode('user-agent:', strtolower($robots));
            foreach($exploded as $user_agent){
                $user_agent = trim($user_agent);
                if(substr($user_agent, 0, 1) == '*'){
                    $user_agent = str_replace('#', '', preg_replace('/#.*\\n/i', '', $user_agent));
                    $user_agent = str_replace('disallow:', '', substr($user_agent, 1));
                    $user_agent = preg_replace('/allow:/i', '+-+-+-+', $user_agent, 1);
                    $user_agent = str_replace('allow:', '', $user_agent);
                    print_r(explode('+-+-+-+', $user_agent));
                }
            }
        }

    Read the article

  • Java BufferedReader behavior in CSV vs TXT file

    - by Gabriel
    I am trying to read a CSV file called csv_file.csv. The problem is that when I read lines with BufferedReader.readLine() it skips the first line, the one with the months. But when I rename the file to csv_file.txt it reads it all right and does not skip the first line. Is there an undocumented "feature" of BufferedReader that I'm not aware of? Example of the file:

        Months, SEP2010, OCT2010, NOV2010
        col1, col2, col3, col4, col5
        aaa,,sdf,"12,456",bla bla bla, xsaffadfafda
        and so on, and so on, "10,00", xxx, xxx

    The code:

        FileInputStream stream = new FileInputStream(UploadSupport.TEMPORARY_FILES_PATH + fileName);
        BufferedReader br = new BufferedReader(new InputStreamReader(stream, "UTF-8"));
        String line = br.readLine();
        String[] months = line.split(",");
        while ((line = br.readLine()) != null) {
            /* parse other lines */
        }
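
    BufferedReader itself has no special handling for .csv versus .txt names; the extension never reaches it. One thing worth ruling out (purely a guess from the symptoms, assuming the file may have been exported with a UTF-8 byte-order mark) is a BOM at the start of the stream, which InputStreamReader does not strip and which ends up glued to the first field of the first line:

        String header = br.readLine();
        if (header != null && header.startsWith("\uFEFF")) {
            header = header.substring(1); // drop the UTF-8 byte-order mark, if present
        }
        String[] headerCells = header.split(",");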

    Read the article

  • Retrieving content of a txt file from URL to Android

    - by eightx2
    I would like to be able to retrieve the contents of a .txt file from the internet and load it into an EditText. I tried using the code on this page: "Reading Text File From Server on Android". It didn't work, as you might have guessed. I've read about this type of problem on numerous sites, but I can't get anything to work. Someone suggested AndroidHttpClient (http://developer.android.com/reference/android/net/http/AndroidHttpClient.html), but I simply can't find any examples of it. As I'm a newbie in Android programming, I would love it if someone could give me a small example.
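
    A minimal sketch of fetching a text resource with plain HttpURLConnection (an assumption here: the INTERNET permission is declared in the manifest, and on recent Android versions the call runs off the main thread, e.g. in an AsyncTask or background thread):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class TextDownloader {
            // Downloads a text resource and returns its contents as one string.
            public static String fetch(String address) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
                try {
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(conn.getInputStream(), "UTF-8"));
                    StringBuilder body = new StringBuilder();
                    String line;
                    while ((line = reader.readLine()) != null) {
                        body.append(line).append('\n');
                    }
                    return body.toString();
                } finally {
                    conn.disconnect();
                }
            }
        }

    The returned string can then go into the widget with editText.setText(...).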

    Read the article

  • How can I use Varnish to generate a robots.txt file, even for subdomains of the same site?

    - by Sam
    I want to generate a robots.txt file using Varnish 2.1, so that domain.com/robots.txt is served by Varnish and subdomain.domain.com/robots.txt is served by Varnish as well. The robots.txt must be hardcoded into the default.vcl file. Is that possible? I know Varnish can generate a maintenance page on error; I'm trying to make it generate a robots.txt file in the same way. Can anyone help? This is the maintenance-page handler I have:

        sub vcl_error {
            set obj.http.Content-Type = "text/html; charset=utf-8";
            synthetic {"
        <?xml version="1.0" encoding="utf-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
          <head>
            <title>Maintenance in progress</title>
          </head>
          <body>
            <h1>Maintenance in progress</h1>
          </body>
        </html>
        "};
            return (deliver);
        }
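
    A sketch of one common way to do it with the same mechanism, assuming Varnish 2.1 VCL: raise a made-up error status for /robots.txt in vcl_recv, then turn that status into a synthetic 200 text/plain response in vcl_error. Because the match is on req.url only, every host this Varnish instance serves, domain.com and subdomain.domain.com alike, gets the same file (the Disallow line below is only placeholder content):

        sub vcl_recv {
            if (req.url == "/robots.txt") {
                # internal status, only used to reach vcl_error below
                error 702 "robots.txt";
            }
        }

        sub vcl_error {
            if (obj.status == 702) {
                set obj.status = 200;
                set obj.http.Content-Type = "text/plain; charset=utf-8";
                synthetic {"User-agent: *
        Disallow: /private/
        "};
                return (deliver);
            }
        }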

    Read the article

  • Rewrite for robots.txt and favicon.ico

    - by BHare
    I have set up some rules so that subdomains (my users' sites) fall back to where I keep the robots.txt, favicon.ico, and crossdomain.xml. If a user creates a site, say testing.mywebsite.com, and does not provide their own favicon.ico at testing.mywebsite.com/favicon.ico, then the favicon.ico I keep in /misc/favicon.ico is used instead. This works perfectly, but it doesn't work for the main website: if you go to mywebsite.com/favicon.ico it checks whether "/" exists, which it does, and then never rewrites to /misc/favicon.ico. How can I get both cases to end up at /misc/favicon.ico?

        # If the crossdomain.xml (OpenPalace file), favicon.ico or robots.txt does not exist
        # on the user's side, rewrite to the site's copy just to have something to serve.
        RewriteCond %{REQUEST_URI} crossdomain.xml$
        RewriteCond ^(.+)crossdomain.xml !-f
        RewriteRule ^(.*)$ /misc/crossdomain.xml [L]

        RewriteCond %{REQUEST_URI} favicon.ico$
        RewriteCond ^(.+)favicon.ico !-f
        RewriteRule ^(.*)$ /misc/favicon.ico [L]

        RewriteCond %{REQUEST_URI} robots.txt$
        RewriteCond ^(.+)robots.txt !-f
        RewriteRule ^(.*)$ /misc/robots.txt [L]
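
    One thing that stands out in the rules above: the second RewriteCond in each block tests the literal string ^(.+)favicon.ico rather than a path on disk, so the -f check never actually looks at the user's file. A sketch of a variant that keys the decision off whether the file exists under the document root (an assumption; adjust the path if each subdomain has its own root):

        RewriteCond %{REQUEST_URI} ^/favicon\.ico$
        RewriteCond %{DOCUMENT_ROOT}/favicon.ico !-f
        RewriteRule ^ /misc/favicon.ico [L]

        RewriteCond %{REQUEST_URI} ^/robots\.txt$
        RewriteCond %{DOCUMENT_ROOT}/robots.txt !-f
        RewriteRule ^ /misc/robots.txt [L]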

    Read the article

  • Stored procedure for importing a txt file into a SQL Server DB

    - by Iulian
    I have to insert new records into a database every day from a tab-delimited text file. I'm trying to turn this into a stored procedure with a parameter for the file to read the data from:

        CREATE PROCEDURE dbo.UpdateTable
            @FilePath
            BULK INSERT TMP_UPTable
            FROM @FilePath
            WITH
            (
                FIRSTROW = 2,
                MAXERRORS = 0,
                FIELDTERMINATOR = '\t',
                ROWTERMINATOR = '\n'
            )
        RETURN

    I would then call this stored procedure from my C# code, specifying the file to import. This is obviously not working, so how can I do it?
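
    BULK INSERT does not accept a variable for its FROM clause, which is the usual reason a procedure like this fails to even compile; the common workaround is to build the statement as a string and execute it. A sketch along those lines, assuming SQL Server dynamic SQL (parameter and table names taken from the question):

        CREATE PROCEDURE dbo.UpdateTable
            @FilePath nvarchar(260)
        AS
        BEGIN
            DECLARE @sql nvarchar(max);
            -- BULK INSERT needs a literal file name, so splice it into dynamic SQL
            SET @sql = N'BULK INSERT TMP_UPTable FROM ''' + REPLACE(@FilePath, N'''', N'''''') + N'''
                        WITH (FIRSTROW = 2, MAXERRORS = 0,
                              FIELDTERMINATOR = ''\t'', ROWTERMINATOR = ''\n'');';
            EXEC sp_executesql @sql;
        END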

    Read the article

  • How do I use an include statement in a TXT record?

    - by Aglystas
    We have a client using an email service that requires a TXT domain-key record that is over 127 characters long. I'm pretty sure BIND allows this; however, we run djbdns with tinydns, and it looks as though it only supports TXT records up to 127 characters, with the rest being truncated. I was thinking I could combine them with an include, but I'm not really sure how. I was considering setting the value to something like:

        v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC2GWCNaDTuC3include:bdk2._domainkey.mail.cutlerymania.com

    My thought is: will this grab the actual value located at that domain (which has only one record, a TXT record) and simply append that information, so that the entire key record gets sent correctly?
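
    For context, DNS itself has no include mechanism for TXT data: include: is an SPF-language token evaluated by mail receivers and does nothing for a DKIM key. The standard way to publish a long key is to split the value into several quoted strings (each at most 255 bytes) that DKIM verifiers concatenate back together. In BIND zone-file syntax the split looks roughly like this; tinydns needs the equivalent split expressed in its own record format:

        bdk2._domainkey  IN  TXT  ( "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC2GWCNaDTuC3..."
                                    "...remainder-of-key..." )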

    Read the article

  • What are the main differences between SRV records and TXT records?

    - by Chris Adams
    Hi there, I'm trying to consolidate the domain names for the servers I look after so they use one panel instead of 3 or 4, and one thing stopping me is that the provider I originally wanted to move them to only lets me set the following kinds of records: A, MX, NS, CNAME and TXT. The first four I understand, but I'm not sure about the relationship (if any) between SRV records and TXT records. Can I use TXT records in place of SRV records? They both seem to be general text records that just point at a particular server without specifying a particular protocol, so it doesn't sound like a totally unreasonable assumption, but I'd rather check here before I break something. If I can only set the above records, does that mean I'm essentially unable to do any SRV record redirection? Thanks!
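
    They are quite different in practice: a TXT record carries free-form text (SPF, DKIM, verification tokens), while an SRV record has a fixed structure of priority, weight, port and target host that protocol-aware clients (SIP, XMPP, LDAP and the like) look up and act on, so a TXT record cannot stand in for it. For reference, an SRV record looks like this:

        _xmpp-client._tcp.example.com. 3600 IN SRV 5 0 5222 chat.example.com.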

    Read the article

  • How to format and where to put the SPF TXT record?

    - by YellowSquirrel
    EDIT: I think I more or less understand the syntax and, anyway, Google gives the syntax needed in the link below. My question is really where to put that stuff. Should I quote every field? The whole line? :)

    I've set up Google Apps for my domain: I've registered the domain with Google by adding the CNAME Google asked for, and I've apparently successfully set up the Google MX mail servers. So far I don't have a dedicated server: I just have a domain at a registrar. Now I want to activate SPF and I'm confused. The following short page, http://www.google.com/support/a/bin/answer.py?answer=178723, says that I must add a TXT record containing:

        v=spf1 include:_spf.google.com ~all

    Where should I enter this? Should it go in the zone (?) file, like I did for the CNAME and MX records? So far I have something like this:

        @ 10800 IN A 217.42.42.42
        @ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
        @ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
        @ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME google.com.

    Does adding the SPF TXT record mean I should literally have something like this:

        @ 10800 IN A 217.42.42.42
        @ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
        @ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
        @ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"
        @ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME google.com.

    I made that up, and dropped the TXT record right in the middle, to show how confused I am. What I'd like to know is the exact syntax and where/how I should put this TXT record.

    Read the article

  • Execute a random command from a .txt file?

    - by Alberto Burgos
    I have an Ubuntu server and I'm trying to post a Twitter quote using the app "twidge". I made a list of tweets in a .txt file, one tweet per line, and I want to pick one line from that file and send it to Twitter via twidge (or whatever other method is possible). I can print a random line with shuf:

        shuf -n 1 /var/www/tweets.txt

    and it works: it gives me back one of the tweets. But it does not send it to Twitter, even when the line itself is a command such as:

        twidge update "bla bla bla"

    It just prints on the screen and doesn't send anything to Twitter. I tried turning the .txt into a .sh, but that doesn't work either... any idea? By the way, I want to run it from crontab, something like this:

        15 * * * * shuf -n 1 /var/www/tweets.txt
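
    For what it's worth, shuf only prints the chosen line; to actually post it, the line has to be handed to twidge as an argument, for example via command substitution. A sketch of the crontab entry (assuming twidge is on cron's PATH, otherwise use its full path):

        15 * * * * twidge update "$(shuf -n 1 /var/www/tweets.txt)"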

    Read the article

  • Pages still show up in Google search even after being disallowed in robots.txt [duplicate]

    - by Jota Onasys
    This question already has an answer here: "With robots.txt disallow all, why was my site still getting traffic?" (5 answers)

    Why is it that some pages still show up in Google search even though they are disallowed in robots.txt? Is the best solution here to remove the Disallow from robots.txt and just add a noindex, nofollow meta tag to the pages you want blocked? Or should I submit a request to Google directly to remove those pages?
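
    Background for the question: robots.txt only blocks crawling, not indexing, so Google can still list a disallowed URL (without a snippet) when other pages link to it. For a page to be dropped, it has to remain crawlable and carry a noindex signal, e.g. a meta tag in the page head or an equivalent X-Robots-Tag response header:

        <meta name="robots" content="noindex">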

    Read the article

  • Generating CMakeLists.txt

    - by vanna
    I have a bunch of C++ source files and headers. They may use external libraries such as Boost. I am interested in the process of building binaries for Windows and *nix. Makefiles (*nix) and .vcproj files (Windows) call the compilers with certain specifications, such as the order of compilation and the compilation options. A CMakeLists.txt file can be used by CMake to generate either makefiles or .vcproj files, and offers very helpful commands such as recursive file search, automatic linking with known libraries, installers, and variables that can be used in source files. Is there any existing tool that would generate a CMakeLists.txt from specified options? Options could be something like: scan this folder and make a library out of it, then scan this other folder and make an executable, automatically link both with Boost, and add a user-friendly installer with generated INSTALL.txt and README.txt. Something very powerful like that.
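
    For reference, the hand-written CMakeLists.txt for a layout like the one described is fairly short. This is only a sketch; the folder and target names are placeholders:

        cmake_minimum_required(VERSION 2.8)
        project(MyProject)

        find_package(Boost REQUIRED)
        include_directories(${Boost_INCLUDE_DIRS})

        file(GLOB_RECURSE LIB_SOURCES lib/*.cpp lib/*.h)
        add_library(mylib ${LIB_SOURCES})

        file(GLOB_RECURSE APP_SOURCES app/*.cpp)
        add_executable(myapp ${APP_SOURCES})
        target_link_libraries(myapp mylib ${Boost_LIBRARIES})

        install(TARGETS myapp mylib
                RUNTIME DESTINATION bin
                LIBRARY DESTINATION lib
                ARCHIVE DESTINATION lib)
        install(FILES INSTALL.txt README.txt DESTINATION doc)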

    Read the article

  • Can search engine robots read a file with permission 640?

    - by dkjain
    I am on a shared Linux web hosting server. I want search engine robots/spiders to be able to read robots.txt, but not anyone typing www.mysite.com/robots.txt into a browser. According to a Google Groups post, setting the file permission to 640 makes it possible to deny the world access to the robots.txt file while still letting search engine robots read it. Is that true? If not, how is it possible to deny the general public access to robots.txt but still allow search engine robots to read it?

    Read the article

  • Google Webmaster Central tells me that robots.txt is blocking access to the sitemap

    - by Gaia
    This is my robots.txt:

        User-agent: *
        Disallow: /wp-admin/
        Disallow: /wp-includes/

        Sitemap: http://www.mydomain.org/sitemap.xml.gz

    But Google Webmaster Central tells me that robots.txt is blocking access to the sitemap:

        We encountered an error while trying to access your Sitemap. Please ensure your Sitemap
        follows our guidelines and can be accessed at the location you provided and then resubmit:
        URL restricted by robots.txt

    I read that Google Webmaster Central caches robots.txt, but the file was updated more than 10 hours ago.

    Read the article

  • How to create robots.txt for a domain that contains international websites in subfolders?

    - by aaandre
    Hi, I am working on a site that has the following structure:

        site.com/us - US version
        site.com/uk - UK version
        site.com/jp - Japanese version

    and so on. I would like to create a robots.txt that points each local search engine to a localized sitemap page and has it exclude everything else from the local listings. So google.com (US) would index ONLY site.com/us and take into consideration site.com/us/sitemap.html, google.co.uk would index only site.com/uk and site.com/uk/sitemap.html, and the same for the rest of the search engines, including Yahoo, Bing, etc. Any idea on how to achieve this? Thank you!

    Read the article

  • Blocking Just the Parent Domain via robots.txt

    - by Bryan Hadaway
    Let's say you have a parent domain, parent.com, and children subdomains under that parent domain:

        child1.com
        child2.com
        child3.com

    Is there a way to use just the following within parent.com:

        User-agent: *
        Disallow: /

    considering each child has their own robots.txt stating:

        User-agent: *
        Allow: /

    Or is the parent robots.txt still going to have to make an exception for every single subdomain:

        User-agent: *
        Disallow: /
        Allow: /child1/
        Allow: /child2/
        Allow: /child3/

    Obviously this is important and tricky territory SEO-wise, so I'm looking to learn the definitive and safe, best-practice method here to sharpen my skills. Thanks, Bryan

    Read the article

  • Write/read count to txt file

    - by Brian
    Hi, I need a batch file that writes a count number to a txt file. The next time the batch file is run, it should read the current count from the txt file, add 1 to it, and save the new value back to the txt file (nothing else is in the txt file). When the count reaches 5 it should start from 1 again. Example:

        Count.bat runs the 1st time: count.txt has no count, so Count.bat saves the value 1 in count.txt.
        Count.bat runs the 2nd time: Count.bat reads 1 from count.txt and saves the new value 2 to count.txt.
        When Count.bat is run for the 6th time, it should start over by saving the value 1 in count.txt.

    I think this should be easy to do, but I'm not used to batch commands, so hopefully someone here can help me.
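
    A minimal sketch of Count.bat, assuming the counter file sits next to the script and the counter wraps back to 1 after 5:

        @echo off
        setlocal
        set COUNT=0
        if exist count.txt set /p COUNT=<count.txt
        set /a COUNT+=1
        if %COUNT% gtr 5 set COUNT=1
        rem redirection first, so a single-digit value is not parsed as a handle redirect
        >count.txt echo %COUNT%
        endlocal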

    Read the article

  • Google indexed pages a day ago and they appeared in search, but today everything vanished

    - by ganesh
    We had a robots.txt that disallowed all robots while we were in development. We are live now. We changed robots.txt to match our requirements a day ago and submitted the pages for indexing using Google Webmaster Tools (index status). After this we could see proper results in search, and Google Images search was working as expected. Suddenly, today, all of this vanished from Google Search. Now I can see the old result again, i.e. the under-construction message. I checked robots.txt in Google Webmaster Tools and it's OK, no crawl errors. Kindly let me know what exactly happened, and how I can report this issue to Google?

    Read the article

  • Prevent Azure subdomain indexation

    - by Leg10n
    Let me explain my situation: I have an Azure website (on an azurewebsites.net subdomain) and a custom domain.com, built with ASP.NET MVC. Both are being indexed by Google, but I've noticed the custom domain is being penalized and doesn't show up in results; it only shows when I search for "site:domain.com". I want to remove and block the azurewebsites.net subdomain from Google. I've read the "possible" solutions:

        Adding a robots.txt: won't work, because the subdomain and the domain serve exactly the same content, so subdomain.azurewebsites.net/robots.txt leads to the same file as domain.com/robots.txt, which would remove the domain as well.
        Adding the robots meta tag: the same situation as the previous point.

    I'm using a CNAME record to point the domain at the subdomain, so I can't redirect to a subdirectory. Do you have any other ideas?
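
    One more avenue, sketched here under the assumption that the site runs on IIS with the URL Rewrite module available: rewrite requests for robots.txt to a blocking copy only when the Host header is the azurewebsites.net name, so the custom domain keeps serving its normal file. robots-disallow.txt is a hypothetical file containing a blanket Disallow:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Robots for azurewebsites host" stopProcessing="true">
                  <match url="^robots\.txt$" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="\.azurewebsites\.net$" />
                  </conditions>
                  <action type="Rewrite" url="robots-disallow.txt" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>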

    Read the article

  • How could I use AJAX to create a JSON data source .txt file?

    - by Adam
    I'm creating a form that collects standard information about customers. When the user hits save, I would like to create a .txt file that can later be used to retrieve all of the data collected from customers. I'm using DataTables, a jQuery plugin, to display the data. The .txt file would be formatted like this:

        { "aaData": [
            ["client 1 name","address","city","state","zip"],
            ["client 2 name","address","city","state","zip"],
            ["client 3 name","address","city","state","zip"],
            ...
            ["client x name","address","city","state","zip"]
        ] }

    where "aaData" is the key used by DataTables. This is going to be part of an iPhone app, so the data source has to be very small and not rely on a constant connection to a server; essentially a client-side data source. The .txt file also has to be updated when edited and saved, and then replaced every time it is downloaded.
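
    A browser (or a web view) cannot write a file on a server by itself, so the usual split is: build the aaData array on the client and POST the JSON to a small server-side endpoint that writes the .txt file. A sketch of the client side, assuming jQuery is already loaded and /save-clients is a hypothetical endpoint that saves the payload:

        var rows = [];

        // called when the form's Save button is pressed
        function saveClient(name, address, city, state, zip) {
            rows.push([name, address, city, state, zip]);
            $.ajax({
                url: '/save-clients',                  // hypothetical server endpoint
                type: 'POST',
                contentType: 'application/json',
                data: JSON.stringify({ aaData: rows }) // same shape DataTables expects
            });
        }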

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >