Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • Single use download script - Modification [on hold]

    - by Iulius
    I have this single-use download script. It contains three PHP files: page.php, generate.php and variables.php.

    page.php:

        <?php
        include("variables.php");
        $key = trim($_SERVER['QUERY_STRING']);
        $keys = file('keys/keys');
        $match = false;
        foreach ($keys as &$one) {
            if (rtrim($one) == $key) {
                $match = true;
                $one = '';
            }
        }
        file_put_contents('keys/keys', $keys);
        if ($match !== false) {
            $contenttype = CONTENT_TYPE;
            $filename = SUGGESTED_FILENAME;
            readfile(PROTECTED_DOWNLOAD);
            exit;
        } else { ?>
        <html>
        <head>
        <meta http-equiv="refresh" content="1; url=http://docs.google.com/">
        <title>Loading, please wait ...</title>
        </head>
        <body>
        Loading, please wait ...
        </body>
        </html>
        <?php } ?>

    generate.php:

        <?php
        include("variables.php");
        $password = trim($_SERVER['QUERY_STRING']);
        if ($password == ADMIN_PASSWORD) {
            $new = uniqid('key', TRUE);
            if (!is_dir('keys')) {
                mkdir('keys');
                $file = fopen('keys/.htaccess', 'w');
                fwrite($file, "Order allow,deny\nDeny from all");
                fclose($file);
            }
            $file = fopen('keys/keys', 'a');
            fwrite($file, "{$new}\n");
            fclose($file);
        ?>
        <html>
        <head>
        <title>Page created</title>
        <style> nl { font-family: monospace } </style>
        </head>
        <body>
        <h1>Page key created</h1>
        Your new single-use page link:<br>
        <nl> <?php echo "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH . "?" . $new; ?></nl>
        </body>
        </html>
        <?php } else {
            header("HTTP/1.0 404 Not Found");
        } ?>

    And the last one, variables.php:

        <?php
        define('PROTECTED_DOWNLOAD', 'download.php');
        define('DOWNLOAD_PATH', '/.work/page.php');
        define('SUGGESTED_FILENAME', 'download-doc.php');
        define('ADMIN_PASSWORD', '1234');
        define('EXPIRATION_DATE', '+36 hours');
        header("Cache-Control: no-cache, must-revalidate");
        header("Expires: " . date('U', strtotime(EXPIRATION_DATE)));
        ?>

    Visiting http://www.site.com/generate.php?1234 generates a unique link like page.php?key1234567890, which is then sent by email to my user. Now, how can I generate a link like page.php?key1234567890&[email protected]? I assume I would have to call the generator like this: generate.php?1234&[email protected]. P.S. The variable should be printed on the download page after "Hello, ". I have tried everything to get this working, with no luck. Thanks in advance for your help.
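    For illustration, here is a minimal, self-contained sketch of the query-string handling being asked about, assuming the "password&email" / "key&email" format stays as described (the sample values below are placeholders, not taken from the real script):

        <?php
        // Sketch only: split a query string of the form "key1234567890&user%40gmail.com"
        // into a key and an e-mail address, then rebuild the download link.
        define('DOWNLOAD_PATH', '/.work/page.php');    // same constant as variables.php

        $query = 'key1234567890&user%40gmail.com';     // stand-in for $_SERVER['QUERY_STRING']
        $parts = array_pad(explode('&', $query, 2), 2, '');
        $key   = $parts[0];
        $email = urldecode($parts[1]);

        // generate.php would append the address when it builds the link:
        $link = 'http://www.site.com' . DOWNLOAD_PATH . '?' . $key . '&' . urlencode($email);
        echo $link . "\n";

        // page.php would split its own query string the same way and greet the visitor:
        echo 'Hello, ' . htmlspecialchars($email) . "\n";
        ?>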

    Read the article

  • Is it possible to use a VB master page to cover an entirely separate directory written in C#?

    - by Jason Weber
    I have a company website written in VB.NET with five master pages. I recently began using a forum application, also ASP.NET 4.0, but written in C#. My forum directory is domain.com/knowledgebase/. Is there any way to take one of my VB.NET master pages and integrate it into the /knowledgebase/ directory? This is what's at the top of every page on my site:

        <%@ Page Title="USS Vision Inc." Language="VB" MasterPageFile="~/homepage.master" AutoEventWireup="false" CodeFile="default.aspx.vb" Inherits="_default" culture="auto" meta:resourcekey="PageResource1" uiculture="auto" Debug="true" %>

    This is what's in my /knowledgebase/ directory:

        <%@ Page Language="C#" AutoEventWireup="true" ValidateRequest="false" Inherits="YAF.ForumPageBase" culture="auto" uiculture="auto" %>
        <%@ Register TagPrefix="YAF" Assembly="YAF" Namespace="YAF" %>
        <script runat="server">

    Is it somehow possible to use, for instance, homepage.master in the /knowledgebase/ directory? If so, how would I accomplish this? Thanks for any guidance anybody can offer!
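    A compiled master page is language-neutral, so a C# page can generally reference a VB master as long as the two are not compiled into the same assembly (in a Web Site project, codeSubDirectories can keep the languages apart). A hedged sketch of what the forum page directive might then look like - the ContentPlaceHolderID "MainContent" is an assumption, not taken from the actual homepage.master:

        <%@ Page Title="Knowledge Base" Language="C#" AutoEventWireup="true"
            ValidateRequest="false" MasterPageFile="~/homepage.master"
            Inherits="YAF.ForumPageBase" culture="auto" uiculture="auto" %>
        <%@ Register TagPrefix="YAF" Assembly="YAF" Namespace="YAF" %>

        <asp:Content ID="ForumContent" ContentPlaceHolderID="MainContent" runat="server">
            <!-- forum markup from the original page body goes here -->
        </asp:Content>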

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log:

        /i
        /im
        /imaa
        /imag
        /image
        /images
        /images/d
        /images/di
        /images/dis

    They build up, character by character, toward a known resource (in the example above, /images/disrupt.jpg), and all come from the same IP. The request rate varies from 1/sec to 10/sec and seems somewhat random. They are obviously probing for something, and it looks scripted. How do I block this kind of behaviour? I thought of blocking the IP, at least for a given time, keeping in mind that: the request intervals look legitimate (at least I think so), and I don't want to end up blocking a search-engine bot, which may hit 404 URLs too (that's a different problem, I know). Also, do they always use the same IP?
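    A minimal way to act on the "block this IP for a while" idea is a manual .htaccess block; a sketch in Apache 2.2 syntax (the address is a placeholder for the one in the log; Apache 2.4 uses Require directives instead, e.g. "Require not ip ..." inside a RequireAll block). For automated, temporary bans, a log-watching tool such as fail2ban is the usual route:

        # block one scanning IP outright (Apache 2.2 syntax)
        Order allow,deny
        Allow from all
        Deny from 203.0.113.45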

    Read the article

  • URL changed to www - how much time does Google take to reindex?

    - by user20321
    It's been about a week since my host provider changed my URL from the non-www to the www version. Now I want to ask: how much time does Google take to remove sitename.com from its index and replace it with www.sitename.com? The site redirection has already been set up by the host provider, but Google is still showing the old URLs. My new URLs are indexed, but my main URL www.sitename.com is not. Or do I have to remove the old URLs myself? It's already been about 5-6 days.

    Read the article

  • Google indexed site's address by accident. What do I do now?

    - by AndrejaKo
    I was making a site for a friend of mine and he wanted to be able to see my progress as I worked on the site, so I decided to put the site on a server on my computer and enable access by a domain name registered to me. It turns out that I forgot to set up a robots.txt file for the site and somehow Google indexed the site. My question is: What do I do now? As I understand it, Google doesn't like duplicate content and my friend could have problems when I upload the new site to his server. Right now his current site, which only has a work in progress page, is first on Google when searching for relevant keywords and I really really don't want to damage that. Is there anything else I need to be concerned about?
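    The missing piece described above - a robots.txt that keeps compliant crawlers off the work-in-progress copy - is a two-line sketch (note it only prevents future crawling; it does not remove pages that are already indexed):

        # robots.txt on the development copy only
        User-agent: *
        Disallow: /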

    Read the article

  • Working with different URL structures

    - by Dane411
    As I'm quite a newbie in this field, I have some doubts I couldn't resolve on Google, i.e.: If I'm not wrong, index.html makes it possible to omit the filename from the URL: www.example.com/ is equal to www.example.com/index.html. And that works for subdirectories too, right? e.g. www.example.com/music/ Is there any other way to achieve this without using an index.html file? (I've read something about converting dynamic URLs to static ones: ./?var1=value1&varN=valueN - ./value1/valueN) How can I convert www.example.com/music/ to music.example.com/, and why should that be done? Thanks in advance!
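    On the "without an index.html" part: on Apache the default document is just configuration (mod_dir), so a one-line sketch like this in .htaccess serves a different file for directory requests - the filenames here are placeholders:

        # try home.php first, then index.html, for any request ending at a directory
        DirectoryIndex home.php index.html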

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net, and I constantly get requests from people asking me to remove links to their sites, threatening to disavow the links and warning me of Google penalties. All of this is a bit frustrating, because first, I use nofollow on any link outside the YouReview.net domain, and second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: Do disavowed links penalize the site that was disavowed? And does the "nofollow" attribute on tags absolutely guarantee that the link is not followed and not counted for search-engine ranking? Why don't more people know about nofollow?
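    For reference, the markup pattern the question relies on - an outbound link carrying the nofollow hint (the URL is a placeholder):

        <!-- an outbound link search engines are asked not to follow or count -->
        <a href="http://example.com/some-page" rel="nofollow">some page</a>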

    Read the article

  • YouTube custom thumbnails feature availability

    - by skat
    I've been trying to figure this out on my own for weeks, but now I give up. The 'custom thumbnails' feature on YouTube is a controversial one; it has been changed so much that even YouTube's own FAQ doesn't fully describe it (as far as I can see). I have a YouTube channel for one of my websites. This channel is the main marketing force for my website - it brings all the boys to my yard (I mean, website). So I have to use every trick available to increase my visibility on YouTube, and damn, those custom thumbnails are giving me a hard time. As far as I understand, this is the current state of the 'custom thumbnail' feature: "If your account is in good standing, you may have the ability to upload custom thumbnails for your video uploads." (c) https://support.google.com/youtube/answer/138008 My channel is in good standing and has more than 50,000 views. So why the hell is my account still not eligible for this feature? Does anyone have any idea?

    Read the article

  • Looking for free, specific Ip2Location Database

    - by Andresch Serj
    I am searching for a free database (such as a regularly updated XML or CSV file) that maps IP addresses to specific locations. I want more information than just the country: some sort of region or city reference, even if it ends up being a number that makes no sense to me by itself. It doesn't have to be perfectly accurate or always up to date, either. It is just to distinguish between user groups, not to monitor or spy on them.

    Read the article

  • Specific website not working on my connection [closed]

    - by Tsury
    For some reason, the website http://www.woopra.com/ doesn't render properly (it shows as plain HTML, with no CSS/images/JS) on my computer or on my Android tablet (they both share the same internet connection). It does, however, work on my WP 7.5 phone (over 3G). When I set up a wifi hotspot and use the tablet through it, the site still doesn't work (I tried clearing the cache). I tried FF, Chrome and IE (multiple versions). What is going on here? I'm clueless! Thanks.

    Read the article

  • Googlebot requesting invalid url

    - by Rob Walker
    I have a web app which emails exceptions to me automatically. This morning there was an error relating to a URL: /Catalog/LiveCatalog?id=ylwpfqzts The id is invalid (it should be a GUID) and caused a parsing error. Everything was handled correctly, and an error page was returned. But what was odd is that the user agent reported itself as Googlebot, and the IP is registered to Google. The URL would never have been generated by my web app, but it doesn't look particularly malicious. Has anyone ever seen anything like this?

    Read the article

  • Why does googling for keycaptcha give results on reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: How can Google's manipulation of the search results it presents to the general public be stopped?

    I google frequently, and more and more often, when searching for a particular software product, I am given results about Google's own products instead. For example, if I google the keyword keycaptcha restricted to the "Past 24 hours" (after clicking "Show search tools" - "Past 24 hours" in the left sidebar of the browser), the search results show only results about reCAPTCHA. (A screenshot was attached here.) However, if I enclose keycaptcha in quotes, the results are "correct" (well, sort of, since they are still distorted compared with other search engines). I have checked this over a few months from different domains at different ISPs, on different operating systems and from a dozen browsers. The results are the same. Why is this, and how can it possibly be corrected? My related posts: "How Gmail spam filter works?" and "IP addresses blacklisting".

    Update: It is impossible for me to use google.com directly, as I am always redirected from google.com to google.ru by Google's ip-address "auto-detect location" convenience. Google's help says my location auto-detection cannot be switched off because it is such a helpful feature. There is a workaround, google.com/ncr ("no country redirect"), to prevent the redirection from google.com - but the results are exactly the same. OK, I can search for the quoted "keycaptcha"; I am already accustomed to these quirks of Google's. But the question arises: why on earth burn time promoting a product if Google shows its own brand (reCAPTCHA) in place of other product brands, and what can be done about it? The general user will not understand that he was misled and will just pick up the first (wrong) results.

    Update 2: Note that this googling behaviour is independent of whether I am logged in to (or out of) a Google account, and of which account, browser (I tried Opera, Chrome, Firefox, IE of different versions, Safari), OS or domain. There are many such cases, but I deliberately targeted one concrete, restricted example to avoid wandering among unrelated details and peculiarities. @Michael, first, that is not true, and this text contains two links to real and significant results. I also wrote that this is just one concrete example of many, based on many months of experience. These distortions happen when clicking on Past 24 hours, Past week, Past month or Past year, for many other keywords and search configurations. Second, the absence of results is itself a result, and there is no point in sneakily substituting another, unsolicited one for it - that is the definition of spam and scam. Third, the question is not about workarounds such as how to phrase search queries or which other search engines to use. The question is how to straighten out Google's results so they stop disorienting the general public.

    Update: I could not understand: does nobody reproduce the behaviour I describe (i.e. when I click the "Past 24 hours" link while searching for keycaptcha, only results about reCAPTCHA are presented)?

    Update: And the same for "Past week": (screenshot)

    Read the article

  • Custom domain pointing to Tumblr blog

    - by Julius
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain, with hosting at NearlyFreeSpeech.net. My active nameservers at GoDaddy already point to my authoritative ones at NFS.net, and that works. However, I'm baffled by the correct configuration for pointing to my Tumblr. Preferably, (A) I'd like http://mydomain.com to host the blog and http://www.mydomain.com to redirect to http://mydomain.com. If that is too difficult, my next preference (B) is for http://www.mydomain.com to host the blog while http://mydomain.com redirects to http://www.mydomain.com. My third preference (C) is for a subdomain like http://tumblr.mydomain.com to host the blog, with, I guess, both http://mydomain.com and http://www.mydomain.com redirecting to it. I currently have two aliases, mydomain.com and www.mydomain.com, pointing to my permanent NFS IP at mydomain.nfshost.com, and when I try to add: (1) an A record pointing mydomain.com to the IP 66.6.44.4 as per Tumblr's instructions, it tells me I already have the bare domain as an alias, so I can't do that; (2) the A record on the www.mydomain.com alias - I can do this whether or not www.mydomain.com is set as an alias, but when I tried it with mydomain.com set as the canonical name, visiting either mydomain.com or www.mydomain.com made the two redirect to each other continually until an error was thrown. So I was wondering if there is a ninja who could save me some hair-pulling and tell me the correct way to configure A, or else B, or else C.
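    For option A, the records would look roughly like this zone-file sketch. The apex IP (66.6.44.4) is the one quoted in the question; the CNAME target (domains.tumblr.com) is an assumption based on Tumblr's usual custom-domain instructions and should be verified against their current docs. The www-to-bare-domain redirect is then handled by the host rather than by DNS:

        ; zone-file sketch for option A (CNAME target is an assumption)
        mydomain.com.       IN  A      66.6.44.4
        www.mydomain.com.   IN  CNAME  domains.tumblr.com.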

    Read the article

  • Structured data: Field missing: price [on hold]

    - by Handi Occasion
    I just marked up my site with structured data (microdata) to improve how it appears in search. After creating the metadata, I checked the result of my work with Google Webmaster Tools, and at first the data seemed to be picked up correctly (see here). Today I looked in Webmaster Tools - Structured Data, and surprise: around 50 ads are flagged with the error "Field missing: price", while the price is right there! Any idea? Thank you.
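    For comparison, a minimal schema.org Offer fragment with a machine-readable price, in microdata (the property names are from the schema.org vocabulary; the item and values are placeholders), which can be checked in Google's structured-data testing tool:

        <div itemscope itemtype="http://schema.org/Offer">
          <span itemprop="name">Used wheelchair</span>
          <span itemprop="price">450.00</span>
          <meta itemprop="priceCurrency" content="EUR" />
        </div>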

    Read the article

  • Attach a WordPress.org blog to my BigCommerce store as a sub-domain

    - by user1323814
    I am stuck in a peculiar situation. I have a store on BigCommerce configured with a domain from GoDaddy (mystore.com). I recently created a custom WordPress blog and hosted it on 1and1 hosting (s418783372.onlinehome.us), since BigCommerce can't host WordPress. Now I want to serve it from a subdomain of my main BigCommerce store (models.mystore.com), but it doesn't seem to be working, since BigCommerce is the domain manager but GoDaddy is the registrar and 1and1 is the host, so no single party controls the whole chain. I tried setting up a CNAME record on BigCommerce, and when that didn't work I asked BigCommerce about it, but they said they can't do anything since they aren't the registrar, and gave me a message saying: "The responsibility to show the name in the browser on the site is up to the server or site admin. The CNAME can only get the browser there."
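    The record itself is a one-liner wherever the domain's DNS zone is actually served (both hostnames below come from the question; whether GoDaddy or BigCommerce serves the zone has to be checked first), but as the support message says, 1and1 must also be configured to answer for models.mystore.com:

        ; point the subdomain at the 1and1-hosted blog
        models.mystore.com.  IN  CNAME  s418783372.onlinehome.us.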

    Read the article

  • SEO and multiple domains to same site

    - by mwb
    I have one website and two domain names that I want to point to the same site install, so whether you go to name-one.com or name-two.com you see the exact same site. Now, I can either set up name-two.com to serve a 301 redirect to name-one.com, or set up name-two.com as a CNAME in the DNS pointing to name-one.com. What are the implications of each for SEO? Which is recommended? I would guess it's better for branding to use a 301 redirect, so that visitors see one consistent URL for my site, right? The reason I want the two domains is that I want a version with regional letters ('ö' instead of 'oe') in the name.
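    A sketch of the 301 variant in .htaccess on the shared install, assuming Apache with mod_rewrite (the domain names are the question's own placeholders):

        # redirect every request for name-two.com permanently to name-one.com
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?name-two\.com$ [NC]
        RewriteRule ^(.*)$ http://name-one.com/$1 [R=301,L]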

    Read the article

  • Guest blogging, reciprocal links & nofollow

    - by sam
    When writing a guest post for another site, if I link back to myself within the post, that counts as an inbound link. If I then write something on my own blog like "have a look at this post I wrote for _" and link to it, the links become reciprocal, correct? Thus cancelling each other out. If I were to make the link back to my article a nofollow link, would I still get the link juice? And if I write a guest post and that site later wants to write a guest post on my site, what is the best way to handle it, since wouldn't those links also cancel each other out and have no effect?

    Read the article

  • Is there an elegant way to track multiple domains under separate accounts with google analytics?

    - by J_M_J
    I have a situation where a content management system uses the same template for multiple websites with different domain names, and I can't make a separate template for each. However, each website needs to be tracked with Google Analytics. Would it be appropriate to track each domain like this, by putting in some conditional code? And would this be robust enough not to break? Is there a more elegant way to do this?

        <script type="text/javascript">
        var _gaq = _gaq || [];
        switch (location.hostname) {
            case 'www.aaa.com':
                _gaq.push(['_setAccount', 'UA-xxxxxxx-1']);
                break;
            case 'www.bbb.com':
                _gaq.push(['_setAccount', 'UA-xxxxxxx-2']);
                break;
            case 'www.ccc.com':
                _gaq.push(['_setAccount', 'UA-xxxxxxx-3']);
                break;
        }
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
        })();
        </script>

    Just to be clear, each website is a separate domain name and must be tracked separately, NOT different domains with the same pages on one analytics profile.
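    A slightly more compact shape of the same idea is a lookup table, which avoids repeating the switch/break scaffolding (a sketch against the same classic ga.js snippet; the UA ids remain the question's placeholders):

        // table-driven account selection for classic ga.js
        var accounts = {
            'www.aaa.com': 'UA-xxxxxxx-1',
            'www.bbb.com': 'UA-xxxxxxx-2',
            'www.ccc.com': 'UA-xxxxxxx-3'
        };
        var _gaq = _gaq || [];
        if (accounts[location.hostname]) {
            _gaq.push(['_setAccount', accounts[location.hostname]]);
            _gaq.push(['_trackPageview']);
        }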

    Read the article

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest Search Engine Optimization issue. I have asked the IT folks at my university, I have asked people on the Joomla forums, and I have been trying to sort it out using Google Webmaster Tools for more than two months, to little avail. I want to know if I have some blatantly wrong configuration somewhere that is causing search engines to be unable to index this site. I noticed a similar issue with another website I found online (ECEGSA - The University of British Columbia, at gsa.ece.ubc.ca), which makes me believe this might be something other people are looking for an answer to as well. Here are the details: the website in question is http://gsa.ece.umd.edu/. It runs the latest Joomla 2.5.x. The site has been up since around mid-December 2013, and I noticed right from the start that it was not being indexed correctly on Google. Specifically, I see the following message when I search for the website on Google: "A description for this result is not available because of this site's robots.txt - learn more." The thing is, from December until around March I used the default Joomla robots.txt file:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        Disallow: /cli/
        Disallow: /components/
        Disallow: /images/
        Disallow: /includes/
        Disallow: /installation/
        Disallow: /language/
        Disallow: /libraries/
        Disallow: /logs/
        Disallow: /media/
        Disallow: /modules/
        Disallow: /plugins/
        Disallow: /templates/
        Disallow: /tmp/

    Nothing there should stop Google from searching my website. Even more confusingly, in Google Webmaster Tools under the "Blocked URLs" tab, when I test many of the links on the site, they all show up as "Allowed". I then tried adding a sitemap and referencing it in the robots.txt file. That did not help: same exact search result, same behaviour in the "Blocked URLs" tab. In addition, the "Sitemaps" tab reports an error for several links saying "URL is robotted out" - yet I tried those exact links under "Blocked URLs" and they are allowed! I then tried deleting the robots.txt file. No use; same exact problem. (An example screenshot from Google Webmaster Tools was attached here.) At this point, neither I nor anyone in the IT department here can give a rational explanation for why this is happening, and no one on the Joomla forums seems to understand what is going on. Based on what I have explained, does it seem that I have somehow set something incorrectly in robots.txt, in .htaccess, or somewhere else?
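    For reference, the sitemap reference mentioned above is a single line in robots.txt (the path here is hypothetical - it should match wherever the sitemap actually lives):

        Sitemap: http://gsa.ece.umd.edu/sitemap.xml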

    Read the article

  • How to deal with overly aggressive "Link Take Down Demands"?

    - by Eoin
    I've recently been receiving a large number of emails requesting that I clean up link spam on my forum. Initially the emails were very polite and professional, and I was happy to remove the links. Recently the emails have become very abrasive; here is a particularly rude example:

        From: [email protected]
        To: [email protected]

        Hi,
        This is the second time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We really do need to remove this link. We have to report to Google any link we were unable to remove, and I wouldn't want to have to include your site in the list. Could you please remove our link from this page and any other page on your site?
        Thank You,
        Name Changed

    Behind the superficial pleasantries I sense some very real maliciousness. Note the sender's address, "DMCA Violations": I don't see how the DMCA is involved here, except as a phrase that tends to strike fear into people. The address also doesn't match the company being linked to at all - how am I to trust that they truly operate on behalf of company-two when they don't even use one of its email addresses? (My own address is hidden by PrivacyPost; while that service has legitimate uses, I feel it's highly unprofessional for communication between two companies.) Then the claim "This is the second time...": every email I've received has started like this, but a check of my spam filters has never revealed a first mail. Initially I gave them the benefit of the doubt, but by now it's clear this is a cheap ploy to put me on the defensive. And finally, worst of all: the threat of reporting me to Google if I don't do everything they ask. I sent a polite reply asking for more information - I have no idea if the address was even valid - but I never received any response. Much later I got this follow-up:

        From: [email protected]
        To: [email protected]

        Hi,
        This is the final time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We will soon be reporting to Google any link we were unable to remove, and currently your site will have to be on the list. Could you please remove our link from this page and any other page on your site? I appreciate your urgent attention to this matter.
        Thank You,
        Name Changed

    This time the from-address was more personal, though still not obviously connected to the spammed company. Let's be honest: I don't for one second believe these companies were the victims of a third-party spammer, as they claim. The links in question were created well over a year ago, and I firmly believe the companies were directly responsible for the spam links - a type of spam that has plagued my forum. Now they have the audacity to demand that I spend my time cleaning up their mess, using threats to make sure they get their way. Have recent changes in Google's algorithms turned all the cash they spent spamming the web into a liability? If so, I can see why these companies are suddenly running scared. Frankly, cleaning up my forum is a good thing, but the threats sicken me. So my question here is specifically about the threats: are they valid, and would such reports to Google destroy my page rankings? And is there a way I can report this abusive behaviour to Google?

    Read the article

  • Getting link to abstract indexed in Google Scholar

    - by JordanReiter
    We have a large digital library with thousands of papers indexed in Google Scholar. We allow Google Scholar to index our PDFs, but they're blocked unless you have a subscription. So Google has full-text indexing/searching of our PDFs (great!) but then the links point just to those PDFs (boo!) instead of the more helpful abstract pages. Does anyone know what could cause an issue like this? I am, to the best of my knowledge, following all of the guidelines laid out in their Inclusion Guidelines. Here's some example meta data:

        <meta name="citation_title" content="Sample Title"/>
        <meta name="citation_author" content="LastName, FirstName"/>
        <meta name="citation_publication_date" content="2012/06/26"/>
        <meta name="citation_volume" content="1"/>
        <meta name="citation_issue" content="1"/>
        <meta name="citation_firstpage" content="10"/>
        <meta name="citation_lastpage" content="20"/>
        <meta name="citation_conference_title" content="Name of the Conference"/>
        <meta name="citation_isbn" content="1-234567-89-X"/>
        <meta name="citation_pdf_url" content="http://www.example.org/p/1234/proceeding_1234.pdf"/>
        <meta name="citation_fulltext_html_url" content="http://www.example.org/f/1234/"/>
        <meta name="citation_abstract_html_url" content="http://www.example.org/p/1234/"/>
        <link rel="canonical" href="http://www.example.org/p/1234/" />

    example.org/p/1234 is the abstract page for the article; example.org/f/1234 is the fulltext link accessible to subscribers only (and to Google Scholar); example.org/p/1234/proceeding_1234.pdf is the fulltext PDF link.

    Read the article

  • htaccess url rewrite

    - by user761396
    I used to have these rewrite rules:

        RewriteRule ^([^/]+)\.htm$ index.php?c=$1 [NC]
        RewriteRule ^([^/]+)\.htm/([0-9.]+)$ index.php?c=$1&amt=$2 [NC]

    Now I have to change them to:

        RewriteRule ^1/([^/]+)\.htm$ index.php?c=$1 [NC]
        RewriteRule ^1/([^/]+)\.htm/([0-9.]+)$ index.php?c=$1&amt=$2 [NC]
        RewriteRule ^2/([^/]+)\.htm$ index2.php?c=$1 [NC]
        RewriteRule ^2/([^/]+)\.htm/([0-9.]+)$ index2.php?c=$1&amt=$2 [NC]

    (The difference is the added subdirectory.) My question is how to redirect my old URLs to the 1/ subdirectory. Thank you.
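    A sketch of that redirect, assuming Apache mod_rewrite and placing these rules above the new ones so the old flat URLs are permanently forwarded first:

        # send old flat URLs to the new /1/ prefix
        RewriteRule ^([^/]+)\.htm$ /1/$1.htm [R=301,L]
        RewriteRule ^([^/]+)\.htm/([0-9.]+)$ /1/$1.htm/$2 [R=301,L]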

    Read the article

  • Best stats tool for cross-domain tracking

    - by kidbrax
    We build a webapp that allows users to run the app under their own subdomain, so we run it at search.domainX.com, search.domainY.com and so on. Each client has their own Google Analytics property to track individual stats. But we also want to know the general traffic for all clients of our app combined - stuff like "among all our clients we had x number of views." What is the best tool for tracking that sort of thing?
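    One common shape for this with classic ga.js is a second, named "roll-up" tracker on every page alongside the client's own - a sketch, where the UA ids are placeholders and the named-tracker prefix ("rollup.") is part of the classic _gaq API:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-CLIENT-1']);         // the client's own property
        _gaq.push(['_trackPageview']);
        _gaq.push(['rollup._setAccount', 'UA-ROLLUP-1']);  // shared roll-up property
        _gaq.push(['rollup._trackPageview']);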

    Read the article

  • Is redirecting mydomain.eu & mydomain.net to mydomain.com using .htaccess spammy?

    - by sam
    A client has asked me to develop their site. They already own three domains - mydomain.eu, .net and .com - and they want all traffic from .eu and .net to redirect to .com. I've explained to them that it is not that important, since people will find them through search engines rather than typing in the domain, but they would still like me to do it. As far as I know this is fine from an SEO point of view, but I thought I'd double-check.

    Read the article

  • CNAME cross subdomain mapping problem

    - by Ron Ranieri
    I have the exact same problem as this, but I don't understand the solution. I have two different domains (with different hosting providers), domain-one.com and domain-two.com, and I want sub.domain-one.com to point to sub.domain-two.com using a CNAME. Everything has been set up and propagates correctly, but when I go to sub.domain-one.com I now get the server's default page, while going directly to sub.domain-two.com loads the page correctly. I've tried adding the subdomain under Addon Domains in cPanel, but the problem persists.
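    This is usually a name-based virtual-hosting issue rather than a DNS one: the CNAME delivers the browser to domain-two's server, but that server must also be configured to answer for the aliased hostname. A sketch of what that looks like as an Apache virtual host on domain-two's server (the DocumentRoot is hypothetical; on shared cPanel hosting the equivalent step is adding sub.domain-one.com as an alias/parked domain on the account):

        <VirtualHost *:80>
            ServerName sub.domain-two.com
            ServerAlias sub.domain-one.com
            DocumentRoot /home/user/public_html/sub
        </VirtualHost>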

    Read the article
