Search Results

Search found 21350 results on 854 pages for 'url parsing'.

Page 484 of 854

  • 301 re-direct all external links to new domain

    - by Dean Legg
    I have changed the main domain to a sub-domain and would like to redirect all external links to the new sub-domain. I have read a few articles but am having no luck editing the .htaccess, as my changes might interfere with the rules already in there. Old: www.example.co.uk New: https://secure.example.co.uk The current rules are quite handy because they seem to have sorted out the structure for all internal links. They have even updated the file path for images (or this could just be WordPress, as the URL was updated under General Settings). This is the current .htaccess:

        <files wp-config.php>
        order allow,deny
        deny from all
        </files>
        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
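
    A hedged sketch of one way to add the missing host-level redirect, assuming mod_rewrite is available and that secure.example.co.uk serves the same paths (the host names come from the question; the rule itself is an untested illustration). It would go above the WordPress block so it fires first:

        # Sketch: 301 any request for the old host to the new sub-domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.co\.uk$ [NC]
        RewriteRule ^(.*)$ https://secure.example.co.uk/$1 [R=301,L]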

    Read the article

  • Will this sitemap get me de indexed from Google?

    - by heavy rocker dude
    My site's URL is http://quantlabs.net/private/sitemap.xml. Will this sitemap get me de-indexed from Google? My new sitemap just got spidered by Google for some reason. Is it in danger of getting me de-indexed from Google's index? Does it look like spam, even though it is not meant to be? I am trying to figure out Google's threshold before it deems a sitemap spammy. The sitemap contains automated postings that differ only in the stock symbol provided, and quite a few of them were added within a short amount of time.

    Read the article

  • How to generate Visa checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements: Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX epoch timestamp and SHA256_hash is an SHA-256 hash of the following unseparated items: your shared secret, the timestamp from the transaction (exactly the same as UNIX_UTC_Timestamp), the resource path (API name), and this HTTPS request's query string. Note: the query string includes one or more parameters in name-value pair format, whose names are separated from values by equals signs (=); an empty value may be omitted, but the name and equals sign must be present. The initial question mark (?) is not included. Note: all parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters), with parameters separated from each other by an ampersand (&). Note: the query string must be URL-encoded (excepting the following characters, per RFC 3986: hyphen, period, underscore, and tilde). You can find this on Google under "visa checkout developer updating 1 px image".

    Read the article

  • Can I make query strings produce separate pages?

    - by John Smith
    I have a profile page with a URL like so: localhost/profile.php/?username=Bob I was wondering: if I had a separate <title> which changed according to the username, would these produce separate pages in the Google search results? How do I tell Google to only use the username string, or does it search within the title? On a similar note, how would I create a separate page with the username, like localhost/bob, instead of a query string, the way Facebook does? Do they make a new file for each user?
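
    A hedged .htaccess sketch of the clean-URL approach (assuming Apache with mod_rewrite; the file name mirrors the question, the pattern and flags are assumptions): instead of one file per user, a rewrite rule maps /bob internally onto the query-string form:

        # Sketch: serve /<username> from profile.php without changing the visible URL
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9_-]+)/?$ profile.php?username=$1 [L,QSA]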

    Read the article

  • Share on Facebook does not show thumbnail images

    - by matt74tm
    I have a PHP application which has a "Share on Facebook" button that, on the development server, shows the thumbnail images correctly and allows the user to select between them; on the live server, it does NOT show the thumbnail images at all. The relevant portion of the .htaccess file is:

        # Set up caching on media files for 2 days
        <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
        ExpiresDefault A172800
        Header append Cache-Control "public"
        </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine. Edit 1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

        ...
        RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
        ...
        RewriteRule ^.*/images/(.*)$ images/$1 [L]
        ...

    Would that somehow make a difference? Images appear fine all throughout the site. (I posted this question earlier as http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)

    Read the article

  • What is the best way to deal with 404s that are all trying to point to the same page that are from an external site?

    - by Lee
    I started getting 404s showing up in my Google Webmaster Tools from a site linking to a specific category, but with odd characters at the end of the URL. So, something like this: http://example.com/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0%E7%B7%A8%E9%9B%86 Google Webmaster Tools says there are about 120 of these links, and I can imagine there will be more to come. What is the best way to handle these links from an SEO point of view? I have heard that 301-redirecting too many links at one time can cause Google to ding the site, but I don't want this site to continue serving broken links. Any help on this would be appreciated.
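
    For what it's worth, a hedged .htaccess sketch of the usual fix, assuming Apache with mod_rewrite and that everything after the real category slug is junk (the category path comes from the question; the pattern is an untested guess):

        # Sketch: 301 /category/puppies<trailing garbage> back to the clean category URL
        RewriteEngine On
        RewriteRule ^category/(puppies)[^/]+$ /category/$1 [R=301,L]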

    Read the article

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content within a single domain, or across multiple domains, is not advised. This is understood, but I am not sure of any exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas /en-ja/about is most likely unique. Are GYM (Google, Yahoo, Microsoft) smart enough to understand that the initial URL segment is a locale specifier? Is there any robots.txt or header trickery, etc., that I should include to outline the site's international structure?

    Read the article

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ Apache httpd servers, and are looking to perform analysis on the logs, both for historical trending and for near-real-time monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, and overall request rate, but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time. I'm leaning towards building this as a centralized collector/server/storage setup, and am also considering the possibility of storing non-Apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc., who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:
    - All open source
    - Some way of collecting logs from the Apache machines that isn't too resource-intensive and transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near real time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs
    Any suggestions/pointers/ideas, either for "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.
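
    As a hedged sketch of just the collection leg, one low-overhead option is Apache's piped logs feeding syslog, which rsyslog or syslog-ng can then relay to a central host (the format string, tag, and facility are illustrative assumptions; %D is response time in microseconds):

        # httpd.conf sketch: add response time to the access log and ship it via the local syslog daemon
        LogFormat "%h %l %u %t \"%r\" %>s %b %D \"%{User-agent}i\"" central
        CustomLog "|/usr/bin/logger -t httpd -p local6.info" central

    The local syslog daemon would then be configured to forward local6.* over the network to the collector, keeping the Apache boxes themselves nearly untouched.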

    Read the article

  • The first non-Latin domain names are working, with URLs in Arabic characters

    Update of 07.05.2010 by Katleen. The first non-Latin domain names are working, with URLs in Arabic characters. A few hours ago, the first three non-Latin domain names were placed in the DNS root zone. They are therefore now in service, and work perfectly. Here is an example of what you might see in your browser's URL field if you visit one of these sites: [IMG]http://blog.icann.org/wp-content/uploads/2010/05/idn-example-450px.png[/IMG] These three new domains are السعودية. ("Al-Saudiah"), امارات. ("Emarat") and ...

    Read the article

  • Pidgin unable to connect to GTalk

    - by user42933
    I can't believe I'm asking this question after all these years, but after a fresh installation of Ubuntu 12.10, I'm unable to connect Pidgin to Google Talk. I use a Google Apps domain name, and the settings that I'm using are:
    Protocol: XMPP
    Username: ****
    Domain: ********.com
    Resource: Home
    In the Advanced tab: Connection security: Require encryption; "Allow plaintext auth over unencrypted streams" unchecked; Connect port: 5222; Connect server: talk.google.com; File transfer proxies: proxy.eu.jabber.org; BOSH URL: (blank). In the Proxy tab: No proxy. I had used these same settings on 12.04 and they had worked like a charm. Any help will be appreciated.

    Read the article

  • apache2 server returns (400) syntax error

    - by Thomas E
    There are 900 paths in Google's index to our homepage containing illegal characters. Example: http://www.seriesam.com/filmaffisch/TC%4NK Note the sequence "%4N". I have no idea where these come from, but I would like to update the Google index with a correct URL using a "canonical" link in the HTML code. The problem is that our apache2 server immediately sends a 400 error if you click the link above. How can I configure apache2 not to give an error code, but instead treat the link above as "correct"? Maybe by replacing the %4N with nothing.
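
    Since %4N is not a valid percent-escape (the two characters after % must be hex digits), httpd rejects the request before any rewrite rule can see it. One workaround sometimes suggested, as a hedged sketch only (it assumes .htaccess is honored and that a custom 400 document is still served for this class of error; fix-bad-url.php is an invented name):

        # Sketch: hand invalid-URL 400s to a script that can issue a 301 to a cleaned URL
        ErrorDocument 400 /fix-bad-url.php

    The hypothetical script could inspect the requested URI, strip the bad escape, and redirect; whether Google then picks up the corrected URL is a separate question.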

    Read the article

  • wrong SEO result for staging website

    - by OCD
    We have a domain with URLs like http://www.xxxxstaging.com/abc.php?id=10 which has a pretty good ranking for some keywords in Google. However, this is our staging website, and due to some historical reason it was exposed to the public. Now we have a production domain (the one we want to be publicly available), say http://www.xxxxproduction.com/abc.php?id=10, which has exactly the same HTML output as xxxxstaging.com. Our question is: for exactly the same keyword search, the production site never appears on any page of Google. Is it possible for us to "switch" the ranking in Google, either by telling the robot that they are indeed the same website, or by asking Google to change it for us? Thank you.
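
    A hedged .htaccess sketch of the usual remedy, a site-wide 301 from staging to production (the host names are the placeholders from the question; it assumes Apache with mod_rewrite on the staging box and identical paths on both hosts):

        # Sketch: permanently redirect every staging URL to its production twin
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.xxxxstaging\.com$ [NC]
        RewriteRule ^(.*)$ http://www.xxxxproduction.com/$1 [R=301,L]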

    Read the article

  • ERROR CHECKING !!

    - by moata_u
    I am trying to catch any error when running a command, in order to write a log file/report. I was trying to write this code:

        # Function for validation
        function valid () {
            if [ $? -eq 0 ]; then
                echo "$var1" ": status : OK"
            else
                echo "$var1" ": status : ERROR"
            fi
        }

        # Command function
        function save() {
            sed -i "/:@/c connection.url=jdbc:oracle:thin:@$ip:1521:$dataBase" $search
            var1="adding database ip"
            valid $var1

            sed -i "/connection.username/c connection.username=$name" #$search
            retval=$?
            var1="adding database SID"
            valid $var1 $retval
        }
        save

    The output is:

        adding database ip : status : OK
        sed: no input file

    I want the output this way:

        adding database ip : status : OK
        sed: no input file : status : ERROR

    (or)

        adding database ip : status : OK
        adding database SID : status : ERROR

    I have tried a lot, but it is not working for me :(((

    Read the article

  • AGPL License - does it apply in this scenario?

    - by user1645310
    There is an AGPLv3-based piece of software (the Client) that makes web service calls (using SOAP) to another piece of software (the Server: commercial, cloud-based). There is no common code or any connection whatsoever between these two except for the web service calls being made. My questions:
    - Does the Server need to be AGPL too? I guess not, but I would like to confirm.
    - Let us say the endpoint URL for the Server can be configured on the Client side (by editing an XML file) to connect it to different Servers (again, there is no connection other than the web service calls being made): does that require any of these Servers to be AGPL?
    - Are there any issues in running the Client as a DLL that is loaded by other commercial applications on users' desktops? Does it require these other applications also to be AGPL?

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses. There will be no shared keys; this is all done with a username and password that will always be different. They will need to be fed into the command via variables pulled from a database using an AutoIt script, based on the WAN IP of the remote host. Now, the actual parsing of the data and creation of the array will be easy if I can just get the text of the ARP table. Is there any way to SSH to a remote host, run a command, and return the data from that command to the client in a batch script or Perl script? (It is OK if it writes the text to a file; I can read it out of the file later. I just need it to get to the client.)

    Read the article

  • RewriteRule working local but not on remote server

    - by m0tv
    I have a .htaccess file with one simple RewriteRule:

        RewriteEngine on
        RewriteRule ^([A-Za-z0-9-]+)$ ?site=$1

    I want to take a URL like http://www.example.com/imprint and forward it to http://www.example.com/?site=imprint I checked this rule with a RewriteRule tester, which gave me the results I want to achieve, and on my local development system it works well too. But on the remote server the URLs just give me a 404 error. Other, simpler rewrite rules work with no problems, so everything must be set up correctly (I think...). The problem is that I don't have access to any error logs or the server configs, so the only thing I can do is guess. Can anyone tell me if there's something wrong with this rule, or anything else I can do or test to solve this? Or has someone an idea what could be wrong on the server?
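
    A hedged guess at a more defensive version of the rule (assumptions: the remote server honors .htaccess at all, i.e. AllowOverride permits FileInfo, and the site's front controller is index.php; neither is confirmed by the question):

        # Sketch: only rewrite when no real file/directory matches, and keep any existing query string
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9-]+)$ index.php?site=$1 [L,QSA]

    If even this is ignored, the usual suspect is AllowOverride None in the server config, which silently disables .htaccess files and would explain rules that work locally but 404 remotely.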

    Read the article

  • Will removing unused query string parameters negatively affect SEO?

    - by trm
    Will changing links to remove query string parameters that are no longer used have any negative impact on search engine rankings? Say I have a page about.php on my site, and all of my links to this page are of the form http://www.example.com/about.php?foo=bar and I've made some changes to the script such that the parameter foo is no longer used. I would like to remove the unused parameter from the links so the URL will look cleaner, but I am concerned that this could cause problems with SEO. Is it safe to remove ?foo=bar from my links?
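
    If the concern is the already-indexed ?foo=bar URLs rather than the links themselves, a hedged .htaccess sketch of a cleanup redirect (it assumes Apache 2.4 or later for the QSD, query-string-discard, flag; the path and parameter come from the question):

        # Sketch: 301 /about.php?foo=bar to plain /about.php, discarding the stale parameter
        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^foo=bar$
        RewriteRule ^about\.php$ /about.php [R=301,L,QSD]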

    Read the article

  • Which torrent client has command line arguments to start/stop downloads?

    - by virpara
    First of all, I want to create a shell script to start/stop downloads in a torrent client. I don't need a CLI, but if you know how to do it with a CLI from a shell script, that is okay too. I use jDownloader, which is a GUI-based application but has some command-line arguments, listed below, which I use to start/stop downloads:

        -h/--help                  Show this help message
        -a/--add-link(s)           Add links
        -co/--add-container(s)     Add containers
        -d/--start-download        Start download
        -D/--stop-download         Stop download
        -H/--hide                  Don't open Linkgrabber when adding links
        -m/--minimize              Minimize download window
        -f/--focus                 Get jD to foreground/focus
        -s/--show                  Show JAC prepared captchas
        -t/--train                 Train a JAC method
        -r/--reconnect             Perform a reconnect
        -C/--captcha <filepath or url> <method>  Get code from image using JAntiCaptcha
        -p/--add-password(s)       Add passwords
        -n/--new-instance          Force new instance if another jD is running

    So I can easily start/stop downloads as follows:

        jdownloader --start-download
        jdownloader --stop-download

    Now I want a torrent client that can do the same through a shell script.

    Read the article

  • subdomain not working and added /mysubdomains/devsitename

    - by krish
    I have a site www.example.com which works fine, and I have a number of sub-domains which also work fine, except one. When I visit subdomain.example.com, the address bar shows: subdomain.example.com --> www.subdomain.example.com/mysubdomains/devsitename It added the www and the /mysubdomains/devsitename, which is the hosted directory on my server. Then it came up with "the website you were looking for is unavailable". Has anyone experienced this issue? Do you know how to resolve it?

    Read the article

  • Adding your website to free web directories as a link building strategy

    - by Man
    It's been two months since I launched a website. I recently ran into some websites which list directories of other websites; some examples are http://www.addgoogleurl.com/ and webdirectorieslist.com, etc. I was talking with my colleague and he says that adding the URL of my website to these kinds of websites will negate the effect of my other organic, real links. Does Google count links from these web-directory sites positively or negatively? Do you have any source for your answer to refer to? I found a similar question asked before on webmasters.SE, but it asks about many links from a single website.

    Read the article

  • New version: Sun Rack II capacity calculator

    - by uwes
    A new release of the Sun Rack II capacity calculator is available on the eSTEP portal. The tool calculates all the data necessary (power requirements, BTU, number of rack units, needed power outlets, etc.) when inserting the many different kinds of HW equipment into a Sun Rack II cabinet (versions 1000 and 1200). It takes into consideration most of the available servers, storage devices, tapes, and Netra products. There are also a couple of third-party products which are taken into account. The spreadsheet can be downloaded from the eSTEP portal. URL: http://launch.oracle.com/ PIN: eSTEP_2011 The material can be found under the tab eSTEP Download.

    Read the article

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID?

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately, I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. My requirements are: the URL should continue to show www.site.com/john4 even though it was internally rewritten to www.site.com/index.php?Agent=john4; and if a file (of any extension) or a directory exists with that name, the entire process stops and it tries to pull that file instead. For example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists. Thank you ahead of time; I am completely at a loss right now.
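
    A hedged sketch combining both requirements (assumptions: Apache with mod_rewrite, index.php handling the Agent parameter as described, and a fallback that only checks for a .php twin of the requested name):

        RewriteEngine On
        # Sketch: if <name>.php exists, serve it (/file -> /file.php)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^([A-Za-z0-9-]+)$ $1.php [L]
        # Sketch: otherwise treat the segment as an agent name; no R flag, so the browser keeps /john4
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9-]+)$ index.php?Agent=$1 [L,QSA]

    Existing files and directories fall through both rules untouched, which covers the "entire process stops" requirement; directories are then handled by DirectoryIndex as usual.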

    Read the article

  • Grub can't find device on boot resulting in Grub Rescue

    - by user1160163
    So I have two hard drives: a 320GB HDD and a 20GB SSD. Before, I had Windows 7 on the HDD and Ubuntu on the SSD, but I wanted to get rid of Windows and reinstall a clean Ubuntu on the SSD, then use the HDD for storage. So I deleted everything from the HDD, set up the SSD with 18GB ext4 and 2GB swap, and installed Ubuntu on the 18GB ext4 partition. Now when I boot up I get "Error: No such device. Grub Rescue". I have a live USB and I ran Boot Repair following the instructions in "grub rescue after install of Ubuntu 12.04 (dual boot)"; it says it was successful, but I still have the same problem. This is the URL given by Boot Repair: http://paste.ubuntu.com/1257988/ Thanks for any help given.

    Read the article

  • VirtualBox Port Forward

    - by john.graves(at)oracle.com
    A great new feature in VirtualBox 4.0 is the ability to use NAT networking and forward ports without needing to use ssh -L/-R tricks. This is great for booting multiple VM domains simultaneously. It is possible to have several instances which map back to the host machine, with different ports on localhost automatically forwarding to the correct VM. This avoids the hassle of setting up DNS entries or static IP addresses. In this example, I'm mapping host ports 3xxxx to the VMs' well-known server ports. Note: it is important to set up the front-end HTTP host/port to avoid incorrect URL rewriting. You may also need to set up an HTTP channel to deal with local traffic, which uses the network address 10.0.2.15. Happy VMing.

    Read the article

  • Google Maps keeps displaying in Spanish

    - by Ken Hortsch
    Originally posted on: http://geekswithblogs.net/BlueProbe/archive/2013/11/12/154610.aspx In Chrome I use Google Maps as a search provider. That way I can just type "maps" in the URL address bar, hit a couple of tab keys, enter the maps address, and have the page rendered with my map. Now, periodically, maps were displaying in Spanish with a "click here to translate to English" option. Huh? My language settings in the browser and within Google settings were all English. It turns out I had set my Chrome search-provider string to include a language query parameter set to es. Why would I do that? Evil twin, perhaps.

    Read the article
