Search Results

Search found 5530 results on 222 pages for 'nested urls'.

Page 13/222

  • Using Google Analytics tracking URLs in Facebook ads

    - by Ted
    I generated the following Google Analytics tracking URL to use in a Facebook ad: https://www.somewebsite.org/?utm_source=facebook&utm_medium=cpc&utm_term=schools&utm_content=newsfeed&utm_campaign=facebookad3 I know the ad is being clicked (according to Facebook ad manager data), but the referred traffic is not appearing in my site's Google Analytics data. I suspect it's because Facebook is modifying the URL in some redirect. Any ideas?

    Read the article

  • Clean URLs issue using .htaccess in PHP project

    - by x4ph4r
    I am working on a PHP Laravel project and am currently facing issues with the .htaccess file. I have the following .htaccess: <IfModule mod_rewrite.c> Options -MultiViews RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] </IfModule> When I reload my page it gives me the following error: 404 Not Found The requested URL /contacts was not found on this server. I then opened the /etc/apache2/users/username.conf file, which had the following lines: <Directory "/Users/username/Sites/"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> In the above I changed AllowOverride None to AllowOverride All. Then I reloaded the page and got the following error: 403 Forbidden You don't have permission to access /contacts on this server. When I add FollowSymLinks to the .htaccess Options, like this: Options -MultiViews FollowSymLinks, I sometimes get 500 Internal Server Error and sometimes Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data. Each time I reload the page I get one of these errors while the FollowSymLinks option is present. I also uncommented the following lines in /etc/apache2/httpd.conf: LoadModule rewrite_module libexec/apache2/mod_rewrite.so LoadModule php5_module libexec/apache2/libphp5.so and I am still getting the same permission-denied error. Please help; I have been trying to solve this problem for the past 3 days but it is still unresolved.
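    A hedged sketch of the two changes that usually resolve this exact sequence of errors: the 403 typically means FollowSymLinks is not enabled for the directory (mod_rewrite refuses to run without it), and the 500 typically comes from mixing relative and absolute entries on one Options line (Apache requires either all entries to carry +/- or none). Paths are taken from the question; the exact option set is an assumption to adapt.

      # /etc/apache2/users/username.conf
      <Directory "/Users/username/Sites/">
          # FollowSymLinks must be allowed here for RewriteRule in .htaccess to work
          Options Indexes MultiViews FollowSymLinks
          # Permit .htaccess files in this tree to override options and rewrite rules
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>

      # .htaccess -- note the + sign; "Options -MultiViews FollowSymLinks" mixes styles and can trigger a 500
      <IfModule mod_rewrite.c>
          Options -MultiViews +FollowSymLinks
          RewriteEngine On
          RewriteBase /
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule ^ index.php [L]
      </IfModule>

    Apache also needs to be restarted after editing the conf file for the <Directory> change to take effect.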

    Read the article

  • Google Webmaster Tools is showing duplicate URLs based on page title differences

    - by Praveen Reddy
    I have 700+ title tag duplicates showing in WMT. Every first link in that picture is a duplicate of the second one. I don't know from where Google indexed the first link, when that link doesn't exist on the site. It's showing the title of every page as a link. Original link: http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland. Duplicate link: http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland-Ireland. How can this happen? I have checked the entire site and didn't find anywhere that the second version is linked, and no images link to the duplicated version of the URL.
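    If the stray URLs all follow the same doubled-suffix pattern as the example (an assumption worth verifying before relying on it), a single hedged redirect rule can fold them back onto the real pages; otherwise a rel=canonical tag on each job page achieves the same collapse one page at a time.

      # Apache illustration only -- adapt to the actual server.
      # $1 re-uses everything up to and including the first "-Ireland", dropping the duplicated suffix.
      RedirectMatch 301 ^(/job/.*-Ireland)-Ireland$ $1

    Either way, the duplicates should drop out of WMT once Google recrawls them and sees them resolve to the originals.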

    Read the article

  • Serve most of a domain with Apache, but use mod_proxy to serve some URLs from Lighttpd

    - by Alex Pineda
    So we wish to host some pages on a new server with apache2, and embed some of our old content & functionality from another server running lighttpd in an iframe. I'm looking at this configuration from the Apache docs (http://httpd.apache.org/docs/2.2/vhosts/examples.html#page-header) under "Using Virtual_host and mod_proxy together": <VirtualHost *:*> ProxyPreserveHost On ProxyPass / http://192.168.111.2/ ProxyPassReverse / http://192.168.111.2/ ServerName hostname.example.com </VirtualHost> The only issue is that I want to proxy only a subdomain, or even better, keep the top domain and proxy only if the URL contains a particular path, e.g. "/myprocess.php". So in essence the DNS will point to apache2 as the "master router".
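    A minimal sketch of path-scoped proxying under the main domain, reusing the Lighttpd backend address from the question (192.168.111.2) and treating /myprocess.php as the only proxied path (an assumption; add one ProxyPass pair per path or directory that should go to the old server):

      <VirtualHost *:80>
          ServerName hostname.example.com
          ProxyPreserveHost On
          # Only this path is handed to the Lighttpd backend; everything else is served by Apache
          ProxyPass        /myprocess.php http://192.168.111.2/myprocess.php
          ProxyPassReverse /myprocess.php http://192.168.111.2/myprocess.php
      </VirtualHost>

    Because ProxyPass matches by URL prefix, listing specific paths (or a dedicated subdirectory) instead of "/" keeps the rest of the site on Apache without any subdomain split.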

    Read the article

  • still getting 403 on apt-get install: no proxy, URLs seem valid

    - by Berry Tsakala
    I'm trying to install LibreOffice (or OpenOffice) on Ubuntu Server 12.10. The packages exist (verified with "apt-cache search"), the file /etc/apt/apt.conf.d/30proxy doesn't exist on my system, and the text 'proxy' isn't mentioned anywhere in grep proxy /etc/apt/apt.conf.d/. Other packages that I tried to apt-get install were installed OK. The only thing I haven't done is replace the repository servers; I'm afraid that could break the dpkg system! Related questions: http://askubuntu.com/questions/304340/apt-get-403-forbidden?rq=1 http://askubuntu.com/questions/303150/apt-get-403-forbidden-but-accessible-in-the-browser http://askubuntu.com/questions/409998/proxy-blocking-apt-get-allowing-wget-curl http://askubuntu.com/questions/367737/apt-get-upgrade-gives-403-forbidden-error?rq=1 What else can I do to solve this 403 error and install LibreOffice/OpenOffice using apt?

    Read the article

  • Directing from a 1und1 hosting solution, with urls intact

    - by Jelmar
    Hi, I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with the temporary URL http://yogainun.mysubname.com/ and am hosting the domain name that is to be applied to it at 1und1.de. Right now I have set it up so that the address http://www.yoga-in-unternehmen.de/, hosted at 1und1, is frame-redirected to the subdomain I just referred to. But this is not what I want: http://www.yoga-in-unternehmen.de/ is to be the domain itself. With the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up, and I need them to. With GoDaddy in a similar case, I just turned on DNS and changed the name servers. That worked without a problem, but with 1und1 it does not. Is there something I am missing?

    Read the article

  • How to structure a set of RESTful URLs

    - by meetamit
    Kind of a REST lightweight here... Wondering which url scheme is more appropriate for a stock market data app (BTW, all queries will be GETs as the client doesn't modify data): Scheme 1 examples: /stocks/ABC/news /indexes/XYZ/news /stocks/ABC/time_series/daily /stocks/ABC/time_series/weekly /groups/ABC/time_series/daily /groups/ABC/time_series/weekly Scheme 2 examples: /news/stock/ABC /news/index/XYZ /time_series/stock/ABC/daily /time_series/stock/ABC/weekly /time_series/index/XYZ/daily /time_series/index/XYZ/weekly Scheme 3 examples: /news/stock/ABC /news/index/XYZ /time_series/daily/stock/ABC /time_series/weekly/stock/ABC /time_series/daily/index/XYZ /time_series/weekly/index/XYZ Scheme 4: Something else??? The point is that for any data being requested, the url needs to encapsulate whether an item is a Stock or an Index. And, with all the RESTful talk about resources I'm confused about whether my primary resource is the stock & index or the time_series & news. Sorry if this is a silly question :/ Thanks!

    Read the article

  • Will Google Analytics track URLs that just redirect?

    - by Derick Bailey
    I have a link on my site. That links goes to another URL on my site. The code on the server sees that resource being requested and redirects the browser to another website. Will Google Analytics be able to know that the user requested the URL from my server and was redirected? Specifically, I set up a /buy link on my watchmecode.net site to try and track who is clicking the "Buy & Download" button. This link/button hits my server, and my server immediately does a redirect to the PayPal processing so the user can buy the screencast. Is Google Analytics going to know that the user hit the /buy URL on my site, and track that for me? If not, what can I do to make that happen?

    Read the article

  • IIS 6 nested virtual directory redirection

    - by threedaysatsea
    We're running IIS 6 on a WinServer2k3 box and we're having trouble with the following problem: e-mails were sent out to users asking them to go to the following URL: alias.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue However, the URLs are actually supposed to be: server.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue It's too late to recall all of the e-mails, and we'd like to redirect traffic to make this as seamless as possible for our users. The real problem here is that the server (server.contoso.com) hosts the alias (alias.contoso.com) as a redirect, as follows, and we need to keep that existing redirect functional: Default Web Site (server.contoso.com) --Directory1 --Directory2 --Directory3 Redirection to Directory3 (alias.contoso.com) --Essentially alias.contoso.com takes the user to server.contoso.com/Directory3 Is there any way to host a separate redirect inside the existing redirect? We need to keep alias.contoso.com taking the user to server.contoso.com/Directory3 but also make alias.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue point to server.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue Any tips? Is this even possible?

    Read the article

  • Search engine bots accessing strange URLs

    - by casasoft
    We have ELMAH enabled on our site and get errors whenever a Page Not Found error is triggered on the website. We recently redesigned the website, so we understand that search engine robots might try to access previously indexed pages and trigger Page Not Found errors. For this reason, we have set up permanent redirects from such previously indexed pages to the respective new pages. The website in question is www.chambercollege.com; for example, a previously indexed URL was www.chambercollege.com/special-offers.aspx. This page is no longer accessible, so we created the necessary permanent redirect to the respective page at www.chambercollege.com/en/content/special-offers-161/. Now we are starting to receive Page Not Found errors from search engine bots (e.g. the MSN bot) trying to access the URL www.chambercollege.com/special-offers.aspx/images/shadow_right.jpg/. Any idea how a search engine could make up that strange URL, and do you have any suggestions for what to do?
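    Such appended-path URLs often come from relative links (e.g. images/shadow_right.jpg) on an old page being resolved against a URL the bot treats as a directory. If the goal is simply to silence the 404 noise, one hedged option is to catch anything under the old path with a single rule (paths taken from the question; the pattern itself is an assumption about which junk suffixes to catch, and this is Apache syntax shown only as an illustration; an IIS site would need the equivalent rewrite):

      # Sends /special-offers.aspx and anything appended to it to the new location
      RedirectMatch 301 ^/special-offers\.aspx.*$ /en/content/special-offers-161/

    Returning 410 Gone for the junk variants instead of redirecting them is another reasonable choice if they should simply be dropped from the index.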

    Read the article

  • Share buttons vs sharer urls

    - by TeeOh
    As some people might know, adding share buttons from Facebook and Twitter can cause a page to slow down. I've seen many sites pass on the common iframe implementations that these sites offer and simply create icons that link to a sharer URL for better control of page performance. http://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.cnn.com%2F&t=CNN%26s+website%27 However, I've also read that Facebook is dropping support for these links. For example, this link now redirects to the Like Button. http://www.facebook.com/facebook-widgets/share.php Here is an article noting that Facebook is deprecating/has deprecated its share functionality and is sticking with the Like button. http://www.barbariangroup.com/posts/7544-the_facebook_share_button_has_been_deprecated_called_it I'm assuming this is the same for the sharing URL. If the sharer URL is no longer a reliable option, what other methods are there besides using 3rd-party widgets (like AddThis)?

    Read the article

  • RewriteRule for URLs with spaces

    - by Robert Cailliau
    My site's pages are in multiple languages, whereby each language version shares its media (images) with the other language versions. I place all versions and the media in a single directory with the same name. E.g. pages mypage-en.html, mypage-fr.html etc. will sit in directory mypage. The directory path suffices to reference a page: h t t p : //....../mypage/ is good enough; there is no need for h t t p : //....../mypage/mypage-en.html A rewrite with RewriteRule ^(.*)/([a-zA-Z0-9]+)/?$ /$1/$2/$2-en.html lets me use the shorter form. But what if the name mypage contains spaces (which some do)? I want h t t p : //....../my page/ to lead to h t t p : //....../my page/my page.html Using RewriteRule ^(.*)/([a-zA-Z0-9|\s]+)/?$ /$1/$2/$2-en.html did not work. Any hints welcome. (Please do not ask me why I want to do this, nor tell me I should not use spaces in file names.)
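    A hedged sketch that keeps the same shape as the rule that already works for space-free names, but widens the page-name capture to anything except a slash, so names containing spaces (which arrive percent-encoded as %20 and are decoded before mod_rewrite matches) are caught as one segment:

      RewriteEngine On
      # [^/]+ captures "my page" whole, spaces and all, without enumerating allowed characters
      RewriteRule ^(.*)/([^/]+)/?$ /$1/$2/$2-en.html [L]

    If the substitution itself misbehaves because of the space, adding the [B] flag to re-escape the backreference is the usual next step to try.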

    Read the article

  • How to manually list set of urls for search engines to index

    - by MarutiB
    So I have created a video website which has thousands of videos, and thousands more get added to it on a daily basis. Here is my problem: I have built the site so that it basically loads a skeleton in HTML and pulls in all the content through JavaScript and Ajax. The problem is that search engines aren't going anywhere except the home page. Is there a way, say in robots.txt, to give a link to a single HTML page which has links to all these videos? I agree my site is not accessible for a non-JavaScript user, but stats show that this ratio is very low (0.2%). Is there a way I can still keep the complete AJAX website and still get each individual video listed on Google?
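    One hedged approach is an XML sitemap rather than a crawlable HTML page: robots.txt can point crawlers at it, and it can list every video URL even though the pages themselves render via Ajax (the domain, file names, and URLs below are placeholders):

      # robots.txt
      Sitemap: http://www.example.com/sitemap.xml

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- sitemap.xml: one <url> entry per video page, regenerated daily from the video database -->
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <loc>http://www.example.com/videos/12345</loc>
        </url>
      </urlset>

    The sitemap can also be submitted directly in Google Webmaster Tools. Note that it only helps discovery; the video pages still need crawlable (or prerendered) content if they are to rank for anything beyond their URLs.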

    Read the article

  • Browser Item Caching and URLs

    - by Damon Armstrong
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page. But during development it's very useful to NOT have things cached so you are always looking at the most up-to-date file. You can always turn off caching in your browser, but if you use your browser for daily browsing then it's not the greatest option. To avoid caching we would always just slap a randomly generated GUID onto the back of the URL of any items we didn't want to cache (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d). It worked well, but you had to remember to remove the random GUID when it went to production. However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn't need to be removed from production code: just slap the last modified date of the file on the end of the URL (or something generated from the modification date). This was kind of a genius approach because it gives you the best of both worlds. If you modify the file, the browser goes out and gets the newest version. If you don't modify the file, it has the cached copy. Very helpful! The only downside is that you do have to read the modification date from the file, which does technically take some time.
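    A quick, language-agnostic sketch of the idea (Python here purely for illustration; the URL path and file path are hypothetical):

      import os

      def cache_busted_url(url_path, file_path):
          """Append the file's last-modified time so the URL changes only when the file does."""
          mtime = int(os.path.getmtime(file_path))  # seconds since epoch of the last modification
          return f"{url_path}?v={mtime}"

      # e.g. /images/image.png?v=1715097600 -- stays stable (and cached) until image.png is edited
      print(cache_busted_url("/images/image.png", "static/images/image.png"))

    Caching the mtime lookup itself, or computing it once at deploy time, avoids the file-system hit mentioned at the end of the post.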

    Read the article

  • Apache2 - rewrite a bunch of specified pathname URLs to one URL

    - by James Nine
    I need to rewrite a bunch of URLs (about 100 or so) for SEO purposes, and there may be more added in the future (probably another 50-100 later on). I need a flexible way of doing this, and so far the only way I can think of is to edit the .htaccess file using the rewrite engine. For example, I have a bunch of URLs like this (please note that the query string is irrelevant and dynamic; it could be anything, and I'm only using these as examples. I am only focusing on the pathname, the part between the hostname and the query string): http://example.com/seo_term1?utm_source=google&utm_medium=cpc&utm_campaign=seo_term http://example.com/another_seo_term2?utm_source=facebook&utm_medium=cpc&utm_campaign=seo_term http://example.com/yet_another_seo_term3?utm_source=example_ad_network&utm_medium=cpc&utm_campaign=seo_term http://example.com/foobar_seo_term4 http://example.com/blah_seo_term5?test=1 etc... And they are all being rewritten to (for now): http://example.com/ What's the most efficient/effective way of doing this, so that I can add more terms in the future? One solution I came across is to do this (in the .htaccess file): RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ / [NC,QSA] However, the problem with this solution is that even invalid URLs (such as http://example.com/blah) will be rewritten to http://example.com instead of returning a 404 (which is what should happen). I'm still trying to figure out how all this works, and the only way I can think of is to write 100 more RewriteCond statements (such as RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR]) before the RewriteRule directive. For example: RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR] RewriteCond %{REQUEST_URI} =/another_seo_term2 [NC,OR] RewriteCond %{REQUEST_URI} =/yet_another_seo_term3 [NC,OR] RewriteCond %{REQUEST_URI} =/foobar_seo_term4 [NC,OR] RewriteCond %{REQUEST_URI} =/blah_seo_term5 [NC] RewriteRule ^(.*)$ / [NC,QSA] But that doesn't sound very efficient to me. Is there a better way?
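    A hedged alternative that keeps the term list out of the rule set entirely is mod_rewrite's RewriteMap, which is only allowed in server or virtual-host config (not .htaccess); file paths and the map name are placeholders:

      # In the <VirtualHost> block
      RewriteEngine On
      # seo_terms.txt holds one "term target" pair per line, e.g.:
      #   seo_term1          /
      #   another_seo_term2  /
      RewriteMap seoterms txt:/etc/apache2/seo_terms.txt
      # Rewrite only when the first path segment appears in the map;
      # unknown paths fall through untouched, so invalid URLs still return 404
      RewriteCond ${seoterms:$1} !=""
      RewriteRule ^/([^/]+)$ ${seoterms:$1} [NC,QSA,L]

    Adding a new term is then a one-line edit to seo_terms.txt instead of another RewriteCond, and the map file can also carry per-term targets if the destination ever stops being the home page.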

    Read the article

  • What is the best way to add and order to Doctrine Nested Set Trees?

    - by murze
    What is the best way to add a sense of order to Doctrine Nested Sets? The documentation contains several examples of how to get all the children of a specific node: $category->getNode()->getSiblings() But how can I, for example: change the position of the fourth sibling to the second position; get only the second sibling; add a sibling between the second and third child; etc.? Do I have to manually add an order column to the model to do these operations?
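    For what it's worth, a hedged sketch using the Doctrine 1 nested-set node API; treat the exact method names (moveAsPrevSiblingOf, insertAsNextSiblingOf) as assumptions to verify against your Doctrine version, and $second, $fourth, and Category as placeholders:

      <?php
      // Move the fourth sibling so it sits directly before the second one
      $fourth->getNode()->moveAsPrevSiblingOf($second);

      // Insert a brand-new record between the second and third child
      $new = new Category();
      $new->name = 'Inserted between';
      $new->getNode()->insertAsNextSiblingOf($second);

      // There is no direct "second sibling" accessor; index into the siblings collection
      $siblings = $category->getNode()->getSiblings();
      $secondSibling = $siblings[1];

    If you need an ordering that is independent of tree position, then adding your own sort column to the model (and ordering by it when fetching children) is indeed the usual route.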

    Read the article

  • Django admin urls return INVALID REQUEST! - Django

    - by RadiantHex
    Hi folks, my admin URLs sit behind a prefix, set up as follows: 1# (r'^admin/', include(admin.site.urls)), is placed within urls_core.py 2# (r'^api/', include('project.urls_core')), is placed within urls.py All admin URLs work fine except app indexes. If I go to any URL such as: /api/admin/core/ /api/admin/registration/ /api/admin/users/ /api/admin/filters/ I receive 'INVALID REQUEST' as the response. The status code is 200 (OK), though. I have never received this error message before. Does anyone have a clue? Thanks guys!

    Read the article

  • Storing URLs while Spidering

    - by itemio
    I created a little web spider in Python which I'm using to collect URLs. I'm not interested in the content. Right now I'm keeping all the visited URLs in a set in memory, because I don't want my spider to visit URLs twice. Of course that's a very limited way of accomplishing this. So what's the best way to keep track of my visited URLs? Should I use a database? * Which one? MySQL, SQLite, PostgreSQL? * How should I save the URLs? As a primary key, trying to insert every URL before visiting it? Or should I write them to a file? * One file? * Multiple files? How should I design the file structure? I'm sure there are books and a lot of papers on this or similar topics. Can you give me some advice on what I should read?
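    A minimal sketch of the SQLite route, which keeps the dedup check out of memory while staying a single file on disk (table and file names are placeholders):

      import sqlite3

      conn = sqlite3.connect("crawler.db")
      # PRIMARY KEY on the url column makes the database enforce "visit once"
      conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

      def mark_visited(url):
          """Return True if the URL is new (and is now recorded), False if it was seen before."""
          cur = conn.execute("INSERT OR IGNORE INTO visited (url) VALUES (?)", (url,))
          conn.commit()
          return cur.rowcount == 1

      if mark_visited("http://example.com/page"):
          pass  # fetch and parse the page here

    INSERT OR IGNORE makes the insert itself the membership test, so there is no separate SELECT before each visit, and the visited set survives restarts of the spider.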

    Read the article

  • Getting a full list of the URLs in a Rails application

    - by Laurie Young
    How do I get a complete list of all the URLs that my Rails application could generate? I don't want the routes that I get from rake routes; instead I want the actual URLs corresponding to all the dynamically generated pages in my application... Is this even possible? (Background: I'm doing this because I want a complete list of URLs for some load testing I want to do, which has to cover the entire breadth of the application.)

    Read the article

  • groovy thread for urls

    - by Srinath
    I wrote logic for testing URLs using threads. It works well for a small number of URLs but fails with more than 400 URLs to check. class URL extends Thread{ def valid def url URL( url ) { this.url = url } void run() { try { def connection = url.toURL().openConnection() connection.setConnectTimeout(10000) if(connection.responseCode == 200 ){ valid = Boolean.TRUE }else{ valid = Boolean.FALSE } } catch ( Exception e ) { valid = Boolean.FALSE } } } def threads = []; urls.each { ur -> def reader = new URL(ur) reader.start() threads.add(reader); } while (threads.size() > 0) { for (int i = 0; i < threads.size(); i++) { def tr = threads.get(i); if (!tr.isAlive()) { if(tr.valid == true){ threads.remove(i); i--; }else{ threads.remove(i); i--; } } } } Could anyone please tell me how to optimize the logic and where I was going wrong? Thanks in advance.

    Read the article

  • Cron job for list of urls

    - by mathew
    Duplicate: http://stackoverflow.com/questions/2968918/how-do-i-do-cron-job-for-list-of-urls-closed sorry for the confusion... How do I do a cron job using a list of URLs? The search path is mentioned below: www.mydomain.com/search?url=www.google.com After google.com, the next URL in urllist.txt should be used, so that by the end of the day the whole list of URLs in urllist.txt has been processed and saved in my database. What I need is how to feed the URLs from urllist.txt into the search path one by one and put that in a cron job; the rest I can handle. Thanks Mathew
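    A hedged sketch of the feeding half, assuming urllist.txt holds one URL per line and using the search path exactly as given in the question (the script name visit_urls.py is a placeholder):

      #!/usr/bin/env python3
      # visit_urls.py -- request the site's search path once for every URL in urllist.txt
      import urllib.parse
      import urllib.request

      with open("urllist.txt") as f:
          for line in f:
              target = line.strip()
              if not target:
                  continue  # skip blank lines
              # Each request hits /search?url=..., which the site already handles and stores in the database
              query = urllib.parse.quote(target, safe="")
              urllib.request.urlopen("http://www.mydomain.com/search?url=" + query).read()

    A crontab entry such as 0 2 * * * /usr/bin/python3 /path/to/visit_urls.py then works through the whole list once a day.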

    Read the article

  • How to load secure S3 images into Flex with temporary URLs

    - by Yarin
    I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browsers' address bar. Moreover, Flex has no problem loading my images with a non-signed url when they are public, but as soon as I try signing the urls all the images fail, whether public or not. I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results to an Image control. Is this just an issue with the Image control not being able to recognize signed urls? Do I have to load the image from a byte array? What would that look like?

    Read the article
