Search Results

Search found 6362 results on 255 pages for 'django urls'.

Page 132 of 255

  • How can I send GET data to multiple URLs at the same time using cURL?

    - by Rob
    My apologies, I've actually asked this question multiple times, but never quite understood the answers. Here is my current code:

        while ($resultSet = mysql_fetch_array($SQL)) {
            $ch = curl_init($resultSet['url'] . $fullcurl); // load the URL and append the GET data
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);           // only load it for two seconds (long enough to send the data)
            curl_exec($ch);                                 // execute the cURL request
            curl_close($ch);                                // close it off
        } // end while loop

    What I'm doing here is taking URLs from a MySQL database ($resultSet['url']), appending some extra GET variables to them ($fullcurl), and simply requesting the pages. This starts the script running on those pages, and that's all this script needs to do: start those scripts. It doesn't need to return any output; it just has to load each page long enough for the remote script to start. However, it currently loads each URL (currently 11) one at a time, and I need to load all of them simultaneously. I understand I need to use curl_multi_*, but I don't know how the cURL functions work, so I don't know how to change my code to use curl_multi_* in a while loop. So my questions are: How can I change this code to load all of the URLs simultaneously? Please explain it rather than just giving me code; I want to know what each individual function does. Will curl_multi_exec even work in a while loop, since the while loop just sends each row one at a time? And of course, any references, guides, or tutorials about the cURL functions would be nice as well. Preferably not so much from php.net: while it does a good job of giving me the syntax, it's a little dry and not so good with the explanations.

    Read the article

  • How can I attach multiple urls to a single git remote?

    - by deterb
    I'm currently using git on Windows through a combination of msysgit and Cygwin. I have a laptop which I move around quite a bit, so it's not in a consistent location. Unfortunately, I don't have a consistent name for it either, because the computer name isn't resolved at all of the locations I connect from, so I can't just use the computer name as the host in the URL (e.g. git://compname/repo); I have to use the IP address. Is there a way I can add multiple URLs to pull from for a particular remote? I've seen git remote set-url --add [--push] <name> <newurl> as a way to add multiple URLs to a remote, and I can see the updates in the .git/config file, but git only tries to use the first one. Is there a way to get git to try all of the URLs? I've tried both git fetch and git remote update, but neither tries anything after the first URL. Note that I haven't tried this on Linux yet, and I can't fix the computer name resolution as this is at work. Thanks.

    Read the article

  • How does this decorator make a call to the 'register' method?

    - by BryanWheelock
    I'm trying to understand what is going on in the @not_authenticated decorator. The next step in the trace is the method 'register', which is also located in django_authopenid/views.py, and that is what I don't understand, because I don't see 'register' mentioned anywhere in signin(). How is the method 'register' called?

        def not_authenticated(func):
            """ decorator that redirect user to next page if he is already logged."""
            def decorated(request, *args, **kwargs):
                if request.user.is_authenticated():
                    next = request.GET.get("next", "/")
                    return HttpResponseRedirect(next)
                return func(request, *args, **kwargs)
            return decorated

        @not_authenticated
        def signin(request, newquestion=False, newanswer=False):
            """
            signin page. It manage the legacy authentification (user/password)
            and authentification with openid.
            url: /signin/
            template : authopenid/signin.htm
            """
            request.encoding = 'UTF-8'
            on_failure = signin_failure
            next = clean_next(request.GET.get('next'))
            form_signin = OpenidSigninForm(initial={'next': next})
            form_auth = OpenidAuthForm(initial={'next': next})
            if request.POST:
                if 'bsignin' in request.POST.keys() or 'openid_username' in request.POST.keys():
                    form_signin = OpenidSigninForm(request.POST)
                    if form_signin.is_valid():
                        next = clean_next(form_signin.cleaned_data.get('next'))
                        sreg_req = sreg.SRegRequest(optional=['nickname', 'email'])
                        redirect_to = "%s%s?%s" % (
                            get_url_host(request),
                            reverse('user_complete_signin'),
                            urllib.urlencode({'next': next})
                        )
                        return ask_openid(request,
                                          form_signin.cleaned_data['openid_url'],
                                          redirect_to,
                                          on_failure=signin_failure,
                                          sreg_request=sreg_req)
                elif 'blogin' in request.POST.keys():
                    # perform normal django authentification
                    form_auth = OpenidAuthForm(request.POST)
                    if form_auth.is_valid():
                        user_ = form_auth.get_user()
                        login(request, user_)
                        next = clean_next(form_auth.cleaned_data.get('next'))
                        return HttpResponseRedirect(next)
            question = None
            if newquestion == True:
                from forum.models import AnonymousQuestion as AQ
                session_key = request.session.session_key
                qlist = AQ.objects.filter(session_key=session_key).order_by('-added_at')
                if len(qlist) > 0:
                    question = qlist[0]
            answer = None
            if newanswer == True:
                from forum.models import AnonymousAnswer as AA
                session_key = request.session.session_key
                alist = AA.objects.filter(session_key=session_key).order_by('-added_at')
                if len(alist) > 0:
                    answer = alist[0]
            return render('authopenid/signin.html', {
                'question': question,
                'answer': answer,
                'form1': form_auth,
                'form2': form_signin,
                'msg': request.GET.get('msg', ''),
                'sendpw_url': reverse('user_sendpw'),
            }, context_instance=RequestContext(request))

    Looking at the request, it seems that account/register/ does reference the register method, with 'PATH_INFO': u'/account/register/'. Here is the request: <WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'username': [u'BryanWheelock'], u'email': [u'[email protected]'], u'bnewaccount': [u'Signup']}>, COOKIES:{'__utma': '127460431.1218630960.1266769637.1266769637.1266864494.2', '__utmb': '127460431.3.10.1266864494', '__utmc': '127460431', '__utmz': '127460431.1266769637.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)', 'sessionid': 'fb15ee538320170a22d3a3a324aad968'}, META:{'CONTENT_LENGTH': '74', 'CONTENT_TYPE': 'application/x-www-form-urlencoded', 'DOCUMENT_ROOT': '/usr/local/apache2/htdocs', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'HTTP_ACCEPT_ENCODING': 'gzip,deflate,sdch',
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8', 'HTTP_CACHE_CONTROL': 'max-age=0', 'HTTP_CONNECTION': 'close', 'HTTP_COOKIE': '__utmz=127460431.1266769637.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utma=127460431.1218630960.1266769637.1266769637.1266864494.2; __utmc=127460431; __utmb=127460431.3.10.1266864494; sessionid=fb15ee538320170a22d3a3a324aad968', 'HTTP_HOST': 'workproject.com', 'HTTP_ORIGIN': 'http://workproject.com', 'HTTP_REFERER': 'http://workproject.com/account/signin/complete/?next=%2F&janrain_nonce=2010-02-22T18%3A49%3A53ZG2KXci&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2010-02-22T18%3A49%3A53Znxxxxxxxxxw&openid.return_to=http%3A%2F%2Fworkproject.com%2Faccount%2Fsignin%2Fcomplete%2F%3Fnext%3D%252F%26janrain_nonce%3D2010-02-22T18%253A49%253A53ZG2KXci&openid.assoc_handle=AOQobUepU4xs-kGg5LiyLzfN3RYv0I0Jocgjf_1odT4RR9zfMFpQVpMg&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle&openid.sig=Jf76i2RNhqpLTJMjeQ0nnQz6fgA%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItxxxxxxxxxs9CxHQ3PrHw_N5_3j1HM&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOaxxxxxxxxxxx4s9CxHQ3PrHw_N5_3j1HM', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.7 Safari/532.9', 'HTTP_X_FORWARDED_FOR': '96.8.31.235', 'PATH': '/usr/bin:/bin', 'PATH_INFO': u'/account/register/', 'PATH_TRANSLATED': '/home/spirituality/webapps/work/spirit_app.wsgi/account/register/', 'QUERY_STRING': '', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '59956', 'REQUEST_METHOD': 'POST', 'REQUEST_URI': '/account/register/', 'SCRIPT_FILENAME': '/home/spirituality/webapps/spirituality/spirit_app.wsgi', 'SCRIPT_NAME': u'', 'SERVER_ADDR': '127.0.0.1', 'SERVER_ADMIN': '[no address given]', 'SERVER_NAME': 'workproject.com', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.0', 'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache/2.2.12 (Unix) mod_wsgi/2.5 Python/2.5.4', 'mod_wsgi.application_group': 'www.workProject.com|', 'mod_wsgi.callable_object': 'application', 'mod_wsgi.listener_host': '', 'mod_wsgi.listener_port': '25931', 'mod_wsgi.process_group': '', 'mod_wsgi.reload_mechanism': '0', 'mod_wsgi.script_reloading': '1', 'mod_wsgi.version': (2, 5), 'wsgi.errors': <mod_wsgi.Log object at 0xb7ce0038>, 'wsgi.file_wrapper': <built-in method file_wrapper of mod_wsgi.Adapter object at 0xb7e94b18>, 'wsgi.input': <mod_wsgi.Input object at 0x999cc78>, 'wsgi.multiprocess': True, 'wsgi.multithread': False, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0)}>
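
    The short answer implied by the trace (hedged, since the project's URLconf isn't shown here): nothing in signin() or in the decorator calls register() directly. The OpenID completion step redirects the browser, here to /account/register/, and Django's URL resolver dispatches that new request to whatever view the URLconf maps it to. A hypothetical urls.py fragment in the old Django 1.x style, included under the /account/ prefix, with names assumed rather than taken from the project:

        # Hypothetical urls.py for django_authopenid (Django 1.x era); the names
        # below are assumptions for illustration, not the project's actual code.
        from django.conf.urls.defaults import patterns, url
        from django_authopenid import views as oid_views

        urlpatterns = patterns('',
            url(r'^signin/$', oid_views.signin, name='user_signin'),
            # A request for /account/register/ matches here, so Django's URL
            # resolver, not signin() or the decorator, ends up calling register().
            url(r'^register/$', oid_views.register, name='user_register'),
        )

    So the POST with 'PATH_INFO': u'/account/register/' in the pasted request is what invokes register(), via URL resolution rather than via any call inside signin().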

    Read the article

  • python Requests login to website returns 403

    - by Jeff
    I'm trying to use requests to log in to a website, but as you can guess I'm having a problem. Here's the code that I'm using:

        import requests
        import sys

        EMAIL = '***'
        PASSWORD = '***'
        URL = 'https://portal.bitcasa.com/login'

        client = requests.session(config={'verbose': sys.stderr})
        login_data = {'username': EMAIL, 'password': PASSWORD,}
        r = client.post(URL, data=login_data, headers={"Referer": "foo"})
        print r

    and if I print out r.text I get Django's standard 403 page:

        <title>403 Forbidden</title>
        ...
        <h1>Forbidden <span>(403)</span></h1>
        <p>CSRF verification failed. Request aborted.</p>
        ...
        <p><small>More information is available with DEBUG=True.</small></p>

    They're using a combination of django and pyramid. I've been playing around with this for about two days now but, obviously, have gotten nowhere. Thanks for your help.
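
    That 403 is Django's CSRF protection rejecting the POST rather than a wrong password. A sketch of one way to satisfy it with requests (the cookie and field names below are Django's defaults and are assumptions about this particular site; newer requests versions spell the session object requests.Session()):

        import requests

        EMAIL = '***'
        PASSWORD = '***'
        URL = 'https://portal.bitcasa.com/login'

        session = requests.Session()

        # GET the login page first so the server sets its CSRF cookie
        # (and renders the hidden csrfmiddlewaretoken field in the form).
        session.get(URL)
        csrf_token = session.cookies.get('csrftoken')  # Django's default cookie name

        login_data = {
            'username': EMAIL,
            'password': PASSWORD,
            'csrfmiddlewaretoken': csrf_token,  # Django's default form field name
        }
        # On HTTPS, Django's CSRF check also validates the Referer header,
        # so point it back at the site instead of "foo".
        r = session.post(URL, data=login_data, headers={'Referer': URL})
        print r.status_code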

    Read the article

  • Why does my code print this when I read and write?

    - by zjm1126
        def sss(request):
            handle = open('b.txt', 'r+')
            handle.write("I AM NEW FILE")
            var = handle.read()
            return HttpResponse(var)

        urlpatterns = patterns('',
            ('^$', sss),
        )

    1. My b.txt has nothing in it.
    2. When I run my code, it prints this:

        I AM NEW FILE7 ??; ??x ??v1?pZ€0 ?????8N??p? ... [a long run of binary garbage follows, with readable fragments of D:\Python25\lib\site-packages\django\template\defaultfilters.py, such as the pluralize and phone2numeric filters, mixed in] ...

    Why? Thanks.
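
    What is most likely happening (an educated guess from the symptoms): the file is opened for update, and read() is called right after write() without repositioning the stream. Python 2 file objects wrap C stdio, and C leaves a read that immediately follows a write on the same stream undefined unless a seek happens in between, so read() can return whatever bytes happen to be sitting in the library's buffers; here they apparently include chunks of Django's compiled defaultfilters module. A minimal corrected sketch:

        from django.http import HttpResponse

        def sss(request):
            handle = open('b.txt', 'r+')
            handle.write("I AM NEW FILE")
            handle.seek(0)      # reposition before reading back what was written
            var = handle.read()
            handle.close()
            return HttpResponse(var)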

    Read the article

  • Should my web server add trailing slashes to URLs or remove them?

    - by jnm2
    I've read in several places that you should decide whether you want trailing slashes or no trailing slashes in your site URLs and stick to your choice consistently. This makes sense, but which should I pick? Is there a convention I ought to follow, performance to be gained, or is it totally up to my taste? (I notice that the StackExchange sites link without trailing slashes, but SE doesn't redirect if you add a slash.)

    Read the article

  • How to let mod_wsgi only handle certain URLs under Apache?

    - by Frederik
    I have a Django app that handles "/admin/" and "/myapp/". All the other requests should be handled by Apache. I've tried using LocationMatch but then I'd have to write a negative regex. I've tried WSGIScriptAlias with the /admin/ prefix but then the wsgi_handler receives the request with the /admin/ part cut off. Is there a cleaner way to make mod_wsgi only handle certain requests?

    Read the article

  • Firefox: Where does Firefox store the opened windows/tabs/URLs for restoring after a crash?

    - by jens
    Hello, in which location and file does Firefox save the last windows I had opened (when Firefox crashed)? I have a complete "hot dump" copy of a file system and need to restore the state Firefox was in when the system crashed, but I cannot restore the full backup itself. I can only extract the files of Firefox, but I do not know in which files I have to search for the URLs that were last opened when the snapshot of the whole filesystem was taken. Thanks!

    Read the article

  • How to Remove Extensions From, and Force the Trailing Slash at the End of URLs?

    - by Kronbernkzion
    Example of current file structure:

        example.com/foo.php
        example.com/bar.html
        example.com/directory/
        example.com/directory/foo.php
        example.com/directory/bar.html
        example.com/cgi-bin/directory/foo.cgi

    I want to remove the HTML, PHP and CGI extensions from, and then force the trailing slash at the end of, my URLs. So it could look like this:

        example.com/foo/
        example.com/bar/
        example.com/directory/
        example.com/directory/foo/
        example.com/directory/bar/
        example.com/cgi-bin/directory/foo/

    I've searched for a solution for 17 hours straight and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$
        RewriteRule (.*)$ /$1/ [R=301,L]

    As you can see, this code only removes .html (and I'm not very happy with it because I think it could be done a lot simpler). I can remove the extension from PHP files when I rename them to .html through .htaccess, but that's not what I want. I want to remove it directly. This is the first thing I don't know how to do. The second thing is actually very annoying. My .htaccess file with the code above adds .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist since foo is a file), instead of just displaying a message that the page is not found, it converts it to example.com/directory/foo/bar.html/, then searches for a file for a few seconds and then displays the not-found message. This, of course, is bad behavior. So, once again, I need the code in .htaccess to do the following things:

        Remove the .html extension
        Remove the .php extension
        Remove the .cgi extension
        Force the trailing slash at the end of URLs
        Requests should behave correctly (no adding trailing slashes or extensions to strings if the file or directory doesn't exist on the server)
        Code should be as simple as possible

    I would very much appreciate any help. And to the first person that gives me the solution, I'll send two $50 iTunes Store gift cards for the US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance. And sorry for such a long post.
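
    For what it's worth, a common starting point looks like this (a sketch only, untested, assuming the rules live in a .htaccess file in the document root):

        RewriteEngine On

        # Redirect extensionless requests without a trailing slash to the slashed form,
        # but leave real files and directories alone.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule ^(.*)$ /$1/ [R=301,L]

        # Internally map /foo/ to foo.php, foo.html or foo.cgi, whichever exists.
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+)/$ $1.php [L]
        RewriteCond %{DOCUMENT_ROOT}/$1.html -f
        RewriteRule ^(.+)/$ $1.html [L]
        RewriteCond %{DOCUMENT_ROOT}/$1.cgi -f
        RewriteRule ^(.+)/$ $1.cgi [L]

    One caveat: this still 301-redirects a nonexistent path such as /directory/foo/bar to the slashed form before returning 404; guarding the redirect with the same file-existence checks would avoid that, at the cost of a few more conditions.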

    Read the article

  • Where does Firefox store the opened windows/tabs/urls for Session Restore after a crash?

    - by jens
    In which location and file does Firefox save the last windows I had opened (when Firefox crashed)? I have a complete "hot dump" copy of a file system and need to restore the state Firefox was when the system crashed, but I cannot restore the full backup itself. I can only extract the files of Firefox, but I do not know in which files I have to search for the urls that were last opened when the snapshot of the whole file system was done.

    Read the article

  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a Job Search site named www.conservationjobboard.com and have a concern about how the domain is viewed by search engines. The issue is that when the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, the default.php page performed a redirect to the JobBoard.php file when www.conservationjobboard.com/ was requested. The main problem resulted because the redirect was a temporary redirect causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as 2 separate pages. This has since been corrected to use the .htaccess file so that JobBoard.php is now the default file for the root directory eliminating the need for the redirect. Problem is that search engines still show both URL's in search results (one including JobBoard.php and one that ends with /). Another potential problem is that some of my early backlinks are to conservationjobboard.com/JobBoard.php while the rest are to conservationjobboard.com The 2 outstanding questions are as follows: 1. Is my domain still being penalized by search engines like Google for having duplicate homepage URL's? 2. Are all of the back links to my homepage being considered as the same now or is the total number of back links being split between the 2 different URL's? If you think there are still issues with how we have this set-up, I was wondering if you could give me advice on what we should do differently. Thanks.
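
    A sketch of the usual consolidation step, assuming Apache with mod_rewrite (the directives are standard, the exact file layout is assumed): permanently redirect explicit requests for /JobBoard.php to /, so engines and old backlinks settle on a single homepage URL while the DirectoryIndex lookup for / keeps working.

        RewriteEngine On
        # Match only requests whose request line literally asked for /JobBoard.php;
        # the internal DirectoryIndex rewrite of "/" does not change THE_REQUEST,
        # so this cannot loop.
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /JobBoard\.php[\ ?]
        RewriteRule ^JobBoard\.php$ / [R=301,L]

    With the 301 in place, links that point at the .php URL pass their value through to the root URL over time.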

    Read the article

  • Why do some user agents have spam urls in them?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) to the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not find a single discussion about it anywhere on the internet, and I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this web link, but there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this:

        Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60

    If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html , you will notice that the back and forward arrows are in their unescaped format. This isn't just true for botsvsbrowsers, but for several other user-agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), but I cannot decide or understand the process by which this change is made (I know how to change a user agent), and I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place and writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text such as +http://www.someSite.com appended to a user agent, especially if it's a crawler or bot providing its service URL. This is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about "those".

    Read the article

  • For a blog with posts and categories, what are the best ways to create user-friendly and SEO-friendly URLs?

    - by Jayapal Chandran
    I am creating a module on my website which displays ringtones. It is like creating blog posts and categories: it will have categories (tags) and posts (I am using category and tag interchangeably). I am using the following linking for this module:

        sitename.com/blog
        sitename.com/blog/category/category-name-slug/ - will list all ringtones of that category/tag
        sitename.com/blog/title/name-slug-of-the-ringtone/ - this will display the details and a download link

    On every page, at the left, I display the category/tag. This is how I have formed the URL structure. It will be user friendly, I hope, but will it be SEO friendly? Please hint if I am missing something or suggest other ways to improve. Meanwhile I am browsing the net to get more information on linking content (categorizing) and to find the best approaches for the user and the search engine.

    Read the article

  • Why is a # sign added to the end of URLs?

    - by Niro
    Note: I'm asking this from the perspective of the site developers (trying to help someone there), not as a user. Please don't forward this to superuser.com; it's a server admin question. Have a look here: http://www.wanimo.com/fr/chiens/coussin-matelas-tapis-pour-chien-sc28/tapis-plat-urban-chic-sf7263/ and you'll see that the page gets redirected to the same page with # at the end. Worse, when you click back you get a garbage URL. I'm trying to debug what is causing the redirect. Any advice on how to find it?

    Read the article

  • Should I add a "nofollow" attribute to download links, or disallow the URLs in robots.txt?

    - by Laurent
    I have a download link very similar to Opera's one - it's just a script that sends the file. It doesn't have an extension and there's no obvious way to tell that it's actually a download link. So since I don't want robots to crawl this link, do I need to add it to robots.txt or maybe add a "nofollow" attribute to it? I see that on Opera's website they didn't do either of this, so perhaps it's not necessary?
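
    For reference, the two mechanisms look like this (the /download path is made up for illustration); Disallow stops well-behaved crawlers from requesting the URL at all, while rel="nofollow" only marks the individual link:

        # robots.txt
        User-agent: *
        Disallow: /download

        <!-- or, on the link itself -->
        <a href="/download" rel="nofollow">Download</a>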

    Read the article

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this:

        example-company.com/
        example-company.com/about/
        example-company.com/contact/
        example-company.com/products/
        example-company.com/products/category/
        example-company.com/products/category/productname/

    Now, when it comes to Google Image search, it is clear that the img alt tag, filename/URL, and surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename of the images that we should use (e.g. product-photo.jpg).

    ...but first about the URL: Often web developers stick all images in a single folder in the root: example-company.com/img/, and I have stopped doing that. (I don't want to get into it, but basically, it seems more semantic for images which make up part of the content to live at each sub-directory.) However, when all images appear in one folder, I feel that their filename needs to reflect what they are a bit more than usual, for example: example-company.com/img/example-company-productname-category.jpg. It's a longer filename than just product.png, but as long as it's relevant, I see no problem with regards to SEO (unless you're keyword stuffing), and it could even help rank for the keywords "example company", "productname", "category". So no questions there.

    But what about when we have placed images in the site architecture we outlined at the beginning? In other words, what if image URL paths look like this: example-company.com/products/category/productname/productname.jpg. My question is, should the URL be kept short like above and only have the "productname" (and some descriptive keywords) as part of its filename? Or should it also include the "example-company" and "category"? Like so: example-company.com/products/category/productname/example-company-category-productname.jpg. That seems much longer, and redundant when we look at the URL, but here are a few considerations. Images are often downloaded onto computers, and, to the average user, they lose their original URL, so it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when uploaded. (Many others rewrite it, for example Pinterest and Facebook.) Another consideration: will this really help (even if ever so slightly) rank in Google Image Search, or at least inform Google that the product is something specific to the "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to this product page, would having the "example company" name and "category" help it appear in "example company" searches? In other words, is less more?

    Read the article

  • What's the best way to version CSS and JS URLs?

    - by David Eyk
    As per Yahoo's much-ballyhooed Best Practices for Speeding Up Your Site, we serve up static content from a CDN using far-future cache expiration headers. Of course, we need to occasionally update these "static" files, so we currently add an infix version as part of the filename (based on the SHA1 sum of the file contents). Thus:

        styles.min.css

    becomes:

        styles.min.abcd1234.css

    However, managing the versioned files can become tedious, and I was wondering if a GET argument notation might be cleaner and better:

        styles.min.css?v=abcd1234

    Which do you use, and why? Are there browser- or proxy/cache-related considerations that I should consider?
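
    A small helper illustrating both notations (a hypothetical sketch, not tied to any framework): hash the file contents once at build or deploy time and emit either the query-string or the infixed form. One commonly cited caveat with the ?v= form is that some proxies, older Squid configurations in particular, decline to cache URLs that contain a query string, which is why guides such as Souders' favour putting the version in the filename.

        import hashlib
        import os

        def versioned_url(path, base='/static/', query_string=True):
            """Build a cache-busting URL for a static file (hypothetical helper)."""
            with open(path, 'rb') as f:
                digest = hashlib.sha1(f.read()).hexdigest()[:8]
            name = os.path.basename(path)
            if query_string:
                # styles.min.css -> /static/styles.min.css?v=abcd1234
                return '%s%s?v=%s' % (base, name, digest)
            # styles.min.css -> /static/styles.min.abcd1234.css
            stem, ext = os.path.splitext(name)
            return '%s%s.%s%s' % (base, stem, digest, ext)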

    Read the article

  • How can I test for a URL's existence before redirecting to it?

    - by ckliborn
    I am using Apache's mod_rewrite to redirect mobile users to my mobile site based on their HTTP_USER_AGENT. However, not all pages have a mobile equivalent. Also, mobile pages end in .html and "full" pages end in .shtml. Here is some pseudo-code:

        Does the user have a certain HTTP_USER_AGENT?
        Is there a mobile page? If so, take them there.
        If not, no redirection is needed.

    I want to do this with Apache.
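
    A sketch of how this is often done with mod_rewrite (the user-agent pattern and the /mobile/ layout are assumptions): one RewriteCond tests the user agent, and a second checks with -f that the mobile counterpart actually exists on disk before redirecting.

        RewriteEngine On
        # Only consider recognised mobile browsers (pattern is illustrative).
        RewriteCond %{HTTP_USER_AGENT} (iphone|android|blackberry) [NC]
        # Only redirect when a mobile equivalent of the requested page exists.
        RewriteCond %{DOCUMENT_ROOT}/mobile/$1.html -f
        # Send /foo/bar.shtml to /mobile/foo/bar.html
        RewriteRule ^(.*)\.shtml$ /mobile/$1.html [R=302,L]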

    Read the article

  • .htaccess rules to rewrite URLs to front end page?

    - by Dizzley
    I am adding a new application to my site at example.com/app. I want views at that URL to always open myapp.php. E.g. example.com/app -> example.com/app/myapp.php and example.com/app/ -> example.com/app/myapp.php. What's the correct form of rewrite rules in the .htaccess file? I've got:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /app/
        RewriteRule ^myapp\.php$ - [L]
        RewriteRule ^myapp.php$ - [L]
        RewriteRule . - [L]
        </IfModule>

    ...based on what the Wordpress front-end does. But all I see at example.com/app is a directory of files. :( (I put those rewrites at the top of my .htaccess file.) Any ideas?

    Update: What actually worked:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^/app(/.*)?$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule . /app/myapp.php [L]

    This is good because:

        Explicit or implicit calls to app/myapp.php work.
        example.com/app redirects to app/myapp.php
        example.com/app/ redirects to app/myapp.php
        example.com/app/subfunction redirects to app/myapp.php
        All other calls to example.com/otherstuff are untouched.

    Item 4 is Wordpress-like Front Controller pattern behaviour. I think that rule RewriteCond %{REQUEST_URI} ^/app.*$ [NC] needs refining as it allows /app-oh-my-goodness etc. through too. Thanks for the answers.

    Read the article

  • Is it considered duplicate content when search results can be retrieved via 2 different urls? [closed]

    - by Floran
    Possible Duplicate: What is duplicate content and how can I avoid being penalized for it on my site?

    I'm building up friendly URLs like so:

        http://www.1001locaties.nl/trouwlocaties

    But the same content can also be viewed when using the filter options on the left side, via a different URL:

        http://www.1001locaties.nl/locaties/?search=1&category=Trouwlocaties

    Is this considered duplicate content by Google? And if so: what can I do about it?
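
    One widely used answer to the second part (kept here as a sketch): leave both URLs crawlable but add a canonical link element to the <head> of the filtered view, pointing at the friendly URL, so Google folds the two together instead of treating them as duplicates:

        <link rel="canonical" href="http://www.1001locaties.nl/trouwlocaties" />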

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article: In Subfolder & File Names, Use Dashes, Not Underscores

        Good: http://www.domain.com/sub-folder/file-name.htm
        Bad: http://www.domain.com/sub_folder/file_name.htm

    In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the prov/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/. I thought of placing the prov/state after another "/", however in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this, and wondering if this rule set should change, and if so, what is the recommended way of implementing it?

    UPDATE: After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/?

    Read the article

  • Are there advantages of using hard coded URLs for localization?

    - by nbolton
    On the Synergy website, localization is detected (and can be overridden) but uses the same URL for all languages. Some websites, however, like Wikipedia, have language-specific subdomains. What are the advantages of having either subdomains or subdirectories (i.e. a specific URL) for each language localization? Also, should the site automatically redirect the user to the specific subdomain/subdir based on the language that the browser requests? I suspect that there are advantages, which I'm guessing are:

        When the website appears in search results for non-English languages, the translated page description will be shown (assuming there is a translation provided by the website).
        When a user shares a page (e.g. through twitter), it will show in a specific language. Perhaps this is a disadvantage though?

    Am I correct, and if so, are there more advantages?
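
    Worth noting alongside the question: when each language does get its own URL (subdomain or subdirectory), the versions are usually cross-referenced with hreflang annotations so search engines can pick the right one per locale; a sketch with made-up URLs:

        <link rel="alternate" hreflang="en" href="https://example.com/en/" />
        <link rel="alternate" hreflang="fr" href="https://example.com/fr/" />
        <link rel="alternate" hreflang="x-default" href="https://example.com/" />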

    Read the article

  • Why do some urls in Firefox change when copy / paste?

    - by user203748
    This may not be a Firefox / Ubuntu specific issue. When I Copy / Paste a web link with _ and ( ) it is rendered as %20, %28, and %29. Yet in the Firefox URL these % symbols do not appear. The %20 is particularly weird because the _ itself does render in the URL: https://www.capitalsecuritybank.com/en/PDF/CSB_%20Account_%20Application_%20Form_%20%28Personal%29.pdf Can anyone explain why the URL is different when Copied / Pasted?
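
    A quick way to see what those escapes stand for (a sketch using Python's standard library; on Python 2 the same function lives in urllib): %20 is an encoded space and %28/%29 are the parentheses, while the underscores pass through untouched because they need no encoding. The address bar shows the decoded characters, but copying yields the percent-encoded URI, which is why the pasted link looks different.

        # Python 3
        from urllib.parse import unquote

        encoded = 'CSB_%20Account_%20Application_%20Form_%20%28Personal%29.pdf'
        print(unquote(encoded))
        # -> CSB_ Account_ Application_ Form_ (Personal).pdf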

    Read the article
