Search Results

Search found 9727 results on 390 pages for 'llblgen pro'.


  • Do I really need to remove special characters in a URL?

    - by anarchoi
    I have an FTP account shared with friends where we upload underground music albums, and we use the links to share the downloads in a music forum. The problem is that the album names are in French, so there are a lot of special characters in the names, and a URL ends up looking like http://www.mydomain.com/downloads/Some Band - En français avec des caractères spéciaux (2013) [7'' EP].zip For me it works perfectly and I can download the file using this URL, but I have read everywhere that special characters are bad in URLs. Is there any reason why I must remove the special characters or encode the URL? Can everyone access a URL with special characters, or will some older browsers fail to download the files? I really don't care about SEO or anything else; I just want the download links to work for everyone. Since the files are uploaded through FTP, I can't use PHP to remove the special characters with a regex, so I really don't know what to do.
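
    For reference: special characters are legal in URLs as long as they are percent-encoded; it is raw spaces and unencoded non-ASCII bytes that older clients handle inconsistently. A minimal sketch, assuming PHP is available on whatever machine builds the forum links (the filename is the example from above):

        <?php
        // Percent-encode only the filename, not the scheme/host/path prefix.
        $file = "Some Band - En français avec des caractères spéciaux (2013) [7'' EP].zip";
        $url  = "http://www.mydomain.com/downloads/" . rawurlencode($file);
        echo $url, "\n";  // spaces become %20, accented characters become UTF-8 %xx sequences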

    Read the article

  • Facebook likes reset after moving to HTTPS

    - by aarondicks
    I've got a question regarding the Facebook Like button. We recently worked on a piece that embeds a number of social share buttons (please see the source code below). When the piece was released it was served over HTTP and received over 2k likes (the URL slug hasn't changed at all). The site was recently migrated to always-on HTTPS, and the like count has been reset; we've been left with 50 new, recent likes. As you can see in the source code, the data-href URL is set explicitly to the HTTP version, which I believe to be correct. Can anyone help me work out what's happened here? Here's the HTML for the like button: <div class="fb-like" data-href="http://www.harveywatersofteners.co.uk/history-interior-design" data-layout="box_count" data-action="like" data-show-faces="false" data-share="false"></div>

    Read the article

  • IP blocked because: (smtpauth) Failed SMTP AUTH login from - can someone explain?

    - by gdaniel
    Today a few of our users were blocked in our server firewall because of: (smtpauth) Failed SMTP AUTH login from. Can someone explain what exactly this means? Could someone be using our website to access SMTP for spamming purposes? UPDATE: Server info: CentOS with cPanel and WHM, though no one else has access to either. Looking at the logs, it appears someone repeatedly attempted to log in with a known existing username/password.

    Read the article

  • Profiling Script [closed]

    - by Abhirup Manna
    Possible Duplicate: I need an open source php social script, minus the social features I am looking for a web script where users can register and create a simple profile page, something like a CV. When users log in they can fill out a simple form with details like name, date of birth, etc., and the details are displayed in a simple fashion on a profile page like domain.com/username. Is there any simple script available for this, preferably free or open source? An example is the Prairie identity server (OpenID); I need something similar but more customizable.

    Read the article

  • What is the best taxonomy from Google's perspective?

    - by ZakGottlieb
    I was wondering what the best way is to structure a new website in Google's eyes. Currently it contains two top-level categories (X and Y), and clicking a term under either one results in a URL like www.nameofsite.com/X/x-type-term or /Y/y-type-term. Technically it is correct to group all X-type terms under X and all Y-type terms under Y, but we could be more granular and break all articles into 5-6 top-level categories by splitting Y into more specific categories. Given that the current URL structure will eventually put thousands of X-type and Y-type terms under just two top-level categories, would it be more advisable to have several top-level categories, as suggested? Thank you in advance.

    Read the article

  • Is it possible to grant access to a folder in a SVN server using SVN's API?

    - by Splendonia
    I need to develop a web application (in any language, though I'm familiar with the Symfony2 and Rails frameworks) that can grant a user access to a particular folder on another server on the same network, from the application's front end. I found out that SVN has an API and that I can apparently interact with it from PHP or Ruby, though I would be willing to write the application in another language. The server where the files are stored runs Windows, so I thought of using VisualSVN Server; however, I can't find any function in the API to grant users access to files and/or folders, or access of any kind, the way you usually do through the GUI (VisualSVN on Windows). Am I missing anything? Is this even possible?
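
    Worth noting: there may be no remote API call for this, but stock Subversion (and, as far as I know, VisualSVN Server) reads path-based permissions from an authz-style file, which a web application could regenerate and rewrite. A minimal sketch of such a file, with hypothetical repository, folder, and user names:

        # conf/authz -- path-based access rules (all names are illustrative)
        # alice gets read-write, bob read-only, everyone else no access
        [myrepo:/projects/reports]
        alice = rw
        bob = r
        * =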

    Read the article

  • How can you identify duplicate CSS rules?

    - by DanMan
    I'm not talking about used versus unused rules here. I have two stylesheets; some rules differ (in selectors, declarations, or both) and some are exactly the same. I'm looking for a way to extract the rules that are identical in both files and move them into a third stylesheet: in other words, an intersection of two stylesheets. Strangely, I couldn't find any software for this; I would have expected it to be a more common problem. Background, for those who care: I'm converting a desktop website into a mobile one, and I started by duplicating the desktop stylesheet and changing it (throwing stuff out, adding to it).
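
    Lacking a ready-made tool, a naive script can get close. A minimal sketch in PHP, assuming simple flat CSS (no @media blocks or comments) and hypothetical file names:

        <?php
        // Split a stylesheet into "selector { body }" rules, normalized for whitespace.
        function parseRules(string $path): array {
            $rules = [];
            preg_match_all('/([^{}]+)\{([^{}]*)\}/s', file_get_contents($path), $m, PREG_SET_ORDER);
            foreach ($m as $rule) {
                $selector = preg_replace('/\s+/', ' ', trim($rule[1]));
                $body     = preg_replace('/\s+/', ' ', trim($rule[2]));
                $rules[$selector . ' { ' . $body . ' }'] = true;
            }
            return $rules;
        }

        // Rules present in both files, i.e. the intersection of the two stylesheets.
        $shared = array_intersect_key(parseRules('desktop.css'), parseRules('mobile.css'));
        file_put_contents('common.css', implode("\n", array_keys($shared)) . "\n");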

    Read the article

  • On which page(s) to add canonical?

    - by user6211
    I have two pages with the same content, the same meta title, and the same meta description. They also have very similar URLs: http://www.mysite.com/new-york http://www.mysite.com/new_york I need the first link to be the "official" one. To avoid duplicate pages, I want to add a canonical tag in the header, but on which page? Does it have to be on both of them, only on the second, or only on the first? Can you give me some advice please?
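
    The usual approach, for reference: put the tag on the duplicate page, pointing at the URL you want indexed (a self-referencing canonical on the preferred page is also common and harmless):

        <!-- in the <head> of http://www.mysite.com/new_york (the duplicate): -->
        <link rel="canonical" href="http://www.mysite.com/new-york">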

    Read the article

  • Can I host my website through gitlab like you can with github pages?

    - by BenWatkinsArt
    I would love to use Git to host my website, along with a platform I can log into online (something like GitHub). You would think GitHub Pages would be the perfect route for me, but I don't want to use GitHub Pages: I would like to host it all on my own servers, as you do with GitHub Enterprise (but for free). I have found GitLab and was wondering if there is any way I can use GitLab like GitHub Pages. Is it possible?

    Read the article

  • VPNs - The ins and outs of IPSec & VPNs in general [on hold]

    - by Magus
    I have decided to mess around with VPNs on my home router to access a couple of servers in the back room of my house. I went into this thinking happy thoughts and easy-peasy-lemon-squeezy... now I feel like doing drugs, as if maybe that would help me understand the myriad of terms that come with this nifty little tool. Basically, I understand WHAT a VPN is, but I have no idea how to set one up. I have a Cisco router (I will supply more info if needed) and would prefer to use IPSec for this ordeal. I'd like to have the following terms explained (and yes, I have used the famous Google; close but no cigar): "Local Secure Group", "Remote Secure Group", "Remote Secure Gateway", how the different types of "keys" work (again, I know the basics), and, for the "Add VPN Config" screen on the connecting device: "Server: does it have to be an address, or just an IP?", "Account: is this the tunnel name?" (I am going to assume 'Password' means the key), "Group Name: or is THIS the tunnel name?", and "Secret: I half know what this is...". I would really appreciate any contribution, no matter how small, even if it's just a pointer elsewhere. I just want to learn. Thanks in advance! Magus

    Read the article

  • List of SEO tools [on hold]

    - by Felix
    I'd like to crowdsource a list of SEO platforms/tools. There is an abundance of options out there: Analytics SEO, Blueprint, BrightEdge, Conductor, Ginzametrics, gShift Labs, RankAbove, Raven Internet Marketing Tools, Rio SEO, Searchmetrics, seoClarity, SEOlytics, SyCara. For each tool suggestion, please provide a brief overview of what the tool is used for and what differentiates it from its competitors.

    Read the article

  • CSS padding displays in FF and Chrome, but not in IE 8? [migrated]

    - by bullitt five
    I'm updating the CSS on a page design, trying to put borders around my images, with 7px of padding between the image and the border. It seems to be working fine in Firefox and Chrome, but IE displays the border directly against the image, with no padding. Any suggestions? CSS code: img.right { float: right; margin: 0px; border: 1px solid #999; padding: 7px; margin-left: 10px; margin-bottom: 5px; } HTML: <img src="images/homepage_challengecoin.jpg" class="right">
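
    One thing worth checking (an assumption, since the full page isn't shown): IE 8 falls back to quirks mode when the document has no doctype, and quirks mode changes how the box model, including padding inside borders, is rendered. Making sure the page starts with a doctype may be enough:

        <!DOCTYPE html>
        <!-- must be the very first thing in the document; without it IE 8 renders in quirks mode -->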

    Read the article

  • Is it possible to force a tab/window to take focus without an alert()? [migrated]

    - by Chris Brandt
    I've been working to solve some form/session timeout issues on our site. I'd like to get the user's attention BEFORE the timeout happens so that they don't lose data they've entered into long forms. I'm using jQuery's .get to ask my server when the user's session is set to time out, and to warn them five minutes before then. I've found that if I pop an alert(), it will bring the focus back to that tab/page, but this seems heavy-handed. Is there a better way to do this?

    Read the article

  • Using Ajax #! for Google but site is not being crawled any more

    - by user28231
    We used to have all links in the normal form (with query-string parameters), but we changed to Ajax loading because we installed a music player. Here's an example: Old links: http://stereofox.com/post.php?idPost=5326 New links: http://stereofox.com/post.php#!idPost=5326 You can get the snapshot by adding _escaped_fragment_= after the ?. All of these have the same content, and this image shows what has happened to the site since we changed the linking system.
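
    For reference, this is roughly how Google's AJAX crawling scheme expects the snapshot to be served. A minimal sketch of the top of post.php, assuming a hypothetical render_static_snapshot() that outputs the post as plain HTML without the player:

        <?php
        // Googlebot rewrites post.php#!idPost=5326 to post.php?_escaped_fragment_=idPost=5326
        if (isset($_GET['_escaped_fragment_'])) {
            parse_str($_GET['_escaped_fragment_'], $params);  // "idPost=5326" -> ['idPost' => '5326']
            $idPost = (int) ($params['idPost'] ?? 0);
            render_static_snapshot($idPost);  // hypothetical: emit crawlable HTML, no JavaScript
            exit;
        }
        // ...normal Ajax-driven page continues below...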

    Read the article

  • Hide from google while developing

    - by user210757
    I will be building a WordPress website. While I am developing it, other team members will be adding content, and I'd like to keep it hidden from Google while it is under development. It will be hosted on GoDaddy. I have thought of not pointing the domain name to it until it goes live and using "preview DNS", or buying a static IP during development, or hosting the dev site in a subdirectory ("/dev/") until it's ready and then moving it up a level. If it's in the dev directory, I'd add an .htaccess file or robots.txt telling crawlers not to crawl it. Is any of this a bad idea? Will Google penalize any of this, e.g. find the site by IP and then associate that with the domain later on? Any better ideas?
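
    For reference, the two mechanisms differ: robots.txt is a polite request that crawlers may honor, while HTTP authentication actually keeps everyone out. A sketch of both, assuming Apache (the paths are illustrative):

        # robots.txt in the dev site's root -- a request, not a guarantee
        User-agent: *
        Disallow: /

        # .htaccess in /dev/ -- actually blocks crawlers (and everyone without the password)
        AuthType Basic
        AuthName "Development site"
        AuthUserFile /home/user/.htpasswd
        Require valid-user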

    Read the article

  • Will uploading our .docx files on scribd and embedding the files on our website affect search engine rankings?

    - by user1439968
    We have prepared notes for university students in .docx format, and we want to put them on our website for viewing. We tried one option: uploading the files to Scribd and embedding them on our website in the Scribd viewer. Will making the documents available through the Scribd viewer on our website affect search engine rankings? Will search engines treat it as duplicate content, since the files are already uploaded to Scribd and we are embedding them on our website? (On Scribd we have set the uploaded documents to 'private', though.) And if it does affect rankings, can you suggest a suitable way to make the .docx files viewable on our website without hurting search engine rankings?

    Read the article

  • .htaccess redirect question

    - by user473056
    I'm trying to set up my .htaccess file to route the displayed link to the destination link as below. Displayed link: http://www.my-website.com/click-4559226-10388358?url=https%3A%2F%2Fdestination-website2.com%2FItem.php%3Fid%3D44350396%26sld%3DA6D7A632-821E-4b78-ACD0-147658B77BD6 Destination link: http://www.destination-website.com/click-4559226-10388358?url=https%3A%2F%2Fdestination-website2.com%2FItem.php%3Fid%3D44350396%26sld%3DA6D7A632-821E-4b78-ACD0-147658B77BD6 Effectively, only the host changes (http://www.my-website.com); everything after that stays the same. Is this possible, and could someone briefly explain how I would go about it? Just to be clear, I don't want to redirect everything from my-website.com, just links that start with http://www.my-website.com/click-4559226-10388358.
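
    A minimal sketch, assuming Apache with mod_rewrite enabled; on an external redirect the query string is carried over automatically, so only the path needs to be matched:

        RewriteEngine On
        # Redirect only requests whose path starts with this exact prefix;
        # everything after ? (the url=... parameter) is preserved as-is.
        RewriteRule ^(click-4559226-10388358.*)$ http://www.destination-website.com/$1 [R=301,L]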

    Read the article

  • Multiple Google Analytics code for url under same domain

    - by will.i.am
    I have one domain, www.example.com, and also www.example.com/sales. The Analytics codes on the two URLs are different, so when I log in to my Google account it shows two separate Analytics profiles. On www.example.com/sales I have a banner linked back to www.example.com. I have clicked that banner, and I am sure other people have clicked it as well, but when I check the Analytics for www.example.com, I don't see anything coming from example.com/sales. I assume Analytics is working on both URLs, so why doesn't it track the visits from /sales? Any idea?

    Read the article

  • The best way to snatch an expiring domain?

    - by SilvrSun
    There's a domain that I've been looking to acquire that expires on the 30th of this month. I don't think it is very popular, and the owner doesn't seem to have updated the website in two years. I was doing some research and came across a site that reviews some "snatching" services, but the article is quite outdated. I'm wondering if anyone can offer newer information on the topic, or recommend any services for helping me acquire the domain in question?

    Read the article

  • pages still show up in google search even after disallowed in robots.txt [duplicate]

    - by Jota Onasys
    This question already has an answer here: With Robots.txt disallow all, why was my site still getting traffic? (5 answers) Why do some pages still show up in Google search results even though they are disallowed in robots.txt? Is the best solution here to remove the Disallow rule from robots.txt and just add a noindex, nofollow meta tag to the pages you want blocked? Or should I submit a request to Google directly to remove those pages?
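
    For reference: Google can index a disallowed URL from external links alone, without ever crawling it, and a noindex tag only works if the crawler is allowed to fetch the page and see it, so the tag-based approach does require removing the Disallow first. A sketch of the tag, placed in the <head> of each page to be blocked:

        <meta name="robots" content="noindex, nofollow">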

    Read the article

  • When Google gives up recrawling 301 that led to 404?

    - by Easy Life
    I transferred a domain and made a mistake in the redirects (the URL structure is identical). Even though requests went to the new domain, the error caused a 404 when crawled by Googlebot. Ten days later I noticed and corrected my redirect mistake, and now the site should (hopefully) redirect to the proper pages. Q1: The URLs of the 404 pages in Webmaster Tools all bear the mistake and will never exist on the new site. I marked them as fixed in the tools. Do I need to do anything else about them, like 301-redirect them with a condition that fixes the error? Q2: Does Googlebot attempt to recrawl a 301 that pointed to a 404?

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [closed]

    - by DhruvPathak
    Possible Duplicate: How to find web hosting that meets my requirements? We will be building a Facebook application in Django (Python), but we're still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app: it would be HTML-based like a website, using Django as the framework; 100K pageviews a day are expected if the app goes viral; users will not generate any media content, only some database records. It would be great if someone with more experience could advise on the following points: A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: the points in App Engine's favour were ease of deployment, cost effectiveness, and easy scaling; for EC2, full control of the virtual machine and Amazon's NoSQL and RDBMS database services in case we decide to use them. B) Does the backend technology affect monthly cost? E.g. would the CPU and memory usage difference between Django and, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove better than raw cloud management? We are not trying to scale prematurely; we just want a good start so that we are ready to handle unpredicted growth.

    Read the article

  • .htaccess two different rules but only one per time

    - by dragon112
    I'm rather new to the whole .htaccess thing, and I'm currently using the following for 'pretty URLs': <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?path=$1 [NS,L] </IfModule> I found my website a bit slow and decided to start gzipping my CSS files through a PHP script I found on the web. For this to work I need to rewrite the URL to open the correct PHP file, something like this: RewriteRule ^(.*).css$ /csszip.php?file=$1.css [L] But I only want the first rule to apply when the second doesn't, and vice versa. In other words, I'd like something like this: <IfModule mod_rewrite.c> RewriteEngine On if the request doesn't contain .css: RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?path=$1 [NS,L] else: RewriteRule ^(.*).css$ /csszip.php?file=$1.css [L] </IfModule> Can anyone help me with the proper code, or point me to a way to use some kind of conditional statement in .htaccess files? Thanks in advance! :)
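
    A sketch of how this is commonly solved (assuming the same file layout as above): mod_rewrite needs no explicit if/else here, because the [L] flag stops rule processing, so putting the CSS rule first means the catch-all never sees .css requests:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        # CSS goes to the gzip script; [L] stops here, so the rule below never fires for .css
        RewriteRule ^(.*)\.css$ /csszip.php?file=$1.css [L]
        # Everything else that is not a real file or directory goes to the front controller
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?path=$1 [NS,L]
        </IfModule>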

    Read the article

  • Does Googlebot (and/or search engines) index a forwarded page? [duplicate]

    - by user2889419
    This question already has an answer here: HTTP and HTTPS impacts on SEO (1 answer) Let's say I have the domain example.com and I force users from HTTP to HTTPS. Since browsers simply accept and load the forwarded/new page (when a request for http://example.com redirects to https://example.com), does Googlebot (or another search engine) accept the forwarded page, index the new page, and just ignore the old one? In other words, do search engines accept HTTPS alongside HTTP?
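
    For reference, crawlers follow a permanent redirect and index its target, so the usual setup is a site-wide 301 from HTTP to HTTPS. A minimal sketch, assuming Apache:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        # 301 tells crawlers the move is permanent, so the HTTPS URLs replace the HTTP ones in the index
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]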

    Read the article
