Search Results

Search found 13195 results on 528 pages for 'technical trainer pro'.

Page 206 of 528

  • Structured data still not detected in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions on what's wrong with my structured data? Google still cannot read it. It looks like this:

        <div class="identity">
          <div itemscope itemtype="http://schema.org/LocalBusiness">
            <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
            <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
              <span itemprop="streetAddress">MY_ADDRESS</span>,
              <span itemprop="addressLocality">London</span>,
              <span itemprop="postalCode">SE5 MY_XYZ</span>,
              <span itemprop="addressCountry">UK</span>
            </div>
          </div>
        </div>
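
    For reference, the same data expressed as JSON-LD is sketched below; this form can make markup mistakes easier to spot than inline microdata. MYDOMAIN, MY_COMPANY, MY_ADDRESS and SE5 MY_XYZ are the placeholders from the question, not real values.

        <script type="application/ld+json">
        {
          "@context": "http://schema.org",
          "@type": "LocalBusiness",
          "name": "MY_COMPANY",
          "url": "http://MYDOMAIN.co.uk/",
          "address": {
            "@type": "PostalAddress",
            "streetAddress": "MY_ADDRESS",
            "addressLocality": "London",
            "postalCode": "SE5 MY_XYZ",
            "addressCountry": "UK"
          }
        }
        </script>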

    Read the article

  • SEO Question - allintitle with or without quotes

    - by Aaron
    I'm trying to learn more about implementing basic SEO strategies and have been spending a lot of time refining my keywords using Google Analytics, combined with manually checking them using Google's allintitle operator. However, I'm unclear on whether I should be using quotes with my allintitle queries. Example:

        allintitle: seo tips and tricks for beginners     -> 191 results
        allintitle: "seo tips and tricks for beginners"   -> 70 results

    My thought is that it would be more accurate to use it without quotes, because that way you get a more well-rounded idea of everyone you are competing with. So, my question is: does Google give more weight to exact matches in the title tag, or does that not really matter? If someone searched for "seo tips and tricks for beginners", would they be more likely to see the pages that have that exact phrase in their title tag, or does that not have any impact?

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log:

        /i
        /im
        /imaa
        /imag
        /image
        /images
        /images/d
        /images/di
        /images/dis

    They are partial paths built up towards a known resource (in the above example, /images/disrupt.jpg), all coming from the same IP. The request rate varies from 1/sec to 10/sec and seems somewhat random. They are obviously probing for something, and it looks like they are using a script. How do I block this kind of behaviour? I thought of blocking the IP, at least for a given time, keeping in mind that:

    - The request intervals seem legitimate (at least I think so).
    - I don't want to end up blocking a search engine bot, which may hit 404 URLs too (and that's a different problem, I know).
    - Do they always use the same IP?
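
    One stopgap while investigating (a sketch, assuming Apache 2.4 and that the scanner really does keep a single IP; 203.0.113.45 is a placeholder) is a temporary deny rule in the vhost or .htaccess:

        <RequireAll>
            Require all granted
            Require not ip 203.0.113.45
        </RequireAll>

    A log-driven tool such as fail2ban can apply and expire bans like this automatically, which covers the "at least for a given time" idea without permanently blacklisting anything.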

    Read the article

  • Is there a way to disallow crawling of only HTTPS in robots.txt?

    - by David Wilkins
    I just realized that Bingbot is crawling my company's website's pages over HTTPS. Bing already crawls the site over HTTP, so this seems frivolous. Is there a way to specify Disallow: / for HTTPS only? According to Wikipedia, each protocol has its own robots.txt. According to Google's robots.txt specification, however, the robots.txt applies to both HTTP and HTTPS. I don't want to Disallow: / for Bing totally, just over HTTPS.
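
    One widely used workaround (a sketch, assuming Apache with mod_rewrite in the web root's .htaccess; other servers have equivalents) is to serve a different robots.txt over HTTPS:

        # hand out robots_https.txt instead of robots.txt when the request arrives over HTTPS
        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^robots\.txt$ robots_https.txt [L]

    where robots_https.txt contains:

        User-agent: *
        Disallow: /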

    Read the article

  • Problem with missing JSON functions on PHP 5.2.6 / Plesk 8.4

    - by Drachenviech
    I have a vserver running openSUSE 10.3, Apache 2 and Plesk 8.4. I can update/upgrade neither: it is apparently not recommended to upgrade openSUSE 10.3 (and an update to the EOL 10.4 does not seem to make much sense), and Plesk fails to update no matter what version I try (it even fails to upgrade to 8.4.1). Still, I can live with that somehow, primarily because I don't have the time to do a fresh remote install on the vserver. What really is a problem is that, although the installed PHP is 5.2.6, it has no zip library and no JSON functions. The first is probably because PHP was not compiled with --enable-zip. The second is a big mystery, though. As I understand it, JSON always comes with PHP unless it is compiled with the --disable-json configure option. That is not the case here, and yet the json extension module is just not there. I even tried to enable it with extension=json.so, with no luck. The configure options of my PHP (as shipped with Plesk 8.4) are:

        '../configure' '--prefix=/usr' '--datadir=/usr/share/php5' '--mandir=/usr/share/man' '--bindir=/usr/bin'
        '--with-libdir=lib' '--includedir=/usr/include' '--sysconfdir=/etc/php5/apache2'
        '--with-config-file-path=/etc/php5/apache2' '--with-config-file-scan-dir=/etc/php5/conf.d'
        '--enable-libxml' '--enable-session' '--with-mm' '--with-pcre-regex=/usr' '--enable-xml'
        '--enable-simplexml' '--enable-spl' '--enable-filter' '--disable-debug' '--enable-inline-optimization'
        '--disable-rpath' '--disable-static' '--enable-shared' '--program-suffix=5' '--with-pic' '--with-gnu-ld'
        '--with-system-tzdata=/usr/share/zoneinfo' '--with-apxs2=/usr/sbin/apxs2' '--disable-all' '--disable-cli'

    As I understand it, PECL is not an option with 5.2.6. Or am I mistaken? Even if I am not, the openSUSE repository only goes as far as PHP 5.2.4. The openSUSE install even came without zypper, which I had to install manually. So is there a way to get zip and JSON support running in PHP 5.2.6 without having to recompile the binary?
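
    It may be worth probing what this build actually exposes first; a small diagnostic sketch to drop into a throwaway page on the server (the build was configured with --disable-cli, so the command line is out):

        <?php
        // what does this PHP build report about itself? safe to delete afterwards
        var_dump(PHP_VERSION, extension_loaded('json'), function_exists('json_encode'));
        echo 'extension_dir: ', ini_get('extension_dir'), "<br>";   // where extension=json.so must actually live
        echo 'loaded php.ini: ', php_ini_loaded_file(), "<br>";     // confirms which ini file the extension line belongs in

    If a matching json.so genuinely is not on the system, a userland fallback such as the PEAR Services_JSON class (encode()/decode() methods) can stand in for the missing functions without recompiling, at the cost of speed.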

    Read the article

  • Where to find PHP version usage stats?

    - by Darren Cook
    My original question was: what percentage of sites are using PHP 5.4.x? (It has some very interesting new features.) There are secondary questions, like how many of the cheap web hosting places have upgraded, which versions of the Linux distros include it, etc. But I'm coming up blank. http://php.net/usage.php stops at July 2007, and the nexen.net website seems to have stopped in 2008. SecuritySpace only lists the web servers, not PHP versions. The TIOBE link isn't what I'm after (it doesn't -- and couldn't -- break down by version number). I thought php.net might show download numbers, but I cannot see them anywhere. I kind of answered the distro question, but it requires a lot of clicking around at distrowatch.com. E.g. I see there that Ubuntu offers PHP 5.4.6 in the latest snapshot, but the latest release (Ubuntu 12.04) has 5.3.10.

    Read the article

  • Joomla: Prevent certain user from visiting a certain page

    - by user6657
    I have an internal section in my Joomla only for "Registered" users. There they can edit preferences. Multiple users can access the same preferences (different stakeholders). Now I have "smithcorp" and "smithcorp_guest". They should both be able to view the same things, but "smithcorp_guest" shouldn't be able to edit the preferences. Is there a way to forbid "smithcorp_guest" access to this page? I have only seen access regulation via the user level, i.e. unregistered, registered, admin. Thanks, MrB

    Read the article

  • Is there a media player that works on HTTPS sites?

    - by Iain Hallam
    I'm currently using Yahoo! Media Player for a site that needs to play MP3 files that are stored on our server. In total, there's quite a bit more than the free limits at Soundcloud, but each file is only a few minutes long. YMP is pretty good, but causes security warnings on HTTPS pages, because it can only be served via HTTP. Is there an equivalent free player I can embed for the HTTPS pages? EDIT: Just to clarify, I'm initially looking for something that will scan the page and turn media links playable.

    Read the article

  • chmod 700 and htaccess deny from all enough?

    - by John Jenkins
    I would like to protect a public directory from public view. None of the files will ever be viewed online. I chmoded the directory to 700 and created an htaccess file that has "deny from all" inside it. Is this enough security or can a hacker still gain access to the files? I know some people will say that hackers can get into anything, but I just want to make sure that there isn't anything else I can do to make it harder to hack. Reply: I am asking if chmod 700 and deny from all is enough security alone to prevent hackers from getting my files. Thanks.
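
    For reference, a minimal sketch of the two measures described (the commented Require line is the Apache 2.4 equivalent, added on the assumption that the server might be running 2.4 rather than 2.2):

        # shell: directory readable, writable and listable only by its owner
        chmod 700 /path/to/protected-directory

        # .htaccess inside that directory (Apache 2.2 syntax, as in the question)
        Deny from all
        # Apache 2.4 equivalent: Require all denied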

    Read the article

  • Which folders need to be backed up for migration in Joomla?

    - by Devdatta Tengshe
    I have never used Joomla before, so this might be a noobish question. I have tried searching, but haven't found anything relevant. I'm helping someone update and migrate their old website, built on the Joomla framework. Currently it is running on Joomla 1.5.8, which is an ancient version, and I've convinced them to upgrade to at least Joomla 2.5. I have already taken a database backup. Most links I have seen talk of backing up the entire public_html folder (the website runs on a shared host). But a fresh Joomla installation puts several folders into public_html. So which of the folders in public_html hold the content of the website, and which belong to the old Joomla framework itself? I'm afraid that I might overwrite files of the new Joomla framework with the old framework if I copy all the files and folders into the new installation.
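
    As a starting point, a sketch of a selective backup; treat the folder list as an assumption about a typical Joomla 1.5 site (site-specific content usually sits in images/, media/ and templates/ plus configuration.php) and compare it against a clean 1.5.8 tree before relying on it:

        # run inside public_html; extend the list after diffing against a pristine Joomla 1.5.8 install
        tar czf joomla15-content.tar.gz configuration.php images/ media/ templates/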

    Read the article

  • Site overthrown by Turkish hackers...

    - by Jackson Gariety
    Go ahead, laugh. I forgot to remove the default admin/admin account on my blog. Somebody got in and has replaced my homepage with some internet graffiti. I've used .htaccess to replace the page with a 403 error, but no matter what I do, my WordPress homepage is this hacker thing. How can I set up my server so that ONLY I can view it while I'm fixing this via .htaccess? What steps should I take to eradicate them from my server? If I delete the ENTIRE website and change all the passwords, is he completely gone? Thanks.
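
    For the "only I can view it" part, a minimal sketch (Apache 2.2 syntax to match the .htaccess approach already in use; 203.0.113.7 is a placeholder for your own IP address):

        Order deny,allow
        Deny from all
        Allow from 203.0.113.7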

    Read the article

  • What should a purely practical developer learn to get better?

    - by ChrisRamakers
    I'm a self-taught developer who currently has more than enough experience to hold up against colleagues waving their degrees, yet I feel I'm lacking some important skills to advance further into being a senior-level professional in a leading role, more specifically in the engineering, planning and design aspects of software. I've touched the surface of UML and ERM/ERD, and have experienced both waterfall and Scrum project management, yet I feel something is missing, as every time I start on a new project I don't know where to begin. Should I start diagramming, and how? Should I first write an xx-page document describing the project on a technical level? Should I dive head first into writing the first tests and code, or pseudo-code? I would like to know what, in my case, would be the best way forward, to learn how I can tackle this problem in the future and get better at leading and starting a project. There is not much I don't know about my technical tools and languages, but when it gets abstract I'm in trouble.

    Read the article

  • ISP issue browsing "sonos.com" - need to diagnose and prove [closed]

    - by john
    I am unable to browse to the website "sonos.com" with my ISP (Virgin). I have ruled out browsers, PCs, Macs, routers, wifi, etc. Other ISPs (even other Virgin connections in different areas!) serve this site with no problem. I am 99% convinced there is a DNS issue lurking here. There is something fishy about the DNS for the site: online DNS sites tell me the right IP address for "sonos.com", but not for "www.sonos.com". Anyway, when I type "sonos.com" the browser (any/all of the 4 I tried) fails to display the page. Firefox gives a "connection was reset" error. If I browse to sonos.com using the IP address, it works OK. Browsing to www.sonos.com or sonos.com works fine with other ISPs, of course. Questions:

    1. Does anyone have any idea what might be going on here?
    2. Any suggestions as to tools/monitors to help investigate/prove what is going on? I can then take this up with Virgin and/or Sonos.

    Thanks
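
    A few standard command-line checks can separate a resolver problem from a connectivity problem (a sketch; 8.8.8.8 is simply one well-known public resolver to compare against):

        dig sonos.com A +short              # what your ISP's resolver returns
        dig www.sonos.com A +short
        dig @8.8.8.8 sonos.com A +short     # what a public resolver returns
        traceroute sonos.com                # does the path differ from a working connection?
        curl -v http://sonos.com/           # shows whether the TCP connect or the HTTP response is what fails

    Capturing this output from both the failing Virgin connection and a working one gives something concrete to show the ISP.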

    Read the article

  • Strategy for hosting 700+ domains names, each with a static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to:

    - Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...
    - It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites (on the hosting side), maybe using the method offered by gh-pages (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing.

    I'm also a front-end developer, specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
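
    One common pattern for the hosting side (a sketch, not tied to any particular provider: nginx with a catch-all server block, assuming one directory per domain under /srv/sites named exactly after the host):

        server {
            listen 80 default_server;
            server_name _;              # catch-all: every domain pointed at this box lands here
            root /srv/sites/$host;      # e.g. /srv/sites/example.com/index.html
            index index.html;
        }

    Deploying a new site is then just creating a folder and pointing the domain's A or CNAME record at the server; no per-site admin step is required.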

    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands, which on a site like this usually indicates that it has been hacked. Searching Google for the compromised site, we got a bunch of results which link to hope.php with various query strings that seem to generate different groups of SEO terms (in the search results, the second result from the top is legitimate; all the rest are not). Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5 Any ideas where this came from or what it is doing? I've of course already removed the file from the server, but I've never seen code like this, so I'm rather curious as to its origin. Where could I go to find more info about something like this?

    Read the article

  • How important is the uniqueness of your domain name?

    - by Corey
    I've finally come up with a domain name that I like and that is available. The name is nonsensical and doesn't translate into anything meaningful in any language, as far as I know. It's something like "FOOBARite". (Don't steal that!) I'm wondering about a few search issues. Results-wise, searching for it in Google currently returns about 15k results, none of which are relevant (dead Twitter pages, various unpopular online handles, and botched French translations). However, Google starts off with a spelling suggestion, which removes a letter ("Did you mean: FOOBARit?"). That returns about 250k results for several different and unrelated websites/organizations by that name: one is a technology provider, another is a sign-language organization, another is the name of a font... None of them seem particularly popular, and there's not much activity on any of those pages. Anyway, the two are pronounced differently; they're just a letter apart. Should I go with my idea, or is this one-letter variation going to cause me problems? If my site becomes ranked well enough, will Google's spelling suggestion go away? I don't want users to search for my site name and be told they've spelled it wrong.

    Read the article

  • Covering Yourself For Copyrighted Materials [on hold]

    - by user3177012
    I was thinking about developing a small community website where people of a certain profession can register and post their own blogs (which include an optional photo). I then got to thinking about how people might use this, and the fact that if they are given the option to add a photo, they might be likely to use one that they simply find on Google, another social network, or an existing online blog/magazine article. So how do I cover myself from getting a fine slapped on me, and make it purely the fault of the individual uploader? I plan on having an option where the user can credit a photo by typing in the original photographer's name and a link (optional), and to make them tick a check box stating that the post is their own content and that they have permission to use any images, but is that enough to cover myself? How do other sites do it?

    Read the article

  • 301 redirects - can we not delete old pages?

    - by KBS
    First time here :) We have a page on the site which ranks well (top 5) for an SEO term but contains old information. We have added a new page, but Google doesn't rank it that well. The information on these pages is time-sensitive.

        Old: example.com/2013-related-information.html
        New: example.com/2014-related-information.html

    The obvious solution is to delete the old page and 301 redirect it to the new page. Now, can we still keep the old page by giving it a new URL?

    Step 1: example.com/2013-related-information.html is redirected to example.com/2014-related-information.html
    Step 2: the old 2013 content is recreated under a new address such as example.com/new-2013-related-information.html

    What we are trying to do is send the user to the fresh page, but without throwing away the old copy in case someone wants to dig it up. Would appreciate help!! Cheers
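
    For reference, a sketch of the Step 1 redirect (assuming Apache and mod_alias; the same thing can be done in nginx or within the CMS):

        # permanent redirect from the old URL to the new one
        Redirect 301 /2013-related-information.html http://example.com/2014-related-information.html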

    Read the article

  • Why Wikipedia doesn't appear as a referral in Google Analytics' Traffic sources?

    - by Rober
    One of my clients has a website and got a (non-spammy) backlink in a Wikipedia article. When I test it for SEO purposes with Google Analytics (from different IPs), there is apparently no referral information. On the Real-Time view my test visit is visible, but the referrals subview shows "There is no data for this view", and the visits appear as (direct) / (none) in the Traffic Sources view. Wikipedia is not hiding its links' origin in any way, since it shows up in the server's visit log. Is Google ignoring Wikipedia as a referral? Am I missing anything else? Update: Now it works, several days after the link went active. Maybe something detects how long the link has been there, so that it doesn't count right from the beginning, as a security measure? Many visits are actually not recorded.

    Read the article

  • Optimize SEO: 2 websites or 1 main website and subdomain? [duplicate]

    - by waanders
    This question already has an answer here: Subdomain versus subdirectory (4 answers)

    I'm working on a WordPress website for a little company, let's say www.xxx.com. Now they want a different website for their workshops. What is the most optimal construction in terms of SEO?

    1) www.xxx.com + www.xxx-workshops.com
    2) www.xxx.com + www.xxx.com/workshops
    3) www.xxx.com + workshops.xxx.com

    Read the article

  • Aliasing resellers domain to primary domain

    - by Ashkan Mobayen Khiabani
    I have designed a website that accepts resellers; the concept is to have a local reseller for each province (or should we say branch). I have designed it so that anybody who has a domain can point it to our website (A record or CNAME). Most of the website content is the same; the only difference is that a reseller's website doesn't have some items on the main menu and may have a small description of its own branch on some pages. I read that Google may ban websites with duplicate content (or content which is significantly similar). I want to know: will this be a problem for me? If yes, what else can I do? We had considered asking our resellers to use an iframe that loads our website, but we wanted each reseller to have its own SEO and try harder; still, what I read about this duplicate-content issue worries me.
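
    One common mitigation for cross-domain duplicates (a sketch, with a trade-off: it tells Google which copy to rank, so the reseller copies would not rank independently) is a canonical link on each shared page, pointing at the primary domain's version; MAIN-DOMAIN.example is a placeholder:

        <link rel="canonical" href="http://www.MAIN-DOMAIN.example/some-page.html">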

    Read the article

  • Can I make query strings produce separate pages?

    - by John Smith
    I have a profile page with a URL like so: localhost/profile.php/?username=Bob I was wondering: if I had a separate <title> which changed according to the username, would these produce separate pages in the Google search results? How do I tell Google to only use the username string, or does it search within the title? On a similar note, how would I create a separate page per username, like localhost/bob instead of a query string, the way Facebook does? Do I make a new file for each user?
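
    For the localhost/bob style of URL, the usual approach is a rewrite rule rather than a file per user; a sketch for Apache (assuming mod_rewrite is available and that profile.php looks the user up itself):

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9_-]+)/?$ profile.php?username=$1 [L,QSA]

    A single profile.php then serves every user, while each username still gets its own crawlable URL and its own <title>.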

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?

    Read the article

  • How to show the right country domain in Google Places?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for the US, example.co.uk for UK users, example.de for Germans, etc. Googling for certain city keywords returns rich snippets with a list of Google Places results.

    Problem: When searching on Google Germany, the domain for US users (example.com) appears instead of the corresponding ccTLD (example.de). This is not a good user experience, as users would most likely prefer to book on a site localized for them (e.g. language and currency).

    Question: What solutions are there? Is it possible to return different ccTLDs in rich snippets for Google searches in Germany/UK?

    Ideas: Would implementing the hreflang annotation resolve this? What about entering multiple corresponding URLs in the structured data markup?
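
    For reference, a sketch of hreflang annotations for the domains mentioned (placed in the <head> of each version, or delivered via sitemaps; the language/region codes are assumptions based on the description):

        <link rel="alternate" hreflang="en-us" href="http://example.com/">
        <link rel="alternate" hreflang="en-gb" href="http://example.co.uk/">
        <link rel="alternate" hreflang="de-de" href="http://example.de/">
        <link rel="alternate" hreflang="x-default" href="http://example.com/">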

    Read the article

  • Sharing password-protected videos on social media

    - by PaulJ
    We are developing a site where users will be able to watch and download videos that they've recorded of themselves at a public event. The videos will be password-protected, and will be available only to users who have paid for them at the event... but on the other hand, we also want users to share those videos on social media, since they will be attractive publicity for our events. Having people log into our site with their password, download the video and then re-upload it to YouTube/Facebook will be too cumbersome, and I suspect that few users will be willing to do that. So the obvious alternative is to have one of those convenient "share" buttons, but the problems with that approach are:

    - The video will be physically hosted (and linked to) on our site. What happens if those videos go viral and our bandwidth cost explodes?
    - The video is password-protected.

    The solution I've thought of is:

    1. Upload the user's video to our password-protected site and to YouTube at the same time, as an unlisted video.
    2. The user can access our site with his password and download his video (to watch on his TV or whatever).
    3. If the user hits the "share" button, we show him the YouTube link... and we turn the video into a listed one.

    This seems in line with the ideas in "Using YouTube as a CDN", and there didn't seem to be any objections in that question. I'm posting this just to confirm that my idea doesn't violate any YouTube TOS, and also to see if it is a good one or whether there might be better alternatives.

    Read the article
