Search Results

Search found 11896 results on 476 pages for 'smart pro'.


  • I am the Webmaster now. Where do I start? [closed]

    - by John C
    I just changed jobs and will soon be in charge of a custom-built ASP.NET CMS and website for a fairly large corporation with global offices. I have IT and developer FTE resources available to me, but I am trying to build a list of branding, project, and functionality points to review. What guides or checklists can or should I use to evaluate this website before I begin adding features, creating new projects, or even redesigning and redeveloping the site? (Background: I have been a webmaster/designer/developer for small WordPress/Drupal sites for 10 years, and an unofficial webmaster (director/content manager) for a large site for 3 years, with no direct control over SharePoint administration, IIS, or hosting, but responsibility for everything else: analytics, email, advertising, social, SEO, etc.) Thank you!

    Read the article

  • Terms and conditions for a simple website

    - by lonekingc4
    I just finished building a website for an online chess club which I am a member of. This is my first website. The site has a blogging feature, so the members can log in, write blog posts, and comment on other posts. Membership is limited to users of an online chess site (freechess.org), and any member of that site can join this site as well. I was wondering: do I need to put up terms and conditions for my new website? If so, is there a model I can base them on? I searched and found some models, but they are all for big sites that have e-commerce and so on.

    Read the article

  • How to connect to database on remote server

    - by user137263
    Given a VPN to the remote server, with the database then reached via a local network interface, how can one establish a remote link between one's own computer (with a program such as Visual Studio 2010) and SQL Server (e.g. 2008 R2)? Any attempt to create a direct link to the SQL Server is blocked. Whilst the SQL Server can be configured to allow external access, this brings its own host of problems. Any help would be much appreciated.
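    A minimal sketch of what usually has to line up in this setup, assuming the VPN gives your machine a route to the database host: TCP/IP enabled in SQL Server Configuration Manager, port 1433 open on the server's firewall, and a connection string that targets the server's VPN-reachable address. The address, database name, and credentials below are placeholders:

      Server=tcp:10.8.0.5,1433;Database=MyDatabase;User ID=appuser;Password=secret;

    With that in place, the same string works from Visual Studio's Server Explorer or from application code; if the connection still times out, a common cause is that the VPN routes only some subnets, not the one the database listens on.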

    Read the article

  • Font displays differently in Firefox vs. Chrome

    - by Goro
    It seems that my menu bar is displayed with a different font stretch in Firefox than it is in Chrome. Here is the CSS applied to this element:

      font-variant: small-caps;
      font-size: 13px;
      letter-spacing: 0px;
      font-family: Arial;
      font-stretch: normal;
      text-decoration: none;

    As far as I can tell everything regarding that font is exactly the same, yet the two browsers still display it differently (see pic). Any ideas? Thanks,

    Read the article

  • Will adsense 'use' comments between the adsense opening and closing tags for ad targeting?

    - by SuperSpy
    I've got a simple website on which students of my school can see their timetable. I set up AdSense three weeks ago and everything is working fine, but I would like to help AdSense along: since only a timetable and a few instructions are shown, there is scarcely any content to which ads can relate. The idea:

      <!-- google_ad_section_start -->
      <!-- Students like clothing ... and bargains, etc. -->
      <!-- google_ad_section_end -->

    Will this work? Or are there other ways to help AdSense?

    Read the article

  • How come a keyword with 46 local monthly searches gets 150 local monthly impressions?

    - by Geno Thampi
    I am doing a keyword analysis by correlating data between Google Analytics and the Google AdWords Keyword Tool, and here is the confusion. I check the local monthly searches (Sweden) for "SEO Packages" using the AdWords Keyword Tool and it shows 46 monthly searches. Then I go to GA and check the impressions for "SEO Packages" that we got from Sweden, and the value is 150 impressions. So basically: local monthly searches in Sweden: 46; monthly impressions we got from Sweden: 150. How come we get 150 impressions out of only 46 searches?

    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands, which on a site like this usually indicates that it has been hacked. Searching Google for the compromised site, we got a bunch of results linking to hope.php with various query strings that seem to generate different groups of SEO terms (in the results, the second one from the top is legitimate; all the rest are not). Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5 Any ideas where this came from or what it is doing? I have of course already removed the file from the server, but I've never seen code like this, so I'm rather curious about its origin. Where could I go to find more info about something like this?
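    For anyone dissecting a similar payload, a minimal PHP sketch of the echo-instead-of-eval approach the asker used, written as a round trip so it is runnable as-is (real payloads often nest several layers of encoding):

      <?php
      // Build an obfuscated blob the same way an attacker would.
      $original = '<?php echo "spam links would go here"; ?>';
      $blob = base64_encode(gzcompress($original));

      // Safe inspection: decode and print, never eval().
      echo gzuncompress(base64_decode($blob));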

    Read the article

  • Will this sitemap get me de indexed from Google?

    - by heavy rocker dude
    My site's URL (web address) is: http://quantlabs.net/private/sitemap.xml Description (including timeline of any changes made): my new sitemap just got spidered by Google for some reason. Is it in danger of getting me de-indexed from Google's index? Does it look like spam, even though it is not meant to be? I am trying to figure out Google's threshold before they deem a sitemap spammy. This sitemap contains automated postings which differ only in the stock symbol provided, and there are quite a few of them within a small amount of time.

    Read the article

  • Should I use noindex, follow or rel canonical?

    - by webmasters
    I have a site that lists offers and promotions from other websites. Since the offers expire rather quickly, I don't save them in my database; I see no point in having a page from 2010 about a 30% discount on a certain brand of shoes which isn't available anymore. A visitor enters my website and clicks on the "shoes" category: http://www.mysite.com/shoes/ Here he sees 20 available promotions from different online stores. He clicks on a promotion and gets to a page like this: http://www.mysite.com/shoes/promotions/prada Questions: I use the template promotions.php and list all the promotions under /promotions/prada/, /promotions/otherbrand/, and so on. What I do is use "noindex, follow" for these pages. Is that a good idea? Or should I use rel="canonical" for the promotion pages? How do you advise me to handle this from the SEO point of view?
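    For reference, a sketch of the two options as they would sit in the promotion template's <head> (the URL is the asker's example):

      <!-- Option 1: keep the page out of the index, let crawlers follow its links -->
      <meta name="robots" content="noindex, follow">

      <!-- Option 2: point near-duplicate pages at one preferred URL instead -->
      <link rel="canonical" href="http://www.mysite.com/shoes/promotions/prada">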

    Read the article

  • SEO: 301 for a page which has no mirror path?

    - by Alex
    Hello, I just did a 301, and the domain and the pages which have a mirror file path are fine. But I have one directory which is not going to be part of the new site, and I don't know how to redirect the old files that were there. I have oldDomain/oldDir/file.php and I need to make it redirect to newDomain/differentDir/file.php. Is that possible? What is the 301 redirect rule for that? Update: I just added this rule as suggested by @Itai and it didn't work: redirectMatch permanent ^/outdoors/trees/tanoak.php$ http://www.comehike.com/outdoors/trees/129/Tanoak Any idea why?
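    A minimal sketch of the generic per-directory mapping being asked about, assuming Apache with mod_alias (the directory names are the asker's placeholders):

      # Map every file under /oldDir/ to the same file name under /differentDir/
      RedirectMatch 301 ^/oldDir/(.*)$ http://newDomain/differentDir/$1

    A pattern like this only helps when the file names match on both sides; one-off mappings such as the tanoak example still need individual redirect lines.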

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this feed crawler is affecting my site? I went to verify my website log stats and realized it may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log: 54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master" And this is the GitHub repository the user agent points to: https://github.com/feedjira/feedjira/tree/master Basically, I think that every time I update my categories (selfie, in this case), I get redirected to install.php, probably by triggering some GET on that feed. To the best of my knowledge, this feed parser fetches all URLs with this structure, blocking them, kind of like a DDoS attack?? Any ideas how to go about it?

    Read the article

  • Is there a way to disallow crawling of only HTTPS in robots.txt?

    - by David Wilkins
    I just realized that Bingbot is crawling my company's website's pages over HTTPS. Bing already crawls the site over HTTP, so this seems frivolous. Is there a way to specify Disallow: / for HTTPS only? According to Wikipedia, each protocol has its own robots.txt, yet according to Google's robots.txt specification, the robots.txt applies to both HTTP and HTTPS. I don't want to Disallow: / for Bing entirely, just over HTTPS.
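    One common workaround, sketched here assuming Apache with mod_rewrite: keep a second, restrictive robots file and serve it in place of robots.txt only when the request arrives over HTTPS.

      # .htaccess: swap in robots_https.txt for HTTPS requests
      RewriteEngine On
      RewriteCond %{HTTPS} on
      RewriteRule ^robots\.txt$ /robots_https.txt [L]

    And robots_https.txt itself:

      User-agent: *
      Disallow: /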

    Read the article

  • PageRank 0 penalty

    - by mark
    I have had a WordPress blog and a www website on the same domain for about one year; together they amount to about 170 pages. The PageRank is still 0. I understand that PageRank 0 can be a penalty for duplicate content. The pages are indexed in Google, but there is still no PageRank. In Google Webmaster Tools there is no indication of any problem. I asked for reconsideration of both the blog and the website a month ago; Google accepted the reconsideration request, but it did not change anything. Other sites of similar size and similar audience earn PR 4-6. Is there something I can do in order to get a fair PageRank? A coworker told me that it might be the case that a link farm is using the content and I can do nothing about it. Is there a reliable way to check for something like that? I do not like to give up so quickly; is there a chance to fix this by, for example, moving to another domain?

    Read the article

  • Category to Page and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template which, on pages, allows me to add sidebars and more content below and above the content I pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate content error. What I thought might help was to block the blog's category URLs in robots.txt, e.g. /category/books and /category/music Would that be a good practice in order to avoid the duplicate content penalty? Any tips appreciated.
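    If the robots.txt route is taken, the directive itself is short (a sketch, assuming the default /category/ permalink base):

      User-agent: *
      Disallow: /category/

    Keep in mind that Disallow stops crawling, not indexing: already-indexed category URLs can linger in results, so a "noindex" on the category pages is often paired with, or preferred to, the robots.txt block.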

    Read the article

  • 301 re-direct all external links to new domain

    - by Dean Legg
    I have changed the main domain to a sub-domain & would like to re-direct all external links to the new sub domain. Have read a few articles but having no luck editing the .htaccess as it might be interfering with all the rules in there. Old: www.example.co.uk New: https://secure.example.co.uk The current rules are quite handy because it seems to have sorted out the structure for all internal links. It has even updated the file path for images (or this could just be wordpress as the url was updated under general settings). This is the current .htaccess:

      <files wp-config.php>
      order allow,deny
      deny from all
      </files>

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
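    A minimal sketch of the host-level redirect, placed above the "# BEGIN WordPress" block so it runs first (this assumes both host names reach the same document root and that HTTPS is already working on the sub-domain):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^www\.example\.co\.uk$ [NC]
      RewriteRule ^(.*)$ https://secure.example.co.uk/$1 [R=301,L]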

    Read the article

  • Web app to manage subscriptions to online magazine

    - by Mulone
    I'm looking for a PHP web app, a WordPress plugin, or an online web service (as cheap as possible, naturally) to manage the subscriptions for an online magazine. These are the main features I need:

      - register/open a new subscription
      - renew a subscription
      - pay online with a credit card
      - send automatic emails after registration
      - auto-send a reminder when a subscription expires
      - send bulk emails to all subscribers

    The magazine runs on WordPress.

    Read the article

  • Cookies Audit help

    - by Gino
    Can someone explain to me the purpose of these cookies? I'm doing a cookie audit and I didn't find anything on the web.

      Domain: google.com (Google Maps), Name: NID
      Domain: google.com (Google Maps), Name: SNID
      Domain: google.com (Google Maps), Name: khcookie
      Domain: google.com (Google Maps), Name: PREF

      Domain: tripadvisor.com, Name: ServerPool
      Domain: tripadvisor.com, Name: TAReturnTo
      Domain: tripadvisor.com, Name: TAUnique
      Domain: tripadvisor.com, Name: v1st

    Thank you very much, Gino

    Read the article

  • Does SEO optimisation count on the responsive side of a site?

    - by Rick Donohoe
    I'm looking at making some SEO fixes, and at this point I'm sorting out the heading structure and keywords (H1s, H2s, etc.). We have a site with a number of similar blocks, where one is always visible and one is hidden depending on the screen size; this is our method of making a single site responsive. Firstly, how does this technique affect SEO, and in general, does the responsive side of a site matter at all to search engines? What I mean is: if the site has different content depending on screen size, which content will the search spider crawl?

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net and I am constantly getting requests from people asking me to remove links to their sites, or else they will disavow the links, and they threaten me with Google penalties. All of this is a bit frustrating because, first, I use nofollow on any link outside the YouReview.net domain, and second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: do disavowed links penalize the site that was disavowed? And does the "nofollow" attribute absolutely guarantee that a link is not followed and not counted for search engine ranking? Why don't more people know about nofollow?
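    For reference, nofollow is per-link markup; an outbound link carrying it looks like this:

      <a href="http://example.com/their-page" rel="nofollow">their page</a>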

    Read the article

  • Preview form using JavaScript in popup

    - by user1015309
    Please, I need some help previewing a form in a popup. I have a form, quite big, so I added a preview option that shows it as a popup. The lightbox popup works well; the problem I had was in passform(), which passes the inputs (text field, select, checkbox, radio) into the popup for preview on click. Below are my JavaScript and HTML; I left the CSS and some HTML out because I think they're not needed. I will appreciate your help. Thank you. The JavaScript (the original read the field value at page load, before the user had typed anything, and never actually called passform(); both are fixed below):

      function gradient(id, level) {
          var box = document.getElementById(id);
          box.style.opacity = level;
          box.style.MozOpacity = level;
          box.style.KhtmlOpacity = level;
          box.style.filter = "alpha(opacity=" + level * 100 + ")";
          box.style.display = "block";
      }

      function fadein(id) {
          var level = 0;
          while (level <= 1) {
              setTimeout("gradient('" + id + "'," + level + ")", (level * 1000) + 10);
              level += 0.01;
          }
      }

      // Open the lightbox
      function openbox(formtitle, fadin) {
          var box = document.getElementById('box');
          document.getElementById('shadowing').style.display = 'block';
          document.getElementById('boxtitle').innerHTML = formtitle;
          if (fadin) {
              gradient("box", 0);
              fadein("box");
          } else {
              box.style.display = 'block';
          }
      }

      // Close the lightbox
      function closebox() {
          document.getElementById('box').style.display = 'none';
          document.getElementById('shadowing').style.display = 'none';
      }

      // Copy the form field into the preview div. The element lookup and the
      // value read happen here, at click time, not at page load.
      function passform() {
          var div = document.getElementById('divexugsotherugsexams1');
          div.innerHTML = document.form4.exugsotherugsexams1.value;
      }

    The HTML (with just one text field; the button now calls passform() before opening the box, and returns false so the form is not actually submitted):

      <p><input name="submit4" type="submit" class="button2" id="submit4"
                 value="Preview Note"
                 onclick="passform(); openbox('Preview Note', 1); return false;" /></p>
      <div id="shadowing"></div>
      <div id="box">
          <span id="boxtitle"></span>
          <div id="divexugsotherugsexams1"></div>
          <a href="#" onclick="closebox()">Close</a>
      </div>

    Read the article

  • Unified data source for K2-installed Joomla websites

    - by Özkan ÖZLÜ
    I am responsible for a few of my organization's websites, all running Joomla! 2.5.9 on the same server with the K2 component for content management. There is a general website which shows all staff information on its 'Staff' page, and some of those people and their content are also shown on a department's website. Each website has its own database. For example: on the general website (let's say general.org), when I click on the 'Staff' menu item, the page shows all of the people who work at my organization, across different departments. On another website (e.g. education.general.org), when I click on the 'Staff' menu item, it shows the people who work in the education department. But each website has its own user accounts, which means a modification in one of them does not affect the others. If one of the education staff changes his profile picture on the education website, he also has to do it on the general website. And sometimes one person works in two departments, so he has to edit his data three times. Is it possible to merge the records for all websites? In other words, I want everyone to insert/update their data on the general website and have the other websites update automatically.

    Read the article

  • creating a tag-based website and not using programming?

    - by monodial
    I want to create a tag-based website, and I need a tool that I could use, preferably without programming. It's a site where a user could pick tags on a certain item. All tags will be placed under a group they are logically linked to (I will do that by hand). On the other end, a visitor could choose a tag and then be directed to a few items on which that tag was selected the most. Besides this, I need to set up a registration form (for the visitors who want to select tags on a desired item). stackoverflow.com may serve as an example of what I want to achieve; functionally it is a quite similar approach. I am not sure if further detailing will bring me closer to getting development advice, but nevertheless, following this template, what I would be missing is:

      - the ability to categorize the tags so they fit on one page (overall I assume <200 tags)
      - a box where a user could enter a tag, which would stay pending until a certain number of users enter the same tag
      - the ability to limit the number of 'questions' that appear when a visitor chooses a tag ('question' stands for an item to which users assign tags; the displayed items would depend on how often the tag was assigned, say the top two items)

    Which software should I try, and how should I go about it? Thank you. Lukas P.S. I have bought a hosting account through GoDaddy.com. This is the first website I am trying to build.

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?

    Read the article

  • Best practice for SEO "special characters" in product pages

    - by rhodesit
    What's the best practice for building pages when I need "ö" in the content/title/meta? Should I spell words without it and just use a "normal" character, or put the entity code in everywhere? Or spell it with the character half the time and without it half the time? What's the best practice for SEO? Google takes user intent into account, which makes things complicated (in my mind): users will search without the "special characters", but because of the whole "user intent" thing, I don't know what the best practice for this situation is. Should I use a mix of both spellings? Should I use the special characters in anchor text/headers/title/meta description?

    Read the article

  • SEO & Multilingual: would this be a good practice?

    - by Younès
    I am currently making a bilingual website, and of course I'd like to get good SEO results. Here's my idea: the internal links would use the "www" subdomain so that people can share links regardless of their language; the language itself is determined by the HTTP_ACCEPT_LANGUAGE header in PHP. So visitors would see http://www.site.com/mydocument/123 in their address bar and never see links like "http://fr.site.com/mydocument/123" or "http://en.site.com/mydocument/123". The user can always switch the page's language thanks to links in the footer: the language-switching link would be http://fr.site.com/mydocument/123, and clicking on it changes his language session and redirects him back to http://www.site.com/mydocument/123. In the case of a crawling bot: I read that if the HTTP_ACCEPT_LANGUAGE header is missing, then it's a crawling bot, so in that case we set the default language to English. Each page, as I mentioned earlier, has a link for the other language: on the page http://www.site.com/document/1323, the link http://fr.site.com/document/1323 can be seen by the bot and be crawled. What do you think about this practice? Would I get good SEO results for each language?
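    A minimal PHP sketch of the detection logic described above (the session key and language codes are placeholders, and real Accept-Language headers carry weighted lists that deserve fuller parsing):

      <?php
      session_start();

      // A crawler typically sends no Accept-Language header: default to English.
      $header = isset($_SERVER['HTTP_ACCEPT_LANGUAGE'])
          ? $_SERVER['HTTP_ACCEPT_LANGUAGE'] : '';

      if (isset($_SESSION['lang'])) {
          $lang = $_SESSION['lang'];            // user chose via the footer link
      } elseif (stripos($header, 'fr') === 0) {
          $lang = 'fr';                         // naive first-tag check (see caveat)
      } else {
          $lang = 'en';
      }
      $_SESSION['lang'] = $lang;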

    Read the article
