Search Results

Search found 10402 results on 417 pages for 'macbook pro'.


  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated server hosting service. They offer:

    - Roughly the same server spec
    - Ticketing-system support
    - Managed daily backups
    - A virtual firewall (but with a limit of 10 IP addresses allowed through at any one time)

    This managed hosting comes at extra expense - somewhere in the region of $500 per month - and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is it would be better and cheaper to:

    - Stay with the same host, since the dedicated box is fine
    - Get an Amazon AWS account and use it for backups; there are a number of good tools that can automate the process
    - Configure iptables so that I have complete control of the firewall

    I want to know:

    - Is a managed virtual firewall likely to be more secure than me configuring iptables?
    - Whether, in your opinion, it's best to let someone else take care of backups?
    - If, from your experience, there's anything else I'm missing that warrants using managed hosting over a DIY service?

    I think there is some reluctance to give up managed hosting, since a managed host in effect takes responsibility for your server, whereas any hardware or security issue with a server that we manage would mean we are forced to hold our hands up when a client site goes down. That said, I personally don't think a managed host does that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).
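
    A default-deny iptables policy of the kind described in the last point might look like the following sketch (the open ports and the trusted SSH range are assumptions and would need to match the actual services on the box):

        # default-deny inbound; allow loopback and established traffic back in
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # allow web traffic from anywhere
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        # allow SSH only from a trusted range (hypothetical example range)
        iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 22 -j ACCEPT

    Unlike the provider's virtual firewall, there is no 10-address cap on how many sources can be allowed through.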

    Read the article

  • Responsive Design: Which Framework Should I Use? CSS3 & HTML5

    - by Jayhal
    I've been looking for a suitable set of HTML5/CSS3 foundation files to start new projects on. I started off piecing together my own files, but I believe I might be better served by finding a solid and fairly compatible (with me) CSS3/HTML5 framework and then tweaking the things that don't suit my own process. I'd love to find something responsive that covers layout, type (horizontal and vertical baselines), form and interface components, and cross-browser issues, and that is preferably built on something more than just a simple CSS reset, but that does rebuild elements consistently across browsers for a clean working slate. Extra features like polyfills are great, as are good documentation and examples. So far, off the top of my head, I know of:

    - Skeleton
    - 1140 Grid
    - 320 & Up (plus BP)
    - HTML5 Boilerplate 2.0 and Mobile
    - Inuit.css
    - Less Framework
    - Fluir
    - Perkins.Less
    - A few WP themes

    Are there any great ones I don't know about? I work a lot in WP, and something that is easily incorporated (but also stands alone) is ideal. Plugins and a wide feature set, while maintaining the ability to cut it down when needed (flexibility), are also a big plus, as is a faster learning curve, since I want to start using whatever I find immediately. What are some of the better options you might be able to recommend? Systems or scripts, plugins, and other related tools are also welcome. Thanks!

    Read the article

  • I want to trace the activity of my customized link sent via email or on chat to my customer

    - by anilkumble789
    I want to trace the activity of a customized link sent via email or on chat to my customer - activity like: did they open the link or not? How much time did they spend on the page? For example, say I have decided to send a business proposal link to Mr. ABC and Mr. XYZ. For ABC the link would be something like www.mycompany.com/proposal ....abc... and for XYZ the link would be something like www.mycompany.com/proposal ....xyz... It's like link analytics. How do I go ahead with it?
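
    One way to picture the mechanism being described - a sketch only, with hypothetical table, column, and file names - is a small redirector script that logs each hit against a per-recipient token before forwarding to the real proposal page. Time on page would additionally need a JavaScript beacon on the landing page; this only records opens:

        <?php
        // track.php - a sketch: each recipient gets a unique token in their link,
        // e.g. http://www.mycompany.com/track.php?t=abc, and every hit is logged
        // before the visitor is forwarded to the real proposal page.
        $token = isset($_GET['t']) ? $_GET['t'] : '';

        $db = new PDO('mysql:host=localhost;dbname=tracking', 'dbuser', 'dbpass');
        $stmt = $db->prepare(
            'INSERT INTO link_hits (token, hit_at, user_agent) VALUES (?, NOW(), ?)'
        );
        $stmt->execute(array($token, $_SERVER['HTTP_USER_AGENT']));

        // forward to the actual proposal page (hypothetical URL)
        header('Location: http://www.mycompany.com/proposal.php?t=' . urlencode($token));
        exit;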

    Read the article

  • Google indexed site's address by accident. What do I do now?

    - by AndrejaKo
    I was making a site for a friend of mine, and he wanted to be able to see my progress as I worked on it, so I decided to put the site on a server on my computer and make it accessible through a domain name registered to me. It turns out that I forgot to set up a robots.txt file for the site, and somehow Google indexed it. My question is: what do I do now? As I understand it, Google doesn't like duplicate content, and my friend could have problems when I upload the new site to his server. Right now his current site, which only has a work-in-progress page, is first on Google when searching for relevant keywords, and I really, really don't want to damage that. Is there anything else I need to be concerned about?
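
    For reference, the usual ways to keep a staging copy like this out of Google's index are a robots.txt block, a noindex meta tag, or HTTP authentication on the staging host (which one fits best depends on the setup); sketches of the first two:

        # robots.txt at the root of the staging domain - blocks further crawling
        User-agent: *
        Disallow: /

        <!-- on each staging page - asks Google to drop the page from the index;
             the page must stay crawlable (not blocked) for this tag to be seen -->
        <meta name="robots" content="noindex, nofollow">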

    Read the article

  • SEO: 301 for a page which has no mirror path?

    - by Alex
    Hello, I just did a 301, and the domain and the pages that have a mirror file path are fine. But I have one directory that is not going to be part of the new site, and I don't know how to redirect the old files that were there. I need something like this: oldDomain/oldDir/file.php needs to redirect to newDomain/differentDir/file.php. Is that possible? What is the 301 redirect rule for that?

    Update: I just added this rule, as suggested by @Itai, and it didn't work:

        redirectMatch permanent ^/outdoors/trees/tanoak.php$ http://www.comehike.com/outdoors/trees/129/Tanoak

    Any idea why?
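
    For the general case described above, mod_alias rules in the old site's .htaccess could look like this (a sketch; oldDir, differentDir, and the domains are the placeholders from the question):

        # redirect one specific old file to its new location
        Redirect permanent /oldDir/file.php http://newDomain/differentDir/file.php

        # or map everything under the old directory in one pattern
        RedirectMatch permanent ^/oldDir/(.*)$ http://newDomain/differentDir/$1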

    Read the article

  • How to combat negative SEO?

    - by Perturbed
    Someone has decided to create a hate blog on a hosted blogging service (wordpress.com) that bashes my company. The blog contains posts that completely flame me and my service, and it is full of falsehoods about how I run my business. Without going into details, I'm pretty sure the author of this blog is an owner of a competing service (although it is written completely anonymously). Frankly, I'm not sure whether the content would qualify as defamation, but I really don't like the idea of spending money on a lawyer to even attempt to prove it. I also have no interest in retorting or even replying to the blog in any way - I feel this would only validate the ludicrous claims that have been posted. Unfortunately, whoever wrote the blog was pretty smart about using the keywords people commonly use to search for my service. Because my customer base is relatively small and local, our PageRank is not incredibly high. As a result, when someone Googles our business name, this blog is usually within the top five results (thankfully, it's never above the business's actual website, but it's usually within eyeshot). It's incredibly frustrating to hear from customers who have seen the link (luckily, most of the time they think the author is crazy). Is there anything I can do to combat this? Would it be worthwhile to set up my own hosted wordpress.com blog, in an effort to outrank that one with a more active blog of my own? TL;DR: Someone made a hate blog using wordpress.com and it is now on the first page of my business's search results. What are my options?

    Read the article

  • phpBB - display a public message when the whole board is private

    - by Sparky672
    In my installation all discussion forums/boards are private and can only be seen after a user registers and admin approves their membership. This is working. When a user first comes to the main phpBB page and they're not logged in, they can see very little except for the login and registration functions. Now I'm trying to configure phpBB to display a Read Me post that is available to the general public before registration or login. The problem is that I'd have to create a whole "public" forum, and it would likely contain only one topic thread called "Read Me". Users would have to click into the public forum and then click again to get into the topic called "Read Me". Is it possible to have a topic thread outside of a forum? If not, is there another, better way to achieve this? I just want to display a message - maybe a few short paragraphs - to the public, explaining the purpose of the forum and that they must register and wait for approval in order to gain additional access. I'm not seeing anything like this in the ACP. Thank you.

    Read the article

  • Joomla 1.6 site cannot add a new extension through admin interface

    - by Ghlouw
    I'm having a very frustrating problem with my Joomla 1.6 site: I cannot add any new extensions through the admin interface. I have tried uploading the extension, using the search-folder option, and even the direct link. None of these options works; all that happens is that the page tries to load forever until it finally times out with a blank white page (no further error messages). I have tried this with multiple browsers (Chrome, FF, IE) and with different extensions (modules, components, templates - all with the same result), so I don't think it has anything to do with what I am uploading; more likely the problem is something with the POST action. I have also seen the exact same error when I try to update menu items or even create new menu items. I am not getting this error with a duplicate of the site in the dev environment, only on my shared-hosting live server. This is on a Windows IIS / PHP / MySQL environment. Any help would be much appreciated!
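
    One thing worth comparing between the working dev copy and the live shared host is the PHP limits, since an upload that hangs and ends in a blank page is often a limit being hit silently (a sketch; the values are only illustrative):

        ; php.ini (or the host's override mechanism) - illustrative values only
        max_execution_time  = 120
        max_input_time      = 120
        memory_limit        = 128M
        post_max_size       = 32M
        upload_max_filesize = 32M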

    Read the article

  • .htaccess redirect to subfolder in different domain, maintaining old domain in the URL

    - by Naoise Golden
    Redirects have been widely discussed and most problems solved, so I am sorry for opening yet another post about this, but none of the code I have tried works. I have a WordPress site hosted at http://mydomain.com/clientsdomain.com/wordpress. I would like to temporarily redirect http://clientsdomain.com/ to the above URL while keeping clientsdomain.com in the URL. So, for example, http://clientsdomain.com/some/page would be pointing to http://mydomain.com/clientsdomain.com/wordpress/some/page. Is this even possible with .htaccess? Or maybe some configuration or plugin option within WordPress?
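
    One way to get this kind of masked redirect, assuming mod_rewrite and mod_proxy are available on the server that clientsdomain.com points at (that availability is an assumption, and WordPress's siteurl/home settings would still have to cooperate), is a proxying rewrite in the .htaccess at clientsdomain.com's document root; a plain Redirect would change the visible URL, which is exactly what is being avoided here:

        # .htaccess at the root of clientsdomain.com - a sketch only
        RewriteEngine On
        # proxy every request to the copy hosted under mydomain.com, so the
        # clientsdomain.com URL stays in the visitor's address bar
        RewriteRule ^(.*)$ http://mydomain.com/clientsdomain.com/wordpress/$1 [P,L]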

    Read the article

  • creating a tag-based website without using programming?

    - by monodial
    I want to create a tag-based website, and I need a tool I could use (preferably without programming). It's a site where a user could pick tags for a certain item. All tags will be placed under the group they are logically linked to (I will do that by hand). On the other end, a visitor could choose a tag and then be directed to a few items on which that tag was selected the most. Besides this, I need to set up a registration form (for visitors who want to select tags on a desired item). stackoverflow.com may serve as an example of what I want to achieve; functionally it is quite a similar approach. I am not sure if further detail will bring me closer to getting development advice, but nevertheless, following this template, what I would still be missing is:

    - the ability to categorize the tags so they fit on one page (overall I assume <200 tags)
    - a box where a user could enter a tag, which would stay pending until a certain number of users enter the same tag
    - the ability to limit the number of 'questions' that appear when a visitor chooses a tag - 'question' stands for an item to which users assign tags (the displayed items would depend on how often the tag was assigned - say, the top two items)

    Which software should I try, and how should I go about it? Thank you. Lukas

    P.S. I have bought a hosting account through GoDaddy.com. This is the first website that I am trying to build.

    Read the article

  • Recommend hosting with fast MySQL database please [closed]

    - by Keith Groben
    Possible Duplicate: How to find web hosting that meets my requirements? I am frustrated to no end with my current hosting provider, mediaTemple. Yes, they are flashy and have a decent degree of flexibility with their GS plan, which I have. But any time I install a site that needs a database, it is slow - like, really slow - taking anywhere from 10 to 15 seconds just to load a page. I would host in-house, but there are a lot of complications that come with a LAMP server that I don't want to deal with. Honestly, I'd rather spend the time developing. What can you recommend?

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content - not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone give me a good suggestion to solve this problem?
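
    One possible shape for this, sketched in PHP purely to illustrate the idea (the markup and the "new" parameter are hypothetical): serve the homepage itself with a 200 and some crawlable content, and only issue the 302 to a freshly generated random URL when a visitor actually starts a note.

        <?php
        // index.php - a sketch: crawlers see a normal 200 page with indexable
        // content; the redirect to a freshly generated note URL happens only
        // when the "new note" action is requested.
        if (isset($_GET['new'])) {
            $id = substr(md5(uniqid('', true)), 0, 10); // random-looking id, "roTr94h4Gd" style
            header('Location: /' . $id, true, 302);
            exit;
        }
        header('Content-Type: text/html; charset=utf-8');
        echo '<html><head><title>Notepad app</title></head><body>';
        echo '<h1>Notepad app</h1><p>Create a disposable note at a random URL.</p>';
        echo '<p><a href="/?new=1">Start a new note</a></p>';
        echo '</body></html>';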

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net, and I constantly get requests from people asking me to remove links to their sites, or else they will disavow the links; they threaten me with Google penalties. All of this is a bit frustrating because, first, I use nofollow on any link outside the YouReview.net domain, and second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: do disavowed links penalize the site that was disavowed? And does the "nofollow" attribute on links absolutely guarantee that the link is not followed and not counted for search-engine ranking? Why don't more people know about nofollow?
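
    For reference, the markup pattern being described looks like this (nofollow is a value of the link's rel attribute; the URL is a placeholder):

        <a href="http://example.com/some-page" rel="nofollow">link text</a>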

    Read the article

  • Weird keywords in Google Webmaster Tools

    - by Argoron
    I just happened to check the keywords list in Google Webmaster Tools for my site, which is an educational content site about finance. To my big surprise, after the first keyword, which is 'finance', I found among the 20 highest-ranked (!) entries words like: mysql, server, adobe, flash, player, homez. What (I'm tempted to add "the heck") does that mean? Is it something I should worry about? If so, how did these get there, and how can I eliminate them / keep them from getting into that list? Thanks very much in advance for your help.

    Read the article

  • Virtual Pageview Goal Funnel Not Tracking Correctly

    - by cphill
    I have an AJAX form that has three stages:

    1. The landing page, where a user fills out a form, selects between three question sets, and clicks Begin Assessment.
    2. The assessment page, where users fill out questions relating to the question set they selected on the landing page.
    3. The results page, which shows whether they are at High Risk or Low Risk.

    Since this is an AJAX form that does not open a new page for each step of the process, I implemented a virtual pageview that fires on the page load of each step. The following is my virtual pageview setup for each stage:

    1. /form/begin-assessment
    2. /form/assessment/* (* = three different virtual pageviews depending on which of the three sets of questions the user selects: /one, /two, /three)
    3. /form/finished-assessment

    I have set up three separate goals to track user progress through each step of the form assessment. Here is my goal setup:

    - Goal type: Destination
    - Destination: /form/finished-assessment
    - Funnel: On
    - Step 1: /form/begin-assessment (Required: Yes)
    - Step 2: /form/assessment/one (for the two other goals, replace /one with /two or /three)

    Now my goals record the correct data in the first step and show the completions in the destination, but the second step does not show any drop-offs; it shows the same data as the destination. Any ideas where my goal setup went wrong?
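
    For comparison, the virtual pageviews described above would be pushed from the AJAX code at each step roughly like this (a sketch assuming the classic _gaq snippet; with Universal Analytics the equivalent call is ga('send', 'pageview', path), and the exact hook points depend on the form's own code):

        // when the user clicks "Begin Assessment" on the landing page
        _gaq.push(['_trackPageview', '/form/begin-assessment']);

        // when the chosen question set loads (one of /one, /two, /three)
        _gaq.push(['_trackPageview', '/form/assessment/one']);

        // when the results screen is shown
        _gaq.push(['_trackPageview', '/form/finished-assessment']);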

    Read the article

  • Tracking Outgoing Links With Google Analytics Events

    - by the_archer
    I've been trying to track clicks on external links on my website using the event tracking method. So I've got my Google Analytics code set up before </body> as shown below (note: Blogger had HTML-encoded the quotes, but it works fine):

        <script type='text/javascript'>
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
          _gaq.push(['_trackPageview']);

          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Now I wanted to track a link on the addthis.com follow widget. So there is a link of the type below, to which, following instructions from here, I added the onClick event:

        <a addthis:url='http://feeds.feedburner.com/myfeedburnerlurl'
           onClick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);"
           class='addthis_button_rss_follow'/>

    I clicked on it a couple of times and left it for over a day now, but nothing shows up in Google Analytics events - it just says zero events. Here's a screenshot of the Events page on GA: Could anybody help me? Am I doing anything wrong?

    Read the article

  • GA tracking utm query params after hashbang

    - by hybrid9
    We currently use a hashbang for the portion of our site that generates dynamic content, which can also be deep-linked. Our analytics team wants to use utm parameters to track the referral traffic from social networks. We are using Universal Analytics (analytics.js) as well as GTM. Will GA pick up the query parameters after the hashbang, or do they always have to go before it? For example:

    1. example.com/#!/some/content?utm_source=foo&utm_campaign=bar
    2. example.com?utm_source=foo&utm_campaign=bar/#!/some/content

    In #1, I'm concerned that the utm parameters won't be recorded, and in #2 the page will break or the URL could be written incorrectly. How does GA pull in those parameters - location.search? A regex? Can I get away with using either form?
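
    As far as I know, analytics.js takes campaign parameters from the real query string (location.search) and ignores anything after the #, so parameters hidden behind the hashbang would need to be passed explicitly; a simplified sketch (the parsing is minimal and the field names come from the analytics.js field reference):

        // pull utm_* values out of the fragment and set the corresponding
        // campaign fields before the pageview is sent
        var query = (window.location.hash.split('?')[1] || '');
        var params = {};
        query.split('&').forEach(function (pair) {
          var kv = pair.split('=');
          if (kv[0]) { params[kv[0]] = decodeURIComponent(kv[1] || ''); }
        });

        ga('create', 'UA-XXXXXXX-X', 'auto');
        if (params.utm_source)   { ga('set', 'campaignSource', params.utm_source); }
        if (params.utm_campaign) { ga('set', 'campaignName',   params.utm_campaign); }
        ga('send', 'pageview');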

    Read the article

  • Tips for managing internal and external links using WordPress [closed]

    - by keruilin
    So I'm looking for ways to optimize my site for user and search-engine purposes. I've read several articles and looked at several different plugins, and to say the least, I'm thoroughly confused as to what the best practices are for managing internal and external links. Here is a list of some of my questions:

    - Which internal links should be set to "nofollow"?
    - Which external links should be set to "nofollow"?
    - To what degree does actively managing links contribute to your PR?
    - Should you use "nofollow" blindly on all links in comments?
    - If a link to an external site is broken (404 or whatever), should you "nofollow" that link? What about "noindex"?

    As you can see, lots of questions. I'm hoping that you experienced webmasters can give a newb some best-practice advice.

    Read the article

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago, I implemented translation functionality for my company's website. The website is now available in French and English, and I looked around for the best way to avoid losing any rankings and keep our pages on Google. Here is what I did:

    - I set a response header: Content-Language: en or Content-Language: fr
    - My URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/...
    - My html tag is set with a lang attribute: <html lang="en"> or <html lang="fr">
    - There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="en" href="frenchPageUrl"> on English pages.

    But Google keeps returning some English pages when I do a search on the French engine, even though the website was at first only available in English. Is that normal? Do I just have to wait? It has been almost one month now; I thought it would be okay by this point. Thank you.
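
    For comparison, the reciprocal annotation recommended in Google's hreflang documentation has each page point to the other language version with that version's own language code (and usually to itself as well); the URLs below are placeholders:

        <!-- on http://www.website.com/en/page (the English page) -->
        <link rel="alternate" hreflang="en" href="http://www.website.com/en/page">
        <link rel="alternate" hreflang="fr" href="http://www.website.com/fr/page">

        <!-- on http://www.website.com/fr/page (the French page) -->
        <link rel="alternate" hreflang="fr" href="http://www.website.com/fr/page">
        <link rel="alternate" hreflang="en" href="http://www.website.com/en/page">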

    Read the article

  • I am the Webmaster now. Where do I start? [closed]

    - by John C
    I just changed jobs and will soon be in charge of a custom-built ASP.NET CMS and website for a fairly large corporation with global offices. I have IT and developer FTE resources available to me, but I am trying to build a list of branding, project, and functionality points to review. What guides or lists can/should I use to evaluate this website before I begin adding features, creating new projects, or even redesigning and redeveloping the site? (I have been a webmaster/designer/developer for small WordPress/Drupal sites for 10 years, and an unofficial webmaster (director/content manager) for a large site for 3 years - no direct control over SharePoint administration, IIS, or hosting ... but everything else I did: analytics, email, advertising, social, SEO, etc.) Thank you!

    Read the article

  • Password protect an alias virtual directory

    - by Jason
    I have a main domain being hosted through cPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have:

    - http://example.com/ pointing to the main hosted files
    - http://example.com/mydir pointing to the subdomain files

    This is achieved by an httpd.conf include from the main domain section that sets an alias:

        Alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped and /mydir goes instead to the main host files. That is kind of surprising to me - I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported for the .htaccess file in the Apache error log, so I am assuming the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
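
    For reference, the same protection can be expressed entirely in the main server config, next to the Alias, rather than in a .htaccess that can be skipped (a sketch; the paths are the placeholders from the question, and the access-control lines differ slightly between Apache 2.2 and 2.4):

        Alias /mydir /path/to/subdomain/files/

        <Directory "/path/to/subdomain/files/">
            AuthType Basic
            AuthName "Secure Access"
            AuthUserFile /path/to/valid/readable/.htpasswd
            Require valid-user
        </Directory>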

    Read the article

  • How to generate a Visa Checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck on generating the token. Here are the token requirements:

    Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where:

    - UNIX_UTC_Timestamp is a UNIX epoch timestamp
    - SHA256_hash is an SHA-256 hash of the following unseparated items:
      - Your shared secret
      - The timestamp from the transaction; exactly the same as UNIX_UTC_Timestamp
      - The resource path (API name)
      - This HTTPS request's query string

    Note: The query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted, but the name and equal sign must be present. The initial question mark (?) is not included.

    Note: All parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters) with parameters separated from each other by an ampersand (&).

    Note: The query string must be URL encoded (excepting the following characters, per RFC 3986: hyphen, period, underscore, and tilde).

    You can find it on Google by searching for "visa checkout developer updating 1 px image".
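
    Reading those requirements literally, a token built in PHP might look something like the sketch below; the shared secret, resource path, and query string are placeholders, and the real query string still has to be sorted and URL-encoded exactly as the rules above describe:

        <?php
        // sketch: x:UNIX_UTC_Timestamp:SHA256_hash, where the hash covers
        // sharedSecret + timestamp + resourcePath + queryString, unseparated
        $sharedSecret = 'YOUR_SHARED_SECRET';                  // placeholder
        $resourcePath = 'payment/data/visaCheckoutData';       // placeholder API name
        $queryString  = 'apikey=YOUR_API_KEY&dataLevel=FULL';  // placeholder, pre-sorted and encoded

        $timestamp = time(); // UNIX UTC timestamp
        $hash      = hash('sha256', $sharedSecret . $timestamp . $resourcePath . $queryString);

        $token = 'x:' . $timestamp . ':' . $hash;
        echo $token; // e.g. "x:1402425600:9f86d0..." - sent wherever the API expects the token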

    Read the article

  • Centring an HTML element relative to its parent when its width is greater than its parent's. [closed]

    - by casr
    I mocked up my intended outcome. So the blue element is the main content of the website, and the yellow element represents something like a diagram or an image that has a greater width than the blue element. Ideally, I would like a purely CSS solution that is able to deal with various sizes of images. I have tried various things but have failed so far. I hope you can help! Here's some example markup to set you on your way:

        <!DOCTYPE HTML>
        <html>
        <head>
          <title>Example</title>
          <style>
            #el1 {
              display: block;
              margin: 0 auto;
              width: 30em;
              background-color: #8cabde;
            }
            #el2 {
              /* works when the width is less than the parent */
              display: block;
              margin: 0 auto;
            }
          </style>
        </head>
        <body>
          <article id=el1>
            <p>Some content above.</p>
            <img id=el2 src=http://i.imgur.com/JFfGG.gif title=spaceball width=600 height=400>
            <p>Some content below.</p>
          </article>
        </body>
        </html>

    Read the article

  • How to do a 3-tier architecture using PHP [closed]

    - by Ric
    I have a requirement from a client for my PHP web application to be 3-tier. For example, I would have a web server running Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that hosts the business objects but sits behind the firewall. Those objects then connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services if I build the middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the first-tier web server in the DMZ. The second reason is scalability: to have multiple servers on different tiers that can handle the requests. The last reason is deployment: it is easier if I can take one set of servers offline for testing before putting them back into production. Is there an open-source project that shows how to do this? The only example I can find is the web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.
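
    The REST-based approach mentioned above is less complex than it sounds; on the front tier it boils down to something like the sketch below (the middle-tier host name, endpoint, and API-key header are hypothetical, the call should go over TLS, and the middle tier should only be reachable from the web server's network):

        <?php
        // front tier (DMZ): no database credentials live here - it only knows
        // how to call the middle tier's HTTP API, which talks to the SQL cluster.
        function getCustomer($id)
        {
            $ch = curl_init('https://middle-tier.internal/api/customers/' . (int) $id);
            curl_setopt_array($ch, array(
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_HTTPHEADER     => array('X-Api-Key: ' . getenv('MIDDLE_TIER_KEY')),
                CURLOPT_TIMEOUT        => 5,
            ));
            $body = curl_exec($ch);
            curl_close($ch);
            return $body === false ? null : json_decode($body, true);
        }

        $customer = getCustomer(42); // rendered by the UI, never queried directly from SQL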

    Read the article

  • ecommerce item deleted by user, 301 redirect to HOME PAGE or 404 not found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing site URLs (in that specific case due to a redesign/refactoring), I get a bit confused, and I hope someone could clarify this specific example. Let's say I have an ecommerce site, and let's also say the final user inserted some interesting items into the site, so the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of those items attracted many external links from other sites, because people found them very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them. What should the ecommerce site do now - just show a 404 page for those items? But, I'm confused: wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. In order to make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted at point 4), maybe changing their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it obviously creates them as new items with new URLs, http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it's not deceptive, because the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from broken external links, why not just always use 301? In my understanding Google dislikes duplicate content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicate content by Google? Google itself suggests 301-redirecting index.html to the document root, so if they considered 301 to be duplicate content, wouldn't that be duplicate content too? Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around more on that website if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website does not even exist anymore and might not even try to go to its HOME PAGE.
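
    To make the two options concrete, here is roughly what they look like as Apache mod_rewrite rules (a sketch; item.php stands in for whatever script serves the items, and the IDs are the ones from the question):

        RewriteEngine On

        # deleted item: tell crawlers it is gone for good (410)
        RewriteCond %{QUERY_STRING} ^id=20$
        RewriteRule ^item\.php$ - [G,L]

        # recreated item: point the old id at its replacement with a 301
        RewriteCond %{QUERY_STRING} ^id=30$
        RewriteRule ^item\.php$ /item.php?id=100 [R=301,L]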

    Read the article
