Search Results

Search found 9724 results on 389 pages for 'pro zeck'.


  • Looking for someone to point me in the right direction. I want to learn how to use hosted servers

    - by Leisure
    TL;DR: I want a Java program to run on a server, I want the server to forward a particular port from its external to its internal IP, and I want to store a few files on the server. Guides please.

    I made a hack-job Java program that acts as a server for my Android application. It stores data in text files and HTML files, uploads them via FTP to my webhost, and manages socket connections (using port forwarding) with any phones connected. Right now I'm running it in NetBeans on my home computer, and I know it will probably slow down or crash once about 50 phones are connected at once. Is there any way I can run this program on a server with high bandwidth? Can someone please point me to a guide for that? I'm a noob and don't know where to start looking. I seriously don't know anything about renting or using servers - I need a good guide and recommendations.

    My requirements for the server:

    - Can handle about 2k socket connections at once
    - Can run my Java code and store my txt files
    - Can give me a port and an IP address so TCP/IP clients can connect

    My budget is $50 CAD per month. Please someone set my ship sailing in the right direction; I really don't know where to look for resources.
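
    For context, once a Linux VPS is rented, a common way to keep a Java program like this running in the background is a service unit. A minimal sketch, assuming systemd is available; the unit name, user, paths, and jar name below are placeholders:

        # /etc/systemd/system/android-backend.service  (hypothetical name and paths)
        [Unit]
        Description=Java backend for the Android app
        After=network.target

        [Service]
        User=appuser
        WorkingDirectory=/home/appuser/server
        ExecStart=/usr/bin/java -jar /home/appuser/server/backend.jar
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

    On a VPS there is no home router in front of the machine, so instead of port forwarding you simply open the chosen port in the server's firewall and give clients the VPS's public IP.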


  • Does Google sometimes prevent new white hat sites from ranking at all in some verticals?

    - by JVerstry
    Assume someone wants to implement a new viagra or acai berry e-commerce website. There is a lot of competition, and this site does not really bring anything new other than a new online counter to buy products at a nice price. Assume this site does not use any black hat techniques and stays within Google's quality guidelines, and assume it has no (or few) backlinks (from non-authoritative websites). Assume this website's pages are indexed properly in Webmaster Tools, that no penalties are reported, that no site improvements are suggested, that Google crawls the site daily as reported in GWT, and that there are no robots.txt configuration issues. Does Google sometimes decide not to rank this site for any user query (for weeks) because of a lack of original content? The reason I am asking is that I am trying to understand the possible cause of a similar situation I am observing with two sites. If so, what is the way out to start ranking for these sites? If not, does it mean the cause is elsewhere for sure? Any confirmed info to get out of the maze is welcome.


  • Google Analytics showing more unique visitors than there are pages on an intranet site

    - by DDEX
    I take care of a company intranet and measure the traffic with GA. I am absolutely sure that there are no more than 5000 URLs in our company, and it is impossible to reach the intranet from outside the company network. Yet when I check the number of Unique Visitors (UV) in the last year, GA says there were 36,500 of them. How is that possible? I thought each unique visitor should only be counted once in the given time period. Could anybody explain how this actually works? Can it be that the tracking cookies expire after some time and visitors are counted more than once?


  • Dreamweaver CS5 Test server works but cannot connect to host server through files window

    - by Toni
    I've been managing this site for a long time and update coupons on it approximately every 60 days. For some reason, I'm now having problems: I opened DW CS5 today and made the changes necessary to update the coupons. I was able to connect to the host server with no problem, but most of my coupon images were not showing up. DW tells me I have 70 broken links, which can't be the case because I've reviewed them. Some links work and are the same as the broken links other than the file name.

    Unable to figure it out, I thought maybe restarting my Mac would help. However, upon logging back into DW, I am now unable to connect to the host server. I get an FTP error notice that the file doesn't exist or there is a permissions problem. The funny thing is, I can connect successfully if I test the connection through the Site Management window. I have connected to my host server through FileZilla and can see all the files there; unfortunately, I still can't get the web pages to display the coupons.

    Has anyone else had this issue and if so, what is the solution? I feel like this is probably a simple fix, but I cannot for the life of me determine what it is! If anyone knows a solution, I'd really appreciate the help! -Toni


  • Valid robots.txt? [closed]

    - by psot
    I am waiting for Google to crawl my site and display the results in search. Is my robots.txt alright, and will it let Google, Bing, etc. crawl my site? Thanks!

        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /wp-admin/
        Disallow: /wp-includes/
        Disallow: /wp-content/
        Disallow: /build/
        Disallow: /css/
        Disallow: /trackback/
        Disallow: /comments
        Disallow: /assets/graphics/
        Disallow: /assets/visual/
        Disallow: /category/*/*
        Disallow: */trackback
        Disallow: */feed
        Disallow: */comments
        Disallow: /*?*
        Disallow: /*?

        User-agent: Slurp
        Disallow: /

        User-agent: Baiduspider
        Disallow: /

        User-agent: ia_archiver
        Disallow: /

        User-agent: duggmirror
        Disallow: /

        User-agent: Yandex
        Disallow: /

        Sitemap: http://example.com/sitemap.xml.gz


  • How can I choose between Linux and Windows hosting?

    - by Mohamad
    I am a relative beginner when it comes to choosing web servers and hosting plans. I'm about to sign up for a hosting plan with GoDaddy. My main requirements are ColdFusion and MySQL. The plans on offer include Linux- and Windows-based plans. Which one should I choose, and why? I don't have many requirements other than what I mentioned above. I've never used Linux before, but I doubt I'll ever need to do anything beyond tinkering with my account. What are the main advantages of one over the other?


  • Extremely large spike in traffic on the 1st - 4th of every month from mobile browsers

    - by wsanville
    I've noticed that on the 1st - 4th of the recent months (since January), several sites I maintain are getting thousands of requests from mobile browsers, whereas throughout the rest of the month, the numbers are in the single or double digits. Has anybody else noticed this sort of behavior? I don't have the exact user agents logged, but my analysis software (WebTrends) reports the traffic as mostly iPhone/iPad/iPod, Android, and Blackberry.


  • <meta name="robots" content="noindex"> in "Fetch as Google"

    - by Rodrigo Azevedo
    I don't know why, but when I execute "Fetch as Google" it returns:

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        Set-Cookie: ASPSESSIONIDQACRADAQ=ECAINNFBMGNDEPAEBKBLOBOP; path=/
        X-Powered-By: ASP.NET
        Date: Wed, 26 Jun 2013 15:18:29 GMT
        Content-Length: 153

        <meta name="robots" content="noindex">

    The noindex tag doesn't exist in my page source. Does anybody know what could be wrong?
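
    One way to check whether the noindex tag is only served to crawlers (a common symptom of conditional or injected server-side code) is to request the page with and without a Googlebot user agent and compare. A minimal Python sketch, assuming the requests library is installed; the URL is a placeholder:

        import requests

        URL = "http://www.example.com/"  # placeholder for the affected page

        USER_AGENTS = {
            "browser": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
            "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
        }

        # Fetch the page as a normal browser and as Googlebot to see whether the
        # noindex tag only shows up for crawlers.
        for label, ua in USER_AGENTS.items():
            resp = requests.get(URL, headers={"User-Agent": ua})
            found = 'content="noindex"' in resp.text.lower()
            print(label, resp.status_code, "noindex present:", found)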


  • Server overhead caused by bots?

    - by giuseppe
    I have one customer website causing overhead (http://www.modacalcio.it/en/by-kind/football-boots.html). With htop open, I tried navigating the website, and most of the load comes from the ajax links placed on the left side of the site. The website is hosted on a VPS with 3 processors and 2GB RAM, with enough disk space. The real problem is that this website is new and not visited much. From the http-status module I can see that the overhead is caused by bots (Google bots, Bing bots, hrefs checker and so on). So I thought it's probably due to those spiders trying to crawl all those links at once - could this be causing the overhead? I have also put rel="nofollow" on those links, but this doesn't keep the bots away. Is there any way, through code or Plesk, to hide those links from those bots?
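
    rel="nofollow" does not stop crawling, so one sketch of an alternative is to disallow the ajax/filter URLs in robots.txt and slow down the bots that honour a crawl delay. The URL patterns below are purely hypothetical and would need to match the site's actual filter links:

        User-agent: *
        # Hypothetical patterns matching the ajax filter links
        Disallow: /*?dir=
        Disallow: /*?mode=
        Disallow: /*?price=

        User-agent: Bingbot
        Crawl-delay: 10

    Googlebot ignores Crawl-delay (its crawl rate is set in Webmaster Tools instead), but it does respect the Disallow patterns.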


  • Dotted subdomain name or new domain?

    - by Catalin Ilinca
    I have a company website hosted at www.BRAND.com (where BRAND is a generic name). The company wants to develop a "micro website" for one of their campaigns, named "Inspired By BRAND". I have two options:

    - inspired.by.BRAND.com - which I personally don't like too much. I don't know why, but I don't recall any web address similar to this subdomain.subdomain.domain.com pattern.
    - inspired.BRAND.com - which I think is best suited for it: fewer dots, and similar to "more friendly" subdomain.domain.com addresses.

    Any hints, guidelines, or thoughts are well appreciated. Thanks in advance.


  • WordPress injection?

    - by saul
    I don't really know how to express my problem, so bear with me; this is a bit hard to explain. I have a WordPress installation, the latest version, and often (once a day) my site redirects users to the /wp-admin/install.php file, asking for my login credentials of course. I have tried reinstalling WordPress and still have not been able to figure out what they are doing. That happens regularly; also, a few hours later, I am able to see my site normally. Hope this makes sense. I suspect there must be some database exploit that allows them to inject a redirect of some sort into my admin area, thus redirecting the user to said file (install.php). But that's just me - I really have no clue what else they could be doing. I looked at the source code of several PHP files and noticed some of them don't include a closing ?> tag. Could that be an issue? My hosting company is iPage; I've contacted them and they say there's nothing wrong with my files. Does anyone have a clue? I can paste the code of any source file.


  • Safety of purchasing country-specific domains from registrars?

    - by Marc Bollinger
    None of the previous questions cover the more obscure country registries (beyond .co.uk, .it, et al.), or else I'd have found an answer myself. Is it safe to buy a domain under a foreign country's TLD from a registrar? http://www.iana.org/domains/root/db/ I'm just looking for information for a vanity domain, so obviously I'm alright without an answer, but it's an unasked (or at least unanswered) question, and I'm not exactly in a hurry to give my credit card information across country lines, sight unseen.


  • My parked domain was de-indexed by Google - what to do?

    - by Programmer Joe
    I have a question about how to handle my domain. In a nutshell, I bought a domain last year from Go Daddy. My intention was to launch a real site with this domain, and I have spent the last year working on it. For the last year, I have been using the default Go Daddy parked page for an up-and-coming site. When I first bought this domain, it was indexed by Google - you could search for "alphabanter" and my site would show up on Google's results page. Several months ago, Google seems to have de-indexed my domain, and if you search for "alphabanter," my domain no longer shows up in the search results. However, if you search for "www.alphabanter.com", that's the only way it shows up in Google's results. Anyway, I am about to launch my site for real, but I don't quite know if I can get it back into Google's index. I have a few questions:

    1) Was my domain permanently penalized by Google and removed from their index just because it was a parked domain? I don't believe I have done anything abusive other than using the Go Daddy default page for almost a year because my site was not ready.
    2) Should I just launch my site, put a few backlinks to it, and hope that Google indexes it again?
    3) Should I submit my site to Google at "Google submit your content"?

    I assume getting Google to reconsider my site is the last option if none of the above works.


  • How to fix bad SEO after being hacked

    - by mkprogramming
    About a year ago my WordPress website was hacked, and some company decided to go nuts and actually do some "SEO" on the various links it created. Some of the pages would show up on Google as "payday cash advance" instead of "portfolio". The issue has been resolved, but now, as I've been doing good SEO, I've noticed (when checking backlinks) that there are TONS of links still on the internet (mostly on broken sites now) that point to my website with titles like "get a loan today" and so on. Is there a way to remove these links? Can I tell Google to ignore them? Help!
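
    For backlinks that can't be removed at the source, Google's Disavow Links tool accepts a plain text file listing URLs or whole domains to ignore. A minimal sketch, with hypothetical domains:

        # Spam links created while the site was hacked (domains are examples)
        domain:spammy-payday-example.com
        domain:cheap-loans-example.net
        http://some-broken-site-example.org/get-a-loan-today.html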


  • https:// search results appearing on Google for purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I decided to do a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix - I will try to contact them to get them to correct this. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there), replacing <my site> with my domain name:

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    My big question is this though - I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.


  • Using Moniker.com's nameservers

    - by user7519
    I have a VPS with A2Hosting for which I need to upgrade the OS. However, they've changed their VPS packages and forced me to order a new one. I went with an "unmanaged" package and have only just realised that they do not provide any DNS service at all, not even nameservers. Support tells me that "since your domain is not hosted with us, but with Moniker, you would not be able to use these nameservers. Your domain registrar should have a set of default nameservers that you can use, then create an A record to point to" my IP address. Moniker does provide for using their nameservers, but I'm confused about which "pre-defined zone configuration" to use. The options are:

    - Domain Parking
    - Domain Parking with Email Forwarding
    - URL and Email Forwarding
    - URL Forwarding
    - URL Forwarding & CoolHandle Email

    I just want to use their nameservers and then create A & MX records pointing to the VPS. What do they mean by forwarding? I get the feeling it's a service that I don't want. Or is it that I need to have a pre-defined zone only temporarily, and THEN set the A & MX records? Which of these should I choose?
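
    Whichever zone template is picked as a starting point, the records to add afterwards would look roughly like this (zone-file style; the IP address and mail host are placeholders):

        ; Point web and mail traffic at the A2Hosting VPS
        example.com.         IN  A    203.0.113.10
        www.example.com.     IN  A    203.0.113.10
        example.com.         IN  MX   10 mail.example.com.
        mail.example.com.    IN  A    203.0.113.10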


  • CSS specificity: Why isn't the specificity weight of 10 or more class selectors greater than that of 1 id selector? [migrated]

    - by ajc
    While going through the CSS specificity concept, I understood that it is calculated in 4 parts:

    1) inline (1000)
    2) id (100)
    3) class (10)
    4) html elements (1)

    The rule with the highest specificity will be applied to the corresponding element. I tried the following example and created more than 10 classes:

        <div class="a1">
            ....
            <div class="a13" id="id1">
                TEXT COLOR
            </div>
            ...
        </div>

    and the CSS as:

        .a1 .a2 .a3 .a4 .a5 .a6 .a7 .a8 .a9 .a10 .a11 .a12 .a13 {
            color: red;
        }

        #id1 {
            color: blue;
        }

    Now, even though in this case there are 13 classes - so by that calculation the weight should be 130, which is greater than the id's 100 - the id selector still wins. Result - JSFiddle CSS specificity


  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes, these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
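
    Until a combined export exists, the manual copy-and-paste step can at least be scripted. A minimal Python sketch that joins the 404 export above with a second, hypothetical CSV of "Page,Linked From" pairs into the preferred layout:

        import csv

        # Hypothetical inputs: the 404 export from Webmaster Tools and a second
        # export mapping each 404 page to the pages that link to it.
        with open("crawl_errors.csv", newline="") as f:
            errors = list(csv.DictReader(f))

        links = {}  # 404 URL -> list of referring URLs
        with open("linked_from.csv", newline="") as f:
            for row in csv.DictReader(f):
                links.setdefault(row["Page"], []).append(row["Linked From"])

        # Write one row per (404 URL, referring URL) pair, as in the desired output.
        with open("errors_with_sources.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["URL", "Response Code", "Linked From", "News Error", "Detected", "Category"])
            for row in errors:
                for source in links.get(row["URL"], [""]):
                    writer.writerow([row["URL"], row["Response Code"], source,
                                     row["News Error"], row["Detected"], row["Category"]])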


  • Silverstripe: How can I disable comments?

    - by SamIAm
    My client's site is built in SilverStripe. There is a news page, and it allows people to leave comments; unfortunately we've got loads of spam. I'm new to this - is there any way we can disable the comment field by default? How do I do it? Alternatively, is there an easy way for me to install spam protection?

    Update - Because this is someone else's code, I just realised that they already have some sort of spam protection, so we are now trying to disable comments instead. I have managed to set "no comments" as the default by changing BlogEntry.php from

        static $defaults = array(
            "ProvideComments" => true,
            'ShowInMenus' => false
        );

    to

        static $defaults = array(
            "ProvideComments" => false, //changed
            'ShowInMenus' => false
        );

    Am I on the right track to disable comments by default? Also, how can I stop the news page from showing the "xxx comments" link? e.g.

        Test
        Posted by Admin on 21 June 2011 | 3 Comments
        Tags: P
        This is a test....
        3 comments | Read the full post


  • What dangers await if I block non-standard, non-major USA search engine bots from my USA-only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and CPU cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist.
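
    For reference, the general shape of such a user-agent blocklist in .htaccess looks something like this (Apache 2.2-style directives; the bot names are illustrative, not a recommendation of what to block):

        # Tag requests from unwanted crawlers, then deny them (names are examples)
        SetEnvIfNoCase User-Agent "ExampleFetcher" block_bot
        SetEnvIfNoCase User-Agent "SomeOtherSpider" block_bot

        Order Allow,Deny
        Allow from all
        Deny from env=block_bot

    The main technical risk is a pattern that is too broad and accidentally matches a real browser or a bot you care about, so it pays to log matches for a while before denying them.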


  • Convenient practice for where to place images?

    - by Baumr
    A lot of developers place all image files inside a central directory, for example:

    - /i/img/
    - /images/
    - /img/

    Isn't it better (e.g. for content architecture, on-page SEO, code maintainability, filename maintainability, etc.) to place them inside the relevant directories in which they are used? For example:

        example.com/logo.jpg
        example.com/about/photo-of-me.jpg
        example.com/contact/map.png
        example.com/products/category1-square.png
        example.com/products/category2-square.png
        example.com/products/category1/product1-thumb.jpg
        example.com/products/category1/product2-thumb.jpg
        example.com/products/category1/product1/product1-large.jpg
        example.com/products/category1/product1/product2-large.jpg
        example.com/products/category1/product1/product3-large.jpg

    What is the best practice here regarding all possible considerations (for static non-CMS websites)? N.B. The names product1-large and product1-thumb are just examples in this context to illustrate what kind of images they are. It is advised to use descriptive filenames for SEO benefit.


  • Uploading an unfinished website

    - by Daniel
    I have a pretty basic question. I developed a neat little website which I'm ready to upload, but it still needs a bit of work. The designer needs the HTML to do his work, so the website needs to be uploaded. Besides that, I have to correct a couple of details, do the friendly URLs, etc. What's the best way to set up the webpage on the definitive hosting with the definitive domain, blocking it to any unknown users and without affecting SEO and those kinds of things? If I were to just upload it, the non-definitive website might be crawled by a search engine bot. Thanks!
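
    One common way to block unknown visitors (bots included) without any SEO side effects is HTTP basic authentication on the unfinished site. A minimal .htaccess sketch, assuming Apache; the .htpasswd path is a placeholder:

        # Password-protect the whole unfinished site
        AuthType Basic
        AuthName "Site under construction"
        AuthUserFile /home/example/.htpasswd
        Require valid-user

    Unauthenticated requests get a 401, so nothing gets indexed, and removing these lines at launch opens the site with no residue.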


  • The layout page "~/Views/Shared/_Layout.cshtml" could not be found

    - by Rei Brazilva
    I got this error and I can't figure out what is going on. I am positive the _layout.cshtml resides in the shared folder and, for the sake of trying things out, I moved it to the Home folder and it then told me that Views/Home/_Layout.cshtml couldn't be found there either. So now I'm thinking the problem is in the call to this file for some reason. I'm not going to pretend I know ASP.NET MVC4, so please, when you answer, explain it as you would to someone who is not familiar with the system at all. Believe it or not, this error came from tutorial #1, ha ha. Here's the code to show that I did code it right:

        @{
            ViewBag.Title = "Home Page";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }

    And here is a picture of the location. P.S. I did my research; Google has nothing, and there is another question here, but it was asked in 2008 with MVC3, which is completely different. I am running ASP.NET MVC4 on Azure. Thanks


  • Programmatically removing an exit popup from a page? [closed]

    - by Jose Garcia
    I have a page A which has an exit popup, and I want to display page A on page B, so I used an iframe to embed it. The exit popup from page A is annoying and I don't want it to appear on page B. Assuming I can't edit the code of page A, can I add some code to page B to suppress the exit popup? Please provide me with sample code. I would prefer something that runs on my LAMP shared hosting, and I can use something other than an iframe if need be. Thanks.
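
    Page A's own code can't be edited from page B (and cross-origin scripts can't reach into it), but the iframe sandbox attribute lets page B refuse any new windows the framed page tries to open. A minimal sketch, with a placeholder URL; the exact allow-* tokens depend on what page A needs in order to keep working:

        <!-- Page B: embed page A but withhold the allow-popups permission -->
        <iframe src="http://example.com/page-a.html"
                width="800" height="600"
                sandbox="allow-scripts allow-forms allow-same-origin">
        </iframe>

    This only blocks popup windows; an overlay that page A draws inside its own document can't be removed from the outside.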


  • Best way to track multiple sites with Google Analytics

    - by stevether
    I currently have 63 websites (and counting) that I'm tracking in one Google Analytics account, and I'm starting to realize... this is becoming a bit cumbersome. What's the best way to collect traffic data in bulk? Are there other resources out there that are better suited for this task? Does Google offer a bulk option for this kind of thing? Would it be better to make separate Analytics accounts? I'm just wondering if anyone else has found a better solution than manually setting up all these accounts, setting up the tracking codes, etc., when it comes to large-scale management.
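
    Whatever the account structure, the repetitive part that can be automated is the boilerplate, e.g. generating each site's tracking snippet from a single mapping. A minimal Python sketch; the domains and UA- IDs are placeholders, and the embedded snippet is the standard asynchronous ga.js code of the time:

        # Generate the per-site ga.js snippet from one central mapping, so adding a
        # site is a one-line change instead of hand-editing dozens of templates.
        SITES = {                      # hypothetical property IDs
            "example-one.com": "UA-0000001-1",
            "example-two.com": "UA-0000002-1",
        }

        SNIPPET = """<script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', '%(property_id)s']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
          })();
        </script>"""

        for domain, property_id in SITES.items():
            with open("ga_%s.html" % domain, "w") as f:
                f.write(SNIPPET % {"property_id": property_id})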

