Search Results

Search found 4721 results on 189 pages for 'traffic'.

Page 142/189 | < Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >

  • How do I find information on who links to my sites?

    - by bobdobbs
    I'm trying to figure out if there's a free way to get information on backlinks to my site. I've had Webmaster Tools and Google Analytics set up for years, but I can't find access to data about site backlinks in either toolset. Webmaster Tools, under 'Traffic' > 'Links to your site', gives me the same message for all of my sites: "No data available". I haven't been able to find anything in GA that gives any information on backlinks. I've heard of using "link:" as an operator in Google search, but for each of my sites this returns either zero or very few results in cases when I know I have many backlinks. Most of the links simply aren't shown. My thinking is that Google maintains a graph of who links to my site, so I figured that they might let me see it. But I can't figure out how. I've found this tool on a spammy website: http://www.backlinkwatch.com. It offers more data than Google on my backlinks, and offers more results in exchange for a paid subscription. The data it offers for free looks good, but the results are limited and the site has popups and obnoxious ads. So, in short: how do I get data on who links to me? Is there a free way?

    Read the article

  • unreachable subnet in one direction

    - by Carl Michaud
    I'm unable to route traffic to a subnet in my home. Here's my topology: INET <- Router A <- Router B <- WIFI AP, where:
    Router A - ARRIS TG862G/CT (from Comcast) - WAN DHCP, LAN 10.0.0.99
    Router B - Linksys WRT400N with DD-WRT - WAN 10.0.0.12, LAN 10.0.1.1
    I was able to use DD-WRT to configure a static route for the 10.0.1.x network back to the 10.0.0.x network, but I'm not sure if I can do the reverse on the ARRIS router and was looking for suggestions on how to fix that. I've set things up this way because I am using OpenDNS to manage internet content for my kids, and of course I wasn't able to configure DNS to my liking on Router A either. So at present I'm using Router A on 10.0.0.x to provide a private unfiltered WIFI AP (no SSID broadcast) that I use for Netflix only, and then I use Router B to provide a filtered WIFI AP for the rest of my WIFI devices on 10.0.1.x. However, I would like to be able to connect to my 10.0.1.x devices from the internet via dynamic DNS, but I can't see anything behind Router B this way. Thoughts?
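    For reference, the kind of routing and port forwarding I'm describing would look something like the following on a generic Linux router (a sketch only; the Arris web UI may or may not expose equivalent settings, and the interface names, external port, and the 10.0.1.50 host are made-up placeholders):

      # On Router A: send traffic destined for the inner LAN to Router B's WAN address
      ip route add 10.0.1.0/24 via 10.0.0.12

      # To reach a 10.0.1.x device from the internet through the double NAT,
      # forward a port on each router in turn, e.g. SSH to a host at 10.0.1.50:
      # Router A (WAN interface assumed to be wan0):
      iptables -t nat -A PREROUTING -i wan0 -p tcp --dport 2222 -j DNAT --to 10.0.0.12:2222
      # Router B (DD-WRT, WAN interface assumed to be vlan2):
      iptables -t nat -A PREROUTING -i vlan2 -p tcp --dport 2222 -j DNAT --to 10.0.1.50:22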

    Read the article

  • Oracle Customer Experience (CX) Solutions Make Retailers Merry

    - by Tuula Fai
    Tis the season to be jolly. If you’re a retailer, your level of jolliness depends on sales. So you watch trends like U.S. store traffic increasing 3.5% to 308 million on Black Friday but sales actually falling 1.8% to $11.2 billion. Fortunately, by the end of November, retail sales were up 3.7% over the previous year, thanks to life recovering after Hurricane Sandy. And online sales topped $1 billion for the first time ever! Who are the companies improving their sales online? They are big names like Walgreen’s Drugstore.com, Nordstrom’s HauteLook, and Intuit. More importantly, how are they doing it? They use cutting-edge business practices enabled by Oracle’s CX Cloud Service & Support solutions to:
    - Increase conversion rates and order sizes (Customer Acquisition)
    - Enhance customer satisfaction and loyalty (Customer Retention)
    - Reduce contact center costs and improve agent productivity (Operational Efficiency)
    Acquisition + Retention + Operational Efficiency = Sustainable Growth and Profits. That’s the magic formula for retail customer service success. Don’t take our word for it. Look at the results of these Oracle customers:
    - Walgreen’s Drugstore.com - 30% sales conversion rate on chat sessions, with a 20% increase in shopping cart size
    - Nordstrom’s HauteLook - 40,000+ interactions per month (20% growth over last year) efficiently managed by 40 agents, with no increase in IT costs
    - Intuit - 50% increase in customer satisfaction and 70% decrease in cost per interaction
    Using Oracle’s CX Cloud & Service solutions, these retailers deliver consistent, relevant, and personalized experiences across all touchpoints, including social, mobile, and web. Their ability to connect with customers anytime, anywhere, providing the right answer at the right time, helps them create a defensible advantage in the marketplace. Want to learn more? Please visit http://www.oracle.com/goto/cloudlaunchpad for free resources on delivering exceptional customer service in the Cloud. Also, watch our YouTube channel to learn more about seamless multichannel retail and Winston Furnishings’ exceptional customer experience.

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the server, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution, and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the affected server to get back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Also, availability requirements need to be determined per application and system, as well as the strategies for recovery.
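    A minimal sketch of the kind of off-site availability check described above, run from cron on a machine outside the data center (the URL, alert address, and mail command are placeholders; a real monitoring service adds retries, escalation, and the voicemail/SMS channels mentioned):

      #!/bin/sh
      # Poll the corporate site and alert IT staff if it does not answer with HTTP 200.
      STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://www.example.com/)
      if [ "$STATUS" != "200" ]; then
          echo "Site check failed with HTTP status $STATUS at $(date)" \
            | mail -s "ALERT: corporate site unreachable" itstaff@example.com
      fi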

    Read the article

  • Geekswithblogs.net | Screen Resolutions of our Readers

    - by Jeff Julian
    Yesterday I talked about the browsers we see being used by our readers, driven off of our Google Analytics traffic, and today I want to share with you the screen resolutions we see.  As a web developer most of my life, it is hard to decide how large you should build your application, because typically you have a couple of huge high-resolution monitors on your desk, but your typical end user is thought to have 1024x768.  With HTML5/CSS3 out, it is a little easier to come up with a design that will scale to all resolutions, but it is still nice to know the numbers when it comes to how much real estate I have on my clients' screens. If you look at these numbers for Geekswithblogs.net, we have a lot of high-resolution monitors among the users that visit the site.  After a little more investigation of the numbers, you will notice we do not have as much height available as we do width.  If the primary goal of a site is to deliver as much data in the viewable area without scrolling, this becomes a challenge when most of our pages have long pieces of formatted data.  So our challenge is to build skins that use more of the sides of the content toward the top on larger-resolution browsers and then entice the reader to scroll to get the goodies embedded in the content of the posts.  It is going to be an interesting battle for sure, but we really need more skin offerings on the site. Technorati Tags: Resolution Statistics, Geekswithblogs.net

    Read the article

  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial. It takes a shortcut when checking for differences in tracked files. If the file's size and modification time are unchanged, it assumes its contents are unchanged:

      $ hg init .
      $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
      $ ls -ogE nicstat.c
      -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
      $ hg add nicstat.c
      $ hg commit -m "added nicstat.c"
      $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
      $ ls -ogE nicstat.c
      -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
      $ hg diff
      $ hg commit
      nothing changed
      $ touch nicstat.c
      $ hg diff
      diff -r b49cf59d431d nicstat.c
      --- a/nicstat.c   Fri Aug 24 11:21:27 2012 -0700
      +++ b/nicstat.c   Fri Aug 24 11:22:50 2012 -0700
      @@ -2,7 +2,7 @@
        * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
        * "netstat -i" only gives a packet count, this program gives Kbytes.
        *
      - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
      + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
        *
      [...]

    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut, because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files. Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution - I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t <timestamp>" will do that easily.
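    For what it's worth, the fix amounts to something like this (a sketch; the timestamp is taken from the example revision above, and the commit message is made up):

      # Stamp the file with the SCCS delta's time so Mercurial sees a changed mtime,
      # then commit as usual:
      touch -t 200507020000 nicstat.c
      hg commit -m "import 2005-07-02 revision of nicstat.c"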

    Read the article

  • Tracking down memory issues affecting a website

    - by gaoshan88
    I've got a website (WordPress based) that became unresponsive. I SSH'd into the server and saw that we were out of memory. Errors in my Apache log files indicated the same (things failing to be allocated due to lack of memory). Restarting the server fixes it. So I looked in access.log and error.log around the time of the incident, but I see nothing strange. No extra traffic, no unusual requests. In fact the only request around the time of the problem was one from Googlebot for an RSS feed... at that point I start to see 500 response codes in the logs until the machine was rebooted. I looked in message.log hoping to see something, but there is nothing at all for that entire day (which is odd, as there are entries for every other day). The site has a large amount of memory allocated to it and normally runs using about 30% of what is available. My question... how would you go about trying to track this down at this point? What are some other log files I could check or strategies I could take?
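    For reference, a few generic places one could check on a box like this (a sketch; log paths assume a typical Linux distribution and may differ on this server, and the slow-query log only exists if it has been enabled):

      dmesg | grep -iE 'oom|killed process'      # did the kernel OOM killer fire?
      grep -i 'oom' /var/log/kern.log            # same question, in the persistent log
      sar -r                                     # historical memory usage, if sysstat is installed
      ps aux --sort=-%mem | head -n 15           # current largest memory consumers
      tail -n 200 /var/log/mysql/mysql-slow.log  # slow queries often correlate with memory spikes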

    Read the article

  • UFW blocking random packets on 443

    - by s2jcpete
    All, I have UFW set up to allow traffic on port 443. It works as expected, though I have a large number of UFW BLOCK log entries.

      To                         Action      From
      --                         ------      ----
      80                         ALLOW       Anywhere
      443                        ALLOW       Anywhere
      22222                      ALLOW       Anywhere
      80                         ALLOW       Anywhere (v6)
      443                        ALLOW       Anywhere (v6)
      22222                      ALLOW       Anywhere (v6)

    However, in my syslog file I see this:

      [UFW BLOCK] IN=eth0 OUT= MAC=XXX SRC=<foreignip> DST=<serverip> LEN=40 TOS=0x00 PREC=0x00 TTL=116 ID=22025 DF PROTO=TCP SPT=49622 DPT=443 WINDOW=0 RES=0x00 ACK RST URGP=0

    About 30 or so seconds later, pound (which I'm using for SSL decryption and port redirection) throws a connection timed out message. I'm assuming this is because UFW is blocking the packet. I'm at a loss as to an explanation. Could the packet be malformed or something, or is this normal?
    Edit - I have since changed /etc/default/ufw and set IPV6=no, so the v6 rules are no longer in the mix. The server is still showing the block / connection timed out behavior, though. The new ufw status output is:

      Status: active
      Logging: on (low)
      Default: deny (incoming), allow (outgoing)
      New profiles: skip

      To                         Action      From
      --                         ------      ----
      80                         ALLOW IN    Anywhere
      443                        ALLOW IN    Anywhere
      22222                      ALLOW IN    Anywhere
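    For reference, one way to see what the blocked packets actually are is to capture them while it happens (a sketch; the interface name is taken from the log line above). If they turn out to be bare RST/ACKs arriving after a connection has already been torn down, the log entries may be harmless leftovers rather than the cause of pound's timeouts:

      # Show only RST-flagged packets on port 443 so they can be matched against the UFW log:
      tcpdump -ni eth0 'tcp port 443 and tcp[tcpflags] & tcp-rst != 0'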

    Read the article

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find any appropriate answer to my specific question, I wanted to ask you. I've read a lot of things about the 301 redirect for moving pages and so on. A customer of mine registered a new domain last year for better search results (he included his main keyword in the domain; before, he had only a domain with his business name, which said nothing about what he does). I told him that he should do a 301 redirect so he doesn't lose his position in Google, and to redirect all new customers coming from the old domain to the new domain. After about one year in which his site had a good amount of traffic, the Google search results for his keywords started getting worse. Since he didn't maintain his website (no new content, bad content on all pages and so on), I assumed this was the problem. He gave his website to another company which also makes websites. They told him that this 301 redirection is very bad for his website. They removed it, and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I put there before). He also removed the canonical tag which I placed there to ensure no duplicate content. What I am now afraid of is that without this redirect, Google will now find duplicate content and therefore kick him out of the index, which would be a nightmare, since most of his customers come through his website. I need verification of the fact that the 301 isn't bad but is in fact the correct way of working with two domains, if possible with good sources I can point out to him, since he doesn't want to hear anything about this. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!
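    For reference, a domain-wide 301 of this kind is usually only a few lines of configuration (a sketch assuming an Apache host with mod_rewrite enabled; both domain names are placeholders), and it can be verified from the command line afterwards:

      # Re-create the redirect in the old domain's document root:
      cat > /var/www/old-domain/.htaccess <<'EOF'
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
      RewriteRule ^(.*)$ http://www.new-keyword-domain.example/$1 [R=301,L]
      EOF

      # Confirm that the old URLs answer with 301 and point at the new domain:
      curl -sI http://old-domain.example/some-page | grep -iE '^(HTTP|Location)'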

    Read the article

  • Set UFW before.rules without restart of server

    - by enedene
    I use UFW on my Ubuntu server. Unfortunately UFW has no built-in rules for port forwarding to another machine. What you need to do is edit /etc/ufw/before.rules and put routing commands there, for example:

      # nat Table rules
      *nat
      :POSTROUTING ACCEPT [0:0]

      # Forward traffic from eth0 through eth1.
      -A POSTROUTING -s 192.168.0.0/24 -o eth1 -j MASQUERADE
      -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.0.200:80
      -A PREROUTING -i eth1 -p udp --dport 10090 -j DNAT --to 192.168.0.202:22
      -A PREROUTING -i eth1 -p tcp --dport 10090 -j DNAT --to 192.168.0.202:22
      -A PREROUTING -i eth1 -p tcp --dport 443 -j DNAT --to 192.168.0.200:443
      -A PREROUTING -i eth1 -p udp --dport 443 -j DNAT --to 192.168.0.200:443
      -A PREROUTING -i eth1 -p tcp --dport 57626 -j DNAT --to 192.168.0.2:57626
      -A PREROUTING -i eth1 -p udp --dport 57626 -j DNAT --to 192.168.0.2:57626
      -A PREROUTING -i eth1 -p tcp --dport 3306 -j DNAT --to 192.168.0.200:3306
      -A PREROUTING -i eth1 -p udp --dport 3306 -j DNAT --to 192.168.0.200:3306
      COMMIT

    My problem is that I can't find a way to apply new forwarding rules without restarting the server, which I hate to do very much. So please help me, is there a way?
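    A sketch of the kind of no-reboot reload being asked about (assuming a reasonably current ufw; run as root, and note that reloading the firewall still briefly interrupts existing connections):

      # Re-read /etc/ufw/before.rules (and the other ufw rule files) in place:
      ufw reload

      # On very old ufw versions without a reload command, cycling it has the same effect:
      ufw disable && ufw enable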

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach, due to the nature of deny all, allow only a few. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. The need to not depend on whitelisting alone to keep the site away from harm is fully understood. Other means to protect the site still need to be in place. I intend to have a honeypot, checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to have accessed by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords, since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site www.inkywink.com. Two questions:
    1. What is my best option to leverage those names so that they appear near the top of searches, so that I can get traffic to my money site? Do I just 301-redirect them to inkywink.com, or should I create small original content on each with links to my main site?
    2. If I just have them redirected to inkywink.com, am I able to use keywords in meta tags and headers for each site separately, or do they all automatically obtain the same headers and tags as the site to which they're redirected?
    Thanks to anyone who can help, as I'm a real newbie to all this.

    Read the article

  • Could crosslinking using very general anchor texts be a reason for a drop in rankings?

    - by webmasters
    I have crosslinked 20 sites, and I thought I had been penalized for this; I asked this question, and some experienced members told me that the crosslinking may not necessarily be the reason. The sites are on the same host, different C-class IPs, and every site is linked to each other. Each site targets long-tail keywords.
    Site 1 - BMW Used Cars - and my area
    Site 2 - WW Used Cars - and my area
    And so on... When I crosslinked them (in the sidebar), I did it for the users; instead of repeating the terms "used cars" and my location over and over (since my users are targeted), I just crosslinked them using the brand: BMW, WW. Targeting locally, my niches are not overly competitive, so I did not need too many external links to rank in various positions on the 1st page. I'm thinking that when I chose to link using only the brand, Google might have thought I wanted to actually rank for BMW and WW, hence the drop in my targeted local traffic. Could this be? I have now no-followed the links and I am noticing a slight recovery, but if it's not an interlinking penalty, it would be a shame not to benefit from my links.

    Read the article

  • Hosting and domain registrations for multiple clients

    - by letseatfood
    I am finally getting regular work designing, developing, and deploying websites for small businesses and individuals. So far the websites utilize single-user content management systems, so the websites create, as far as I know, minimal load on the shared servers. I have always required that each of my clients purchase annual shared hosting at Dreamhost. For domain registration, I ask that they register with Dreamhost, but some already have a domain registered elsewhere, and this is fine with me. I do this so the billing issues are the client's responsibility, not mine. My question is: since I can register unlimited domains and connect them to my one shared hosting account at Dreamhost, should I not be requiring clients to individually pay for shared hosting and a domain? Should I actually be paying for one hosting account and then hosting all of my clients' websites on that account? As I said before, I currently have each client buy their own hosting because I feel that, for example, if there is high traffic to their site, there would be less of a chance of the site going down than if their site was hosted with many others on one account. I am famous for being long-winded; please let me know if I can clarify at all. Thanks!

    Read the article

  • Which is the best image hosting site for hosting images for website? [closed]

    - by rahul dagli
    I currently have a website and blog and use a limited web hosting plan. When I upload images to my hosting server, they consume a lot of bandwidth and space. So I was thinking of hosting the images on some other image hosting site and direct-linking them to my site. I found a few sites like ImageShack, Photobucket, TinyPic, and Imgur. However, I see all have certain restrictions. The features I am looking for are as follows:
    1. At least 10 GB space
    2. At least 500 GB bandwidth (because I have very high traffic)
    3. Very high speed even during heavy load, like 1000 visitors accessing every hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy control
    6. Must never delete an image if inactive
    7. Create and manage albums
    8. A company that will stay in business for at least the next 10 years
    9. Free of cost
    10. Hotlinking/direct-linking of images

    Read the article

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want to have a status page which is hosted outside of the rest of our infrastructure, so in case there are issues in our data center we can post updates for our users and our users will be able to access them. We registered a blog on Blogger and set it up with xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity, so we're unable to have a redirect using nginx or Apache. We'd like to do this with a short-TTL CNAME DNS entry. Ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I set up the CNAME and go to that URL, I get a Google broken-robot 404 page. I figure I must need to let Google know it should associate traffic for www.xyz.com and app.xyz.com with the blog served up by status.xyz.com, but I can't see anywhere to do this in Blogger. Does anyone know if this is possible?

    Read the article

  • Can I use a 302 redirect to serve up static content from an URL with escaped_fragment?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content. We are following this documentation. Has anyone ever tried to write a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it's not very clear. The quote below is from Google's documentation; this is what I find. I'm not sure if they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or if this is to redirect the hashbang URL to static content? Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
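    As a concrete sketch of the rule in question (assuming Apache with mod_rewrite; the /snapshot/ path is the example path from above, and whether Google treats the result as equivalent content is exactly the open question):

      cat > .htaccess <<'EOF'
      RewriteEngine On
      # If the request carries ?_escaped_fragment_=..., send the crawler to the
      # pre-rendered snapshot with a temporary (302) redirect:
      RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
      RewriteRule ^(.*)$ /snapshot/%1/? [R=302,L]
      EOF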

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience with designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs, I have always had a DBA or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips or know of any good educational materials for a developer who has been sort of shoved into a DBA/database designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here but it's hard to glean details from slides. Wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability which had some good information, though some of it was over my head. tl;dr I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".
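    As one small, concrete example of the kind of habit such resources recommend, checking execution plans and indexing as you go pays off before the data gets huge. A sketch using the mysql client (the table, column, and database names are made up for illustration):

      # Does this common query use an index, or scan the whole table?
      mysql -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G" shopdb

      # If the plan shows a full scan (type: ALL), add an index on the filtered column:
      mysql -e "ALTER TABLE orders ADD INDEX idx_orders_customer (customer_id)" shopdb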

    Read the article

  • With Google Analytics, is it possible to check a specific page in Multi-Channel conversion attribution?

    - by Emmett R.
    I'm somewhat new to Google Analytics, and I'm trying to track all conversions that are assisted by a particular landing page, because I don't expect an instant purchase. I have e-commerce tracking set up. Due to the constraints of the associated ad campaign, I can't include the source/medium code in the url when people go to the landing page, and all of my traffic to the landing page is likely to be direct, so I'm not sure how to tell Multi-Channel marketing that it's a significant page. I know how to add events to a page, but I'm still figuring out what they can and cannot do. Would creating a redirect from the landing url to an identical url+source/medium code work? Any advice on how to accomplish this would be greatly appreciated. Tracking the final sale conversion is not the issue. Ecommerce reporting is functioning just fine on the site. I just want to report the landing page as an assist, whenever it shows up in the funnel, and I need to be able to do that across multiple visits.

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com, www.example.com/about_us, www.example.com/products/something ... and example.com, example.com/about_us, example.com/products/something ... For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?

    Read the article

  • Google Analytics: Do unique events report as unique visits when triggered on pages other than your own domain?

    - by Jesse Gardner
    We just recently attached a SWF to our Brightcove video player to report various events back to Google Analytics. We're also tracking page views with a standard GA snippet on the page where the player is embedded. As I understand it, because a unique has already been recorded for the page, any event triggered by the player gets associated with that unique. However, we allow people to embed the video player on other websites. All of the event data started pouring into the Events section as expected, but we noticed a dramatic uptick in unique visitors on the site (nearly double) while the pageview count stayed relatively unchanged. Disabling event tracking brought the traffic back down to average levels. I should also add that in the Pages section of Event tracking we're seeing URLs for other sites where the player has been embedded, but this data isn't showing up in the Content section. It seems counterintuitive, but does GA count an event fired as a unique visit even if it's triggered from some place other than your website? If so, is there any way to trigger an event in the Events section without it reporting to the unique visitor count?

    Read the article

  • Is it better to have multiple domains for cities or one single TLD?

    - by Brett
    I make websites for small businesses, and for some reason business owners love to have several domains with the same website but with the domain name containing the city name. For example:
    1. smallbizname.com
    2. clevelandsmallbizname.com
    3. columbussmallbizname.com
    4. cincinnatismallbizname.com
    ... and so on. I've seen questions about localization-per-country aspects, but this is a much smaller scale, so I don't think the same rules apply. The problem I have is that the companies never want to write separate content per domain; they just want the same website hosted several times, once at each domain. I feel this probably hurts SEO for two reasons:
    1. Traffic gets scattered across domains that could all be boosting just one domain.
    2. Duplicate content penalty, because the content is identical.
    My question boils down to this... Should I redirect all the city domains to the main business-name domain, or does having these separate sites help to rank better per city? And if they are redirected, how does Google rank the redirects? Thanks for any input on this issue!!

    Read the article

  • Choosing the correct network protocol for my type of game (its Wc3 Warlock style)

    - by Moritz
    I need to code a little game for a school project. The type of the game is like the Warcraft 3 map "Warlock"; if anyone doesn't know it, here is a short description: up to ten players spawn into an arena filled with lava, and the goal of each player is to push the other players into the lava with spells (basically variations of missiles, AoE nukes, MOBA spells, etc.). http://www.youtube.com/watch?v=c3PoO-gcJik&feature=related We need to provide multiplayer support over the internet; for that reason I am looking for the best network protocol for this type of game (UDP, TCP, lockstep, client-server...). The requirements are:
    - the same/stable simulation on all clients
    - up to ten players
    - up to ~100 missiles on the field
    - very low latency, since it's reaction based (I don't know the method WC3 used, but it was playable with the old servers)
    What would be nice (if even possible, since the traffic might be too big):
    - support for soft bodies over the network (with Bullet physics), but this is no real requirement
    I read several articles about the lockstep method used for RTS games; this seems to be great, but does it fit real-time action games too (ping-related)? If anyone has run into the same problems/questions as me, I would be very happy about any help.

    Read the article

  • Cannot submit change of address to subdomain in Google Webmaster Tools?

    - by RCNeil
    I am pointing several domains to one URL, a URL which happens to include a subdomain. ALL of the domains are using 301 redirects to point to this new address. One of the older domains (which used to be a site) is a 'property' in Webmaster Tools, as is the new site (the one with the subdomain). When registering a 'Change of Address' for the old site with Webmaster Tools, it suggests the following method:
    - Set up your content on your new domain. (done)
    - Redirect content from your old site using 301 redirects. (done)
    - Add and verify your new site to Webmaster Tools. (done)
    Then, directly below that, to proceed, it says: Tell us the URL of your new domain: "Your account doesn't contain any sites we can use for a change of address. Add and verify the new site, then try again." I have already submitted and verified the new site. The only reason I can fathom I am getting this error is because the new site includes a subdomain. Although I don't foresee getting punished for this, as I am correctly 301-redirecting traffic anyway, I'm curious as to why the Change of Address submission isn't working appropriately for me. Has anyone else had experience with this?

    Read the article

  • JSR 308 Moves Forward

    - by abuckley
    I am pleased to announce a number of recent milestones for JSR 308, Annotations on Java Types:
    Adoption of JCP 2.8 - Thanks to the agreement of the Expert Group, JSR 308 operates under JCP 2.8 from September 2012. There is a publicly archived mailing list for EG members, and a companion list for anyone who wishes to follow EG traffic by email. There is also a "suggestion box" mailing list where anyone can send feedback to the EG directly. Feedback will be discussed on the main EG list. Co-spec lead Prof. Michael Ernst maintains an issue tracker and a document archive.
    Early-Access Builds of the Reference Implementation - Oracle has published binaries for all platforms of JDK 8 with support for type annotations. Builds are generated from OpenJDK's type-annotations/type-annotations forest (notably the langtools repo). The forest is owned by the Type Annotations project.
    Integration with Enhanced Metadata - On the enhanced metadata mailing list, Oracle has proposed support for repeating annotations in the Java language in Java SE 8. For completeness, it must be possible to repeat annotations on types, not only on declarations. The implementation of repeating annotations on declarations is already in the type-annotations/type-annotations forest (and hence in the early-access builds above), and work is underway to extend it to types.

    Read the article
