Search Results

Search found 4786 results on 192 pages for 'traffic shaping'.


  • A lot of "(direct) / (none)" traffic in Google Analytics

    - by Yoga
    My web site gets a lot of "(direct) / (none)" traffic (over 50%) in Google Analytics, yet under "Audience" 100% are shown as new visitors. Why is that? I am quite sure most of the audience should be new visitors, but why so much "(direct) / (none)" traffic? Update: we have since launched a new site, and this number dropped significantly, so I am interested in knowing why it was so high in the past.


  • Traffic Shaping using tc

    - by Simon
    Hi guys, I have a 1.5 Mbit/s link that I want to share with 150 users. My setup is the following: a Linux box with 3 NICs: eth0 - public IP; eth1 - subnet A - 50 users (static IPs); eth2 - subnet B - 100 users (via DHCP). I am using Squid as a transparent proxy on port 3128, and a DHCP server on ports 67 and 68. This is the script I was creating, but I think packets are not going to the right queues:

        #!/bin/bash
        DEV=eth0
        RATE_MAIN=2048kbit
        CEIL_MAIN=2048kbit
        BURST=1b
        CBURST=1b
        RATE_DEFAULT=1024kbit
        CEIL_DEFAULT=$CEIL_MAIN
        PRIO_DEFAULT=3
        RATE_P2P=1024kbit
        CEIL_P2P=$CEIL_MAIN
        PRIO_P2P=4
        RATE_IND=32kbit
        CEIL_IND=$CEIL_DEFAULT

        tc qdisc del dev $DEV root
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1: classid 1:1 htb rate $RATE_MAIN ceil $CEIL_MAIN
        # default class
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate $RATE_DEFAULT ceil $CEIL_MAIN burst $BURST cburst $CBURST prio $PRIO_DEFAULT
        ## some other sub class for p2p / other traffic
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate $RATE_P2P ceil $CEIL_P2P burst $BURST cburst $CBURST prio $PRIO_P2P

        IPS_NET1=50
        IPS_NET2=100
        let IPS=$IPS_NET1+$IPS_NET2

        # one leaf class (with an sfq qdisc) per user
        for ((i=1; i<=$IPS; i++)); do
            let CLASSID=($i+100)
            let HANDLE=($i+100)
            tc class add dev $DEV parent 1:10 classid 1:$CLASSID htb rate $RATE_IND ceil $CEIL_IND
            tc qdisc add dev $DEV parent 1:$CLASSID handle $HANDLE: sfq perturb 10
        done

        ## Generate IP addresses ##
        IP_ADDRESSES=""
        # Subnet A
        BASE_IP=10.10.10.
        for ((i=2; i<=$IPS_NET1+1; i++)); do
            TEMP="$BASE_IP$i"
            IP_ADDRESSES="$IP_ADDRESSES $TEMP"
        done
        # Subnet B
        BASE_IP=192.168.0.
        for ((i=2; i<=$IPS_NET2+1; i++)); do
            TEMP="$BASE_IP$i"
            IP_ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        ## Filters ##
        j=1
        U32="tc filter add dev $DEV protocol ip parent 1:0 prio $PRIO_DEFAULT u32"
        for NET in $IP_ADDRESSES; do
            let CLASSID=($j+100)
            $U32 match ip src $NET/32 flowid 1:$CLASSID
            $U32 match ip dst $NET/32 flowid 1:$CLASSID
            let j=j+1
        done

    Can you guys help me figure out what's wrong with it? Basically, I want my classes to be 1:1 (1.5 Mbit), 1:10 (1024 kbit) and 1:20 (1024 kbit), with the 150 IPs getting 32 kbit each.
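    A quick way to see where packets actually end up is to watch HTB's per-class counters while generating traffic from one of the user IPs; a minimal check, assuming the script above has been applied to eth0:

        # per-class byte/packet counters; the "Sent" lines show which
        # classes are actually receiving traffic
        tc -s class show dev eth0

        # list the installed u32 filters and the flowids they point to
        tc filter show dev eth0 parent 1:

        # watch one leaf class (e.g. 1:101) while a user generates traffic
        watch -n 1 'tc -s class show dev eth0 | grep -A 2 "class htb 1:101"'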


  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on Twitter? I’ve been hearing for a while about people claiming to get the majority of their traffic from Twitter these days. Now, I’ve been playing with the twitter ruby gem recently, doing various experiments which I won’t go into in detail here because they could be regarded as spamming… if I conducted them on a large scale, that is. It’s scary to see people actually engaging with @replies crafted with some regular expressions and Eliza-like trickery on status updates found using the Twitter API. I’m wondering how Twitter is going to contain the coming spam-flood. When posting links I used bit.ly as the url shortener, since it seems to be the de facto standard on Twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Now, seeing that I was posting the links together with some information concerning what each link was about, I concluded that the people actually clicking the links should be very targeted visitors. This felt a bit like free AdWords, and I suddenly started to understand why everyone was raving about getting traffic from Twitter. How wrong I was! (And I think several thousand online marketers with me.) On the destination site I used a traffic logging solution that works by including a little javascript snippet in your pages. It seemed that somehow all visitors disappeared after the bit.ly redirect and before getting to the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own ’shortened’ urls with redirects through a very short domain name I own. This way, I could check the apache access_log before the redirects. Most user agents turned out to be bots, without a doubt. Here’s an excerpt of user-agents awk’ed from apache’s access_log for a period of about one hour, right after posting some links:

        AideRSS 2.0 (postrank.com)
        Java/1.6.0_13
        Java/1.6.0_14
        libwww-perl/5.816
        MLBot (www.metadatalabs.com/mlbot)
        Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org)
        Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com)
        Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/)
        Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920
        Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5
        OpenCalaisSemanticProxy
        PycURL/7.18.2
        PycURL/7.19.3
        Python-urllib/1.17
        Twingly Recon
        twitmatic
        Twitturly / v0.6
        Wget/1.10.2 (Red Hat modified)
        Wget/1.11.1 (Red Hat modified)

    Of the few user-agents that seem ‘real’ at first, half originate from an IP address used by Amazon EC2. And I doubt people are setting up proxies on there. Oh yeah, Googlebot (the real deal, from a legit Google-owned address) is sucking up posted links like fresh oysters. I guess Google is trying to make sure in advance it is never beaten by Twitter in the ‘realtime search’ department. Actually, I think it’d be almost stupid NOT to post any new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit.
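    (For reference, a one-liner along these lines pulls the user-agent field out of an Apache combined-format access_log and ranks the agents seen; the log path and timestamp pattern here are placeholders, not from the original post:)

        # the user-agent is the 6th double-quote-delimited field in
        # Apache's "combined" log format; count and rank the agents seen
        grep '05/Jul/2009:14' /var/log/apache2/access_log \
          | awk -F'"' '{print $6}' \
          | sort | uniq -c | sort -rn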
    Same experiment with a real, established Twitter account
    Now, because I was posting the urls either as ’status’ messages or directed @people from a test account with hardly any (human) followers, I checked again using the Twitter accounts of a commercial site I’m involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added ‘my’ redirect after the bit.ly redirect: same results, although with a seemingly somewhat higher real-visitor/bot ratio. Btw: one of these accounts was ‘punished’ with a one-week lock recently because the same (one!) status update was sent that had been sent right before from another account. They got an email explaining the lock, saying the account didn’t act according to their TOS. I can’t find anything in their TOS about it; can you? I don’t think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with its API went totally unpunished. I might be wrong though; I often am. On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I’m just not sure whether the amount of work involved could hold up against an AdWords campaign.
    Reposting the same link over and over again helps
    One thing I noticed: it helps to keep reposting the same links at regular intervals. I guess most people only look at their first page when checking out recent posts of the ones they’re following, or don’t look too far back when performing a search. Now, this probably isn’t according to the Twitter TOS. Actually, it might be spamming, but no-one is obligated to follow anyone else, of course. This way, I was getting more real visitors and fewer bots. To my surprise (when my programmer’s hat is on) there were still repeated visits from the same bots coming from the same IP addresses. Did they expect to find something else when visiting for a 2nd or 3rd time? (Actually, this gave me an idea: you can’t change a link once it’s posted, but you can change where it redirects to.) Most bots were smart enough not to follow the same link again, though. Are you successful in getting real visitors from Twitter? Are you relying only on bit.ly to provide traffic stats?


  • Prioritize BitTorrent traffic

    - by Manish Mathai
    Hi, I would like to know how to prioritize traffic from various applications. Specifically, I want to know if there is a way to give web traffic higher priority over BitTorrent traffic. OS: Windows XP. Browser: Firefox. BitTorrent client: uTorrent. Can I somehow shape the traffic so that, while I am browsing, BitTorrent traffic gets suppressed (but not completely), and once no web traffic is detected it is allowed to continue at full speed?


  • I got this message from my host "Exceeded allocated monthly traffic" want to understand problems tha

    - by Amr ElGarhy
    I have a dedicated Windows 2008 server with an allocated monthly traffic of 1500 GB. The hosting company sent me: "Please take note that the allocated traffic included with your Budget (calculated by GB of traffic) has been exceeded. You will be billed for the exceeding traffic at the end of the month according to the per GB exceeding traffic fee specified on your contract." I checked my Google Analytics account and didn't find any big difference in the websites' traffic this month compared to previous months. I just want to understand what may have caused this sudden increase in traffic this month. Maybe FTP access? Spending too much time in remote sessions to the web server? Or what else could cause it? Also, is there any tool on the server to find out where the traffic went?


  • How do I associate server traffic to a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder, /www/projectname/web/, and logs go in /www/projectname/log. However, if there's a traffic spike (or, as I see it on my end, a memory usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way! There probably is, but I haven't seen it, and the last time I checked, bandwidth monitors only report system-wide load. So if anyone knows how to do this the right way, please let me know. Thanks!
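    One rough approach, assuming each project writes an Apache common- or combined-format access log under its /www/projectname/log folder (the file name here is assumed), is to rank projects by bytes logged:

        # field 10 of the common/combined log format is bytes sent;
        # sum it per project log and rank the projects by volume
        for log in /www/*/log/access.log; do
            printf '%s %s\n' "$(awk '{sum += $10} END {print sum+0}' "$log")" "$log"
        done | sort -rn | head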


  • Lost Traffic from Google Because of Adding a Meta Tag

    - by Marian
    I have a site, aroundnails.com, with an English version on the subdomain en.aroundnails.com. Reading about language-related meta tags for Google, I placed this meta tag on the main page of the main site:

        <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />

    In this way I tried to tell Google that the site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I lost a huge part of my traffic from Google, more than half. At the beginning of September I removed this meta tag, but traffic has remained at the same level. I hope somebody can help me solve this issue.
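    For what it's worth, hreflang annotations are meant to be reciprocal: each version of a page should list itself and every alternate, and the tags must appear on both sites. A minimal sketch using the URLs from the question (x-default is used here because the main site's language isn't stated):

        <!-- on http://aroundnails.com/ and on http://en.aroundnails.com/ alike -->
        <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />
        <link rel="alternate" hreflang="x-default" href="http://aroundnails.com/" />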


  • Impressions and traffic dropped by 70 %

    - by Louise
    Can anyone advise why my impressions and traffic dropped? I used to have very generic keywords, such as: anti aging, anti wrinkle, face cream, eye cream. I thought they were bad and made the keywords more specific: anti wrinkle eye cream, anti aging face cream, etc. Following that change, my impressions and traffic dropped dramatically! I used to get 45+ visitors a day; now I get fewer than 15 a day. What is the way forward? I thought what I did with the keywords was good.


  • Change Alexa tracking from artcrew.ro to www.artcrew.ro

    - by DanTdr
    My website redirects from artcrew.ro to www.artcrew.ro, but for some reason Alexa only credits the inbound links of the version without www: the www version has over 2000 inbound links, but the one without www has only 10. Is there any way I could make Alexa see the other inbound links? That would be great. Thanks
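    If the redirect in place is temporary (302), link metrics tend not to consolidate; a permanent 301 on the bare domain is the usual fix. A sketch for Apache with mod_rewrite (the question doesn't say which server or redirect type is currently in use):

        RewriteEngine On
        # 301-redirect any request for the bare domain to the www
        # hostname, preserving the requested path
        RewriteCond %{HTTP_HOST} ^artcrew\.ro$ [NC]
        RewriteRule ^(.*)$ http://www.artcrew.ro/$1 [R=301,L]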


  • What naughty ways are there of driving traffic?

    - by Tom Wright
    OK, so this is purely for my intellectual curiosity and I'm not interested in illegal methods (no botnets, please). But say, for instance, that some organisation incentivised link sharing in a bid to drive publicity. How could I drive traffic to my link? Obviously I could spam all my friends on social networking sites, which is what they want me to do, but that doesn't sound as fun as trying to game the system. (Not that I necessarily dispute the merit of this particular campaign.) The ideas I've come up with so far (in order of increasing deviousness) include:
    Link-dropping - This is too close to what they want me to do to be devious, but I've done it here (sorry) and I've done it on Twitter. I'm subverting it slightly by focusing on the game aspects rather than their desired message.
    AdWords - Not very devious at all, but effectively free with the vouchers I've accrued. That said, I must be pretty poor at choosing keywords, because I've seen very few hits (~5) so far.
    Browser-testing websites - The target has a robots.txt which prevents Browsershots from processing it, but I got around this by including it in an iframe on a page that I hosted.
    But my creative juices have run dry, I'm afraid. Does anyone have any cheeky/devious/cunning/all-of-the-above ideas for driving traffic to my page?


  • Weird referral traffic [closed]

    - by Noam
    Possible Duplicate: Strange incoming links appearing on site statistics
    I'm getting weird traffic from Japan, from a site called ime.nu. Why weird? Because I'm not able to identify the link, and when I go to their homepage it just shows an Apache test page, while analysis sites show it is a pretty big site (ranked 121 in JP on Alexa). Can someone help me understand the mystery?


  • Increase traffic to site [duplicate]

    - by Jack Trowbridge
    This question already has an answer here: How can I increase the traffic to my site? (5 answers)
    I have made a social networking site; it has been on the web for over 6 months and has over 8,000 members, but I want it to grow bigger. What tools/methods can I use to grow its popularity, e.g. SEO or PPC advertising? Both tools/methods requiring money and those without, with comparisons? Thanks in advance, Jack.


  • How Google handles site traffic in Google Analytics

    - by Hamidreza
    I have a site at www.example.com with the Google Analytics JavaScript snippet on it. I have made an app for the site; every time a user uses the app, they visit the site through the browser built into the application (I am using C# and the .NET web browser control). The app loads www.example.com/appvisit, and I have put the Google Analytics scripts on that page and nothing else. I want to disallow the /appvisit address in my robots.txt file. Is there any problem with doing this? Will Google crawl the /appvisit directory? Does Google frown on this? And will Google consider this traffic genuine and normal? Thanks
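    Blocking crawlers from the app-only page itself is the easy part; a minimal robots.txt for the path named in the question would be:

        User-agent: *
        Disallow: /appvisit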


  • High Traffic Web Host Solution?

    - by Calsy
    Hi all, I'm currently shopping around for a web host for the website we are hoping to release in the near future. This is my first real step into this area, so I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know of anything better, or have any advice? Thanks in advance.


  • Huge surge in direct traffic from one particular town

    - by Jack Lockyer
    Last month I noticed that direct visits to our site have increased by nearly 150% while the bounce rate is also considerably up. After drilling down further, I can see that we have had nearly 2000 direct visits from one town in Connecticut called Stamford, with a bounce rate of 100%! I have been scratching around for answers, but all I can find is that it may be to do with our uptime monitoring tool, Pingdom. Does anyone know of, or have any experience with, this kind of issue? Any help is appreciated. Update: I have just noticed that we are receiving identical traffic from a town in England and a town in Scotland... this definitely makes me think it's to do with our uptime monitoring tool.


  • How an offline main domain can influence traffic on an active sub domain

    - by danie7L T
    The website(s) design is for a company active in 3 different areas. As an example, let's use the following structure: www.example.com, [sub1.example.com], [sub2.example.com], [sub3.example.com]. sub2.example.com and sub3.example.com are ready to go live, but www.example.com really isn't and sends a 503 HTTP error code. I would like to know if this situation will affect the traffic and ranking of the subdomains that are ready to go live. Is it preferable to wait and go live together with the main domain? Or is there nothing to "fear", and one doesn't affect the other? Thank you


  • My website is not getting any traffic. How do I get traffic? [closed]

    - by Divyanshu Negi
    Possible Duplicate: How can I increase the traffic to my site?
    I have done pretty thorough SEO on my website, which is written in PHP; anyone can submit an article, and submitted articles are moderated by the moderators. The site has been online for more than a month, but the user count is still only 10-20 and total impressions are 700 according to Google Webmaster Tools. How much time does Google Webmaster Tools take to refresh the data? For the last 3-4 days the impressions shown in the dashboard have stayed at 700. I am posting 2 articles each day. Please help me; I am very disappointed after all my effort and really need some motivation to carry on with my work. My website url is http://www.viewloud.com


  • How to track inbound HTTP traffic using Plesk 10.4.4?

    - by hypercrypt
    I am running Plesk 10.4.4 on a Debian 6.0 server. Outbound traffic is being tracked, but inbound HTTP traffic seems to be 0 at all times; i.e., looking at DomainsTraffic and ClientsTraffic, the http_in column is always 0. Is this a setting that I have missed? I've had a look and cannot find anything. How do I get Plesk to track inbound HTTP traffic? I have already made sure that Home > Tools & Settings > Server Settings has 'Include in the traffic calculation' set to 'inbound and outbound traffic', yet this does not solve the problem. Apache allows inbound traffic to be logged using %I in the log format; is there a way to get Plesk to add %I to the log and then use that in its bandwidth calculations?
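    On the Apache side, %I and %O come from mod_logio, and the stock "combinedio" format records both bytes received and bytes sent per request; whether Plesk will then use that file for its bandwidth calculation is the open question here:

        # requires mod_logio to be loaded; %I = bytes received (request
        # line, headers and body), %O = bytes sent on the wire
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        CustomLog /var/log/apache2/access_combinedio.log combinedio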


  • Referral traffic not appearing properly in Google Analytics

    - by Crashalot
    We have a partnership arrangement with another site: we pay them for users they send to us. However, they claim our referral numbers for them are 50% lower than theirs. They are tracking clicks in Google Analytics (using events) while we are counting visits in Google Analytics. Are we doing something wrong with our Google Analytics installation?

        <!-- Google Analytics BEGIN -->
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-12345678-1']);
          _gaq.push(['_setDomainName', 'example.com']);
          _gaq.push(['_trackPageview']);

          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
        <!-- Google Analytics END -->
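    Part of a gap like this is definitional rather than a broken installation (a click on their side is not a visit on yours: bots, abandoned page loads and repeat clicks within one session all fall out), but tagging the partner's links makes the two numbers auditable in one report. A sketch with hypothetical parameter values:

        http://example.com/landing?utm_source=partnersite&utm_medium=referral&utm_campaign=paid-users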


  • How much traffic a linux-based shaper would be able to chew

    - by facha
    Hi, everyone. I have a Linux-based traffic shaper (iptables + a tc HTB policy). It works in bridge mode and shapes traffic based on IPs and ports (there are about 100 rules in the iptables "mangle" chain). Right now its throughput is about 100 Mb/s (I don't remember the pps; there are about 800 users on the network). I was just wondering when I will hit the limit: how much traffic could a Linux-based shaper possibly get through it? If you have one under heavy load, could you please write what machine you use and what load it carries? Any other info on the subject is welcome as well. Thanks in advance.
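    When gauging headroom on a shaper, packets per second usually bites before raw bandwidth does. A quick way to sample both counters on one of the bridge ports (the interface name is assumed):

        # read packet counters for eth0 twice, one second apart, and
        # print the per-second deltas; in /proc/net/dev, rx packets are
        # field 3 and tx packets field 11
        read rx1 tx1 < <(awk '/eth0:/ {print $3, $11}' /proc/net/dev)
        sleep 1
        read rx2 tx2 < <(awk '/eth0:/ {print $3, $11}' /proc/net/dev)
        echo "rx pps: $((rx2 - rx1))  tx pps: $((tx2 - tx1))"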


  • Need solutions in sharing a 3Mb/768Kbps DSL line to 60+ users and faster bandwidth

    - by elistp
    Two parts.
    Part 1: We currently have two DSL lines with 3Mb/768Kbps speeds, load-balanced, for 60+ users. Accessing the Internet is borderline unusable. The simple solution would be to get a faster DSL line, but the highest DSL package is 6Mb/768Kbps, has quite a price jump, and doesn't do anything to help with upload speeds. I'm looking for free or extremely low-cost solutions (web cache, traffic shaping, bandwidth controls, etc.) to make Internet access more bearable until the next funding year. Can anyone give any advice?
    Part 2: We're looking into a 4.5Mb bonded T1 in the next funding year, which is of course significantly more expensive than two DSL lines. Are bonded T1s our only hope for faster speeds? Are there any better alternatives?
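    On the free end, a caching proxy like Squid can provide both the web cache and crude per-user fairness in one place via delay pools. A minimal squid.conf sketch; the numbers are illustrative, not tuned for a pair of 3Mb lines:

        # one class-2 delay pool: an aggregate bucket for everyone plus
        # an individual bucket per client IP (values are bytes/second)
        delay_pools 1
        delay_class 1 2
        # aggregate: refill 512 kB/s up to 512 kB; per-IP: 8 kB/s up to 64 kB
        delay_parameters 1 524288/524288 8192/65536
        delay_access 1 allow all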


  • radius traffic accounting - what attributes do I use for traffic (and how)

    - by Mark Regensberg
    We are building a web front end for an internet access token management system that uses RADIUS (FreeRADIUS), queried from a captive portal. The reason for building this part is integration with the accounting and billing platform that operates behind the scenes (all other parts are currently available as open source software). The structure is fairly standard, and setting up the basic bits was easy enough (authentication, traffic updates from the captive portal, account expiry dates/times), but I seem to have run out of ability when it comes to limiting an account by traffic consumed. So we can:
    set up usernames/passwords
    set expiry dates/times for a given user
    see the traffic for that user being accurately updated in radacct
    But we can't figure out the correct way/attribute to expire a user when they have consumed X octets of traffic. What attributes are used, or, maybe more accurately, what would be the correct way to use these attributes to limit an account to a certain volume of traffic? Any links to documentation appreciated; the FreeRADIUS documentation doesn't seem to address the issue directly, or I'm looking in the wrong place... --mark
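    The pattern usually suggested for this (a sketch, not verbatim from the FreeRADIUS docs) is an rlm_sqlcounter instance that sums the octet columns in radacct and compares the total against a check attribute on the user. Note that Total-Octets and Max-Total-Octets below are custom attributes you would declare in a local dictionary, not stock ones:

        # FreeRADIUS 2.x sqlcounter sketch; list "totaloctets" in the
        # authorize section so it runs on each access request
        sqlcounter totaloctets {
            counter-name = Total-Octets        # running total (custom attribute)
            check-name   = Max-Total-Octets    # per-user quota (custom attribute)
            sqlmod-inst  = sql
            key          = User-Name
            reset        = never               # lifetime quota; e.g. "monthly" to recur
            query        = "SELECT SUM(acctinputoctets + acctoutputoctets) FROM radacct WHERE username = '%{%k}'"
        }

    Each user then gets a quota row in the radcheck table, e.g. Max-Total-Octets := 1073741824 for a 1 GiB cap; when the counter exceeds the check value, authorization is rejected.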


  • System Idle Process network traffic? - Updated

    - by Moab
    I was using NetBalancer and noticed network traffic from an unidentified service, but when I highlight it, go to the lower center pane and click the parent process, it says the parent is the System Idle Process, and the upper pane shows both incoming and outgoing traffic for it. Does anyone know why this Windows System Idle Process is talking on the network? Windows 7 Home Premium 64-bit. Edit: after blocking the traffic for that unidentified service, I checked Event Viewer (Windows Logs > System) and found 3 new events that had never been recorded before and matched the time I blocked the traffic. So is this part of the Windows local DNS cache? All three are Event ID 1014, DNS Client Events: "Name resolution for the name dns.msftncsi.com timed out after none of the configured DNS servers responded."; "Name resolution for the name wpad.home timed out after none of the configured DNS servers responded."; "Name resolution for the name mscrl.microsoft.com timed out after none of the configured DNS servers responded." Then my web browser refused to work; I re-enabled the traffic and all returned to normal.

