Search Results

Search found 1108 results on 45 pages for 'stats'.


  • mod_rewrite apache

    - by Peter
    Is there any way to hide a redirected URL? Here is what I have so far:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI}&force

    I would like to shorten the long redirect target http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI} to something shorter like /mintedomain.com/track/. Is it possible? Adrian. Edit, in reply to Andrew: this is the stats package Mint (haveamint.com) with the File Download tracker plugin. The plugin works like this: in .htaccess, every file (zip, rar, txt, ...) is redirected to tracker.php (so the download is counted in the stats): http://mydomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI}. For a zip file the redirected URL therefore looks like http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://mydomain/downloads/apple.zip, which is very long and ugly. Ideally I would redirect it to something shorter, e.g. http://mydomain.com/track/downloads/apple.zip, so that http://mydomain.com/track stands for http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php.

  • computing z-scores for 2D matrices in scipy/numpy in Python

    - by user248237
    How can I compute the z-score for matrices in Python? Suppose I have the array:

        a = array([[   1,    2,    3],
                   [  30,   35,   36],
                   [2000, 6000, 8000]])

    and I want to compute the z-score for each row. The solution I came up with is:

        array([zs(item) for item in a])

    where zs is in scipy.stats.stats. Is there a better built-in, vectorized way to do this? Also, is it always a good idea to z-score numbers before using hierarchical clustering with euclidean or seuclidean distance? Can anyone discuss the relative advantages/disadvantages? Thanks.
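
    A hedged sketch of one built-in route, assuming scipy.stats.zscore is available in the installed SciPy: it accepts an axis argument and standardizes a whole matrix without a Python-level loop.

        import numpy as np
        from scipy import stats

        a = np.array([[   1,    2,    3],
                      [  30,   35,   36],
                      [2000, 6000, 8000]], dtype=float)

        z = stats.zscore(a, axis=1)  # standardize each row independently
        print(z)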

  • Using numpy.apply

    - by andylei
    What's wrong with this snippet of code?

        import numpy as np
        from scipy import stats

        d = np.arange(10.0)
        cutoffs = [stats.scoreatpercentile(d, pct) for pct in range(0, 100, 20)]
        f = lambda x: np.sum(x > cutoffs)
        fv = np.vectorize(f)

        # why don't these two lines output the same values?
        [f(x) for x in d]  # => [0, 1, 2, 2, 3, 3, 4, 4, 5, 5]
        fv(d)              # => array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

    Any ideas?
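
    A sketch of one workaround, assuming the goal is to count how many cutoffs each element exceeds: making cutoffs a NumPy array and using a broadcast comparison reproduces the list-comprehension result without np.vectorize (whose scalar-at-a-time calls to f are the likely culprit here).

        import numpy as np
        from scipy import stats

        d = np.arange(10.0)
        cutoffs = np.array([stats.scoreatpercentile(d, pct)
                            for pct in range(0, 100, 20)])

        # Compare every element of d against every cutoff, then count
        # per element how many cutoffs it exceeds.
        counts = (d[:, None] > cutoffs).sum(axis=1)
        print(counts)  # expected: [0 1 2 2 3 3 4 4 5 5]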

  • Statistics based marketing campaign measurement tools

    - by AFHood
    We currently use SAS as the measurement engine and Business Objects as the display layer, and we are looking to develop a new, faster, slicker solution. Has anyone developed or purchased a campaign measurement reporting system? The solution should measure everything from email stats, web stats, customer activity, lift, ROI, etc. So far my research has turned up nothing. We are working with a team from India and they want to rewrite everything from scratch. Are there any solutions out there at all?

  • Looking for SQL Server Performance Monitor Tools

    - by the-locster
    I may be approaching this problem from the wrong angle, but what I'm thinking of is some kind of performance monitoring tool for SQL Server that works in a similar way to code profiling tools. For example, I'd like to see an output of how many times each stored procedure was called, its average execution time, and possibly various resource-usage stats such as cache/index utilisation, resulting disk access, table scans, etc. As far as I can tell, the performance monitor that comes with SQL Server just logs the various calls but doesn't report the various stats I'm looking for. Potentially I just need a tool to analyze the log output?

  • Reading / Writing from a Unix Socket in Ruby

    - by Olly
    I'm trying to connect to, read from, and write to a UNIX socket in Ruby. It is the stats socket used by HAProxy. My code is the following:

        require 'socket'
        socket = UNIXSocket.new("/tmp/haproxy.stats.socket")

        # First attempt: works
        socket.puts("show stat")
        while(line = socket.gets) do
          puts line
        end

        # Second attempt: fails
        socket.puts("show stat")
        while(line = socket.gets) do
          puts line
        end

    It succeeds the first time, but fails on the second attempt, and I'm not sure why. The output is:

        # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,
        stats,FRONTEND,,,0,0,2000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,
        stats,BACKEND,0,0,0,0,2000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,22,0,,1,1,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,
        legacy_socket,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,
        all,FRONTEND,,,0,0,10000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,3,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,1,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,2,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,3,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,4,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,5,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,6,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,22,22,,1,4,7,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,21,21,,1,4,8,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,21,21,,1,4,9,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,socket,0,0,0,0,200,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,21,21,,1,4,10,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        socket_backend,BACKEND,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,DOWN,0,0,0,,1,21,21,,1,4,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,
        api_backend,api,0,0,0,0,200,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,22,0,,1,5,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,0,,,,0,0,
        api_backend,api,0,0,0,0,1,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,22,0,,1,5,2,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,0,,,,0,0,
        api_backend,api,0,0,0,0,1,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,0,1,21,21,,1,5,3,,0,,2,0,,0,L4CON,,0,0,0,0,0,0,0,0,,,,0,0,
        api_backend,BACKEND,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,UP,2,2,0,,0,22,0,,1,5,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,
        www_backend,ruby-www,0,0,0,0,10000,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,22,0,,1,6,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,0,,,,0,0,
        www_backend,BACKEND,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,22,0,,1,6,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,

        /Users/Olly/Desktop/haproxy_stats.rb:14:in `write': Broken pipe (Errno::EPIPE)
                from /Users/Olly/Desktop/haproxy_stats.rb:14:in `puts'
                from /Users/Olly/Desktop/haproxy_stats.rb:14

    What is the problem? Is there a good reference on using UNIX sockets from Ruby?

  • in-memory database in Python

    - by Claudiu
    I'm doing some queries in Python against a large database to get some stats out of it. I want these stats to be held in memory so that other programs can use them without going to the database. I was thinking about how to structure them, and after trying to set up some complicated nested dictionaries, I realized that a good representation would be an SQL table. I don't want to store the data back in the persistent database, though. Are there any in-memory implementations of an SQL database that support querying the data with SQL syntax?
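
    One option that seems to fit, sketched below: the sqlite3 module in Python's standard library can create a purely in-memory database via the special name :memory:. One caveat against the stated goal: such a database lives inside a single process, so other programs could only reach it through that process. The table name and values here are made up for illustration.

        import sqlite3

        conn = sqlite3.connect(":memory:")  # nothing is ever written to disk
        conn.execute("CREATE TABLE stats (metric TEXT, value REAL)")
        conn.executemany("INSERT INTO stats VALUES (?, ?)",
                         [("hits", 42.0), ("misses", 7.0), ("latency_ms", 12.5)])

        # Plain SQL works against the in-memory table.
        for metric, value in conn.execute(
                "SELECT metric, value FROM stats WHERE value > 10"):
            print(metric, value)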

  • Mailer issue, PHP values do not change

    - by Roland
    I have a script that runs once a month and sends out stats to clients. The stats are displayed as plain text and as a pie graph. If I run the script manually from the command line, the graphs are correct for every client, but when the cron job executes the script, the values for the first client appear on the graphs of all clients, while the text is correct. I'm using domDocument to build the HTML, PHPMailer to send out the email with the graphs embedded, and pChart to generate the graphs. My code that generates the pie graph is below:

        include_once "pChart.1.26e/pChart/pData.class";
        include_once "pChart.1.26e/pChart/pChart.class";

        // Dataset definition
        unset($DataSet);
        $DataSet = new pData;
        $DataSet->AddPoint(array($data['total_clicks'], $remaining), "Serie1");
        if ($remaining < 0) {
            $DataSet->AddPoint(array("Clicks delivered todate", "Clicks remaining = 0"), "Serie2");
        } else {
            $DataSet->AddPoint(array("Clicks delivered todate", "Clicks remaining"), "Serie2");
        }
        $DataSet->AddAllSeries();
        $DataSet->SetAbsciseLabelSerie("Serie2");

        // Initialise the graph
        $pie = new pChart(492, 292);
        $pie->drawBackground(255, 255, 254);
        $pie->LineWidth = 1.1;
        $pie->Values = 2;
        // $pie->drawRoundedRectangle(5,5,375,195,5,230,230,230);
        // $pie->drawRectangle(0,0,480,288,169,169,169);
        $pie->drawRectangle(5, 5, 487, 287, 169, 169, 169);
        $pie->loadColorPalette('pChart.1.26e/color/tones-3.txt', ',');

        // Draw the pie chart
        $pie->setFontProperties("pChart.1.26e/Fonts/calibrib.ttf", 18);
        $pie->drawTitle(140, 33, "Campaign Overview", 0, 0, 0);
        $pie->setFontProperties("pChart.1.26e/Fonts/calibrib.ttf", 11);
        $pie->drawTitle(343, 125, "Total clicks : " . $total_clicks, 0, 0, 0);
        $pie->setFontProperties("pChart.1.26e/Fonts/calibri.ttf", 10);
        if ($remaining < 0) {
            $pie->setFontProperties("pChart.1.26e/Fonts/calibrib.ttf", 10);
            $pie->drawTitle(260, 250, "Campaign over-delivered by " . substr($remaining, 1) . " clicks", 205, 53, 53);
            $pie->setFontProperties("pChart.1.26e/Fonts/calibri.ttf", 10);
        }
        $pie->drawPieLegend(328, 140, $DataSet->GetData(), $DataSet->GetDataDescription(), 255, 255, 255);
        $pie->drawPieGraph($DataSet->GetData(), $DataSet->GetDataDescription(), 170, 150, 130, PIE_VALUE, FALSE, 50, 30, 0);
        $pie->Render("generated/3dpie.png");

        unset($pie);
        unset($DataSet);

        $mail->AddEmbeddedImage("/var/www/html/stats/generated/3dpie.png", "5");

    I just can't understand why this only happens when the cron job runs.

  • Every 3rd Insert Is Slow On MS SQL 2008

    - by Chris
    I have a function that writes three rows into an empty table like so:

        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 8, 1)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 8, 4)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 8, 3)

    For some reason only the third query takes a long time to execute, and with each insert it grows longer (the Profiler trace shows this). I have tried disabling all constraints on the table, with the same result. I just can't figure out why the first two run so fast and the last one takes so long. Any help would be greatly appreciated. Here is the query I ran in SSMS while gathering the statistics:

        ALTER TABLE [dbo].[yaf_ForumAccess] NOCHECK CONSTRAINT ALL
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 9, 1)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 9, 4)
        INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 9, 3)
        ALTER TABLE [dbo].[yaf_ForumAccess] CHECK CONSTRAINT ALL

  • How to implement a system to determine if a milestone has been reached

    - by Luc M
    I have a table named stats:

        player_id  team_id  match_date  goal  assist
        1          8        2010-01-01  1     1
        1          8        2010-01-01  2     0
        1          9        2010-01-01  0     5
        ...

    I would like to know when a player reaches a milestone (e.g. 100 goals, 100 assists, 500 goals...), and likewise when a team reaches a milestone. I also want to know which player or team reached 100 goals first, second, third, and so on. I thought of using triggers with tables that accumulate the totals. The player_accumulator (and team_accumulator) tables would be:

        player_id  total_goals  total_assists
        1          3            6

        team_id  total_goals  total_assists
        8        3            1
        9        0            5

    Each time a row is inserted into the stats table, a trigger would insert/update the player_accumulator and team_accumulator tables. This trigger could also check whether the player or team has reached a milestone listed in a milestone table containing the numbers:

        milestone
        100
        500
        1000
        ...

    A player_milestone table would contain the milestones reached by each player:

        player_id  stat    milestone  date
        1          goal    100        2013-04-02
        1          assist  100        2012-11-19

    Is there a better way to implement "milestones"? Is there an easier way that avoids triggers? I'm using PostgreSQL.

  • OptionParser python module - multiple entries of same variable?

    - by jduncan
    I'm writing a little Python script to gather stats from several servers, or from a single server, and I'm using OptionParser to parse the command-line input:

        #!/usr/bin/python
        import sys
        from optparse import OptionParser
        ...
        parser.add_option("-s", "--server", dest="server", metavar="SERVER",
                          type="string",
                          help="server(s) to gather stats [default: localhost]")
        ...

    My goal is to be able to do something like:

        # test.py -s server1 -s server2

    and have both of those values appended to the options.server object in some way, so that I can iterate through them whether there is 1 value or 10. Any thoughts / help is appreciated. Thanks.
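
    A sketch of what is, as far as I know, the standard optparse answer: action="append" collects each repeated -s flag into a list. The localhost fallback is an assumption based on the help text, not part of the question.

        from optparse import OptionParser

        parser = OptionParser()
        parser.add_option("-s", "--server", dest="servers", action="append",
                          metavar="SERVER",
                          help="server(s) to gather stats from [default: localhost]")
        options, args = parser.parse_args()

        # options.servers is None if -s was never given; fall back to localhost.
        for server in options.servers or ["localhost"]:
            print(server)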

  • How to gather usage statistics for iPhone app?

    - by FX
    I am in the process of releasing my first iPhone app. It's a simple utility, and I'd just like to gauge the release process, the app's lifetime, and usage trends, so I can make more realistic choices in future apps. I think it would be nice to have usage statistics in addition to the download stats from Apple: for example, how many times the app is opened by each user, what iPhone OS version they have, and so on. Some of it could be done simply by connecting to a known URL on one of my domains and passing it anonymous information (say, connecting to http://mydomain.net/stats?app=myApp&version=1.0.0&os=3.1.2&used=18). My questions are: is that forbidden in any way by Apple's rules? (none that I could find, at least) Does that seem reasonable to you? Are there existing frameworks that would do this simpler or better than writing my own code?

  • Outputting an HTML entity character from a helper function

    - by morpheous
    I am using Symfony 1.3.2 on Ubuntu. I have written a little helper function (statsfoo) that prints out summary statistics about an item, and I am using it in my template like this:

        // In StatsHelper.php
        <?php function statsfoo($some_param)
        {
            return "<div class=\"sfoo\">&9830; the stats number for item is 42</div>";
        }

        // In showStatsSuccess.php
        <?php use_helper(Stats); ?>
        <?php echo statsfoo($foobar, ESC_ENTITIES); ?>

    I tried both ESC_ENTITIES and ESC_RAW. In both instances the raw number (&9830) was displayed on the page. I want to display the diamond instead. What am I doing wrong, and how can I fix it?

  • Linq ChangeConflictException occurs when submitting DataContext changes

    - by Alex
    I get the following exception very often:

        System.Data.Linq.ChangeConflictException: 2 of X updates failed.
           at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
           at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
           at PROJECT.Controllers.HomeController.ClickProc(Int32 id, String code, String n)

    This action is performed thousands of times a day, and I get this exception about once every 5 seconds. From what I understand, it happens when something changes in the database in the period between creating the DataContext and submitting its changes. Am I right? How can I fix it? Update: I just debugged the error and found the following:

        Table name: dbo.Stats
        current value:  9852039
        original value: 9852038
        database value: 9852039

    The Stats table is updated constantly. So how can I still make LINQ save the changes? With "classical" SQL Server access through SqlDataCommand I never had problems like that.

  • Problem opening Solr *.jsp pages with urllib2.urlopen.

    - by nestling
    I'm trying to open the page at http://localhost:8983/solr/admin/stats.jsp, but urllib2.urlopen returns a blank string. It works fine for solr/ and solr/admin, but for all the pages under /solr/admin/ I get nothing but a blank string:

        In [76]: t = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')
        In [77]: s = t.read()
        In [78]: s
        Out[78]: ''
        In [79]: type(s)
        Out[79]: <type 'str'>
        In [80]: urllib2.urlopen('http://localhost:8983/solr/admin/registry.jsp').read()
        Out[80]: ''
        In [84]: urllib2.urlopen('http://localhost:8983/solr/admin/schema.jsp').read()
        Out[84]: ''

    I know this isn't a problem with urllib2, but beyond that I am at a loss. I wish Solr (or Jetty) had an easy-to-reach log file, so that perhaps it could tell me its side of the story.
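
    A hedged diagnostic sketch, assuming Python 2.6+ where the response object exposes getcode(): inspecting the status code and headers can show whether Solr really returned a 200 with an empty body, or something like a redirect that urllib2 followed.

        import urllib2

        resp = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')
        print resp.getcode()   # HTTP status, e.g. 200
        print resp.geturl()    # final URL after any redirects
        print resp.info()      # response headers, e.g. Content-Type / Content-Length
        print len(resp.read())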

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user:               id, name
        profile_stat:       id, name
        profile_stat_value: id, name
        user_profile:       user_id, profile_stat_id, profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users that have a given (profile_stat_id, profile_stat_value_id) pair for each of many stats? I've tried doing an inner self-join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?

  • Calculating statistics directly from a CSV file

    - by User1
    I have a transaction log file in CSV format that I want to use to run statistics. The log has the following fields:

        date:        time/date stamp
        salesperson: the username of the person who closed the sale
        promo:       sum total of items in the sale that were promotions
        amount:      grand total of the sale

    I'd like to get the following statistics:

        salesperson: the username of the salesperson being analyzed
        minAmount:   the smallest grand total of this salesperson's transactions
        avgAmount:   the mean grand total
        maxAmount:   the largest grand total
        minPromo:    the smallest promo amount by the salesperson
        avgPromo:    the mean promo amount

    I'm tempted to build a database structure, import this file, write SQL, and pull out the stats. I don't need anything more from this data than these stats. Is there an easier way? I'm hoping some bash script could make this easy.
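
    A minimal sketch, assuming the CSV has a header row with the field names above; the filename is hypothetical. Python's csv module can produce the per-salesperson stats without any database:

        import csv
        from collections import defaultdict

        amounts = defaultdict(list)
        promos = defaultdict(list)

        with open("transactions.csv") as f:  # hypothetical file name
            for row in csv.DictReader(f):
                amounts[row["salesperson"]].append(float(row["amount"]))
                promos[row["salesperson"]].append(float(row["promo"]))

        # min / mean / max of amount, then min / mean of promo, per salesperson.
        for sp in sorted(amounts):
            a, p = amounts[sp], promos[sp]
            print(sp, min(a), sum(a) / len(a), max(a), min(p), sum(p) / len(p))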

  • OS monitoring using JAVA

    - by Puneri
    I'm planning to implement a framework in Java for monitoring OS-level resources: processes, network stats, CPU info, etc. I see there is the SIGAR API from Spring, which is implemented in a native language with a Java API provided on top. But I would prefer not to have native code in my framework; instead, for each OS I would write a Java class that fetches the required OS info by running system commands via the Java Runtime. I would like input or suggestions from anyone who has weighed doing this in pure Java against using a native app/API/JNI; any example would help. I agree that each OS has different commands to get these stats, but I would still prefer one Java class per OS over having to load native code.

  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on Twitter? I've been hearing for a while about people claiming to get the majority of their traffic from Twitter these days. Now, I've been playing with the twitter Ruby gem recently, doing various experiments which I'll not go into detail about here because they could be regarded as spamming... if I conducted them on a large scale, that is. It's scary to see people actually engaging with @replies crafted with some regular expressions and Eliza-like trickery on status updates found using the Twitter API. I'm wondering how Twitter is going to contain the coming spam-flood.

    When posting links I used bit.ly as the URL shortener, since it seems to be the de facto standard on Twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Seeing that I was posting the links together with some information about what each link was about, I concluded that the people actually clicking the links should be very targeted visitors. This felt a bit like free AdWords, and I suddenly started to understand why everyone was raving about getting traffic from Twitter. How wrong I was! (And I think several thousand online marketers with me.)

    On the destination site I used a traffic-logging solution that works by including a little JavaScript snippet in your pages. Somehow all visitors seemed to disappear after the bit.ly redirect and before reaching the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the log files of the destination site, and by making my own 'shortened' URLs that redirect through a very short domain name I own. This way I could check Apache's access_log before the redirects. Most user agents turned out to be bots without a doubt. Here's an excerpt of user agents awk'ed from Apache's access_log for a period of about one hour, right after posting some links:

        AideRSS 2.0 (postrank.com)
        Java/1.6.0_13
        Java/1.6.0_14
        libwww-perl/5.816
        MLBot (www.metadatalabs.com/mlbot)
        Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org)
        Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com)
        Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/)
        Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920
        Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5
        OpenCalaisSemanticProxy
        PycURL/7.18.2
        PycURL/7.19.3
        Python-urllib/1.17
        Twingly Recon
        twitmatic
        Twitturly / v0.6
        Wget/1.10.2 (Red Hat modified)
        Wget/1.11.1 (Red Hat modified)

    Of the few user agents that seem 'real' at first, half originate from an IP address used by Amazon EC2, and I doubt people are setting up proxies there. Oh yeah, Googlebot (the real deal, from a legit Google-owned address) is sucking up posted links like fresh oysters. I guess Google is trying to make sure in advance that it is never beaten by Twitter in the 'realtime search' department. Actually, I think it'd be almost stupid NOT to post new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit.

    Same experiment with a real, established Twitter account. Because I was posting the URLs either as status messages or directed @people from a test account with hardly any (human) followers, I checked again using the Twitter accounts of a commercial site I'm involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added 'my' redirect after the bit.ly redirect: same results, although with a seemingly somewhat higher real-visitor-to-bot ratio. By the way: one of these accounts was recently 'punished' with a one-week lock because the same (one!) status update was sent that had been sent right before from another account. They got an email explaining the lock, saying the account didn't act according to their TOS. I can't find anything in their TOS about it; can you? I don't think Twitter is on the right track punishing a legit account, knowing that the trickery I had been doing with its API went totally unpunished. I might be wrong though; I often am. On the other hand, this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The visitors that are really real are also very targeted. I'm just not sure whether the amount of work involved could hold up against an AdWords campaign.

    Reposting the same link over and over again helps. One thing I noticed: it helps to keep reposting the same links at regular intervals. I guess most people only look at the first page when checking out recent posts from the people they follow, or don't look too far back when performing a search. Now, this probably isn't according to the Twitter TOS; actually, it might be spamming, but no one is obligated to follow anyone else, of course. This way, I was getting more real visitors and fewer bots. To my surprise (with my programmer's hat on) there were still repeated visits from the same bots coming from the same IP addresses. Did they expect to find something else when visiting a second or third time? (Actually, this gave me an idea: you can't change a link once it's posted, but you can change where it redirects to.) Most bots were smart enough not to follow the same link again, though. Are you successful in getting real visitors from Twitter? Are you relying only on bit.ly to provide traffic stats?

  • AWStats: Visits from IP address vs Crawlers

    - by user3651934
    I use AWStats in cPanel to see stats for my website. Under the Hosts section I see one IP address that has visited 150 pages. I am not sure one person would visit 150 pages using a browser, but if those 150 pages were visited by a software application, shouldn't it be listed under the Robots/Spiders section instead? So how do I determine whether I should block a certain IP address that has visited several hundred pages of my website? Thanks.

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

    Hot on the heels of my .NET caching course, I've had my first "fundamentals" course released on Pluralsight: Nginx and PHP Fundamentals. It's a practical look at two of the biggest technologies on the web: Nginx, the fastest-growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with: you just download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I've used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and I've used Nginx to host tier-2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world. In the course I start with a website in two parts: one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

        • Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
        • PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
        • PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
        • Hosting PHP in Nginx – configuring Nginx to host our PHP site.

    Along the way I run some performance stats with JMeter. The headlines are that Nginx on Linux outperforms IIS on Windows for static content by 800 requests per second over 1000 concurrent requests, and that Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second under the same load. Of course, the headline stats don't tell the whole story, and when you add opcode caching for PHP and the ASP.NET output cache, the results are very different. As web architecture moves away from heavy server-side processing towards single-page apps with client-side frameworks like AngularJS and Knockout, I think there's an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.

  • Blogging tips for SQL Server professionals

    - by jamiet
    For some time now I have been intending to put together some material relating my blogging experiences since I began blogging in 2004, and that led to me submitting a session for SQLBits recently where I intended to do just that. It didn't get enough votes to allow me to present, however, so instead I resolved to write a blog post about it, and Simon Sabin's recent post "Blogging – how do you do it?" prompted me to finally complete it. So here I present a compendium of tips that I've picked up from authoring a fair few blog posts over the past 6 years.

    Feedburner. Feedburner.com is a service that can consume your blog's default RSS feed and provide a replacement feed with exactly the same content, which you then supply on your blog site for other people to consume in their RSS readers. Why would you want to do this? Two reasons, actually:

        • It makes your blog portable. If you ever want to move your blog to a different URL you don't have to tell your subscribers to move to a different feed; the Feedburner feed is a pointer to your blog content rather than a copy of it.
        • Feedburner collects stats telling you how many people are subscribed to your feed, which RSS readers they use, stuff like that.

    For example, for http://sqlblog.com/blogs/jamie_thomson/ it reports the subscriber count and also tells you which your most viewed posts are (the original post includes sample screenshots). Web stats like these are notoriously inaccurate, but then the method of measurement here is not important; what IS important is that it gives you a trustworthy ranking of your blog posts, and (in my opinion) knowing which are your most popular posts matters more than knowing exactly how many views each post has had. This is just the tip of the iceberg of what Feedburner provides, and I recommend every new blogger try it!

    Monitor subscribers using Google Reader. If for some reason Feedburner is not to your taste, or (more likely) you already have an established RSS feed that you do not want to change, then Google provides another way to monitor your readership in the shape of their online RSS reader, Google Reader. For every RSS feed it provides a collection of stats, including the number of Google Reader users subscribed to that feed. This is really valuable information; in fact I have been recording this statistic for mine and a number of other blogs for a few years now, and from it I can produce a chart showing how readership of those blogs is trending over time. [Good news for my fellow SQLBlog bloggers.] As Stephen Few readily points out, it's not the numbers that are important but the trend.

    Search Engine Optimisation (SEO). SEO (or "How do I get my blog to show up in Google?") is a massive area of expertise which I don't want (and am unable) to cover in much detail here, but some simple rules of thumb will help:

        • Tags – If your blog engine offers the ability to add tags to your blog post, use them. Invariably those tags go into the meta section of the page HTML, and search engines lap that stuff up (my recent post "Microsoft publish Visual Studio 2010 Database Project Guidance" shows an example).
        • Title – Search engines take notice of web page titles as well, so make them specific and descriptive (e.g. "Configuring dtsConfig connection strings") rather than esoteric and meaningless in a vain attempt to be humorous (e.g. "Last night a DJ saved my ETL batch")!
        • Title (2) – Make your title even more search-engine friendly by mentioning high-level subject areas, not dissimilar to Twitter hashtags. For example, if you look at all of my posts related to SSIS you will notice that nearly all contain the word "SSIS" in the title, even if I had to shoehorn it in by putting it in square brackets or similar. Another tip: if you ARE putting words into your titles in this artificial manner, put them at the end so that they're not too prominent in search engine results; they're there for the search engines to consume, not for human beings.
        • Images – Always add titles and alternate text (the ALT attribute) to images in your blog post. If you use Windows 7 or Windows Vista, Live Writer (which Simon recommended) makes this easy for you.
        • Headings – If you want to highlight section headings, use heading tags (e.g. <H1>, <H2>, <H3>, etc.) rather than just formatting the text appropriately; again, Live Writer makes this easy. These tags give your blog posts structure that is understood by search engines and RSS readers alike. (I believe it makes them more amenable to CSS as well, though that's not something I know much about.) If you check the HTML source for the blog post you're reading right now, you'll be able to scan through and see where I have used heading tags.

    Microsoft provide a free tool called the SEO Toolkit that will analyse your blog site (for free) and tell you what to change to improve its SEO. Go read more and download it at "Search Engine Optimization Toolkit". Did I mention that it was free?

    Miscellaneous tips:

        • If you are including code in your blog post, ensure it is formatted correctly. Use SQL Server Central's T-SQL prettifier for formatting T-SQL code.
        • Use images and videos. Personally speaking, there's nothing I like less when reading a blog than paragraph after paragraph of text. Images make your blog more appealing, which means people are more likely to read what you have written.
        • Be original. Don't plagiarise other people's content and don't simply rewrite the contents of Books Online.
        • Every time you publish a blog post, tweet a link to it. Include hashtags in your tweet that are likely to grab people's attention.

    That's probably enough for now; I hope this blog post proves useful to someone out there. If you would appreciate a related session at a forthcoming SQLBits conference, please let me know. This will likely be my last blog post of 2010, so I would like to take this opportunity to thank everyone who has commented on, linked to or read any of my blog posts in that time. 2011 is shaping up to be a very interesting year for SQL Server observers, with the impending release of SQL Server code-named "Denali", and I promise I'll have lots more content on that as the year progresses. Happy New Year. @Jamiet

  • 4th Annual Hartford Code Camp - The Code Camp Manifesto lives on!

    - by SB Chatterjee
    It is amazing that Thom Robbins' blog post back in December 2004 laid the foundation for the Code Camps that have since grown worldwide; there is at least one every weekend in some country (going by an unscientific sampling of tweet stats). This weekend we at the Connecticut .NET Developers Group held the 4th Annual Hartford Code Camp, and it was well attended, with 120+ attendees and ~30 sessions. Our thanks to the speakers from near and far who made our event a success.
