Search Results

Search found 1108 results on 45 pages for 'stats'.

Page 38 of 45

  • SQL Databases and table design/organization

    - by John McMullen
    (Noob disclaimer) I'm working on a system (a type of map) that is accessed mostly via three fields: ID (auto-incremented), X coordinate, and Y coordinate. Right now, all map data is stored in one table. Whenever the map display is loaded, it simply queries the database by x and y, and the DB returns the matching rows (the other fields in the same entry). If an item on the map is doing something, it has a flag saying so, plus the ID of the corresponding row in another table that holds that type of 'action'. Essentially, all map data lives in one table, and all actions of a given type live in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split the table up so that each table only holds a limited range of coordinates, or should I just let the one table keep growing? There are a lot of queries hitting this table, so I'm trying to see what is best.
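
    A minimal sketch of the single-table approach with a composite index on the coordinates (table and column names are made up for illustration, and it is shown with Python's sqlite3 rather than the asker's PHP/MySQL stack):

        import sqlite3

        conn = sqlite3.connect("map.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS map_items (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                x  INTEGER NOT NULL,
                y  INTEGER NOT NULL,
                action_id INTEGER          -- NULL when the item is idle
            )
        """)
        # The composite index is what keeps coordinate lookups fast as the table grows.
        conn.execute("CREATE INDEX IF NOT EXISTS idx_map_xy ON map_items (x, y)")

        def items_in_view(conn, x_min, x_max, y_min, y_max):
            """Fetch everything inside the visible window in one indexed range query."""
            return conn.execute(
                "SELECT id, x, y, action_id FROM map_items "
                "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
                (x_min, x_max, y_min, y_max),
            ).fetchall()

    With an index like this, a single table generally scales much better than manually splitting tables by coordinate range.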

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of the environment: an ASP.NET web application (source stored in SVN); a SQL Server database (schema — tables/sprocs — also stored in SVN); the db version is synced with the web application assembly version (stored in a 'CurrentVersion' table); a CI Hudson server that checks out the web app from the repo and runs a custom MSBuild file to publish/package the app. My MSBuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. 'Revision' is set to the currently checked-out SVN revision and 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate collecting the migration scripts (updated sprocs etc.) to add to the zip package. I figure that by comparing the SVN revision of the database that has yet to be deployed to with the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling svn diff -r REVNO:REVNO to list the changed .sql files, which would then have to be added to the package by hand. It would be great if this could be automated. I imagine I'll first have to write a custom task to check the version of the database that has yet to be deployed to; after that I'm quite unsure. Does anyone have a suggestion for how this could be achieved through an MSBuild task, either existing or custom? Finally, I'll have to auto-generate a script to add to the package that updates the database version table so it stays in sync with the application.
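
    The diff-collection step itself is easy to script; a rough sketch of the idea, shown in Python rather than as an MSBuild task (it could be invoked from an Exec task or wrapped in a custom task — the repository URL and revision numbers below are placeholders):

        import subprocess

        def changed_sql_files(repo_url, deployed_rev, target_rev):
            """List .sql files added or modified in the repo between two revisions."""
            # svn diff --summarize prints one status line per changed path.
            out = subprocess.run(
                ["svn", "diff", "--summarize",
                 "-r", f"{deployed_rev}:{target_rev}", repo_url],
                capture_output=True, text=True, check=True,
            ).stdout
            changed = []
            for line in out.splitlines():
                status, path = line.split(None, 1)
                if status[0] in ("A", "M") and path.endswith(".sql"):
                    changed.append(path)
            return changed

        # Example (placeholder URL and revisions):
        # for f in changed_sql_files("http://svn.example.com/trunk/db", 1500, 1573):
        #     print(f)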

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # Not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean? And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the template I show them along with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries issued from templates, or do I have to build some big structure in my view (with all the related tables), cache it, and send that to the template?
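
    One way to attack the template slowness is to pull the related rows in a single joined query in the view and cache the assembled result; a rough sketch of that pattern (the model, field and key names here are hypothetical, not from the question):

        from django.core.cache import cache

        def get_profiles_with_networks():
            cache_key = 'profiles_with_networks'
            profiles = cache.get(cache_key)
            if profiles is None:
                # select_related joins the related tables in one query,
                # so the template doesn't trigger a query per row.
                profiles = list(
                    SocialNetworkProfile.objects.select_related('user', 'network')
                )
                cache.set(cache_key, profiles, 600)  # cache for 10 minutes
            return profiles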

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want to check the Redis info on my PC with Node, so I use node_redis and run the info function:

        var redis = require("redis"),
            client = redis.createClient();

        client.on("connect", function () {
            client.info(function (err, reply) {
                console.log(reply);
            });
        });

    but the response is unformatted:

        '#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n'

    How can I turn it into an object, like:

        {
            redis_version: 2.6.16,
            redis_git_sha1: 00000000,
            redis_git_dirty: 0,
            ......
        }

    so that I can read each property's value and get the information I need?
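
    The question is about node_redis, but the parsing idea is the same in any language: split on line breaks, skip the '# Section' headers, and split each remaining line on the first colon. A minimal sketch of that, in Python:

        def parse_redis_info(raw):
            """Turn the raw INFO payload into a flat dict of key -> value strings."""
            info = {}
            for line in raw.split("\r\n"):
                line = line.strip()
                # Skip blank lines and the '# Section' headers.
                if not line or line.startswith("#"):
                    continue
                key, _, value = line.partition(":")
                info[key] = value
            return info

        # Example:
        # parse_redis_info("# Server\r\nredis_version:2.6.16\r\n")["redis_version"]  -> "2.6.16"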

    Read the article

  • A way to use log4j to pass values like java -DmyEnvVar=A_VALUE to my code

    - by raticulin
    I need to pass some value to enable certain code in my app (in this case, to optionally enable writing some stats to a file under certain conditions, but it could be anything). My Java app is installed as a service, so every way I have thought of has some drawback:

    - Add another param to main(): cumbersome, as customers already have the tool installed and the command line would need to be changed every time.
    - Add java -DmyEnvVar=A_VALUE to my command line: same as above.
    - Set an environment variable: the service would at least have to be restarted, and even then you must take care of which user the service runs under, etc.
    - Add the property to the config file: I prefer not to have this visible in the config file so the user does not see it; it is something for debugging.

    So I thought maybe there is some way (or hack) to use log4j loggers to pass that value to my code. I have thought of one way already, although it is very limited: add a dummy class com.dummy.DevOptions to my codebase:

        public class DevOptions {
            public static final Logger logger = Logger.getLogger(DevOptions.class);
        }

    In my code, use it like this:

        if (DevOptions.logger.isInfoEnabled()) {
            // do my optional stuff
        }
        // ...
        if (DevOptions.logger.isDebugEnabled()) {
            // do other stuff
        }

    This allows me to discriminate among various values, and I could increase the number by adding more loggers to DevOptions. But I wonder whether there is a cleaner way, possibly by configuring the loggers only in log4j.xml?

    Read the article

  • Google Analytics install tracking on Android

    - by vvieux
    Hi, I want to track the install referrer for my application using Google Analytics. I don't want to use the Pageviews and Events tracking features, only install tracking. So I added the SDK jar to my app and added these lines to the manifest:

        <receiver android:name="com.google.android.apps.analytics.AnalyticsReceiver"
                  android:exported="true">
            <intent-filter>
                <action android:name="com.android.vending.INSTALL_REFERRER" />
            </intent-filter>
        </receiver>

    and published the app. But how can I see the stats? I never entered my UA-xxxxxxx id. For Pageviews and Events tracking it goes here:

        tracker.start("UA-YOUR-ACCOUNT-HERE", this);

    but as the readme says: "NOTE: do not start the GoogleAnalyticsTracker in your Application onCreate() method if using referral tracking". So with referrer tracking, where do I put my id? And what is the URL to watch in the Google Analytics console? Thanks

    Read the article

  • ndarray field names for both row and column?

    - by Graham Mitchell
    I'm a computer science teacher trying to create a little gradebook for myself using NumPy. But I think it would make my code easier to write if I could create an ndarray that uses field names for both the rows and columns. Here's what I've got so far:

        import numpy as np

        num_stud = 23
        num_assign = 2
        grades = np.zeros(num_stud, dtype=[('assign 1', 'i2'), ('assign 2', 'i2')])  # etc.
        gv = grades.view(dtype='i2').reshape(num_stud, num_assign)

    So, if my first student gets a 97 on 'assign 1', I can write either of:

        grades[0]['assign 1'] = 97
        gv[0][0] = 97

    Also, I can do the following:

        np.mean(grades['assign 1'])  # class average for assignment 1
        np.sum(gv[0])                # total points for student 1

    This all works. But what I can't figure out how to do is use a student id number to refer to a particular student (assume that two of my students have the student ids shown):

        grades['123456']['assign 2'] = 95
        grades['314159']['assign 2'] = 83

    ...or maybe create a second view with different field names?

        np.sum(gview2['314159'])  # total points for the student with the given id

    I know that I could create a dict mapping student ids to indices, but that seems fragile and crufty, and I'm hoping there's a better way than:

        id2i = {'123456': 0, '314159': 1}
        np.sum(gv[id2i['314159']])

    I'm also willing to re-architect things if there's a cleaner design. I'm new to NumPy, and I haven't written much code yet, so starting over isn't out of the question if I'm Doing It Wrong. I am going to need to sum all the assignment points for over a hundred students once a day, as well as run standard deviations and other stats. Plus, I'll be waiting on the results, so I'd like it to run in only a couple of seconds. Thanks in advance for any suggestions.
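
    If bringing in another library is an option, a 2-D structure labelled on both axes is exactly what pandas provides; a minimal sketch (pandas is an assumption here, not something the question uses — the ids and column names are the example ones):

        import pandas as pd

        # Rows labelled by student id, columns by assignment name.
        grades = pd.DataFrame(
            0,
            index=['123456', '314159'],          # student ids
            columns=['assign 1', 'assign 2'],    # assignment names
        )

        grades.loc['123456', 'assign 1'] = 97
        grades.loc['314159', 'assign 2'] = 83

        grades['assign 1'].mean()    # class average for assignment 1
        grades.loc['314159'].sum()   # total points for the student with the given id
        grades.std()                 # per-assignment standard deviation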

    Read the article

  • Parse files in directory/insert in database

    - by jakesankey
    Hey there, here is my dilemma... I have a directory full of .txt comma-delimited files arranged as shown below. What I want to do is import each of these into a SQL or SQLite database, appending each one below the last (one table). I am open to C# or VB scripting and just not sure how to accomplish this. I want to extract and import only the data starting BELOW the 'Feat. Type,Feat. Name,...' line. The files are stored in a \mynetwork\directory\stats folder on my network drive. Ideally, the software/script should also know not to re-add a file to the database once it has already done so. Any guidance or tips are appreciated!

        $$ SAMPLE=
        $$ FIXTURE=-
        $$ OPERATOR=-
        $$ INSPECTION PROCESS=CMM #4
        $$ PROCESS OPERATION=-
        $$ PROCESS SEQUENCE=-
        $$ TRIAL=-
        Feat. Type,Feat. Name,Value,Actual,Nominal,Dev.,Tol-,Tol+,Out of Tol.,Comment
        Point,_FF_PLN_A_1,X,-17.445,-17.445,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_1,Y,-195.502,-195.502,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_1,Z,32.867,33.500,-0.633,-0.800,0.800,,
        Point,_FF_PLN_A_2,X,-73.908,-73.908,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_2,Y,-157.957,-157.957,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_2,Z,32.792,33.500,-0.708,-0.800,0.800,,
        Point,_FF_PLN_A_3,X,-100.180,-100.180,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_3,Y,-142.797,-142.797,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_3,Z,32.768,33.500,-0.732,-0.800,0.800,,
        Point,_FF_PLN_A_4,X,-160.945,-160.945,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_4,Y,-112.705,-112.705,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_4,Z,32.719,33.500,-0.781,-0.800,0.800,,
        Point,_FF_PLN_A_5,X,-158.096,-158.096,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_5,Y,-73.821,-73.821,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_5,Z,32.756,33.500,-0.744,-0.800,0.800,,
        Point,_FF_PLN_A_6,X,-195.670,-195.670,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_6,Y,-17.375,-17.375,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_6,Z,32.767,33.500,-0.733,-0.800,0.800,,
        Point,_FF_PLN_A_7,X,-173.759,-173.759,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_7,Y,14.876,14.876,0.000,-999.000,999.000,,
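
    The asker wants C# or VB, but the overall approach is the same in any language: scan the folder, skip files already imported, skip lines until the header, bulk-insert the rest. A rough Python/SQLite sketch of that idea (the table layout and the 'imported_files' bookkeeping table are assumptions for illustration):

        import csv
        import sqlite3
        from pathlib import Path

        HEADER = "Feat. Type"

        def import_stats(folder, db_path="stats.db"):
            conn = sqlite3.connect(db_path)
            conn.execute("""CREATE TABLE IF NOT EXISTS measurements (
                source_file TEXT, feat_type TEXT, feat_name TEXT, value TEXT,
                actual REAL, nominal REAL, dev REAL, tol_minus REAL, tol_plus REAL,
                out_of_tol TEXT, comment TEXT)""")
            # Remember which files were already loaded so re-runs don't duplicate rows.
            conn.execute("CREATE TABLE IF NOT EXISTS imported_files (name TEXT PRIMARY KEY)")

            for path in Path(folder).glob("*.txt"):
                if conn.execute("SELECT 1 FROM imported_files WHERE name=?", (path.name,)).fetchone():
                    continue  # already imported
                with open(path, newline="") as f:
                    rows = list(csv.reader(f))
                # Keep only the rows below the 'Feat. Type,...' header line.
                start = next(i for i, r in enumerate(rows) if r and r[0] == HEADER) + 1
                conn.executemany(
                    "INSERT INTO measurements VALUES (?,?,?,?,?,?,?,?,?,?,?)",
                    [[path.name] + r[:10] for r in rows[start:] if r],
                )
                conn.execute("INSERT INTO imported_files VALUES (?)", (path.name,))
            conn.commit()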

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been: create 600 socket threads, each handling 60 sockets and using WSAEventSelect to wait for data to read. As soon as a file download completes, add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total downloaded data across all socket threads exceeds 50 MB, write all the downloaded files to disk and free the memory. So far this approach has not been very successful: the rate I can hit does not go beyond 2,900 connections/s, and the downloaded data rate is even lower. Can somebody suggest an alternative approach that could give me better stats? I am working on a Windows Server 2008 machine with 8 GB of memory. Also, do we need to hack the kernel so that we can use more threads and memory? Currently I can create at most 1,500 threads, and memory usage does not go beyond 2 GB (which technically should be much more, as this is a 64-bit machine). IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!

    Read the article

  • Why is my Android app force closing when I try to check if an EditText has a double?

    - by user336861
        Scanner scanner = new Scanner(lapsPerMile_st);
        if (!scanner.hasNextDouble()) {
            Context context = getApplicationContext();
            String msg = "Please Enter Digits and Decimals Only";
            int duration = Toast.LENGTH_LONG;
            Toast.makeText(context, msg, duration).show();
            lapsPerMileEditText.setText("");
            return;
        } else {
            // Edit box has only digits; set data and display stats
            data.setLapsPerMile(Integer.parseInt(lapsPerMile_st));
            lapsRunLabel.setVisibility(0);
            lapsRunTextView.setText(Integer.toString(data.getLapsRun()));
            milesRunLabel.setVisibility(0);
            milesRunTextView.setText(Double.toString(data.getLapsRun() / data.getLapsPerMile()));
        }

        <EditText
            android:id="@+id/mileCount"
            android:layout_width="100dp"
            android:layout_height="wrap_content"
            android:layout_marginTop="110dp"
            android:inputType="numberDecimal"
            android:maxLength="4" />

    For some reason, if I enter a non-decimal number such as 3 or 5 it works fine, but when I enter a floating-point value such as 3.4 or 5.8 it force closes. I can't seem to figure out what's going on. Any ideas? Thanks

    Read the article

  • Software usage analytics in C#

    - by TiernanO
    I have a project I am working on currently and would like to implement some sort of usage tracking in the code: ideally things like how often it's launched, how long it runs for, feature tracking, etc. I already use Exceptioneer for unhandled exceptions, but would like something similar for usage tracking. This data should all be anonymous and ideally run as a service by someone else, and I would like to give the users the option to turn it off if they wish. So, is this something I should implement myself, or are there third parties out there that do this sort of thing? I know it might be a sticky area, but I have seen stats about iPhone app usage. They do it, so why can't we (if the user agrees, of course)? [Update] Based on the comments, I should have been more clear: this is a WinForms .NET 4 application, though I am thinking of updating it later with WCF. I would only be tracking my own application, though I would also want to know minor information about the environment (Windows OS version, SP, maybe processor and RAM...).

    Read the article

  • In R, how to get powers of ten in bold font in a plot label?

    - by wfoolhill
    I want to have "10^4 points" in bold as my x-axis label. I know how to make a simple label in bold:

        plot(1:10, xlab="")
        mtext(text="10 points", side=1, font=2, line=3)

    Thanks to this answer, I know how to make a label with a power of ten, but nothing is in bold:

        mtext(text=expression(paste(10^4, " points")), side=1, font=2, line=3)

    Thanks to this answer, I also know how to make a label with a Greek letter in bold:

        mtext(text=expression(bold(paste(beta, "=", 10^1, " points"))), side=1, line=3)

    But still the power of ten is not in bold! It doesn't work either with bquote:

        mtext(text=bquote(bold(10^1~points)), side=1, line=3)

    Any idea? Here are some details about my system. Let me know if you need anything else.

        > sessionInfo()
        R version 2.15.0 (2012-03-30)
        Platform: x86_64-redhat-linux-gnu (64-bit)

        locale:
         [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
         [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
         [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
         [7] LC_PAPER=C                 LC_NAME=C
         [9] LC_ADDRESS=C               LC_TELEPHONE=C
        [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

        attached base packages:
        [1] stats     graphics  grDevices utils     datasets  methods   base

    Read the article

  • Log activity, intrusion detection, user event notification (interaction), messaging

    - by Julian Davchev
    I have three questions that I somehow find related, so I'm putting them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement the following. The system is user-aware, meaning all actions can be bound to a particular logged-in user.

    1. How to log all actions/activities of users, so that stats/graphs can be extracted later for analysis? At best that will include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, having a cron job dump them into the DB and another cron analyse them might be a good idea here (a sketch of the pipeline shape follows below). Since I'm using Zend Framework, I guess I can use a request plugin so I don't have to make the log() call all over the code.

    2. How to log things so they can be used for intrusion detection? I know most of this can be done at the HTTP level using Apache mods, for example, but there are also application-specific cases (5 failed login attempts in a row leads to a captcha, etc.). This would also involve tons of inserts. Here I guess direct use of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. I'm not sure whether I could reuse the data from point 1.

    3. The system will notify users of some events: something needs approval, something broke, whatever. Some events will need feedback (an action) from the user; others are just informational. I wonder if there are common solutions for needs like this. Example: based on occurring event(s), the user will be notified (a user inbox, for example) of what happened, with a link or something leading them to the details of the thing that happened so they can take action accordingly. These seem trivial at first look, but the problem I see with coding them directly is that it quickly becomes hard to maintain.
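
    For point 1, the shape of the pipeline is roughly: build a small event per request, push it onto the queue, and let an offline consumer bulk-insert. A minimal sketch of that shape (in Python, with an in-process queue standing in for ActiveMQ, and event field names that are purely illustrative):

        import json
        import queue

        activity_queue = queue.Queue()  # stand-in for the ActiveMQ producer

        def log_activity(user_id, action, url, payload=None):
            """Build a small activity event and hand it off; the web request never waits on the DB."""
            event = {
                "user_id": user_id,
                "action": action,
                "url": url,
                "payload": payload or {},
            }
            activity_queue.put(json.dumps(event))

        def flush_to_db(batch_size=500):
            """Run from the cron/consumer side: drain the queue and bulk-insert the events."""
            batch = []
            while not activity_queue.empty() and len(batch) < batch_size:
                batch.append(json.loads(activity_queue.get()))
            # a bulk INSERT of `batch` into the activity table would go here
            return batch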

    Read the article

  • APC decreasing php performance??? (php 5.3, apache 2.2, windows vista 64bit)

    - by M.M.
    Hi, I have Apache/2.2.15 (VC9) and PHP/5.3.2 (VC9 thread safe) running as an Apache module on a Vista 64-bit machine, all running fine. The project I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no db connection involved. The average (median) Apache response is about 0.15 seconds. After I installed APC (3.1.4-dev VC9 thread safe) with standard settings, the response time suddenly rose to 1.3 seconds (!), which is unacceptable... All APC settings always looked good (through the apc.php script: enough shm memory, cache not full, fragmentation 0%). The only thing that made a difference was disabling the stat lookup (apc.stat = 0); then the response dropped to 0.09 seconds, which was finally better than without APC. It's expected and obvious that the stat lookup creates some overhead, but shouldn't it still be far more performant than running without the APC extension at all? Or, put differently, why is apc.stat creating so much overhead? Apparently something is not working as it should, and I don't really know where to start looking. Thank you for your time/answers/direction in advance. Cheers, m.

    Read the article

  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products), using SQL Server, full-text search and ASP.NET MVC. Tables are normalized and indexed, and most queries take less than a second to return. The issue is this: say the user searches by keyword. On the search results page I need to display/query for:

    - the first 20 matching products (paged, sorted)
    - the total count of matching products, for paging
    - the list of stores carrying matching products
    - the list of brands of matching products
    - the list of colors of matching products

    Each query takes about 0.5 to 1 second, so altogether it is around 5 seconds, and I would like the whole page to load in under 1 second. There are several approaches:

    - Optimize the queries even more. I have already spent a lot of time on this one, so I'm not sure it can be pushed further.
    - Load the products first, then load the rest of the information using AJAX. More of a workaround, and it would need a revised UI.
    - Reorganize the data to be more report-friendly. I have already aggregated a lot of fields.

    I checked out several similar sites, for example zappos.com. Not only do they display the same kind of information in under 1 second, they also include statistics (the number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white. How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?

    Read the article

  • Looking for an appropriate design pattern

    - by user1066015
    I have a game that tracks user stats after every match, such as how far they travelled, how many times they attacked, how far they fell, etc. My current implementation looks somewhat as follows (simplified version):

        class Player {
            int id;

            public Player() {
                id = (int) (Math.random() * 100000);
                PlayerData.players.put(id, new PlayerData());
            }

            public void jump() {
                // logic to make the player jump
                // ...
                // call the PlayerManager
                PlayerManager.jump(this);
            }

            public void attack(Player target) {
                // logic to attack the player
                // ...
                // call the PlayerManager
                PlayerManager.attack(this, target);
            }
        }

        class PlayerData {
            public static HashMap<Integer, PlayerData> players = new HashMap<Integer, PlayerData>();
            int id;
            int timesJumped;
            int timesAttacked;

            public void incrementJumped() {
                timesJumped++;
            }

            public void incrementAttacked() {
                timesAttacked++;
            }
        }

        class PlayerManager {
            public static void jump(Player player) {
                PlayerData.players.get(player.getId()).incrementJumped();
            }

            public static void attack(Player player, Player target) {
                PlayerData.players.get(player.getId()).incrementAttacked();
            }
        }

    So I have a PlayerData class which holds all of the statistics; I pulled it out of the Player class because it isn't part of the player logic. Then I have PlayerManager, which would live on the server and controls the interactions between players (a lot of the logic that does that is excluded to keep this simple). I put the calls to the PlayerData class in the manager class because sometimes you have to do certain checks between players: for instance, if the attack actually hits, then you increment "attackHits". The main problem (in my opinion, correct me if I'm wrong) is that this is not very extensible. I will have to touch the PlayerData class whenever I want to track a new stat, by adding methods and fields, and then I potentially have to add more methods to my PlayerManager, so it isn't very modular. If there is an improvement to this that you would recommend, I would be very appreciative. Thanks.
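
    One common way to make this extensible is to record stats generically by name instead of adding a method per stat; a small sketch of that idea (in Python rather than the question's Java, with made-up stat names):

        from collections import Counter, defaultdict

        class StatsTracker:
            """Keeps per-player counters keyed by stat name; new stats need no new code."""
            def __init__(self):
                self._stats = defaultdict(Counter)

            def record(self, player_id, stat, amount=1):
                self._stats[player_id][stat] += amount

            def get(self, player_id, stat):
                return self._stats[player_id][stat]

        tracker = StatsTracker()
        tracker.record(42, "jumps")
        tracker.record(42, "attacks")
        tracker.record(42, "attack_hits")   # a brand-new stat, no class changes needed
        print(tracker.get(42, "jumps"))     # -> 1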

    Read the article

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that each season the data there isn't 100% consistent. I grab the JSON file from the site, then want to save it to a CSV with the first line containing the heading for each column, so the heading would essentially be the key from the Python data structure.

        #!/usr/bin/env python
        import requests
        import json
        import csv

        base_url = 'http://www.afl.com.au/api/cfs/afl/'
        token_url = base_url + 'WMCTok'
        player_url = base_url + 'matchItems/round'

        def printPretty(data):
            print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

        session = requests.Session()  # session makes it simple to use the token across the requests
        token = session.post(token_url).json()['token']  # get the token
        session.headers.update({'X-media-mis-token': token})  # set the token

        Season = 2014
        Roundno = 4
        if Roundno < 10:
            strRoundno = '0' + str(Roundno)
        else:
            strRoundno = str(Roundno)

        # get some data (could easily be a for loop; might want to put in a delay using sleep
        # so that you don't get IP blocked)
        data = session.get(player_url + '/CD_R' + str(Season) + '014' + strRoundno)

        # print everything
        printPretty(data.json())

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            spamwriter = csv.writer(csvfile, delimiter="'", quotechar='|', quoting=csv.QUOTE_ALL)
            for profile in data.json()['items']:
                spamwriter.writerow(['%s' % (profile)])

        #for key in data.json().keys():
        #    print("key: %s , value: %s" % (key, data.json()[key]))

    The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the keys should appear just once, across the first row as headings. It gives me a CSV file with data in the cells like this:

        Column A                   B
        'tempInCelsius': 17.0      'totalScore': 32
        'tempInCelsius': 16.0      'totalScore': 28

    What I want is the data like this:

        tempInCelsius    totalScore
        17               32
        16               28

    As I mentioned at the top, the data isn't always consistent, so if I define which fields to grab with

        spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']])

    then it will error out on certain data grabs. This is why I'm now trying the above method, so it just grabs everything regardless of what data is there.
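
    A minimal sketch of the usual fix: collect the union of keys across all records, then let csv.DictWriter fill in blanks for missing fields (this assumes each element of data.json()['items'] is a flat dict, which the question implies but doesn't show):

        import csv

        def write_items_to_csv(items, path='stats_game_test.csv'):
            # Union of all keys seen across records, so inconsistent seasons still fit one header row.
            fieldnames = sorted({key for item in items for key in item})
            with open(path, 'w', newline='') as csvfile:
                writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
                writer.writeheader()        # first row: the keys as column headings
                writer.writerows(items)     # missing keys become empty cells

        # write_items_to_csv(data.json()['items'])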

    Read the article

  • How do I split ONE array to two separate arrays based on magnitude size and a threshold?

    - by youhaveaBigego
    I have an array which has BIG numbers and small numbers in it. I got it from a log produced by WireShark: it is the total number of bytes of TCP traffic. But Wireshark does not discriminate — it reports the traffic stats for ALL types of traffic. This is what the array looks like:

        @Array = qw(10912980 10924534 10913356 10910304 10920426 10900658 10911266 10912088
                    10928972 10914718 10920770 10897774 10934258 10882186 10874126 8531 8217
                    3876 8147 8019 68157 3432 3350 3338 3280 3280 7845 7869 3072 3002 2828
                    8397 1328 1280 1240 1194 1193 1192 1194 6440 1148 1218 4236 1161 1100
                    1102 1148 1172 6305 1010 5437 3534 4623 4669 3617 4234 959 1121 1121
                    1075 3122 3076 1020 3030 628 2938 2938 1611 1611 1541 1541 1541 1541
                    1541 1541 1541 1541 1541 1541 1541 1541 583 370 178)

    When you look at this array carefully, one thing is obvious to the human eye: there are really BIG numbers and small numbers (basically, there is the 1% class and the low-income class, and no middle class). I want to split the array into two different arrays, which requires setting a threshold. Array 1 should contain only the BIG numbers (10924534-10874126), and array 2 the smaller numbers (68157-178). By the way, the array is not sorted. The user will NOT input the threshold, so it should be determined automatically.
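
    One simple, input-free way to pick the threshold is to sort the values and split at the largest gap between neighbours; a sketch of that heuristic (the question uses Perl, and this is only one of several reasonable ways to choose the cut), shown in Python:

        def split_by_largest_gap(values):
            """Split values into (small, big) at the single largest gap between sorted neighbours."""
            s = sorted(values)
            # Index of the biggest jump between consecutive sorted values.
            gap_at = max(range(1, len(s)), key=lambda i: s[i] - s[i - 1])
            threshold = s[gap_at]
            small = [v for v in values if v < threshold]
            big = [v for v in values if v >= threshold]
            return small, big

        # Example:
        # small, big = split_by_largest_gap([10912980, 10924534, 8531, 68157, 178])
        # big -> [10912980, 10924534]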

    Read the article

  • Doing a global count of an object type (like Users), best practice?

    - by user246114
    Hi, I know keeping global counters is frowned upon in App Engine, but I am interested in getting some stats, like once every 24 hours. For example, I'd like to count the number of User objects in the system once every 24 hours. So how do we do this? Do we simply keep a set of admin-tool functions which do something like:

        SELECT FROM com.me.project.server.User

    and just check the size of the returned List? This is kind of a bummer, because the datastore would have to deserialize every single User instance to create the returned list, right? I could possibly optimize this by asking for only the keys to be returned, so the whole User object doesn't have to be deserialized. Then again, a global counter for the number of users probably wouldn't create too much contention, because there probably won't be hundreds of signups a minute for the service I'm creating. How should we go about doing this? Getting the total number of users once a day is probably a pretty typical operation. Thank you
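
    The question's code looks like Java/JDO, but purely to illustrate the keys-only idea, here is a rough sketch against the old App Engine Python db API, which could be wired to a daily cron entry (treat the model and the explicit count limit as assumptions):

        from google.appengine.ext import db

        class User(db.Model):
            pass  # real properties omitted; counting only touches keys

        def count_users():
            # A keys-only query skips deserializing the User entities themselves.
            return db.Query(User, keys_only=True).count(1000000)  # explicit upper bound on the count

        # A daily cron handler could simply store or log count_users().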

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats. Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (this runs successfully), then refresh my view of the stats, I find that:

    1) execution_count has incremented (expected)
    2) last_execution_time has updated to now (expected)
    3) last_elapsed_time is still a large value (not expected - I anticipated a new value)
    4) total_elapsed_time is unchanged (contradiction?)

    If last_elapsed_time refers to the execution that happened @ last_execution_time, then the total_elapsed_time should have increased? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), and the table has maybe 25 rows in it right now. Questions:

    1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time?
    2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...)?

    Read the article

  • No result when Rally.data.WsapiDataStore lacks permissions

    - by user1195996
    I'm calling Ext.create('Rally.data.WsapiDataStore', params) and looking for results with the load event. I'm requesting a number of objects across programs that the user may or may not have read permission for. This works fine for queries where the user has permissions, but in the case where the user does not have permission and presumably gets zero results back, the load event does not seem to fire at all. I would expect it to fire with the unsuccessful flag, or else to return with empty results. Since I don't know that the request has failed, my program waits and waits. How can I tell if this request fails because of security? BTW, looking at the network stats, I believe all my requests get a "200 OK" status back. Here is the method I use to create the various data stores:

        _createDataStore: function(params) {
            this.openRequests++;
            var createParams = {
                model: params.type,
                autoLoad: true,
                // So I can later determine which query type it is, and which program
                requestType: params.requestType == undefined ? params.type : params.requestType,
                program: this.program,
                listeners: {
                    load: this._onDataLoaded,
                    scope: this
                },
                filters: params.filters,
                pageSize: params.pageSize,
                fetch: params.fetch,
                context: {
                    project: this.project,
                    projectScopeUp: false,
                    projectScopeDown: true
                },
                pageSize: 1 // We only need the count
            };
            console.log('_createDataStore', this.program, createParams.requestType);
            Ext.create('Rally.data.WsapiDataStore', createParams);
        },

    And here is the _onDataLoaded method:

        _onDataLoaded: function(store, data, successB) {
            console.log('_onDataLoaded', this.program, successB);
            ...

    I only see this function called for those queries for which the account has permissions.

    Read the article

  • When my object is no longer referenced why do its events continue to run?

    - by Ryan Peschel
    Say I am making a game and have a base Buff class. I may also have a subclass named ThornsBuff which subscribes to the Player class's Damaged event. The Player may have a list of Buffs, and one of them may be the ThornsBuff. For example:

        // Test class
        Player player = new Player();
        player.ActiveBuffs.Add(new ThornsBuff(player));

        // ThornsBuff class
        public ThornsBuff(Player player)
        {
            player.DamageTaken += player_DamageTaken;
        }

        private void player_DamageTaken(MessagePlayerDamaged m)
        {
            m.Assailant.Stats.Health -= (int)(m.DamageAmount * .25);
        }

    This is all to illustrate an example. If I remove the buff from the list, the event is not detached. Even though the player no longer has the buff, the handler still executes as if he did. Now, I could have a Dispel method to unregister the event, but that forces the developer to call Dispel in addition to removing the Buff from the list. What if they forget? Increased coupling, etc. What I don't understand is why the event isn't detached automatically when the Buff is removed from the list. The Buff only existed in that list, and that was its one reference. After it is removed, shouldn't the event be detached? I tried adding the detaching code to the finalizer of the Buff, but that didn't fix it either. The event handler still runs even after the buff has 0 references. I suppose that is because the garbage collector hasn't run yet? Is there any way to make it automatic and instant, so that when the object has no references all its events are unregistered? Thanks.

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding indexes. I made use of the excellent T-SQL script from MSDN that runs ALTER INDEX based on the fragmentation percent returned by dm_db_index_physical_stats, doing a REBUILD or a REORGANIZE depending on whether the fragmentation is more than 30 percent. What I found was that in the first iteration there were 87 records which needed defragmenting. I ran the script, and all 87 indexes (clustered & nonclustered) were rebuilt or reorganized. When I got the stats from dm_db_index_physical_stats again, there were still 27 records which needed defragmenting, and all of these were NONCLUSTERED indexes; all the clustered indexes were fixed. No matter how many times I run the script to defragment these records, I still have the same indexes to defragment, most of them with the same fragmentation %. Nothing seems to change after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations; still, the rebuild/reorganize did not result in any change. More information: using SQL Server 2008, script as available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx. Could you please explain why these 27 records of nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated. Nod

    Read the article

  • Cacti: "An internal Net-Snmp error condition detected in Cacti snmp_count"

    - by Recc
    There's the odd forum topic about errors similarly obscure to this one, but I haven't seen any for snmp_count in particular. I don't see graphing problems either, though I can't simply go and eyeball all the graphs. However, the poller does time out and has to be stopped by its internal process preventing overruns. If I filter the flood of this error out of the log, I don't get anything else except the poller timeout:

        06/12/2014 12:48:00 PM - POLLER: Poller[0] Maximum runtime of 58 seconds exceeded. Exiting.
        06/12/2014 12:48:00 PM - SYSTEM STATS: Time:58.8566 Method:spine Processes:1 Threads:40 Hosts:1923 HostsPerProcess:1923 DataSources:61584 RRDsProcessed:0
        06/12/2014 12:48:00 PM - SPINE: Poller[0] ERROR: Spine Timed Out While Processing Hosts Internal

    In the running processes I saw /usr/local/spine/spine 0 2053, which is always left behind; when I kill it, the flooding of the error stops. Of course it's the same on the next poll run as it goes through the devices. 2053 is apparently the DB ID for a device. I deleted it completely to see if that stops it. It doesn't; instead 2052 is seen there. I suspect it will be the same if I keep deleting devices, which I will not do. This started happening midday when I wasn't doing anything to the Cacti server. I have tried reducing Maximum Threads per Process to 1 and Number of PHP Script Servers to 1. I've been running it at 10 script servers / 40 threads for months, with a poll cycle time of about 20 seconds. I just found out that running snmpwalk on any host begins returning values but then times out halfway through. This doesn't happen from different servers on the network, which still suggests it's a problem local to this Cacti machine. Any suggestions? For one polling cycle I changed to use cmd.php instead; then I started getting errors like:

        CMDPHP: Poller[0] Host[45] DS[541] WARNING: Result from SNMP not valid. Partial Result: U

    Perhaps as expected. Looking closely, I see that every snmpwalk I do is interrupted at the same place, as if some byte limit is hit and the connection is torn down.

    Read the article

  • Why won't ruby recognize Haml under ubuntu64 while using jekyll static blog generator?

    - by oldmanjoyce
    I have been trying, quite unsuccessfully, to run henrik's fork of the jekyll static blog generator on Ubuntu 64-bit. I just can't seem to figure this out and I've tried a bunch of different things. Originally I posted this over at Stack Overflow, but this is probably the better spot for it. The base stats of my machine: Ubuntu 9.04, 64 bit, ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux], rubygems 1.3.1. When I attempt to build the site, this is what happens:

        $ jekyll --pygments
        Configuration from ./_config.yml
        Using Sass for CSS generation
        You must have the haml gem installed first
        Using rdiscount for Markdown
        Building site: . -> ./_site
        /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/core_ext.rb:27:in `method_missing': undefined method 'header' for #, page=# ..... cut ..... (NoMethodError)
                from (haml):9:in `render'
                from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'render'
                from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'instance_eval'
                from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'render'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:72:in 'render_haml_in_context'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:105:in 'do_layout'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/post.rb:226:in 'render'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:172:in 'read_posts'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in 'each'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in 'read_posts'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:210:in 'transform_pages'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:126:in 'process'
                from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/jekyll:135
                from /home/chris/.gem/bin/jekyll:19:in `load'
                from /home/chris/.gem/bin/jekyll:19

    (I added spaces around the object dump to enable better visibility - sorry that my inline html/formatting isn't perfect.)

        $ gem list

        *** LOCAL GEMS ***

        actionmailer (2.3.4)
        actionpack (2.3.4)
        activerecord (2.3.4)
        activeresource (2.3.4)
        activesupport (2.3.4)
        classifier (1.3.1)
        directory_watcher (1.2.0)
        haml (2.2.3)
        haml-edge (2.3.27)
        henrik-jekyll (0.5.2)
        liquid (2.0.0)
        maruku (0.6.0)
        open4 (0.9.6)
        rack (1.0.0)
        rails (2.3.4)
        rake (0.8.7)
        rdiscount (1.3.5)
        RedCloth (4.2.2)
        stemmer (1.0.1)
        syntax (1.0.0)

    Some output for path verification:

        $ echo $PATH
        /home/chris/.gem/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
        $ which haml
        /home/chris/.gem/bin/haml
        $ which jekyll
        /home/chris/.gem/bin/jekyll

    Read the article
