Search Results

Search found 7473 results on 299 pages for 'usage statistics'.

Page 160/299

  • Load balancing and scheduling algorithms (.NET)

    - by Lukas Šalkauskas
    Hello there, so here is my problem: I have several servers with different configurations. I have different calculations (jobs); I can predict approximately how long each job will take to be calculated. I also have priorities. My question is how to keep all machines loaded at 99-100% and schedule the jobs in the best way. Each machine can do several calculations at a time. Jobs are pushed to the machines. A central machine knows the current load of each machine. I would also like to apply some kind of machine learning here, because I will know statistics for each job (started, finished, CPU load, etc.). How do I distribute jobs (calculations) in the best possible way, keeping priority in mind? Any suggestions? Ideas? Algorithms?

    Read the article

  • C++ option/long option implementation

    - by K Hein
    I am working on a parser for a metaprogramming language, using C++ on the Linux platform. Right now, I need to implement option/long option handling for the parser to provide some additional features. Basically, if the user passes in some additional option, the parser needs to store statistics while parsing the text files. I can think of two ways to implement it. One way is to use a global to store the options entered by the user. Another way is to create a singleton class to store the options. So I would like to know if there is any other way to implement it. What is the best/most recommended way of implementing it? Thanks in advance. Regards, K.Hein

    Read the article

  • MySQL: filling empty fields with zeroes when using GROUP BY

    - by SaltLake
    I've got the MySQL table

        CREATE TABLE cms_webstat (
            ID int NOT NULL auto_increment PRIMARY KEY,
            TIMESTAMP_X timestamp DEFAULT CURRENT_TIMESTAMP,
            # ... some other fields ...
        )

    which contains statistics about site visitors. For getting visits per hour I use

        SELECT hour(TIMESTAMP_X) as HOUR, count(*) AS HOUR_STAT
        FROM cms_webstat
        GROUP BY HOUR
        ORDER BY HOUR DESC

    which gives me

        | HOUR | HOUR_STAT |
        |   24 |        15 |
        |   23 |        12 |
        |   22 |         9 |
        |   20 |         3 |
        |   18 |         2 |
        |   15 |         1 |
        |   12 |         3 |
        |    9 |         1 |
        |    3 |         5 |
        |    2 |         7 |
        |    1 |         9 |
        |    0 |        12 |

    And I'd like to get the following:

        | HOUR | HOUR_STAT |
        |   24 |        15 |
        |   23 |        12 |
        |   22 |         9 |
        |   21 |         0 |
        |   20 |         3 |
        |   19 |         0 |
        |   18 |         2 |
        |   17 |         0 |
        |   16 |         0 |
        |   15 |         1 |
        |   14 |         0 |
        |   13 |         0 |
        |   12 |         3 |
        |   11 |         0 |
        |   10 |         0 |
        |    9 |         1 |
        |    8 |         0 |
        |    7 |         0 |
        |    6 |         0 |
        |    5 |         0 |
        |    4 |         0 |
        |    3 |         5 |
        |    2 |         7 |
        |    1 |         9 |
        |    0 |        12 |

    How should I modify the query to get such a result? Thanks.
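
    In SQL itself this kind of zero-filling is usually done by LEFT JOINing the grouped counts against a full list of hours; purely to illustrate the desired output, here is a client-side sketch in Python (my own illustration, not the query modification being asked for, with the hour counts taken from the result above):

        # hour -> count, as returned by the GROUP BY query above
        rows = {24: 15, 23: 12, 22: 9, 20: 3, 18: 2, 15: 1,
                12: 3, 9: 1, 3: 5, 2: 7, 1: 9, 0: 12}
        # fill every missing hour with 0, mirroring the desired 24..0 output
        for hour in range(24, -1, -1):
            print(hour, rows.get(hour, 0))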

    Read the article

  • Compute mean in Python for a generator

    - by nmaxwell
    Hi, I'm doing some statistics work. I have a (large) collection of random numbers to compute the mean of, and I'd like to work with generators, because I just need to compute the mean, so I don't need to store the numbers. The problem is that numpy.mean breaks if you pass it a generator. I can write a simple function to do what I want, but I'm wondering if there's a proper, built-in way to do this? It would be nice if I could say "sum(values)/len(values)", but len doesn't work for generators, and sum will already have consumed the values. Here's an example:

        import numpy

        def my_mean(values):
            n = 0
            Sum = 0.0
            try:
                while True:
                    Sum += next(values)
                    n += 1
            except StopIteration:
                pass
            return float(Sum)/n

        X = [k for k in range(1,7)]
        Y = (k for k in range(1,7))
        print numpy.mean(X)
        print my_mean(Y)

    These both give the same, correct, answer, but my_mean doesn't work for lists, and numpy.mean doesn't work for generators. I really like the idea of working with generators, but details like this seem to spoil things. Thanks for any help -nick
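
    A minimal sketch of one way to cover both inputs with a single function (my own illustration, not part of the original post): calling iter() first makes lists and generators look the same, so one single-pass loop handles both.

        def stream_mean(values):
            """One-pass mean over any iterable (list, generator, ...)."""
            it = iter(values)   # a list becomes an iterator here, so both inputs work
            total = 0.0
            count = 0
            for x in it:
                total += x
                count += 1
            return total / count

    With this, stream_mean(X) and stream_mean(Y) from the example above give the same result.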

    Read the article

  • Best way to collect and store data daily?

    - by mktb
    I have a bunch of statistics: # of users, # of families, ratio of users to families, etc. I'd like to store these daily so I can view the data historically. However, I'm looking for the most effective way to store it. Should I run a cron job that writes a single row to the database, like DATE: today, USERS: 123, FAMILIES: 456, RATIO: 7.89, or whatever? Or should I write multiple rows, like DATE: today, DATATYPE: users, VALUE: 123? Or is there another option I can use that is more efficient or more effective? Thanks!
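
    To make the second (one-row-per-metric) layout concrete, here is a small sketch; the use of Python with sqlite3 and the table/column names are my own assumptions, purely for illustration.

        import sqlite3
        from datetime import date

        conn = sqlite3.connect("stats.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS daily_stats (
                            stat_date TEXT,
                            datatype  TEXT,
                            value     REAL,
                            PRIMARY KEY (stat_date, datatype))""")
        today = date.today().isoformat()
        snapshot = {"users": 123, "families": 456, "ratio": 7.89}  # figures from the question
        conn.executemany(
            "INSERT OR REPLACE INTO daily_stats (stat_date, datatype, value) VALUES (?, ?, ?)",
            [(today, k, v) for k, v in snapshot.items()])
        conn.commit()

    The long layout makes it easy to add new metrics later without changing the schema, at the cost of slightly more verbose reporting queries.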

    Read the article

  • Load balancing and scheduling algorithms.

    - by Lukas Šalkauskas
    Hello there, so here is my problem: I have several servers with different configurations. I have different calculations (jobs); I can predict approximately how long each job will take to be calculated. Also, I have priorities. My question is how to keep all machines loaded at 99-100% and schedule the jobs in the best way. Each machine can do several calculations at a time. Jobs are pushed to the machine. The central machine knows the current load of each machine. Also, I would like to assign some kind of machine learning here, because I will know statistics of each job (started, finished, CPU load, etc.). How can I distribute jobs (calculations) in the best possible way, keeping in mind the priorities? Any suggestions, ideas, or algorithms? FYI: my platform is .NET.
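
    As a rough baseline illustration (my own sketch in Python rather than .NET; the job tuple layout and function name are assumptions, not from the post): sort jobs by priority and predicted duration, and always push the next job onto the machine with the smallest predicted load.

        import heapq

        def assign_jobs(jobs, machine_count):
            """Greedy baseline: highest-priority (then longest) job goes to the
            machine with the smallest predicted total load.

            jobs: iterable of (priority, predicted_seconds, job_id), higher priority first.
            Returns a dict machine_index -> list of job_ids.
            """
            loads = [(0.0, m) for m in range(machine_count)]   # (predicted load, machine)
            heapq.heapify(loads)
            assignment = {m: [] for m in range(machine_count)}
            for priority, seconds, job_id in sorted(jobs, key=lambda j: (-j[0], -j[1])):
                load, m = heapq.heappop(loads)                 # least-loaded machine
                assignment[m].append(job_id)
                heapq.heappush(loads, (load + seconds, m))
            return assignment

    The per-job statistics mentioned in the post (actual run time, CPU load) could then be fed back to refine the predicted durations, which is where the machine-learning part would come in.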

    Read the article

  • Integrating Hudson with MS Test?

    - by hangy
    Is it possible to integrate Hudson with MSTest? I am setting up a small CI server on my development machine with Hudson right now, just so that I can have some statistics (i.e. FxCop and compiler warnings). Of course, it would also be nice if it could just run my unit tests and present their output. Up to now, I have added the following batch task to Hudson, which makes it run the tests properly:

        "%PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /runconfig:LocalTestRun.testrunconfig /testcontainer:Tests\bin\Debug\Tests.dll

    However, as far as I know, Hudson does not yet support analysis of MSTest results. Does anyone know whether the TRX files generated by MSTest.exe can be transformed to the JUnit or NUnit result format (because those are supported by Hudson), or whether there is any other way to integrate MSTest unit tests with Hudson?

    Read the article

  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion.

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
            where f1 = filter g [1..(quot n 4)]
                  f2 = filter g [(quot n 4)+1..(quot n 2)]
                  g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I've compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with findDivisors.exe +RTS -s -N2 -RTS. I have found a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
            where g z = n `rem` z == 0

    My processor is a dual-core Core 2 Duo from Intel. I was wondering if there can be any improvement in the above code, because the statistics the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)
        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are these converted, overflowed, dud, GC'd and fizzled sparks, and how can they help to improve the time?

    Read the article

  • Incremental compilation in Eclipse: ASTNodes and SVN versioning

    - by Alex
    Hi there, I am building up some statistics after analyzing the source code in Eclipse, but the overall process is too slow because I rebuild my model from scratch after each compilation. I am looking for a way to get only the changed parts of the code (as ASTNodes) and to rebuild just that part of my model. I suppose that even the changed compilation units, rather than the exact code elements, would be enough after the user compiles, and would still be a nice optimization. I am sure Eclipse is capable of knowing which code elements have changed (and even of knowing their semantics), because when I use the Subclipse plugin my changes are ordered by code element (an import, a method, a variable declaration, etc.). Well... at least that plugin is capable of knowing that info. Thanks in advance

    Read the article

  • Efficient alternatives to merge for larger data.frames in R

    - by Etienne Low-Décarie
    I am looking for an efficient (both computer-resource-wise and learning/implementation-wise) method to merge two larger (size 1 million / 300 KB RData file) data frames. "merge" in base R and "join" in plyr appear to use up all my memory, effectively crashing my system. Example: load the test data frame and try test.merged <- merge(test, test) or test.merged <- join(test, test, type="all"). The following post provides a list of merge and alternatives: How to join data frames in R (inner, outer, left, right)? The following allows object size inspection: https://heuristically.wordpress.com/2010/01/04/r-memory-usage-statistics-variable/ Data produced by anonym

    Read the article

  • Popular open source projects using Zend Framework

    - by Alexander
    Hello, I am trying to find an open source project based on Zend Framework - something well written and as popular as Wordpress or Drupal - to see the actual benefits of ZF, as well as possibly use it as an example. The only 'showcase' I managed to find is http://framework.zend.com/wiki/pages/viewpage.action?pageId=14134, but this list looks confusing for the 'official' PHP framework. The same goes for ZF statistics by the numbers (http://framework.zend.com/about/numbers) - 10 million downloads against 400 actual projects, which is less than the 500 examples in the user guide... Also, Yahoo chose Symfony for their bookmarks service, not ZF... Am I missing something? Thank you!

    Read the article

  • BufferedReader ready method in a while loop to determine EOF?

    - by BobTurbo
    I have a large file (the Wikipedia English-articles-only database as an XML file) which I am reading one character at a time using BufferedReader. The pseudocode is:

        file = BufferedReader...
        while (file.ready())
            character = file.read()

    Is this actually valid? Or will ready() just return false when it is waiting for the HDD to return data, rather than only when EOF has been reached? I tried to use if (file.read() == -1) but seemed to run into an infinite loop that I literally could not find. I am just wondering if it is reading the whole file, as my statistics say 444 380 Wikipedia pages have been read, but I thought there were many more articles.
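
    The usual idiom is to let the read call itself signal end of file: in Java that means looping on the int returned by read() and stopping at -1. A tiny sketch of the same idea (in Python, only for illustration; the file name is a placeholder):

        with open("enwiki-articles.xml", "r", encoding="utf-8") as f:   # placeholder name
            while True:
                ch = f.read(1)
                if not ch:       # an empty string means end of file
                    break
                # ... process ch here ...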

    Read the article

  • Get local network interface addresses using only proc?

    - by Matt Joiner
    How can I obtain the (IPv4) addresses for all network interfaces using only proc? After some extensive investigation I've discovered the following: ifconfig makes use of SIOCGIFADDR, which requires open sockets and advance knowledge of all the interface names. It also isn't documented in any manual pages on Linux. proc contains /proc/net/dev, but this is a list of interface statistics. proc contains /proc/net/if_inet6, which is exactly what I need but for IPv6. Generally interfaces are easy to find in proc, but actual addresses are very rarely used except where explicitly part of some connection. There's a system call called getifaddrs, which is very much a "magical" function you'd expect to see in Windows. It's also implemented on BSD. However it's not very text-oriented, which makes it difficult to use from non-C languages.

    Read the article

  • C++ math evaluating incorrectly

    - by Hayden
    I thought I could make life a little easier in data statistics by writing a small program that returns the results of a sampling distribution of the mean (with standard error). It does this part successfully, but in an attempt to return the z-score using the formula I found here, it returns -1.#IND. My interpretation of that formula is:

        (1 / (sqrt(2*pi) * stdev)) * pow(e, normalpow)

    where

        double normalpow = -0.5 * ((mean - popmean) * (mean - popmean) / stdev);

    I did a little more investigating and found that (mean - popmean) * (mean - popmean) was evaluating to 0 no matter what. How can I get around this problem of normalpow evaluating to 0?
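
    For reference, a small floating-point sketch of the Gaussian density this formula appears to be aiming at (my own illustration in Python rather than the post's C++; it assumes mu and sigma are the mean and standard deviation, so the exponent divides by sigma squared):

        import math

        def normal_pdf(x, mu, sigma):
            """Gaussian density, kept entirely in floating point."""
            coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
            return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))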

    Read the article

  • Is this an injection attempt or a normal request?

    - by CheeseConQueso
    In cPanel's Analog Stats statistics module, I've noticed countless requests to connect to the following example:

        /?x=19&y=15

    The numbers are random, but it's always setting x and y variables. Another category of mysterious requests:

        /?id=http://nic.bupt.edu.cn/media/j1.txt??

    There are other attempts at injections in the request log that have straight SQL written into them as well. Example:

        /jobs/jobinfo.php?id=-999.9 UNION ALL SELECT 1,(SELECT concat(0x7e,0x27,count(table_name),0x27,0x7e) FROM information_schema.tables WHERE table_schema=0x73636363726F6F745F7075626C6963),3,4,5,6,7,8,9,10,11,12,13--

    It looks like they are all reaching a 404, but I'm still wondering about the intent behind these. I know this is vague, but maybe someone knows whether this is normal while using cPanel & phpMyAdmin services. Also, there was a search box installed on the site, which could be the reason. Any suggestions as to what all these are?

    Read the article

  • Less Mathematical Approaches to Machine Learning?

    - by Ed
    Out of curiosity, I've been reading up a bit on the field of Machine Learning, and I'm surprised at the amount of computation and mathematics involved. One book I'm reading through uses advanced concepts such as Ring Theory and PDEs (note: the only thing I know about PDEs is that they use that funny-looking character). This strikes me as odd, considering that mathematics itself is a hard thing to "learn." Are there any branches of Machine Learning that use different approaches? I would think that approaches relying more on logic, memory, construction of unfounded assumptions, and over-generalizations would be a better way to go, since that seems more like the way animals think. Animals don't (explicitly) calculate probabilities and statistics; at least not as far as I know.

    Read the article

  • SAS (Statistical Analysis System) career as a computer science student

    - by Renju
    Hi. I completed an MSc in Computer Science this academic year, so I am a fresher. While doing my graduation and post-graduation I did many projects using PHP and MySQL. Now I have an opportunity to get into a SAS (Statistical Analysis System) career, and I have heard that SAS has better career growth than PHP development. For the past 4 days I have been working with SAS, and I find the work interesting. My questions are: Since I am a computer science student, will I have any problem with my career growth in SAS? I am ready to learn statistics as well; is there anything else I have to do? Will doing a certification in SAS make any difference? Is it a bad idea to get into SAS with only a CS background? Please guide me on my career...

    Read the article

  • Data-driven charts and graphs from XML to SVG

    - by garymlewis
    I asked this question a week ago, but did not do a good job of describing the problem; here's a second attempt. I'd like to produce data-driven charts, graphs, and other data visualizations, starting with data in an XML database and ending up with the visualizations as SVG. Here's an example from the W3C. It uses JavaScript to create a stacked bar chart as SVG from XML. I'd like to do something similar, but use a graphics library (or ???) instead of JS to handle the construction of axes, labels, titles, data points, etc. My question, then: what are the options I should consider? Things like Raphael, I suppose, but initially I'd like to cast a wide net and look at many different options. My experience is all with static data visualizations using statistics packages like R, but eventually I'd like to create interactive data visualizations with HTML5/CSS3/SVG. Any help would be much appreciated. Thanks.

    Read the article

  • Copy of Access mdb database being updated by live database

    - by James
    I'm trying to compute statistics for data held in an Access .mdb database. In order to avoid interfering with the live database, I'm working from a copy, which I made by simply using copy-paste in Windows Explorer. The copy resides in the same directory, but with a different name. I'm using R and RODBC to connect to the copy of the file. The strange thing is that new data being added to the original live database is appearing in my queries, despite the file timestamps of the copy not changing at all. It is also causing some slowdown in the live database. My understanding is that .mdb files are standalone - or is this not the case? Should I have copied the database in a different way?

    Read the article

  • How to analyse Wikipedia's article database with R?

    - by Tal Galili
    Hi all, this is a "big" question that I don't know how to start, so I hope some of you can give me a direction. And if this is not a "good" question, I will close the thread with an apology. I wish to go through the database of Wikipedia (let's say the English one) and do statistics. For example, I am interested in how many active editors (which should be defined) Wikipedia had at each point in time (let's say over the last 2 years). I don't know how to build such a database, how to access it, how to know which types of data it has, and so on. So my questions are: What tools do I need for this (besides basic R)? MySQL on my computer? An RODBC database connection? How do you start planning such a project?

    Read the article

  • Web and stand-alone apps development tendency

    - by Narek
    There is a strong tendency toward making web apps, and it even seems that very soon a lot of features will be available online, so that for everyday use people will have all the necessary software free online and will not need to install any software locally. Only specific (professional) tools that people don't usually use at home will not be available as web apps. So my question: how do you imagine selling software that was necessary for everyday use and was not free (it seems vendors can't make money any more by selling such products - there is no need for them)? And what disadvantages do web apps have; that is to say, what is bad about using software online compared with having the same software locally (please list)? Please do not consider this question unrelated to programming, as I really would like to have a little statistics from professional programmers who are aware of today's tendencies in software and programming. Thanks.

    Read the article

  • Determining where to animate the map given a list of GeoPoints

    - by Itsik
    My Android application loads some markers on an overlay onto a MapView. The markers are placed based on a dynamic list of GeoPoints. I want to move the map center and zoom into the area with the most items. Naively, I can calculate the superposition of all the points, but I would like to exclude from the calculation the points that are very far from the mass of points. Is there a known way to calculate this? (e.g. probability, statistics... ?)
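
    One statistics-based approach, as a rough sketch (my own illustration in Python rather than Android/Java; the threshold k and the function name are assumptions): compute the centroid, drop points more than k standard deviations away in either coordinate, and recompute the center from what remains.

        import statistics

        def robust_center(points, k=2.0):
            """Center of a list of (lat, lng) pairs, ignoring far-away outliers."""
            lats = [p[0] for p in points]
            lngs = [p[1] for p in points]
            lat_mu, lng_mu = statistics.mean(lats), statistics.mean(lngs)
            lat_sd = statistics.pstdev(lats) or 1e-9    # guard against zero spread
            lng_sd = statistics.pstdev(lngs) or 1e-9
            kept = [(la, lo) for la, lo in points
                    if abs(la - lat_mu) <= k * lat_sd and abs(lo - lng_mu) <= k * lng_sd]
            kept = kept or points                       # fall back if everything was dropped
            return (statistics.mean(p[0] for p in kept),
                    statistics.mean(p[1] for p in kept))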

    Read the article

  • Why might someone say R is *NOT* a programming language? [closed]

    - by Tal Galili
    I came across the following comment today on Twitter: "R is not a programming language, it's a statistics package with the GUI missing." And I am wondering - why not? What is "missing" in R that would make it a "programming language"? Update: For the record, I am a big fan of R, use it daily, and support its existence. I have now changed the name of this thread from "Why is R NOT a programming language?" to "Why might someone say R is NOT a programming language?", which better reflects my motivation for this thread (which is to know whether R has any programmatic disadvantages that I might not have heard about).

    Read the article

  • PHP image resize on the fly vs storing resized images

    - by Pablo
    I'm building an image sharing site and would like to know the pros and cons of resizing images on the fly with PHP versus storing the resized images. Which is faster? Which is more reliable? How big is the gap between the two methods in speed and performance? Please note that either way the images go through a PHP script, for statistics like views or to check whether hotlinking is allowed, etc., so there will not be a direct link to the images even if I opt to store the resized images. I'd appreciate your comments or any helpful links on the subject. Thanks.

    Read the article

  • CC.NET File merge task and dynamic values

    - by ccnet
    How can I use CCNetLabel in the file merge task? From what I have found, I have to use dynamicValues. I have something like this, and it is not working - any help?

        <publishers>
          <merge>
            <dynamicValues>
              <replacementValue property="files">
                <format>D:\Testoutput\{0}\*.xml</format>
                <parameters>
                  <namedValue name="$CCNetLabel" value="Default" />
                </parameters>
              </replacementValue>
            </dynamicValues>
          </merge>
          <xmllogger />
          <modificationHistory onlyLogWhenChangesFound="true" />
          <statistics />
        </publishers>

    Read the article
