Search Results

Search found 2042 results on 82 pages for 'average'.

Page 67/82 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Foosball result prediction

    - by Wolf
    In our office, we regularly enjoy some rounds of foosball / table football after work. I have put together a small Java program that generates random 2-vs-2 lineups from the available players and stores the match results in a database afterwards. The current prediction of the outcome uses a simple average of all previous match results from the 4 involved players. This gives a very rough estimation, but I'd like to replace it with something more sophisticated that takes into account things like:
    - players may be good playing as attacker but bad as defender (or vice versa)
    - players do well against a specific opponent / badly against others
    - some teams work well together, others don't
    - skills change over time
    What would be the best algorithm to predict the game outcome as accurately as possible? Someone suggested using a neural network for this, which sounds quite interesting... but I do not have enough knowledge on the topic to say whether that could work, and I also suspect it might take too many games to be reasonably trained. EDIT: Had to take a longer break from this due to some project deadlines. To make the question more specific: given the following MySQL table containing all matches played so far:

        table match_result
            match_id      int pk
            match_start   datetime
            duration      int   -- match length in seconds
            blue_defense  int   -- fk to table player
            blue_attack   int   -- fk to table player
            red_defense   int   -- fk to table player
            red_attack    int   -- fk to table player
            score_blue    int
            score_red     int

    how would you write a function predictResult(blueDef, blueAtk, redDef, redAtk) {...} to estimate the outcome as closely as possible, executing any SQL, doing calculations or using external libraries?
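
    One possible refinement, sketched in Python (an approach, not the thread's accepted answer): keep an Elo-style rating per player per role, so attack and defense skill are tracked separately, and update after every match so that skill drift over time is captured automatically. K and the 400-point logistic scale are conventional Elo constants, used here as illustrative assumptions; player IDs refer to the schema above.

        from collections import defaultdict

        K = 24                                    # illustrative update strength
        ratings = defaultdict(lambda: 1500.0)     # key: (player_id, role)

        def team_rating(defense, attack):
            return ratings[(defense, "defense")] + ratings[(attack, "attack")]

        def predict_result(blue_def, blue_atk, red_def, red_atk):
            """Expected probability that the blue team wins."""
            diff = team_rating(blue_def, blue_atk) - team_rating(red_def, red_atk)
            return 1.0 / (1.0 + 10 ** (-diff / 400.0))

        def update_ratings(blue_def, blue_atk, red_def, red_atk, score_blue, score_red):
            expected = predict_result(blue_def, blue_atk, red_def, red_atk)
            actual = 1.0 if score_blue > score_red else 0.0
            delta = K * (actual - expected)
            for player, role in ((blue_def, "defense"), (blue_atk, "attack")):
                ratings[(player, role)] += delta      # winners gain what losers lose
            for player, role in ((red_def, "defense"), (red_atk, "attack")):
                ratings[(player, role)] -= delta

    Replaying the match_result table in match_start order seeds the ratings; teammate-chemistry or head-to-head terms could be layered on in the same style.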

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this:

        No  Start Time    End Time      CallType  Info
        1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
        2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
        3   13:19:12.754  13:19:14.757  Ping3_1   RTT(Avr):620ms
        3   13:19:12.754                Ping3_2   RTT(Avr):210ms
        4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
        5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
        6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms
        ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, take the average of those two rows and export it as one row, so the end result would be like this:

        No  Start Time    End Time      CallType  Info
        1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
        2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
        3   13:19:12.754  13:19:14.757  Ping3     RTT(Avr):415ms
        4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
        5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
        6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms

    Currently I am concatenating columns 0 and 1 to make a unique key, finding duplicates there and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I just wonder what a better way to do it would be. Thanks!
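
    One tidier sketch: group rows on the numeric "No" column, average the RTTs of the parallel pings, and emit one merged row per group. The row layout (no, start, end, call_type, rtt_ms) is an assumption taken from the sample above.

        from collections import defaultdict

        def merge_pings(rows):
            """rows: (no, start, end, call_type, rtt_ms); merge rows sharing 'no'."""
            groups = defaultdict(list)
            for row in rows:
                groups[row[0]].append(row)
            merged = []
            for no in sorted(groups):
                group = groups[no]
                avg_rtt = sum(r[4] for r in group) / len(group)
                end = max((r[2] for r in group if r[2]), default=group[0][2])
                name = group[0][3].split("_")[0]     # Ping3_1, Ping3_2 -> Ping3
                merged.append((no, group[0][1], end, name, avg_rtt))
            return merged

        rows = [
            (3, "13:19:12.754", "13:19:14.757", "Ping3_1", 620),
            (3, "13:19:12.754", None, "Ping3_2", 210),
        ]
        print(merge_pings(rows))
        # [(3, '13:19:12.754', '13:19:14.757', 'Ping3', 415.0)]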

    Read the article

  • Efficient method to calculate the rank vector of a list in Python

    - by Tamás
    I'm looking for an efficient way to calculate the rank vector of a list in Python, similar to R's rank function. In a simple list with no ties between the elements, element i of the rank vector of a list l should be x if and only if l[i] is the x-th element in the sorted list. This is simple so far; the following code snippet does the trick:

        def rank_simple(vector):
            return sorted(range(len(vector)), key=vector.__getitem__)

    Things get complicated, however, if the original list has ties (i.e. multiple elements with the same value). In that case, all the elements having the same value should have the same rank, which is the average of their ranks obtained using the naive method above. So, for instance, if I have [1, 2, 3, 3, 3, 4, 5], the naive ranking gives me [0, 1, 2, 3, 4, 5, 6], but what I would like to have is [0, 1, 3, 3, 3, 5, 6]. What would be the most efficient way to do this in Python? Footnote: I don't know if NumPy already has a method to achieve this or not; if it does, please let me know, but I would be interested in a pure Python solution anyway, as I'm developing a tool which should work without NumPy as well.
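
    A pure-Python sketch of that averaging rule (zero-based, to match the example above). For the footnote: SciPy ships this as scipy.stats.rankdata, which averages tied ranks by default, though it is 1-based.

        def rank_average(vector):
            order = sorted(range(len(vector)), key=vector.__getitem__)
            ranks = [0.0] * len(vector)
            i = 0
            while i < len(order):
                j = i
                # extend j over the run of equal values starting at i
                while j + 1 < len(order) and vector[order[j + 1]] == vector[order[i]]:
                    j += 1
                avg = (i + j) / 2.0           # average of the tied positions
                for k in range(i, j + 1):
                    ranks[order[k]] = avg
                i = j + 1
            return ranks

        print(rank_average([1, 2, 3, 3, 3, 4, 5]))
        # [0.0, 1.0, 3.0, 3.0, 3.0, 5.0, 6.0]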

    Read the article

  • Long text input from user and PDF generation

    - by Petteri Hietavirta
    I have built a web application that can be seen as an overcomplicated application form. There are a bunch of text areas, each with a given character limit. After the form submission various things happen, one of them being PDF generation. The text is queried from the DB and inserted into the PDF template created in iReports. This works fine, but the major pain is overflowing text. The maximum number of characters is set based on 'average' text, but sometimes people prefer to write in CAPS or add plenty of linefeeds to format their text. These then cause the user's text to overflow the space given in the PDF. Unfortunately the PDF document must look like a real application form, so I cannot allow unlimited space. What kinds of approaches have you used to tackle this? Clean/restrict user input? Calculate the space requirement of the text based on font metrics? Provide a preview of the PDF? (Too bad users are not allowed to change their input after submission...)
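
    For the font-metrics option, the overflow check can run server-side before the PDF is ever produced. A minimal sketch assuming ReportLab is available for measuring; the font, size and box dimensions are placeholders, and it approximates wrapping by raw width, ignoring word boundaries.

        import math
        from reportlab.pdfbase.pdfmetrics import stringWidth

        def fits_in_box(text, box_width_pt, max_lines, font="Helvetica", size=10):
            """Rough check: does the text fit a box max_lines tall?"""
            lines = 0
            for paragraph in text.split("\n"):    # every linefeed starts a new line
                width = stringWidth(paragraph, font, size)
                lines += max(1, int(math.ceil(width / box_width_pt)))
            return lines <= max_lines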

    Read the article

  • SQL command to get the field of a maximum value without making two selects

    - by António Capelo
    I'm starting to learn SQL and I'm working on this exercise: I have a "books" table which holds the info on every book (including price and genre ID). I need to get the name of the genre which has the highest average price. I suppose that I first need to group the prices by genre and then retrieve the name of the highest. I know that I can get the GENRE vs COST results with the following:

        select b.genre, round(avg(b.price), 2) as cost
        from books b
        group by b.genre;

    My question is: to get the genre with the highest average price from that result, do I have to write

        select aux.genre
        from (select b.genre, round(avg(b.price), 2) as cost
              from books b
              group by b.genre) aux
        where aux.cost = (select max(aux.cost)
                          from (select b.genre, round(avg(b.price), 2) as cost
                                from books b
                                group by b.genre) aux);

    Is that bad practice, or isn't there another way? I get the correct result, but I'm not comfortable with writing the same selection twice. I'm not using PL/SQL, so I can't use variables or anything like that. Any help will be appreciated. Thanks in advance!
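
    One way to avoid repeating the subquery is to sort the grouped averages and keep only the first row. A sketch using Python's stdlib sqlite3 so it's runnable; in MySQL/PostgreSQL/SQLite the suffix is LIMIT 1, in newer Oracle FETCH FIRST 1 ROW ONLY. Note that this keeps one arbitrary winner if two genres tie; a RANK() window function would be needed to keep them all.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE books (genre TEXT, price REAL);
            INSERT INTO books VALUES ('sf', 10), ('sf', 20), ('poetry', 30);
        """)
        row = conn.execute("""
            SELECT genre, ROUND(AVG(price), 2) AS cost
            FROM books
            GROUP BY genre
            ORDER BY cost DESC          -- highest average first
            LIMIT 1                     -- keep only that row
        """).fetchone()
        print(row)   # ('poetry', 30.0)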

    Read the article

  • Memory problems while code is running (Python, Networkx)

    - by MIN SU PARK
    I wrote some code to generate a graph with 379613734 edges, but it couldn't finish because of memory: it was taking about 97% of the server's memory by the time it had gone through 62 million lines, so I killed it. Do you have any idea how to solve this problem? My code is like this:

        import os, sys
        import time
        import networkx as nx

        G = nx.Graph()
        ptime = time.time()
        j = 1

        for line in open("./US_Health_Links.txt", 'r'):
        #for line in open("./test_network.txt", 'r'):
            follower = line.strip().split()[0]
            followee = line.strip().split()[1]
            G.add_edge(follower, followee)
            if j%1000000 == 0:
                print j*1.0/1000000, "million lines done", time.time() - ptime
                ptime = time.time()
            j += 1

        DG = G.to_directed()
        # P = nx.path_graph(DG)
        Nn_G = G.number_of_nodes()
        N_CC = nx.number_connected_components(G)
        LCC = nx.connected_component_subgraphs(G)[0]
        n_LCC = LCC.nodes()
        Nn_LCC = LCC.number_of_nodes()
        inDegree = DG.in_degree()
        outDegree = DG.out_degree()
        Density = nx.density(G)
        # Diameter = nx.diameter(G)
        # Centrality = nx.betweenness_centrality(PDG, normalized=True, weighted_edges=False)
        # Clustering = nx.average_clustering(G)

        print "number of nodes in G\t" + str(Nn_G) + '\n' + "number of CC in G\t" + str(N_CC) + '\n' + "number of nodes in LCC\t" + str(Nn_LCC) + '\n' + "Density of G\t" + str(Density) + '\n'
        # sys.exit()
        # j += 1

    The edge data is like this (whitespace-separated node-ID pairs, one edge per pair):

        1000 1001        1000245 1020191   1000 10267352    1000653 10957902
        1000 11039092    1000 1118691      10346 11882      1000 1228281
        1000 1247041     1000 12965332     121340 13027572  1000 13075072
        1000 13183162    1000 13250162     1214 13326292    1000 13452672
        1000 13844892    1000 14061830     12340 1406481    1000 14134703
        1000 14216951    1000 14254402     12134 14258044   1000 14270791
        1000 14278978    12134 14313332    1000 14392970    1000 14441172
        1000 14497568    1000 14502775     1000 14595635    1000 14620544
        1000 14632615    10234 14680596    1000 14956164    10230 14998341
        112000 15132211  1000 15145450     100 15285998     1000 15288974
        1000 15300187    1000 1532061      1000 15326300

    Lastly, is there anybody who has experience analyzing Twitter link data? It's quite hard for me to take a directed graph and calculate the average/median indegree and outdegree of the nodes. Any help or ideas?
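
    If the end goal is degree statistics rather than the graph object itself, one memory-saving sketch is to stream the file and count, never materializing the graph. (Also worth noting, as a hedged observation: building an undirected nx.Graph and then calling to_directed() gives every edge in both directions, so the original in/out-degree information of the directed Twitter data is already lost at that point.) The file name follows the code above; everything else is illustrative.

        from collections import Counter

        in_deg, out_deg = Counter(), Counter()
        with open("./US_Health_Links.txt") as fh:
            for line in fh:
                parts = line.split()
                if len(parts) != 2:
                    continue                    # skip malformed lines
                src, dst = int(parts[0]), int(parts[1])   # ints use far less memory than strings
                out_deg[src] += 1
                in_deg[dst] += 1

        nodes = set(out_deg) | set(in_deg)
        avg_out = sum(out_deg.values()) / float(len(nodes))
        print("%d nodes, average out-degree %.3f" % (len(nodes), avg_out))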

    Read the article

  • Addresses stored in SQL Server have many small variations (errors)

    - by MAW74656
    I have a table in my database which stores packing slips and their information. I'm trying to query that table and get each unique address. I've come close, but I still have many near misses and I'm looking for a way to exclude these near-duplicates from my select. Sample data:

        CompanyCode  CompanyName                     Addr1                 City       State  Zip
        10033        UNITED DIE CUTTING & FINISHIN   3610 HAMILTON AVE     CLEVELAND  Ohio   44114
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVE     CLEVELAND  Ohio   44114
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVE.    CLEVELAND  Ohio   44114
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVENUE  CLEVELAND  Ohio   44114
        10033        UNITED DIECUTTING & FINISHING   3610 HAMILTON AVE     CLEVELAND  Ohio   44144
        10033        UNITED FINISHING                3610 HAMILTON AVE     CLEVLAND   Ohio   44114
        10033        UNITED FINISHING & DIE CUTTING  3610 HAMILTON AVE     CLEVELAND  Ohio   44114

    And all I want is 1 record. Is there some way I can get the "average" record? Meaning, if most of the records say CLEVELAND instead of CLEVLAND, I want my 1 record to say CLEVELAND. Is there any way to pare this data down to what I'm looking for? Desired output:

        CompanyCode  CompanyName                     Addr1              City       State  Zip
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVE  CLEVELAND  Ohio   44114
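
    On SQL Server itself, SSIS's Fuzzy Grouping transformation is the usual packaged answer. As a sketch of the "average record" idea: group rows on CompanyCode, then take the most common spelling of each column independently, a per-field majority vote. The column layout below is an assumption taken from the sample.

        from collections import Counter

        def majority_record(rows):
            """rows: tuples sharing a CompanyCode; vote column by column."""
            return tuple(Counter(col).most_common(1)[0][0] for col in zip(*rows))

        rows = [
            ("UNITED DIE CUTTING & FINISHING", "3610 HAMILTON AVE",  "CLEVELAND", "44114"),
            ("UNITED DIE CUTTING & FINISHING", "3610 HAMILTON AVE.", "CLEVELAND", "44114"),
            ("UNITED FINISHING",               "3610 HAMILTON AVE",  "CLEVLAND",  "44114"),
        ]
        print(majority_record(rows))
        # ('UNITED DIE CUTTING & FINISHING', '3610 HAMILTON AVE', 'CLEVELAND', '44114')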

    Read the article

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but the query should then be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently I am using the Stopwatch class, and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Is there any other approach or other considerations I should take? Thanks in advance!
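
    One protocol consideration that transfers directly to the C# code above: run a few untimed warm-up iterations first (JIT compilation, connection pools and query-plan caches all settle), then time each run individually and report the median, which resists outliers such as GC pauses. Sketched in Python for brevity:

        import time
        from statistics import median

        def benchmark(fn, warmup=5, samples=100):
            for _ in range(warmup):
                fn()                              # let JIT / caches / pools settle
            times = []
            for _ in range(samples):
                start = time.perf_counter()
                fn()
                times.append(time.perf_counter() - start)
            return median(times)                  # robust against outliers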

    Read the article

  • Windows-1251 file inside UTF-8 site?

    - by Spoonk
    Hello everyone, Masters of Web Development :) I have a piece of PHP script that fetches the last 10 played songs from my Winamp. This script lives in a file (let's call it "lastplayed.php") which is included in my site with PHP's include function, inside a "div". My site is in UTF-8 encoding. The problem is that some song titles are in Windows-1251 encoding, and on my site they display like "??????"... Is there any known way to tell this div, with "lastplayed.php" included in it, to use windows-1251 encoding? Or any other suggestions? P.S.: The file with the fetching script, a.k.a. "lastplayed.php", is converted to UTF-8, but if it is ANSI the result is the same. I tried putting a meta tag with windows-1251 between the head tags, but nothing changed. P.P.S.: The script that fetches Winamp's data (lastplayed.php):
    <?php /****** * You may use and/or modify this script as long as you: * 1. Keep my name & webpage mentioned * 2. Don't use it for commercial purposes * * If you want to use this script without complying to the rules above, please contact me first at: [email protected] * * Author: Martijn Korse * Website: http://devshed.excudo.net * * Date: 08-05-2006 ***/ /** * version 2.0 */ class Radio { var $fields = array(); var $fieldsDefaults = array("Server Status", "Stream Status", "Listener Peak", "Average Listen Time", "Stream Title", "Content Type", "Stream Genre", "Stream URL", "Current Song"); var $very_first_str; var $domain, $port, $path; var $errno, $errstr; var $trackLists = array(); var $isShoutcast; var $nonShoutcastData = array( "Server Status" => "n/a", "Stream Status" => "n/a", "Listener Peak" => "n/a", "Average Listen Time" => "n/a", "Stream Title" => "n/a", "Content Type" => "n/a", "Stream Genre" => "n/a", "Stream URL" => "n/a", "Stream AIM" => "n/a", "Stream IRC" => "n/a", "Current Song" => "n/a" ); var $altServer = False; function Radio($url) { $parsed_url = parse_url($url); $this->domain = isset($parsed_url['host']) ? $parsed_url['host'] : ""; $this->port = !isset($parsed_url['port']) || empty($parsed_url['port']) ? "80" : $parsed_url['port']; $this->path = empty($parsed_url['path']) ? "/" : $parsed_url['path']; if (empty($this->domain)) { $this->domain = $this->path; $this->path = ""; } $this->setOffset("Current Stream Information"); $this->setFields(); // setting default fields
    $this->setTableStart("<table border=0 cellpadding=2 cellspacing=2>"); $this->setTableEnd("</table>"); } function setFields($array=False) { if (!$array) $this->fields = $this->fieldsDefaults; else $this->fields = $array; } function setOffset($string) { $this->very_first_str = $string; } function setTableStart($string) { $this->tableStart = $string; } function setTableEnd($string) { $this->tableEnd = $string; } function getHTML($page=False) { if (!$page) $page = $this->path; $contents = ""; $domain = (substr($this->domain, 0, 7) == "http://") ? substr($this->domain, 7) : $this->domain; if (@$fp = fsockopen($domain, $this->port, $this->errno, $this->errstr, 2)) { fputs($fp, "GET ".$page." HTTP/1.1\r\n". "User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)\r\n". "Accept: */*\r\n". 
"Host: ".$domain."\r\n\r\n"); $c = 0; while (!feof($fp) && $c <= 20) { $contents .= fgets($fp, 4096); $c++; } fclose ($fp); preg_match("/(Content-Type:)(.*)/i", $contents, $matches); if (count($matches) > 0) { $contentType = trim($matches[2]); if ($contentType == "text/html") { $this->isShoutcast = True; return $contents; } else { $this->isShoutcast = False; $htmlContent = substr($contents, 0, strpos($contents, "\r\n\r\n")); $dataStr = str_replace("\r", "\n", str_replace("\r\n", "\n", $contents)); $lines = explode("\n", $dataStr); foreach ($lines AS $line) { if ($dp = strpos($line, ":")) { $key = substr($line, 0, $dp); $value = trim(substr($line, ($dp+1))); if (preg_match("/genre/i", $key)) $this->nonShoutcastData['Stream Genre'] = $value; if (preg_match("/name/i", $key)) $this->nonShoutcastData['Stream Title'] = $value; if (preg_match("/url/i", $key)) $this->nonShoutcastData['Stream URL'] = $value; if (preg_match("/content-type/i", $key)) $this->nonShoutcastData['Content Type'] = $value; if (preg_match("/icy-br/i", $key)) $this->nonShoutcastData['Stream Status'] = "Stream is up at ".$value."kbps"; if (preg_match("/icy-notice2/i", $key)) { $this->nonShoutcastData['Server Status'] = "This is <span style=\"color: red;\">not</span> a Shoutcast server!"; if (preg_match("/ultravox/i", $value)) $this->nonShoutcastData['Server Status'] .= " But an <a href=\"http://ultravox.aol.com/\" target=\"_blank\">Ultravox</a> Server"; $this->altServer = $value; } } } return nl2br($htmlContent); } } else return $contents; } else { return False; } } function getServerInfo($display_array=null, $very_first_str=null) { if (!isset($display_array)) $display_array = $this->fields; if (!isset($very_first_str)) $very_first_str = $this->very_first_str; if ($html = $this->getHTML()) { // parsing the contents $data = array(); foreach ($display_array AS $key => $item) { if ($this->isShoutcast) { $very_first_pos = stripos($html, $very_first_str); $first_pos = stripos($html, $item, $very_first_pos); $line_start = strpos($html, "<td>", $first_pos); $line_end = strpos($html, "</td>", $line_start) + 4; $difference = $line_end - $line_start; $line = substr($html, $line_start, $difference); $data[$key] = strip_tags($line); } else { $data[$key] = $this->nonShoutcastData[$item]; } } return $data; } else { return $this->errstr." 
(".$this->errno.")"; } } function createHistoryArray($page) { if (!in_array($page, $this->trackLists)) { $this->trackLists[] = $page; if ($html = $this->getHTML($page)) { $fromPos = stripos($html, $this->tableStart); $toPos = stripos($html, $this->tableEnd, $fromPos); $tableData = substr($html, $fromPos, ($toPos-$fromPos)); $lines = explode("</tr><tr>", $tableData); $tracks = array(); $c = 0; foreach ($lines AS $line) { $info = explode ("</td><td>", $line); $time = trim(strip_tags($info[0])); if (substr($time, 0, 9) != "Copyright" && !preg_match("/Tag Loomis, Tom Pepper and Justin Frankel/i", $info[1])) { $this->tracks[$c]['time'] = $time; $this->tracks[$c++]['track'] = trim(strip_tags($info[1])); } } if (count($this->tracks) > 0) { unset($this->tracks[0]); if (isset($this->tracks[1])) $this->tracks[1]['track'] = str_replace("Current Song", "", $this->tracks[1]['track']); } } else { $this->tracks[0] = array("time"=>$this->errno, "track"=>$this->errstr); } } } function getHistoryArray($page="/played.html") { if (!in_array($page, $this->trackLists)) $this->createHistoryArray($page); return $this->tracks; } function getHistoryTable($page="/played.html", $trackColText=False, $class=False) { $title_utf8 = mb_convert_encoding($trackArr ,"utf-8" ,"auto"); if (!in_array($page, $this->trackLists)) $this->createHistoryArray($page); if ($trackColText) $output .= " <div class='lastplayed_top'></div> <div".($class ? " class=\"".$class."\"" : "").">"; foreach ($this->tracks AS $title_utf8) $output .= "<div style='padding:2px 0;'>".$title_utf8['track']."</div>"; $output .= "</div><div class='lastplayed_bottom'></div> <div class='lastplayed_title'>".$trackColText."</div> \n"; return $output; } } // this is needed for those with a php version < 5 // the function is copied from the user comments @ php.net (http://nl3.php.net/stripos) if (!function_exists("stripos")) { function stripos($haystack, $needle, $offset=0) { return strpos(strtoupper($haystack), strtoupper($needle), $offset); } } ?> And the calling script outside the lastplayed.php: include "lastplayed.php"; $radio = new Radio($ip.":".$port); echo $radio->getHistoryTable("/played.html", "<b>Last played:</b>", "lastplayed_content");

    Read the article

  • Automatic music rating based on listening habits

    - by marco92w
    I've created a Winamp-like music player in Delphi. Not so complex, of course; just a simple one. But now I would like to add a more complex feature: songs in the library should be automatically rated based on the user's listening habits. This means the application should "understand" whether the user likes a song or not, and not only whether he/she likes it but also how much. My approach so far (data which could be used):
    - Simply measure how often a song is played per unit of time. Start counting when the song is added to the library, so that recent songs aren't at a disadvantage.
    - Measure how long a song is played on average (minutes). Starting a song but directly changing to another one should hurt the ranking, since the user didn't seem to like the song.
    - ...
    Could you please help me with this problem? I would just like to have some ideas; I don't need the implementation in Delphi.
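
    A sketch of one possible score built from exactly that data (the weights and the 0-5 clamp are illustrative assumptions, not a known formula):

        def auto_rating(play_count, days_in_library, avg_played_sec, track_len_sec):
            plays_per_week = 7.0 * play_count / max(days_in_library, 1)
            completion = avg_played_sec / float(track_len_sec)   # 0.0 .. 1.0
            score = plays_per_week * (completion ** 2)           # punish early skips hard
            return min(5.0, score)                               # clamp to a 0-5 scale

    Squaring the completion ratio makes an immediate skip nearly worthless while a fully played song counts in full; an exponential decay on old plays would be a natural next refinement.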

    Read the article

  • How to create extensible dynamic array in Java without using pre-made classes?

    - by AndrejaKo
    Yeah, it's a homework question, so givemetehkodezplsthx! :) Anyway, here's what I need to do: I need to have a class which will have among its attributes an array of objects of another class. The proper way to do this, in my opinion, would be to use something like LinkedList, Vector or similar. Unfortunately, last time I did that I got fire and brimstone from my professor, because according to his belief I was using advanced stuff without understanding the basics. Now the next obvious solution would be to create an array with a fixed number of elements and add checks to get and set which will see if the array is full. If it is full, they'd create a new, bigger array, copy the old array's data into it and return the new array to the caller. If it's mostly empty, they'd create a new, smaller array and move the data from the old array to the new one. To me this looks a bit stupid. For my homework there probably won't be more than 3 elements in the array, but I'd like to make a scalable solution without manually calculating statistics about how often the array is filled and what the average number of new elements added is, and then using those results to calculate the size of the new array, and so on. By the way, there is no need to remove elements from the middle of the array. Any tips?
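
    The standard escape from that statistics-keeping is geometric growth: double the capacity whenever the array fills, which makes adding an element amortized O(1) regardless of the usage pattern (shrinking, if wanted at all, is usually deferred until the array is only a quarter full, to avoid thrashing around one boundary). A sketch of the idea, with a fixed-size Python list standing in for Java's array; the translation to Java is mechanical.

        class DynArray:
            def __init__(self):
                self._data = [None] * 4     # fixed-capacity backing "array"
                self._size = 0

            def add(self, value):
                if self._size == len(self._data):       # full: double the capacity
                    bigger = [None] * (2 * len(self._data))
                    for i in range(self._size):         # copy old contents over
                        bigger[i] = self._data[i]
                    self._data = bigger
                self._data[self._size] = value
                self._size += 1

            def get(self, index):
                if not 0 <= index < self._size:
                    raise IndexError(index)
                return self._data[index]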

    Read the article

  • How to access Youtube_it ruby query results?

    - by spectro
    I am trying to implement the youtube_it youtube api wrapper for ruby and have it working except I'm stumped as to how the query results should be accessed. Here is my query: client.videos_by(:query => "penguin", :max_results => 1) Submitting request [url=http://gdata.youtube.com/feeds/api/videos?max-results=1&start-index=1&vq=penguin]. => #<YouTubeIt::Response::VideoSearch:0xb6c41b14 @feed_id="http://gdata.youtube.com/feeds/api/videos", @updated_at=Wed Nov 03 18:01:39 UTC 2010, @videos=[#<YouTubeIt::Model::Video:0xb6c424d8 @thumbnails=[#<YouTubeIt::Model::Thumbnail:0xb6c6b694 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/2.jpg", @width=120, @height=90, @time="00:01:34">, #<YouTubeIt::Model::Thumbnail:0xb6c6b248 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/1.jpg", @width=120, @height=90, @time="00:00:47">, #<YouTubeIt::Model::Thumbnail:0xb6c6a988 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/3.jpg", @width=120, @height=90, @time="00:02:21">, #<YouTubeIt::Model::Thumbnail:0xb6c69e34 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/0.jpg", @width=320, @height=240, @time="00:01:34">], @categories=[#<YouTubeIt::Model::Category:0xb6ca5d6c @term="Music", @label="Music">], @noembed=false, @racy=false, @favorite_count=7862, @duration=188, @author=#<YouTubeIt::Model::Author:0xb6c9942c @name="wili", @uri="http://gdata.youtube.com/feeds/api/users/wili">, @updated_at=Tue Nov 02 08:45:25 UTC 2010, @longitude=nil, @position=nil, @view_count=1682350, @html_content="penguin", @media_content=[#<YouTubeIt::Model::Content:0xb6c770d4 @url="http://www.youtube.com/v/oSbLpQEZP1Y?f=videos&app=youtube_gdata", @duration=188, @format=#<YouTubeIt::Model::Video::Format:0xb656d108 @name=:swf, @format_code=5>, @default=true, @mime_type="application/x-shockwave-flash">, #<YouTubeIt::Model::Content:0xb6c766d4 @url="rtsp://v5.cache3.c.youtube.com/CiILENy73wIaGQlWPxkBpcsmoRMYDSANFEgGUgZ2aWRlb3MM/0/0/0/video.3gp", @duration=188, @format=#<YouTubeIt::Model::Video::Format:0xb656d11c @name=:rtsp, @format_code=1>, @default=false, @mime_type="video/3gpp">, #<YouTubeIt::Model::Content:0xb6c75d38 @url="rtsp://v8.cache3.c.youtube.com/CiILENy73wIaGQlWPxkBpcsmoRMYESARFEgGUgZ2aWRlb3MM/0/0/0/video.3gp", @duration=188, @format=#<YouTubeIt::Model::Video::Format:0xb656d0f4 @name=:three_gpp, @format_code=6>, @default=false, @mime_type="video/3gpp">], @description="penguin", @latitude=nil, @title="penguin", @published_at=Mon May 08 18:11:01 UTC 2006, @player_url="http://www.youtube.com/watch?v=oSbLpQEZP1Y&feature=youtube_gdata_player", @rating=#<YouTubeIt::Model::Rating:0xb6c5eb4c @min=1, @max=5, @average=4.676985, @rater_count=2746>, @keywords=["pigloo", "penguin"], @video_id="http://gdata.youtube.com/feeds/api/videos/oSbLpQEZP1Y", @where=nil>], @total_result_count=291282, @offset=1, @max_result_count=1> I would like to retrieve the URL and thumbnail links. Any ideas?

    Read the article

  • How to increase the performance of a loop which runs every 'n' minutes

    - by GustlyWind
    Hi. Giving a small background to my requirement and what I have accomplished so far: there are 18 scheduler tasks run at regular intervals (the shortest being 30 mins). Each takes nearly 5000 eligible employees as input, runs them through a static method for iteration, and generates the mail content for each employee and mails it. An average task takes about 9 min; multiplied by 18 that is roughly 162 mins, during which the next tasks will be piling up in the queue (I assume). So my plan is something like the loop below:

        try {
            // Handle ArrayList of alert-eligible employees
            Iterator employee = empIDs.iterator();
            while (employee.hasNext()) {
                ScheduledImplementation.getInstance().sendAlertInfoToEmpForGivenAlertType(
                        (Long) employee.next(), configType, schedType);
            }
        } catch (Exception vEx) {
            _log.error("Exception Caught During sending " + configType + " messages:" + configType, vEx);
        }

    Since I know how many employees will come to my method, I could divide the while loop into two and perform simultaneous operations on two or three employees at a time. Is this possible? Or is there any other way I can improve the performance? Some of the things I have implemented so far:
    1. Wherever possible, made methods and variables static.
    2. Didn't bother to catch exceptions and send them back, because these are background tasks (and I assume this improves performance).
    3. Got the DB values in one query instead of multiple hits.
    If I am successful in optimizing the while loop, I think I can save a couple of minutes. Thanks
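
    Splitting the loop by hand isn't necessary: a fixed-size worker pool divides the work automatically. In Java the standard tool is java.util.concurrent.ExecutorService (e.g. Executors.newFixedThreadPool); the shape of the idea, sketched in Python:

        from concurrent.futures import ThreadPoolExecutor

        emp_ids = range(5000)      # stand-in for the eligible-employee list

        def send_alert(emp_id):
            pass                   # build and send the mail for one employee

        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(send_alert, emp_id) for emp_id in emp_ids]
            for future in futures:
                future.result()    # surfaces any exception raised in a worker

    A small pool is usually enough here, since the mail server itself is likely the bottleneck and may throttle aggressive senders.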

    Read the article

  • I can't figure out why this fps counter is inaccurate.

    - by rmetzger
    I'm trying to track frames per second in my game. I don't want the fps to show as an average; I want to see how the frame rate is affected when I push keys, add models, etc. So I am using variables to store the current time and previous time, and when they differ by 1 second, I update the fps. My problem is that it shows around 33 fps, but when I move the mouse around really fast, the fps jumps up to 49. Other times, if I change a simple line of code elsewhere not related to the frame counter, or close the project and open it later, the fps will be around 60. Vsync is on, so I can't tell if the mouse is still affecting the fps. Here is my code, which is in an update function that runs every frame:

        FrameCount++;
        currentTime = timeGetTime();
        static unsigned long prevTime = currentTime;
        TimeDelta = (currentTime - prevTime) / 1000;
        if (TimeDelta > 1.0f)
        {
            fps = FrameCount / TimeDelta;
            prevTime = currentTime;
            FrameCount = 0;
            TimeDelta = 0;
        }

    Here are the variable declarations:

        int FrameCount;
        double fps, currentTime, prevTime, TimeDelta, TimeElapsed;

    Please let me know what is wrong here and how to fix it, or if you have a better way to count fps. Thanks!!!!!! I am using DirectX 9, by the way, but I doubt that is relevant, and I am using PeekMessage. Should I be using an if-else statement instead? Here is my message processing loop:

        MSG msg;
        ZeroMemory(&msg, sizeof(MSG));
        while (msg.message != WM_QUIT)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            Update();
            RenderFrame();
        }
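
    Two hedged hints, guesses rather than a diagnosis from the thread: the division by TimeDelta is already the right idea, but timeGetTime's default resolution is coarse (often 10-15 ms unless timeBeginPeriod(1) has been called), which can wobble once-a-second samples, and QueryPerformanceCounter is the usual high-resolution replacement in C++. The counting pattern itself, in Python:

        import time

        frame_count = 0
        prev = time.perf_counter()          # high-resolution clock

        def on_frame():
            global frame_count, prev
            frame_count += 1
            now = time.perf_counter()
            if now - prev >= 1.0:
                fps = frame_count / (now - prev)   # divide by *actual* elapsed time
                print("fps: %.1f" % fps)
                frame_count = 0
                prev = now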

    Read the article

  • Python and a "time value of money" problem.

    - by jamieb
    (I asked this question earlier today, but I did a poor job of explaining myself. Let me try again.) I have a client that is an industrial maintenance company. They sell service agreements that are prepaid 20-hour blocks of a technician's time. Some of their larger customers might burn through that agreement in two weeks, while customers with fewer problems might go eight months on the same contract. I would like to use Python to help model projected sales revenue and determine how many billable hours per month they'll be on the hook for. If each customer only ever bought a single service contract (never renewed), it would be easy to figure sales as monthly_revenue = contract_value * qty_contracts_sold. Billable hours would also be easy: billable_hrs = hrs_per_contract * qty_contracts_sold. However, how do I account for renewals? Assuming that 90% (or some other arbitrary amount) of customers renew, their monthly revenue ought to grow geometrically. Another important variable is how long the average customer takes to burn through a contract. How do I determine what the revenue and billable hours will be 3, 6, or 12 months from now, based on various renewal and burn rates? I assume I'd use some type of recursive function, but math was never one of my strong points. Any suggestions, please? Edit: I'm thinking that the best way to approach this is to think of it as a "time value of money" problem. I've retitled the question as such. The problem is probably a lot more common if you think of "monthly sales" as something similar to annuity payments.
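
    A month-by-month simulation is often easier to trust (and to tweak) than a closed-form geometric series once renewals and burn rates interact. A sketch; every number here is an illustrative assumption:

        CONTRACT_VALUE = 2000.0      # dollars per prepaid 20-hour block
        HOURS_PER_CONTRACT = 20.0
        RENEWAL_RATE = 0.90          # fraction of finishing customers who renew
        AVG_BURN_MONTHS = 4.0        # average months to use up one block

        def project(new_sales_per_month, months):
            active = 0.0             # contracts currently open
            for m in range(1, months + 1):
                finishing = active / AVG_BURN_MONTHS       # blocks used up this month
                renewals = finishing * RENEWAL_RATE
                active += new_sales_per_month + renewals - finishing
                revenue = (new_sales_per_month + renewals) * CONTRACT_VALUE
                hours = active * HOURS_PER_CONTRACT / AVG_BURN_MONTHS
                print("month %2d: revenue $%8.0f  billable hours %6.0f"
                      % (m, revenue, hours))

        project(new_sales_per_month=10, months=12)

    Each month a 1/AVG_BURN_MONTHS slice of the open contracts finishes, a RENEWAL_RATE share of those comes back as new prepaid revenue, and the rest churn, so the series converges instead of growing without bound.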

    Read the article

  • Can a C# method chain be "too long"?

    - by ccornet
    Not in terms of readability, naturally, since you can always arrange the separate methods onto separate lines. Rather, is it dangerous, for any reason, to chain an excessively large number of methods together? I use method chaining primarily to save space on declaring individual one-use variables, traditionally using return methods instead of methods that modify the caller. Except for string methods, those I kinda chain mercilessly. In any case, I worry sometimes about the impact of using exceptionally long method chains all in one line. Let's say I need to update the value of one item based on someone's username. Unfortunately, the shortest method to retrieve the correct user looks something like the following:

        SPWeb web = GetWorkflowWeb();
        SPList list2 = web.Lists["Wars"];
        SPListItem item2 = list2.GetItemById(3);
        SPListItem item3 = item2.GetItemFromLookup("Armies", "Allied Army");
        SPUser user2 = item2.GetSPUser("Commander");
        SPUser user3 = user2.GetAssociate("Spouse");
        string username2 = user3.Name;
        item1["Contact"] = username2;

    Everything with a 2 or 3 lasts for only one call, so I might condense it as the following (which also lets me get rid of a would-be-superfluous 1):

        SPWeb web = GetWorkflowWeb();
        item["Contact"] = web.Lists["Armies"]
            .GetItemById(3)
            .GetItemFromLookup("Armies", "Allied Army")
            .GetSPUser("Commander")
            .GetAssociate("Spouse")
            .Name;

    Admittedly, it looks a lot longer when it is all in one line and when you have int.Parse(ddlArmy.SelectedValue.CutBefore(";#", false)) instead of 3. Nevertheless, this is one of the average lengths of these chains, and I can easily foresee some exceptionally longer ones. Excluding readability, is there anything I should be worried about for these 10+ method chains? Or is there no harm in using really, really long method chains?

    Read the article

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: User registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and hassle-free experience. I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that path may be a bit risky in the long-term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with the cookie-only sessions. I could be wrong with my assessment of these two, so I would appreciate input on those (or others that may serve my objective). Finally, I recently came across tipfy. It appears to be based on Werkzeug and Alex Martelli spoke highly of it here on stackoverflow. I have two primary questions related to tipfy: As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time? Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers to how I could go about doing that.

    Read the article

  • Helpful advice on developing a professional MS Word add-on

    - by Dan Tao
    A few months back I put together a simple proof-of-concept piece of software for a small firm with an idea for a document editing tool. The company wanted this tool to be integrated into Microsoft Word, understandably, to maximize its accessibility to the average user. I essentially wrote the underlying library with all of the core functionality as a C# project, and then used VSTO to get it running inside of Word. It felt like a bit of a duct tape solution, really; but then, I have (practically) zero experience developing tools for integration with MS Office, and it was only a proof of concept anyway. Well, the firm was quite pleased with my work overall, and they're looking to move from "proof of concept" to the real deal. Fortunately, as I said, the core functionality is all there and will only need to be somewhat tweaked and enhanced. My main concern is figuring out how to put together an application that will integrate with MS Word in a clean and polished way, and which can be deployed easily in accordance with a regular user's expectations (i.e., simply running an install program and voila, it's there in Word). I seem to remember reading somewhere that nobody uses VSTO for real professional projects. Is this true? False? What are the alternatives? And what are the tips and gotchas that I should be aware of before getting started on this issue of MS Word integration?

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; resources available are 8 GB of RAM, 500 GB of free disk space and a 4-core CPU at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
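
    Since every import replaces the whole tree on one branch, no checkouts or merges are needed at all: extract, stage everything, commit, repeat. A sketch driving git from Python; it assumes the tarballs sort correctly by filename and extract into the repository root.

        import glob
        import os
        import shutil
        import subprocess
        import tarfile

        subprocess.check_call(["git", "init", "repo"])
        for tb in sorted(glob.glob("tarballs/project-*.tar.gz")):
            for entry in os.listdir("repo"):        # clear the tree but keep .git,
                if entry == ".git":                 # so files deleted upstream vanish too
                    continue
                path = os.path.join("repo", entry)
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)
            with tarfile.open(tb) as tar:
                tar.extractall("repo")
            subprocess.check_call(["git", "add", "-A"], cwd="repo")
            subprocess.check_call(
                ["git", "commit", "--allow-empty",      # tolerate identical weeks
                 "-m", "import " + os.path.basename(tb)],
                cwd="repo")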

    Read the article

  • LINQ to SQL performance on highly loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved:

        Stats.registeredusers++;
        Db.SubmitChanges();

    Let's say there were 100 000 users. LINQ will say "let it be 100 001" instead of "let it be increased by 1". But if the value has already been increased by another request (that happens on my site all the time), then LINQ goes "oops, this value is already 100 001. Whatever, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100 002. Like I said, this happened to me all the time; the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
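
    One escape hatch that keeps the rest of the code in LINQ: DataContext.ExecuteCommand accepts raw SQL, so hot counters can keep the relative UPDATE (something like db.ExecuteCommand("UPDATE Stats SET registeredusers = registeredusers + 1")). The difference between the two styles, sketched with Python's stdlib sqlite3 so it's runnable:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, registered INTEGER)")
        db.execute("INSERT INTO stats VALUES (1, 100000)")

        # ORM style: read, increment in application code, write back an absolute
        # value -- a concurrent increment between the two statements gets lost.
        (value,) = db.execute("SELECT registered FROM stats WHERE id = 1").fetchone()
        db.execute("UPDATE stats SET registered = ? WHERE id = 1", (value + 1,))

        # Relative style: the database does the arithmetic atomically.
        db.execute("UPDATE stats SET registered = registered + 1 WHERE id = 1")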

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than MS SQL?

    - by user193655
    When the latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use Express Editions and in my scenario they are quite limiting from the performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. The bold sentence above would make me prefer MySQL to Firebird (for the roundtrips concept, not in general), but can anyone add information? And where is MS SQL located? Is anyone able to "rank" the roundtrip performance of the main RDBMSs, or at least of MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle, since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question simply by making a list like: MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • How to choose a light database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma about which local database would suit me best. Here are some notes to help point me in the right direction:
    - It has to be light ("my POS devices are usually old and suffering performance-wise").
    - It has to be free ("I have a lot of devices and I do not want additional cost beside the main SQL Server").
    - One day I'd love to try porting all of that to Mono and Linux.
    Here is what I've researched so far:
    - Simple XML: "light, but I am afraid of performance; my main table of items averages 10K records"
    - SQL Server Express: "I am afraid my POS devices are too poor hardware-wise for SQL Express, and it is also hard to install and configure on each device"
    - The lesser-known Advantage Database Server, which has a free distribution of its offline ADT system
    - DBF with an extended library: "respect for good old DBFs, but that era of Clipper and DBFs is behind me"
    - MS Access
    - SQLite: "my favourite so far, but I am afraid of how it will pair with MS SQL - do they have the same data types?"
    I know a lot of this is subjective, but can someone at least recommend some other lite database systems, or things that I should pay most attention to before I choose a database?

    Read the article

  • How to speed up an already cached pip install?

    - by Maxime R.
    I frequently have to re-create virtual environments from a requirements.txt and I am already using $PIP_DOWNLOAD_CACHE. It still takes a lot of time, and I noticed the following: pip spends a long time between these two lines:

        Downloading/unpacking SomePackage==1.4 (from -r requirements.txt (line 2))
        Using download cache from $HOME/.pip_download_cache/cached_package.tar.gz

    It takes ~20 seconds on average to decide it's going to use the cached package; then the install is fast. This is a lot of time when you have to install dozens of packages (actually enough to write this question). What is going on in the background? Are there some sort of integrity checks against the online package? Is there a way to speed this up? Edit: looking at

        time pip install -v Django==1.4

    I get:

        real    1m16.120s
        user    0m4.312s
        sys     0m1.280s

    The full output is here: http://pastebin.com/e4Q2B5BA. It looks like pip is spending its time looking for a valid download link while it already has a valid cache of http://pypi.python.org/packages/source/D/Django/Django-1.4.tar.gz. Is there a way to check the cache first and stop there if the versions match?
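
    A workaround from that era of pip, hedged since flag availability depends on the pip version: pre-download the archives once into a plain directory, then install with the index disabled, so pip never contacts PyPI to hunt for download links at all.

        # one-time, or whenever requirements.txt changes:
        pip install --download="$HOME/pip-packages" -r requirements.txt

        # every later environment build: no index, no network lookups
        pip install --no-index --find-links="$HOME/pip-packages" -r requirements.txt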

    Read the article

  • I can't read the value from a radio button.

    - by Corey
    <html>
    <head>
    <title>Tip Calculator</title>
    <script type="text/javascript"><!--
    function calculateBill() {
        var check = document.getElementById("check").value;
        /* I try to get the value selected */
        var tipPercent = document.getElementById("tipPercent").value;
        /* But it always returns the value 15 */
        var tip = check * (tipPercent / 100);
        var bill = 1 * check + tip;
        document.getElementById('bill').innerHTML = bill;
    }
    --></script>
    </head>
    <body>
    <h1 style="text-align:center">Tip Calculator</h1>
    <form id="f1" name="f1">
        Average Sevice: 15%<input type="radio" id="tipPercent" name="tipPercent" value="15" />
        <br />
        Excellent Sevice: 20%<input type="radio" id="tipPercent" name="tipPercent" value="20" />
        <br /><br />
        <label>Check Amount</label>
        <input type="text" id="check" size="10" />
        <input type="button" onclick="calculateBill()" value="Calculate" />
    </form>
    <br />
    Total Bill:
    <p id="bill"></p>
    </body>
    </html>
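
    The likely culprit is the duplicated id: id values must be unique, so getElementById("tipPercent") always returns the first radio (value 15) regardless of which one is checked. A hedged fix is to read the group through its shared name and take the checked member:

        function getTipPercent() {
            var radios = document.getElementsByName("tipPercent");
            for (var i = 0; i < radios.length; i++) {
                if (radios[i].checked) {
                    return parseFloat(radios[i].value);
                }
            }
            return 0;   // no service level selected yet
        }

    Giving the two inputs distinct ids (or none at all) and calling getTipPercent() inside calculateBill() should make the calculation track the selection.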

    Read the article

  • Search engine recommendation for 100 sites of about 4000 pages

    - by fwkb
    I am looking for a search engine that can regularly (daily-ish) scan about 100 pages for changes and index an associated site if changes since the last scan are found. It should be able to handle about 100 sites, each averaging 4000 pages of about 5k average size, each on a different server (but only the one centralized search engine). Each of these sites will have a search form that gets submitted to this search engine. The results that are returned must be specific to the site that submitted them. I create the templates for the external sites, so I can give the search form a hidden field that specifies which site the form is submitted from. What would you recommend I look into? I would love to use a Python-based system for this, if feasible. I am currently using something called iSearch2. It doesn't seem very stable at this scale, the description of the product states it is not really intended to do multiple sites, is in PHP (which is less comfortable to me than Python), and has a few other shortcomings for my specific situation.

    Read the article
