Search Results

Search found 2156 results on 87 pages for 'weighted average'.

  • MySQL AVG function for the most recent 15 records by date (ORDER BY date DESC) for every symbol

    - by venkatesh
    I am trying to create a SQL statement (for a table that holds stock symbols and the price on a specified date) with the average 5-day price and average 15-day price for each symbol. Table description: symbol, open, high, close, date. The average price is calculated from the last 5 days and the last 15 days. I tried this for getting one symbol: SELECT avg(close), avg(`trd_qty`) FROM (SELECT * FROM cashmarket WHERE symbol = 'hdil' ORDER BY `M_day` DESC LIMIT 0,15) s ...but I couldn't get the desired list showing the average values for all symbols.
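
    A minimal sketch of one way to get this for every symbol at once, assuming MySQL 8+ window functions and the question's table and column names (cashmarket, close, M_day):

        -- number rows newest-first within each symbol, keep the latest 15,
        -- then average the latest 5 and the latest 15 per symbol
        SELECT symbol,
               AVG(CASE WHEN rn <= 5 THEN close END) AS avg_close_5d,
               AVG(close)                            AS avg_close_15d
        FROM (
            SELECT symbol, close,
                   ROW_NUMBER() OVER (PARTITION BY symbol
                                      ORDER BY M_day DESC) AS rn
            FROM cashmarket
        ) ranked
        WHERE rn <= 15
        GROUP BY symbol;

    On the 2010-era MySQL the question likely targets (no window functions), the same per-symbol ranking can be emulated with user variables or a correlated subquery.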

  • Scrum backlog sizing is taking forever

    - by zachary
    I work on a huge project. While we program, we end up meeting for endless backlog-sizing sessions where all the developers sit down with the team and size user stories. Scrum doubters say that this process takes too long and that development time is being wasted. My question is: how long should it take to size a user story on average? And does anyone have any tips to make these sizing sessions go quicker?

  • A good UI design for rating in .NET

    - by Ben
    Hi, I'm trying to add a rating system to an existing form (i.e. 1 star, 2 stars, or poor, average, good, excellent, etc.). Does anyone know of a way to achieve this that is aesthetically pleasing with good UX, either in .NET or with a free third-party control? Thanks

  • Android dev time vs iPhone dev time

    - by Daniel Benedykt
    Hi, if someone has to develop the same application for Android and iPhone, is it more difficult to develop on one platform than on the other? Does it take more time? Let's think about the average app: lists, text, buttons, fetching information from the internet. Thanks

  • How efficient is a details table?

    - by Jeffrey Lott
    At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity, and a second table, named like 'TableNameDetails', which holds optional data elements. On average, every row in the main table will have about 8-10 detail rows. My question is: what kind of performance impact does this have compared to adding these details as additional nullable columns on the main table?
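
    For illustration, a hypothetical sketch of the two layouts being compared (entity and column names invented):

        -- Option A: main table plus a 'Details' table, one row per optional element
        CREATE TABLE Widget (
            WidgetID INT PRIMARY KEY,
            Name     VARCHAR(100) NOT NULL
        );
        CREATE TABLE WidgetDetails (
            WidgetID    INT          NOT NULL REFERENCES Widget (WidgetID),
            DetailName  VARCHAR(50)  NOT NULL,
            DetailValue VARCHAR(255),
            PRIMARY KEY (WidgetID, DetailName)
        );

        -- Option B: the same optional elements as nullable columns, no join needed
        CREATE TABLE WidgetFlat (
            WidgetID INT PRIMARY KEY,
            Name     VARCHAR(100) NOT NULL,
            Color    VARCHAR(20)  NULL,   -- one nullable column
            Weight   DECIMAL(8,2) NULL    -- per optional element
        );

    With 8-10 detail rows per main row, Option A turns every full read of an entity into a join touching roughly ten times as many rows.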

  • DataSet size best practices - are there any general rules?

    - by Galwegian
    I'm working on a desktop application that will produce several in-memory datasets as an intermediary before being committed to a database. Obviously I'm going to try to keep the size of these to a minimum, but are there any guidelines on thresholds I shouldn't cross for good functionality on an 'average' machine? Thanks for any help.

  • Python: how to find the median of a list

    - by user3450574
    I'm trying to write a function named median that takes a list as input and returns the median value of the list. I'm working with Python 2.7.2. The list can be of any size, and the numbers are not guaranteed to be in any particular order. If the list contains an even number of elements, the function should return the average of the middle two. This is the code I'm starting with:

        def median(list):
            ...

        print(median([7,12,3,1,6,9]))
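
    A minimal sketch of one way to complete it (Python 2.7; the parameter is renamed so it doesn't shadow the built-in list type):

        def median(values):
            ordered = sorted(values)   # sort a copy; input order doesn't matter
            n = len(ordered)
            mid = n // 2
            if n % 2 == 1:             # odd count: the middle element
                return ordered[mid]
            return (ordered[mid - 1] + ordered[mid]) / 2.0  # even: mean of middle two

        print(median([7, 12, 3, 1, 6, 9]))   # prints 6.5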

  • Where can I get these kinds of exercises to solve?

    - by flash
    Recently I successfully completed a Java programming exercise sent by a recruiting firm. The problem statement went like this: 'There are two text files, FI (recording file and directory information) and FS (containing blocks of data), which represent a file index and a file system respectively. Write a static read method in a class that reads a file from the FS, given a path string, using the FI.' My question is: where can I get these kinds of exercises to solve? The complexity should be above average to tough.

  • Merging and manipulating files in MATLAB

    - by Paul
    Is there a way to run a loop through a folder and process about 30 files for a month, computing the average and max of each column, and write the results into one Excel sheet or so? I have 30 files of size 43200 x 30. I ran a different MATLAB script to generate them, so the names are predictable: File_2010_04_01.xls, File_2010_04_02.xls, and so on. I cannot merge them, as each is 20 MB and MATLAB would crash. Any ideas? Thanks
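
    A hypothetical sketch of such a loop (file pattern and output name assumed from the question); reading one file at a time keeps only a single 43200 x 30 matrix in memory:

        files  = dir('File_2010_04_*.xls');      % all files for the month
        nFiles = numel(files);
        avgs   = zeros(nFiles, 30);
        maxs   = zeros(nFiles, 30);
        for k = 1:nFiles
            data       = xlsread(files(k).name); % load one file at a time
            avgs(k, :) = mean(data, 1);          % column averages
            maxs(k, :) = max(data, [], 1);       % column maxima
        end
        xlswrite('monthly_summary.xls', [avgs; maxs]);  % one sheet for all files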

  • Sun's JVM instruction speed table

    - by Pindatjuh
    Is there a benchmark available of how much relative time each instruction costs in a single-threaded, average-case scenario (either with or without the JIT compiler) for Sun's JVM (any version)? If there is no benchmark already available, how can I get this information? E.g.:

        instruction  relative time
        iload_1       1
        iadd         12
        getfield     40

    where getfield is equivalent to 40 iload_1 instructions.

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has a significant cost when executing a program; however, I was wondering whether entering a try {} block also has any impact, so I started looking for an answer on Google. I found many opinions, but no benchmarking at all. Some answers I found:

        Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?
        Try Catch Performance Java
        Java try catch blocks

    However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format:

        host;ip;number;date;status;email;uid;name;lastname;promo_code;

    where everything after status is optional and will not even have the corresponding ';', so when parsing, a validation has to be done to see whether the value is there. That's where the try/catch issue came to mind. The current code that I inherited at my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times:

        Trying:   122 milliseconds
        Checking:  33 milliseconds

    However, here's what confused me and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version. So my question is: does the try block really have no performance impact on my code? The average times for this example are:

        Trying:   105 milliseconds
        Checking:  43 milliseconds

    Can somebody explain what's going on here? Thanks a lot.

  • Is it ethical to attend an interview for the purpose of self-evaluation?

    - by user49767
    I wonder whether it is ethical to attend an interview purely for the purpose of self-evaluation. Sometimes I suspect that I am below average for my experience (but certainly not the worst), and I keep reading books and write code almost every day. In order to understand what it takes to be a good developer, and to find a better job when the need arises, would you suggest attending interviews just for self-evaluation? Is it ethical? Kindly share your thoughts.

  • SQL Complex Select - Trouble forming query

    - by JoshSpacher
    I have three tables: Customers, Sales, and Products. Sales links a CustomerID with a ProductID and has a SalePrice.

        select Products.Category, AVG(SalePrice)
        from Sales
        inner join Products on Products.ProductID = Sales.ProductID
        group by Products.Category

    This lets me see the average price for all sales by category. However, I only want to include customers that have three or more sales records in the DB. I am not sure of the best way, or any way, to go about this. Ideas?
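
    A sketch of one approach, assuming the question's table and column names: filter to qualifying customers with a grouped subquery, then aggregate as before.

        SELECT Products.Category, AVG(Sales.SalePrice) AS AvgPrice
        FROM Sales
        INNER JOIN Products ON Products.ProductID = Sales.ProductID
        WHERE Sales.CustomerID IN (
            -- customers with at least three sales records
            SELECT CustomerID
            FROM Sales
            GROUP BY CustomerID
            HAVING COUNT(*) >= 3
        )
        GROUP BY Products.Category;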

  • An algorithm to find common edits

    - by Tass
    I've got two word lists. An example:

        list 1    list 2
        foot      fuut
        barj      kijo
        foio      fuau
        fuim      fuami
        kwim      kwami
        lnun      lnun
        kizm      kazm

    I'd like to find:

        o  -> u      # rows 1 and 3
        i  -> a      # rows 3 and 7
        im -> ami    # rows 4 and 5

    This should be ordered by the number of occurrences, so I can filter out the edits that don't appear often. The lists currently consist of 35k words, and the calculation should take about 6 hours on an average server.
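
    A hypothetical starting point in Python: difflib can extract the replaced substrings from each word pair, and a Counter ranks them by frequency. Note that difflib reports contiguous spans, so foot/fuut yields one "oo -> uu" rather than two "o -> u" edits; splitting such spans is left as a refinement.

        from collections import Counter
        from difflib import SequenceMatcher

        pairs = [("foot", "fuut"), ("barj", "kijo"), ("foio", "fuau"),
                 ("fuim", "fuami"), ("kwim", "kwami"), ("lnun", "lnun"),
                 ("kizm", "kazm")]

        edits = Counter()
        for op, i1, i2, j1, j2 in (
                opcode
                for a, b in pairs
                for opcode in SequenceMatcher(None, a, b).get_opcodes()):
            pass  # placeholder so the structure below reads clearly

        edits = Counter()
        for a, b in pairs:
            for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
                if op == "replace":                 # a[i1:i2] became b[j1:j2]
                    edits[(a[i1:i2], b[j1:j2])] += 1

        for (src, dst), n in edits.most_common():   # most frequent edits first
            print("%s -> %s  (%d)" % (src, dst, n))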

  • 100k+ Records and sp_xml_preparedocument

    - by Jonn
    I've been encountering an apparent deadlock on one of my tables, and the only place I can trace it back to is a stored procedure that uses sp_xml_preparedocument on a list of data. The inserted data, by the way, consists of 100k+ records on average. Is it possible that this is causing the deadlock? What other pitfalls does using sp_xml_preparedocument have?
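
    For reference, a minimal sketch of the usual prepare/shred/release pattern (document path and target table are hypothetical); forgetting the final sp_xml_removedocument is a classic pitfall, since the parsed document otherwise stays pinned in SQL Server's memory:

        DECLARE @hDoc INT;
        EXEC sp_xml_preparedocument @hDoc OUTPUT, @xmlText;

        INSERT INTO dbo.TargetTable (Col1, Col2)
        SELECT Col1, Col2
        FROM OPENXML(@hDoc, '/Rows/Row', 2)   -- 2 = element-centric mapping
             WITH (Col1 INT, Col2 VARCHAR(50));

        EXEC sp_xml_removedocument @hDoc;     -- release the cached document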

  • Should I HttpCombine the Google-hosted jQuery file?

    - by chobo2
    Hi, I am using something called HttpCombiner (http://code.msdn.microsoft.com/HttpCombiner), an HTTP handler that combines multiple CSS, JavaScript, or URL requests into one response for a faster page load. Its description reads roughly as follows:

        It can combine, compress and cache the response, which results in faster page loads and better scalability of the web application. It's good practice to use many small JavaScript and CSS files instead of one large JavaScript/CSS file for better code maintainability, but this is bad in terms of website performance. Although you should write your JavaScript code in small files and break large CSS files into small chunks, when the browser requests those JavaScript and CSS files it makes one HTTP request per file. Every HTTP request results in a network roundtrip from your browser to the server, and the delay in reaching the server and coming back to the browser is called latency. So, if a page loads four JavaScript and three CSS files, you are wasting time in seven network roundtrips. Within the USA, latency averages 70 ms, so you waste 7 x 70 = 490 ms, about half a second of delay. Outside the USA, average latency is around 200 ms, which means 1400 ms of waiting. The browser cannot show the page properly until CSS and JavaScript are fully loaded, so the more latency you have, the slower the page loads.

        You can reduce the wait time by using a CDN (see my previous blog post about using a CDN). However, a better solution is to deliver multiple files over one request using an HttpHandler that combines several files and delivers them as one output. So, instead of putting many script or link tags, you just put one of each and point them to the HttpHandler. You tell the handler which files to combine, and it delivers those files in one response. This saves the browser from making many requests and eliminates the latency. The handler reads the file names defined in a configuration, combines all those files, and delivers them as one response. It delivers the response gzip-compressed to save bandwidth, and it generates proper cache headers so that the browser caches the response and does not request it again on a future visit.

    Now I am wondering: since it can handle adding links, should I put the jQuery file into it? The reason I am not sure is that if it gets combined with my other files, I think I might lose the advantages of it being hosted on Google's servers, such as caching (my thinking is that the combined response will look different, so even if a user already has the Google copy in their cache, I am not sure the browser will use the cached one). So should I combine it, or only the files that I am hosting locally?

  • Database model for storing expressions and their occurrence in text

    - by lisak
    Hey, I'm working on a statistical research application. I need to store words according to their first two letters, which gives 676 combinations, and each word has its number of occurrences (minimum, maximum, average) in the text. I'm not sure what the model/schema should look like. There will be a lot of checking whether a keyword has already been persisted. I appreciate your suggestions. Edit: I'll be using either MySQL or PostgreSQL + Spring templates.
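
    A hypothetical sketch of a single-table schema (PostgreSQL flavor; names invented). The UNIQUE constraint on the word makes the "already persisted?" check one indexed lookup, and an index on the two-letter prefix serves the 676-bucket grouping:

        CREATE TABLE keyword_stats (
            id       SERIAL       PRIMARY KEY,
            prefix   CHAR(2)      NOT NULL,   -- first two letters (676 combinations)
            word     VARCHAR(100) NOT NULL UNIQUE,
            occ_min  INTEGER      NOT NULL DEFAULT 0,
            occ_max  INTEGER      NOT NULL DEFAULT 0,
            occ_avg  REAL         NOT NULL DEFAULT 0
        );
        CREATE INDEX keyword_stats_prefix_idx ON keyword_stats (prefix);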

  • Have 2 separate tables or an additional field in 1 table?

    - by hkansal
    Hello, I am making a small personal application for my trades of shares of various companies. An action can be selling shares of a company or buying them. In both cases, the details to be saved would be:

        Number of shares
        Average price

    Would it be better to use separate tables for "buy" and "sell", or just one "trade" table with a field that demarcates "buy" from "sell"?
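
    A sketch of the single-table option (names hypothetical); a CHECK constraint keeps the demarcating field honest:

        CREATE TABLE trade (
            trade_id  INTEGER       PRIMARY KEY,
            company   VARCHAR(100)  NOT NULL,
            action    CHAR(4)       NOT NULL CHECK (action IN ('BUY', 'SELL')),
            shares    INTEGER       NOT NULL,
            avg_price DECIMAL(10,2) NOT NULL
        );

    One table usually wins here: a query like "net position per company" stays a single GROUP BY instead of a UNION across two tables.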

  • Code to generate random numbers in C++

    - by user1678927
    Basically I have to write a program that generates random numbers to simulate rolling a pair of dice. The program should be constructed in multiple files: the main function in one file, the other functions in a second source file, and their prototypes in a header file.

    First I write a short function that returns a random value between 1 and 6 to simulate rolling a single 6-sided die. Second, I write a function that pretends to roll a pair of dice by calling this function twice. My program starts by asking the user how many rolls should be made. Then I write a function to simulate rolling the dice this many times, keeping a count in an array of exactly how many times each of the values 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 (each is a possible sum of a pair of dice) occurs.

    Later I write a function to display a small bar chart using these counts. Ideally, for a sample of 144 rolls, it would print the numbers 2 through 12 as column headers with one asterisk below each number per roll of that sum:

        2  3  4  5  6  7  8  9  10 11 12
        *  *  *  *  *  *  *  *  *  *  *
        (columns of asterisks, one per occurrence of each sum)

    Next, to see how well the random number generator is doing, I write a function to compute the average value rolled and compare it to the ideal average of 7. I also print a small table showing, in separate columns, the count of each roll made by the program, the ideal count based on the expected frequencies for the total number of rolls, and the difference between these values.

    This is my incomplete code so far (compiler: Visual Studio 2010):

        int rolling() {  /* function that returns a random value between 1 and 6 */
            rand(unsigned(time(NULL)));
            int dice = 1 + (rand() % 6);
            return dice;
        }

        int roll_dice(int num1, int num2) {  /* calls 'rolling' twice */
            int result1, result2;
            num1 = rolling();
            num2 = rolling();
            result1 = num1;
            result2 = num2;
            return result1, result2;
        }

        int main(void) {
            int times, i, sum, n1, n2;
            int c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11;  /* counters for each sum */
            printf("Please enter how many times you want to roll the dice.\n");
            scanf_s("%i", &times);

    I intend to use counters to count each sum and store the counts in an array. I know I need a loop (for) and some conditional statements (if), but my main problem is getting the values from roll_dice and storing them in n1 and n2 so that I can then sum them up and store the sum in 'sum'.
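
    A minimal sketch of one way to fix and finish the pieces above: seed once with srand() in main() (the call rand(unsigned(time(NULL))) doesn't seed anything), return the pair through pointer parameters (return result1, result2 uses the comma operator and returns only result2), and tally the sums in an array indexed 2..12. The non-standard scanf_s is swapped for portable scanf here:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int rolling(void) {                /* one die: random value in 1..6 */
            return 1 + rand() % 6;
        }

        void roll_dice(int *n1, int *n2) { /* a pair: calls rolling() twice */
            *n1 = rolling();
            *n2 = rolling();
        }

        int main(void) {
            int times, i, n1, n2;
            int counts[13] = {0};          /* counts[2]..counts[12] hold the tallies */
            srand((unsigned)time(NULL));   /* seed the generator once */
            printf("Please enter how many times you want to roll the dice.\n");
            if (scanf("%i", &times) != 1)
                return 1;
            for (i = 0; i < times; i++) {
                roll_dice(&n1, &n2);
                counts[n1 + n2]++;         /* count this roll's sum */
            }
            for (i = 2; i <= 12; i++)      /* these counts feed the bar chart and stats */
                printf("%2d: %d\n", i, counts[i]);
            return 0;
        }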

  • What's the (memory) footprint of a J2EE servlet?

    - by Amr Mostafa
    For Jetty, Tomcat, or any other servlet container of your choice, what's the average footprint (memory, and any other notable resources) of a basic servlet? This includes any other basic objects that you almost always need per servlet, such as a view resolver. I'm not looking for a quantitative number in particular, but any indicative answer that could give an idea of how "heavy" or "lightweight" a servlet is. Thanks in advance
