Search Results

Search found 480 results on 20 pages for 'estimate'.

Page 13 of 20

  • What would the conditional statement be to filter these inputs?

    - by dmanexe
    I have a page with a form, and on the form are a number of checkbox inputs. On the following page, the code below processes the inputs submitted from the previous page (each of which is set to an item ID).

      <? $field = $this->input->post('measure',true);
      $totals = array();
      foreach($field as $value):
          $query = $this->db->get_where('items', array('id' => $value['input']))->row();
          $totals[] = $query->price; ?>
      #HTML for displaying estimate output here
      <?php endforeach; ?>

    How would I make the loop run only for the inputs that were actually checked on the previous page?

    Read the article

  • Count of distinct acyclic paths from A[a,b] to A[c,d]?

    - by Sorush Rabiee
    I'm writing a Sokoban solver for fun and practice. It uses a simple algorithm (something like BFS with a bit of a difference). Now I want to estimate its running time (big O and big omega), but to do that I need to know how to calculate the count of acyclic paths from one vertex to another in a network. Specifically, I want an expression that counts the valid paths between two vertices of an m*n grid of vertices. A valid path visits each vertex zero or one times and contains no circuits. (The original post illustrated a valid and an invalid path with images.) What is needed is a method to find the count of all acyclic paths between two vertices a and b. Comments on solving methods and tricks are welcome.
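
    A minimal brute-force sketch (my own illustration, not code from the post): count the self-avoiding paths between two grid cells by depth-first backtracking. It does not give the closed-form expression asked for, and its running time grows exponentially, so it is only practical for small grids; the function name and the example grid are made up.

      # Count self-avoiding (acyclic) paths between two cells of an m x n grid.
      def count_acyclic_paths(m, n, start, goal):
          visited = set()

          def dfs(cell):
              if cell == goal:
                  return 1
              visited.add(cell)
              r, c = cell
              total = 0
              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                  if 0 <= nr < m and 0 <= nc < n and (nr, nc) not in visited:
                      total += dfs((nr, nc))
              visited.remove(cell)
              return total

          return dfs(start)

      # Example: corner-to-corner paths in a 3x3 grid (there are 12 of them).
      print(count_acyclic_paths(3, 3, (0, 0), (2, 2)))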

    Read the article

  • What should a Java/SOA developer be able to do?

    - by Regular Joe
    I was assigned the task of listing the activities a Java developer should be able to perform and estimating the time each would take. I've come up with the following:

      Create JDBC CRUD backend (S=1d, M=5d, H=10d)
      Create JSP/Servlet frontend for a CRUD app (S=1d, M=10d, H=20d)
      Create Swing desktop frontend (S=1d, M=15d, H=30d)
      Create ORM-based CRUD, etc.
      Create web app frontend with a web framework, etc.

    where S = small complexity, M = medium complexity, H = high complexity, and 1d = 1 day. This is intended for a Java "enterprise" developer. The other profile I have is SOA developer, but I could not get beyond: Create web service (S=.5d, M=2d, H=7d). Q: What other activities should a Java developer be able to do? Q: What activities should a SOA developer be able to do? Please help me with this. I know this is at the limit of the kind of questions that can be asked here, but I really need a little push, and I don't want to go to Yahoo Answers for this.

    Read the article

  • Get status of servlet request before the response is returned

    - by Alex
    Good evening, I am in the process of writing a Java servlet (Struts 2, Tomcat, JSP, etc.) that is capable of doing some fairly complex simulations. These can take up to 2 minutes to complete on the server and will return a graph of the results. It is trivial to calculate the percentage of the simulation completed because the process works by repeating the same calculations thousands of times. I would be interested to know if anyone has ever tried to use client-side technology to provide an estimate of the percentage complete, i.e. query the servlet to get the number of cycles completed at various points throughout the simulation. This could then be displayed as a progress bar in the client browser. Any thoughts, advice, or resources would be much appreciated. Thanks, Alex

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to have the application write to sub-directories such as ./a/b/c/abc.ext rather than just ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? Or in other words, assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ball park estimate.
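
    A rough back-of-the-envelope sketch of the arithmetic (the fan-out and the per-directory cap below are illustrative assumptions, not ext3 limits): with a fixed fan-out per level, the number of levels needed follows from a logarithm.

      import math

      total_files = 3_000_000
      fanout = 256          # e.g. two hex characters of the file name per level
      per_dir_cap = 10_000  # assumed "acceptable" number of files per leaf directory

      levels = math.ceil(math.log(total_files / per_dir_cap, fanout))
      print(levels)  # 2 levels -> about 3_000_000 / 256**2, i.e. roughly 46 files per leaf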

    Read the article

  • Content-Length header not returned from Pylons response

    - by Evgeny
    I'm still struggling to stream a file to the HTTP response in Pylons. In addition to the original problem, I'm finding that I cannot return the Content-Length header, so for large files the client cannot estimate how long the download will take. I've tried response.content_length = 12345 and I've tried response.headers['Content-Length'] = 12345. In both cases the HTTP response (viewed in Fiddler) simply does not contain the Content-Length header. How do I get Pylons to return this header? (Oh, and if you have any ideas on making it stream the file, please reply to the original question - I'm all out of ideas there.)

    Read the article

  • Best data-structure to use for two ended sorted list

    - by fmark
    I need a collection data-structure that can do the following:

      Be sorted
      Allow me to quickly pop values off the front and back of the list
      Remain sorted after I insert a new value
      Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value
      Thread-safety is not required
      Optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash-table for this though)

    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate less than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations in Python? I would really like to avoid writing this code myself.
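
    A minimal stdlib sketch of one way to cover most of these requirements (my own illustration; the class name and key handling are made up): keep (key, item) pairs sorted with bisect and pop from either end. Popping from the front of a Python list is an O(n) shift, so for ~200,000 items a purpose-built container such as the third-party sortedcontainers library may be a better fit.

      import bisect

      class TwoEndedSortedList:
          def __init__(self, key=lambda item: item):
              self._key = key
              self._data = []  # list of (key(item), item), kept sorted

          def add(self, item):
              bisect.insort(self._data, (self._key(item), item))

          def pop_smallest(self):
              return self._data.pop(0)[1]

          def pop_largest(self):
              return self._data.pop()[1]

      # Usage: store tuples, sorted on their second field.
      s = TwoEndedSortedList(key=lambda t: t[1])
      s.add(("a", 3)); s.add(("b", 1)); s.add(("c", 2))
      print(s.pop_smallest(), s.pop_largest())  # ('b', 1) ('a', 3)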

    Read the article

  • More interactive ZODB packing

    - by Mikko Ohtamaa
    Hi, the current ZMI "Pack database" functionality is a little rough. 1) Would it be possible to have some kind of progress indicator in the web UI, e.g. one telling how many minutes/hours are left, giving at least some kind of estimate? 2) How does ZODB packing affect the responsiveness of the site? Are all transactions blocked? 3) Are any command line scripts with a progress indicator available, so this could be done from a ZEO command line client? 4) At the very least, some kind of progress markers in the log output, e.g. [INFO] 30% done... 3:15 to go

    Read the article

  • matplotlib.pyplot, preserve aspect ratio of the plot

    - by Headcrab
    Assuming we have polygon coordinates as polygon = [(x1, y1), (x2, y2), ...], the following code displays the polygon:

      import matplotlib.pyplot as plt
      plt.fill(*zip(*polygon))
      plt.show()

    By default it tries to adjust the aspect ratio so that the polygon (or whatever other diagram) fits inside the window, and automatically changes it so that it still fits after resizing. This is great in many cases, except when you are trying to estimate visually whether the image is distorted. How do I fix the aspect ratio to be strictly 1:1? (Not sure if "aspect ratio" is the right term here, so in case it is not: I need both the X and Y axes to have a 1:1 scale, so that (0, 1) on both X and Y takes the exact same amount of screen space, and I need it to stay 1:1 no matter how I resize the window.)
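
    A minimal sketch of the usual fix: set the axes aspect to "equal", which forces one data unit on X to take the same screen space as one data unit on Y and survives window resizes (the triangle below is just a made-up example polygon).

      import matplotlib.pyplot as plt

      polygon = [(0, 0), (4, 0), (4, 1)]
      fig, ax = plt.subplots()
      ax.fill(*zip(*polygon))
      ax.set_aspect('equal', adjustable='box')  # 1:1 data-to-screen scaling on both axes
      plt.show()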

    Read the article

  • Better algorithm for estimating download time

    - by Scott Smith
    We've all seen the running download-time estimate that initially says something like "7 days", but keeps dropping wildly (e.g. "23 hours", "45 minutes", "1 min. 50 sec", etc.) with each successive estimation as the chunks are downloaded. To avoid these initial (alarming) estimates, there are techniques one could try, like suppressing display of the first n estimates, or waiting for the delta between estimates to drop below some threshold before displaying them, but these don't seem like a general, robust solution. There are corner cases involving too few samples, or samples that actually are wildly varying... I think I recall a general solution for this kind of thing in mathematics (statistics?) that reduces or eliminates these wild values. Does anyone know it?
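
    One common smoothing approach (a sketch of the general idea, not a canonical answer): keep an exponentially weighted moving average of the observed throughput and derive the remaining time from it, so early noisy samples fade away instead of whipsawing the estimate. The function name and the alpha value are my own choices.

      def make_eta_estimator(total_bytes, alpha=0.1):
          state = {"avg_rate": None, "received": 0}

          def update(chunk_bytes, elapsed_seconds):
              state["received"] += chunk_bytes
              rate = chunk_bytes / elapsed_seconds  # bytes/sec for this sample
              if state["avg_rate"] is None:
                  state["avg_rate"] = rate
              else:
                  state["avg_rate"] = alpha * rate + (1 - alpha) * state["avg_rate"]
              remaining = total_bytes - state["received"]
              return remaining / state["avg_rate"]  # smoothed seconds remaining

          return update

      # Usage: call the estimator once per received chunk with its size and duration.
      eta = make_eta_estimator(total_bytes=100_000_000)
      print(round(eta(1_000_000, 0.5)))  # first sample: about 50 s at 2 MB/s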

    Read the article

  • How to explain to a client that you've gone over-budget and you'll need more money/time to deliver w

    - by General Tapioca
    My situation is that I have agreed on a per-project proposal with the client. The proposal is vague, but it still names functionality in a way that leaves room for interpretation, so it can be argued either way whether something is included. I originally pressed as much as I could for a per-month contract, arguing that the project is largely unpredictable, but the client refused. Being a small company, I had to fold and sign a contract based on my group's estimations. At this point we have reached completion on about 85% of the features (we think), but we have run out of budget. We have been working with this client for almost two years on previous contracts, and we have delivered a good product that they are happy with, so we have a good standing relationship. More info:

      There has been a bit of scope creep, but I don't think enough for me to hide behind that argument.
      We've been delivering partial releases about monthly.
      We don't have systematic user testing in place.

    Read the article

  • Is Work Stealing always the most appropriate user-level thread scheduling algorithm?

    - by Il-Bhima
    I've been investigating different scheduling algorithms for a thread pool I am implementing. Due to the nature of the problem I am solving I can assume that the tasks being run in parallel are independent and do not spawn any new tasks. The tasks can be of varying sizes. I went immediately for the most popular scheduling algorithm "work stealing" using lock-free deques for the local job queues, and I am relatively happy with this approach. However I'm wondering whether there are any common cases where work-stealing is not the best approach. For this particular problem I have a good estimate of the size of each individual task. Work-stealing does not make use of this information and I'm wondering if there is any scheduler which will give better load-balancing than work-stealing with this information (obviously with the same efficiency). NB. This question ties up with a previous question.
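
    One size-aware alternative worth comparing against (a sketch of a standard technique, not something from the post): Longest Processing Time first (LPT), which sorts tasks by estimated size and greedily assigns each to the currently least-loaded worker. It is a static partitioning, so it trades work stealing's adaptivity for better balance when the size estimates are trustworthy.

      import heapq

      def lpt_schedule(task_sizes, n_workers):
          loads = [(0, w) for w in range(n_workers)]  # min-heap of (current load, worker id)
          heapq.heapify(loads)
          assignment = {w: [] for w in range(n_workers)}
          for i, size in sorted(enumerate(task_sizes), key=lambda p: p[1], reverse=True):
              load, w = heapq.heappop(loads)          # least-loaded worker so far
              assignment[w].append(i)
              heapq.heappush(loads, (load + size, w))
          return assignment

      # Example: 8 tasks of known size split across 3 workers.
      print(lpt_schedule([7, 5, 5, 4, 3, 3, 2, 1], 3))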

    Read the article

  • What do you do before starting on a project?

    - by hahuang65
    I'm still a pretty new programmer, and I haven't really worked on any large projects yet. However, a few projects for school have shown me something I had never really thought of before: pre-project planning. On one project we ran into a huge problem at the very last minute, and on the other the work was not divided evenly between partners, so most of it was actually done at the end. So my question to everyone here is: how do you plan out a project beforehand? Please try to cover the following: design (drawing out the UI by hand, UMLs, etc.), division of labor, the timeline (especially how you estimate how much time certain things will take), and anything else you can think of. Thanks for all the help!

    Read the article

  • Using XCode and instruments to improve iPhone app performance

    - by MrDatabase
    I've been experimenting with Instruments off and on for a while and I still can't do the following (with any sensible results): determine or estimate the average runtime of a function that's called many times. For example, if I'm driving my gameLoop at 60 Hz with a CADisplayLink I'd like to see how long the loop takes to run on average... 10 ms? 30 ms? etc. I've come close with the "CPU activity" instrument but the results are inconsistent or don't make sense. The Time Profiler seems promising, but all I can get is "% of runtime"... and I'd like an actual runtime.

    Read the article

  • Using VirtualMode on a DataGridView when the number of rows/columns isn't known

    - by Nathan Baulch
    I need to display an unknown length sequence of dictionaries with unknown keys efficiently in a data grid. This sequence is the result of a potentially slow LINQ query that could contain any number of results. At first I thought that VirtualMode on DataGridView was what I was looking for but it appears that the number of rows and columns must be known upfront. I tried adding a single row and column then adding more as needed from the CellValueNeeded event but this doesn't work. Is this even possible with VirtualMode? Or do I need to estimate how many rows are visible on the screen and manually build up the rows/columns? And if so, how do I ensure that a vertical scrollbar is present and react appropriately when a user uses it?

    Read the article

  • Best scaling methodologies for a high-traffic web application?

    - by tester2001
    We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is that it will handle 20 to 40 billion impressions a month. Our current language is ASP... but we are moving to PHP. Does PHP 5 have limits when scaling a web application? Or should I have our team invest in picking up JSP? Or is it more a matter of the app server and/or DB? We plan to use Oracle 10g as the database.
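
    A rough back-of-the-envelope sketch of what those impression counts mean as an average request rate (averages only; real ad traffic will peak well above this):

      for impressions in (20e9, 40e9):
          per_second = impressions / (30 * 24 * 3600)  # ~30-day month
          print(f"{impressions:.0e} impressions/month is about {per_second:,.0f} requests/s on average")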

    Read the article

  • Tracing/profiling instructions

    - by LeChuck2k
    Hi y'all. I'd like to statistically profile my C code at the instruction level. I need to know how many additions, multiplications, divides, etc. I'm performing. This is not your usual run-of-the-mill code profiling requirement. I'm an algorithm developer and I want to estimate the cost of converting my code to hardware implementations. For this, I'm being asked for the instruction breakdown during run-time (parsing the compiled assembly isn't sufficient, as it doesn't account for loops in the code). After looking around, it seems VMware may offer a possible solution, but I still couldn't find the specific feature that would allow me to trace the instruction stream of my process. Are you aware of any profiling tools which enable this?

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may take for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

      def start(seconds):
          import time, _thread
          def stop(seconds, signal):
              time.sleep(seconds)
              signal.pop()
          total, signal = 0, [None]
          _thread.start_new_thread(stop, (seconds, signal))
          while signal:
              total += 1
          return total

      if __name__ == '__main__':
          print('Testing the CPU speed ...')
          print('Relative speed:', start(60))
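
    One variation sometimes suggested for this kind of micro-benchmark (a sketch under my own assumptions, not one of the numbered attempts from the post): drop the watcher thread and check the clock only once per batch of iterations, so nearly all of the loop's time goes to the additions being counted.

      import time

      def count_additions(seconds, batch=100_000):
          total = 0
          deadline = time.time() + seconds
          while time.time() < deadline:   # one clock check per batch
              for _ in range(batch):
                  total += 1
          return total                    # may overshoot the deadline by up to one batch

      if __name__ == '__main__':
          print('Relative speed:', count_additions(60))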

    Read the article

  • How can the last command's wall time be put in the Bash prompt?

    - by Mr Fooz
    Is there a way to embed the last command's elapsed wall time in a Bash prompt? I'm hoping for something that would look like this:

      [last: 0s][/my/dir]$ sleep 10
      [last: 10s][/my/dir]$

    Background: I often run long data-crunching jobs and it's useful to know how long they've taken so I can estimate how long it will take for future jobs. For very regular tasks, I go ahead and record this information rigorously using appropriate logging techniques. For less-formal tasks, I'll just prepend the command with time. It would be nice to automatically "time" every single interactive command and have the timing information printed in a few characters rather than 3 lines.

    Read the article

  • Get label height for fixed width

    - by Jonas
    Is there any way I can get the height of a label, if it hypothetically had a certain width? I've been trying with control.GetPreferredSize(size) like so:

      Dim wantedWidth As Integer = 100
      Dim ctrlSize As Size = label.GetPreferredSize(New Size(wantedWidth, 0))

    because I thought that setting height = 0 would indicate a free height, but the height I get is way too small. I also tried to estimate the height of the label, using Graphics.MeasureString to calculate the equivalent area of the Label:

      Dim prefWidth As Integer = 100
      Dim estSize As SizeF = g.MeasureString(label.Text, label.Font)
      Dim estHeight As Integer = CInt(estSize.Width * estSize.Height / prefWidth)

    but that yields the same result. Any ideas? I'm on .NET 2.0, unfortunately.

    Read the article

  • BCB: how to get the (approximate) width of a character in a given TFont?

    - by mawg
    It's a TMemo, not that that should make any difference. Googling suggests that I can use Canvas->TextWidth() but those are Delphi examples and BCB doesn't seem to offer this property. I really want something analogous to memo->Font->Height for width. I realize that not all fonts are fixed width, so a good estimate will do. All that I need is to take the width of a TMemo in pixels and make a reasonable guess at how many characters of the current font it will hold. Of course, if I really want to be lazy, I can just google for the average height/width ratio, since height is known. Remember, an approximation is good enough for me if it is tricky to get exact. http://www.plainlanguagenetwork.org/type/utbo211.htm says, " A width to height ratio of 3:5 (0.6) is recommended for most applications"

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to - I estimate - about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions: I've read somewhere that it's not good to update blobs, because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this? Initially the blobs get constructed by appending data chunks. Each chunk is up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields as below? parent_id, position, chunk Then, to get the entire blob, one would do something like: SELECT GROUP_CONCAT(chunk ORDER BY position) FROM chunks WHERE parent_id = 187 The result would be used in a PHP script. Is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words; assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ball park estimate.

    Read the article

  • Statistics Question: Kernel Smoothing in R

    - by James Thompson
    I have data of this form:

      x   y
      1   0.19
      2   0.26
      3   0.40
      4   0.58
      5   0.59
      6   1.24
      7   0.68
      8   0.60
      9   1.12
      10  0.80
      11  1.20
      12  1.17
      13  0.39

    I'm currently plotting a kernel-smoothed estimate of y against x using this code:

      smoothed = ksmooth( d$resi, d$score, bandwidth = 6 )
      plot( smoothed )

    I simply want a plot of the x values versus the smoothed y values. However, the documentation for ksmooth suggests that this isn't the best kernel smoother available: "This function is implemented purely for compatibility with S, although it is nowhere near as slow as the S function. Better kernel smoothers are available in other packages." Which other kernel smoothers are better, and in which packages can they be found?

    Read the article

  • Java long task - Did it stop writing to file?

    - by rockit
    I am writing a lot of data to a file, and while keeping my eye on the file it eventually stopped growing in size. Essentially my task is getting information from a database and printing out all non-unique values in column A. Since there are many rows in the database table, and the database table is across my network, this is taking days to complete. Thus I'm concerned that since the file isn't growing, it isn't actually being written to anymore. Which is odd - I have no catch blocks in my code, so if there were a problem writing to the file, wouldn't it have thrown an error? Should I let the task complete (I estimate 2-3 days from today), or is there something else going on here that I don't know about which is making my application stop writing to the file? My algorithm goes something like this:

      Declare file
      Create new file
      Open file for writing
      Get database connection
      Get resultset from database
      For each row in the resultset:
          - write column "A" to file
          - if row# % 100000 then write to screen "completed " + row# + " rows"
      When no more rows exist:
          Close file
          Write to screen "completed"

    Read the article
