Search Results

Search found 2156 results on 87 pages for 'weighted average'.


  • Can a C# method chain be "too long"?

    - by ccornet
    Not in terms of readability, naturally, since you can always arrange the separate methods onto separate lines. Rather, is it dangerous, for any reason, to chain an excessively large number of methods together? I use method chaining primarily to save space on declaring individual one-use variables, and because I traditionally use methods that return a value rather than methods that modify the caller. String methods are the exception: those I chain mercilessly. In any case, I worry sometimes about the impact of using exceptionally long method chains all in one line. Let's say I need to update the value of one item based on someone's username. Unfortunately, the shortest way to retrieve the correct user looks something like the following. SPWeb web = GetWorkflowWeb(); SPList list2 = web.Lists["Wars"]; SPListItem item2 = list2.GetItemById(3); SPListItem item3 = item2.GetItemFromLookup("Armies", "Allied Army"); SPUser user2 = item2.GetSPUser("Commander"); SPUser user3 = user2.GetAssociate("Spouse"); string username2 = user3.Name; item1["Contact"] = username2; Everything with a 2 or 3 lasts for only one call, so I might condense it as the following (which also lets me get rid of a would-be-superfluous 1): SPWeb web = GetWorkflowWeb(); item["Contact"] = web.Lists["Armies"] .GetItemById(3) .GetItemFromLookup("Armies", "Allied Army") .GetSPUser("Commander") .GetAssociate("Spouse") .Name; Admittedly, it looks a lot longer when it is all in one line and when you have int.Parse(ddlArmy.SelectedValue.CutBefore(";#", false)) instead of 3. Nevertheless, this is about the average length of these chains, and I can easily foresee some that are considerably longer. Excluding readability, is there anything I should be worried about with these 10+ method chains? Or is there no harm in using really, really long method chains?

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. At this stage of the project 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4, and the resources available are 8 GB of RAM, 500 GB of free disk space and a 4-core CPU at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
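    For what it's worth, a rough sketch of how such a loop could look, staying on master the whole time (this is only an outline: it assumes the tarballs extract into the repository root, that git's user.name/email are already configured, and the helper name is made up):

        import os
        import shutil
        import subprocess
        import tarfile

        def import_tarballs(repo_dir, tarball_paths):
            # One commit per tarball, oldest first; everything stays on master,
            # so there is never a branch to merge.
            subprocess.check_call(['git', 'init'], cwd=repo_dir)
            for path in sorted(tarball_paths):
                # Clear the working tree (except .git) so deletions are recorded too.
                for entry in os.listdir(repo_dir):
                    if entry == '.git':
                        continue
                    full = os.path.join(repo_dir, entry)
                    if os.path.isdir(full):
                        shutil.rmtree(full)
                    else:
                        os.remove(full)
                tarfile.open(path).extractall(repo_dir)
                subprocess.check_call(['git', 'add', '-A'], cwd=repo_dir)
                subprocess.check_call(
                    ['git', 'commit', '-m', 'Import %s' % os.path.basename(path)],
                    cwd=repo_dir)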

    Read the article

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in the chain for one store, I would love to have a local backend database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma over which local database is going to be the most suitable for me. Here are some notes to help point me in the right direction: it has to be light ("my POS devices are usually old and suffer performance-wise"); it has to be free ("I have a lot of devices and I do not want additional cost beside the main SQL Server"); and one day I would love to try porting it all to Mono and Linux. Here is what I've researched so far: simple XML ("light, but I am afraid of performance; my main items table averages 10K records"); SQL Server Express ("I am afraid that my POS devices have too little hardware for SQL Express, and it is also hard to install and configure on each device"); the less-known Advantage Database Server, which has a free distribution of its offline ADT system; DBF with an extended library ("respect for good old DBFs, but that era is behind me, along with Clipper"); MS Access; SQLite ("the most likely choice for now, but I am afraid of how it will pair with MS SQL - do they have the same data types?"). I know that a lot of this is subjective, but can someone at least recommend some other lite database systems, or things that I should pay the most attention to before I choose a database?

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When the latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, so the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use the Express Editions, and in my scenario they are quite limiting from the performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. So the bold sentence above would make me prefer MySQL to Firebird (for the roundtrips aspect, not in general), but can anyone add more information? And where does MS SQL sit? Is someone able to "rank" the roundtrip performance of the main RDBMSs, or at least MS SQL, MySQL, PostgreSQL and Firebird (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS)? Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question simply by making a list like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 is average, 3 is bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • linq: SQL performance on high loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to be slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increasing and decreasing values). I used to do it like this: UPDATE table SET value=value+1 WHERE ID=@Id It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved: Stats.RegisteredUsers++; Db.SubmitChanges(); Let's say there were 100 000 users. LINQ will say "let it be 100 001" instead of "let it be increased by 1". But if the value has already been increased by someone else (which happens on my site all the time), then LINQ will go "oops, this value is already 100 001; whatever, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100 002. Like I said, this happened to me all the time; the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?

    Read the article

  • How to speed up an already cached pip install?

    - by Maxime R.
    I frequently have to re-create virtual environments from a requirements.txt and I am already using $PIP_DOWNLOAD_CACHE. It still takes a lot of time, and I noticed the following: pip spends a lot of time between these two lines: Downloading/unpacking SomePackage==1.4 (from -r requirements.txt (line 2)) Using download cache from $HOME/.pip_download_cache/cached_package.tar.gz Like ~20 seconds on average to decide it's going to use the cached package; then the install is fast. This is a lot of time when you have to install dozens of packages (actually enough to write this question). What is going on in the background? Is there some sort of integrity check against the online package? Is there a way to speed this up? edit: Looking at: time pip install -v Django==1.4 I get: real 1m16.120s user 0m4.312s sys 0m1.280s The full output is here http://pastebin.com/e4Q2B5BA. It looks like pip is spending its time looking for a valid download link while it already has a valid cache of http://pypi.python.org/packages/source/D/Django/Django-1.4.tar.gz. Is there a way to look in the cache first and stop there if the versions match?

    Read the article

  • I can't read the value from a radio button.

    - by Corey
    <html> <head> <title>Tip Calculator</title> <script type="text/javascript"><!-- function calculateBill(){ var check = document.getElementById("check").value; /* I try to get the value selected */ var tipPercent = document.getElementById("tipPercent").value; /* But it always returns the value 15 */ var tip = check * (tipPercent / 100) var bill = 1 * check + tip; document.getElementById('bill').innerHTML = bill; } --></script> </head> <body> <h1 style="text-align:center">Tip Calculator</h1> <form id="f1" name="f1"> Average Service: 15%<input type="radio" id="tipPercent" name="tipPercent" value="15" /> <br /> Excellent Service: 20%<input type="radio" id="tipPercent" name="tipPercent" value="20" /> <br /><br /> <label>Check Amount</label> <input type="text" id="check" size="10" /> <input type="button" onclick="calculateBill()" value="Calculate" /> </form> <br /> Total Bill: <p id="bill"></p> </body> </html>

    Read the article

  • Search engine recommendation for 100 sites of about 4000 pages

    - by fwkb
    I am looking for a search engine that can regularly (daily-ish) scan about 100 pages for changes and index the associated site if changes since the last scan are found. It should be able to handle about 100 sites, each averaging 4,000 pages of about 5 KB average size, each on a different server (but with only the one centralized search engine). Each of these sites will have a search form that gets submitted to this search engine. The results that are returned must be specific to the site the search was submitted from. I create the templates for the external sites, so I can give the search form a hidden field that specifies which site the form is submitted from. What would you recommend I look into? I would love to use a Python-based system for this, if feasible. I am currently using something called iSearch2. It doesn't seem very stable at this scale; the description of the product states it is not really intended for multiple sites; it is in PHP (which I am less comfortable with than Python); and it has a few other shortcomings for my specific situation.

    Read the article

  • Time complexity to fill hash table (homework)?

    - by Heathcliff
    This is a homework question, but I think there's something missing from it. It asks: Provide a sequence of m keys to fill a hash table implemented with linear probing, such that the time to fill it is minimum. And then: Provide another sequence of m keys, but such that the time to fill it is maximum. Repeat these two questions if the hash table implements quadratic probing. I can only assume that the hash table has size m, both because it's the only number given and because we have used that letter to denote the hash table size before, when describing the load factor. But I can't think of any sequence that achieves the first without knowing the hash function that hashes the sequence into the table. If it is a bad hash function, such that, for instance, it hashes every entry to the same index, then both the minimum and maximum time to fill it will be O(n), regardless of what the sequence looks like. And in the average case, where I assume the hash function is OK, how am I supposed to know how long it will take for that hash function to fill the table? Aren't these questions tied more strongly to the hash function than to the sequence that is hashed? As for the second question, I can assume that, regardless of the hash function, a sequence of size m with the same key repeated m times will provide the maximum time, because it will cause linear probing from the second entry on. I think that will take O(n) time. Is that correct? Thanks
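    For what it's worth, the effect of the key sequence for a fixed, simple hash function is easy to check empirically; a small sketch that counts occupied-slot probes while filling a linear-probing table, assuming h(k) = k mod m (the function name and the two test sequences are just illustrations):

        def probes_to_fill(keys, m):
            # Insert every key into an empty table of size m with linear probing,
            # counting how many occupied slots are inspected along the way.
            table = [None] * m
            probes = 0
            for k in keys:
                i = k % m
                while table[i] is not None:
                    probes += 1
                    i = (i + 1) % m
                table[i] = k
            return probes

        m = 8
        print(probes_to_fill(range(m), m))                   # distinct slots: 0 extra probes
        print(probes_to_fill([m * j for j in range(m)], m))  # all collide: m*(m-1)/2 probes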

    Read the article

  • How can I read the value of a radio button in JavaScript?

    - by Corey
    <html> <head> <title>Tip Calculator</title> <script type="text/javascript"><!-- function calculateBill(){ var check = document.getElementById("check").value; /* I try to get the value selected */ var tipPercent = document.getElementById("tipPercent").value; /* But it always returns the value 15 */ var tip = check * (tipPercent / 100) var bill = 1 * check + tip; document.getElementById('bill').innerHTML = bill; } --></script> </head> <body> <h1 style="text-align:center">Tip Calculator</h1> <form id="f1" name="f1"> Average Service: 15% <input type="radio" id="tipPercent" name="tipPercent" value="15" /> <br /> Excellent Service: 20% <input type="radio" id="tipPercent" name="tipPercent" value="20" /> <br /><br /> <label>Check Amount</label> <input type="text" id="check" size="10" /> <input type="button" onclick="calculateBill()" value="Calculate" /> </form> <br /> Total Bill: <p id="bill"></p> </body> </html> I try to get the value selected with document.getElementById("tipPercent").value, but it always returns the value 15.

    Read the article

  • multi-shop orders table and sequential order numbers based on shop

    - by imanc
    Hey, I am looking at building a shop solution that needs to be scalable. Currently it receives 1-2000 orders on average per day across multiple country-based shops (e.g. UK, US, DE, DK, ES etc.), but this volume could be 10x that amount in two years. I am looking at either using separate country-shop databases to store the orders tables, or combining all of them into one orders table. If all orders exist in one table with a global ID (auto number) and country ID (e.g. UK, DE, DK etc.), each country's orders would also need to have sequential numbering. So in essence, we'd have to have a global ID and a country order ID, with the country order ID being sequential per country only, e.g. global ID = 1000, country = UK, country order ID = 1000 global ID = 1001, country = DE, country order ID = 1000 global ID = 1002, country = DE, country order ID = 1001 global ID = 1003, country = DE, country order ID = 1002 global ID = 1004, country = UK, country order ID = 1001 The global ID would be DB-generated and not something I would need to worry about. But I am thinking that I'd have to do a query to get the current country order ID + 1 to find the next sequential number. Two things concern me about this: 1) query times when the table has potentially millions of rows of data and I'm doing a read before a write, 2) the potential for ID number clashes due to simultaneous writes/reads. With a MyISAM table the entire table could be locked whilst the last country order ID + 1 is retrieved, to prevent ID number clashes. I am wondering if anyone knows of a more elegant solution? Cheers, imanc
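    One common pattern is a small per-country counter table incremented inside the same transaction as the order insert, so the row lock serialises concurrent orders for the same country; a rough sketch with a generic DB-API cursor (the order_counters table, its columns and the helper name are made up, and this assumes a transactional engine such as InnoDB rather than MyISAM):

        def next_country_order_id(cursor, country):
            # The UPDATE takes a row lock that is held until the surrounding
            # transaction commits, so two simultaneous orders for the same
            # country cannot read the same counter value.
            cursor.execute(
                "UPDATE order_counters SET last_id = last_id + 1 WHERE country = %s",
                (country,))
            cursor.execute(
                "SELECT last_id FROM order_counters WHERE country = %s",
                (country,))
            return cursor.fetchone()[0]

        # inside the order-placing transaction:
        # country_order_id = next_country_order_id(cursor, 'UK')
        # ...insert the order row with this country_order_id, then commit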

    Read the article

  • Why do C# containers and GUI classes use int and not uint for size-related members?

    - by smerlin
    I usually program in C++, but for school I have to do a project in C#. So I went ahead and coded like I was used to in C++, but was surprised when the compiler complained about code like the following: const uint size = 10; ArrayList myarray = new ArrayList(size); //Arg 1: cannot convert from 'uint' to 'int' OK, they expect int as the argument type, but why? I would feel much more comfortable with uint as the argument type, because uint fits much better in this case. Why do they use int as the argument type pretty much everywhere in the .NET library, even though in many cases negative numbers don't make any sense (since no container or GUI element can have a negative size)? If the reason they used int is that they didn't expect the average user to care about signedness, why didn't they additionally add overloads for uint? Is this just MS not caring about sign correctness, or are there cases where negative values make some sense / carry some information (an error code?) for container/GUI widget/... sizes?

    Read the article

  • Using `<List>` when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will "catch up" eventually. So, what I want to do is put my images into a <List> when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image "Image1" to the list, then filling "Image1" with a new image (during the next image acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this: while (!exitcondition) { if(ImageAvailabe()) { Image1 = AcquireImage(); ImgList.Add(Image1); } if(ImgList.Count > 0) { ProcessEngine.NewImage(ImgList[0]); ImgList.RemoveAt(0); } } Given the above, how can I ensure that: - I don't replace all items in the list every time Image1 is modified. - I don't need to pre-declare a number of images in order to do this kind of processing. - I don't create a memory-devouring monster. Any advice is greatly appreciated.
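    For what it's worth, the distinction that matters here (assuming the image type is a reference type) is between re-pointing the variable at a brand-new object, which leaves the object already stored in the list untouched, and mutating the object the variable currently refers to, which the list will see because it holds the same reference. A quick Python analogue of that rule, with made-up stand-in objects:

        images = []
        img = {"frame": 1}       # stand-in for the first acquired image
        images.append(img)

        img = {"frame": 2}       # rebinding the name to a new object
        print(images)            # [{'frame': 1}] -- the list still holds the old image

        images[0]["frame"] = 99  # mutating the object the list holds
        print(images)            # [{'frame': 99}]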

    Read the article

  • python tkinter gui

    - by Lewis Townsend
    I want to make a small Python program for yearly temperatures. I can get nearly everything working in the standard console, but I want to implement it in a GUI. The program opens a CSV file, reads it into lists, and works out the average, min and max temps. Then, on closing, the application will save a summary to a new text file. I want the default start-up screen to show All Years. When a button is clicked it just shows that year's data. Here is what I want it to look like: a pretty simple layout with just the 5 buttons and the outputs for each. I can make the buttons for the top fine with:
        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                self.hi_there = Button(frame, text="All Years", command=self.All)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2011", command=self.Y1)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2012", command=self.Y2)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2013", command=self.Y3)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="Save & Exit", command=self.Exit)
                self.hi_there.pack(side=LEFT)
    I'm not sure how to make the other elements, such as the title and table. I was going to post the code of the small program but decided not to. Once I have the structure/framework I think I can populate the fields, and I might learn better this way. Using Python 2.7.3.
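    One low-tech way to get the title and the table is a Label for the heading plus a grid of Labels that gets rebuilt on each button press; a minimal sketch (the helper name, column headings and sample figures below are made up):

        from Tkinter import Frame, Label, TOP, X

        def show_table(parent, title_text, rows):
            # rows is a list of (year, min_temp, max_temp, average) tuples
            title = Label(parent, text=title_text, font=("Helvetica", 14, "bold"))
            title.pack(side=TOP, fill=X)
            table = Frame(parent)
            table.pack(side=TOP)
            for r, row in enumerate([("Year", "Min", "Max", "Average")] + rows):
                for c, value in enumerate(row):
                    Label(table, text=value, width=12, borderwidth=1,
                          relief="solid").grid(row=r, column=c)
            return title, table

        # e.g. inside App.__init__ or a button handler:
        # show_table(master, "All Years", [(2011, -3.2, 31.5, 14.8), (2012, -5.0, 29.9, 13.9)])
    On each button press you would destroy the previous title and table widgets and call the helper again with just that year's rows.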

    Read the article

  • What could cause these Apache crash errors?

    - by jacobanderssen
    Hello guys. I had a server crash several days ago. I use Cacti to keep stats: at the time the server crashed, a huge spike from Load 1 to Load 200 occurred, with over 800 processes in the run queue (up from an average of 300). Upon checking /var/log/httpd I notice this: *** glibc detected *** /usr/sbin/httpd: double free or corruption (out): 0x00002b8f3142c2f0 *** Followed by a lot of these: [Sat Mar 13 19:20:20 2010] [warn] child process 3090 still did not exit, sending a SIGTERM [Sat Mar 13 19:20:20 2010] [warn] child process 3091 still did not exit, sending a SIGTERM Followed by this: ======= Backtrace: ========= /lib64/libc.so.6[0x2b8f1463c2ef] /lib64/libc.so.6(cfree+0x4b)[0x2b8f1463c73b] /usr/lib64/libapr-1.so.0(apr_pool_destroy+0x131)[0x2b8f13f98821] /usr/sbin/httpd[0x2b8f126df47e] /usr/sbin/httpd[0x2b8f126df4ab] /lib64/libpthread.so.0[0x2b8f141b87c0] /etc/httpd/modules/mod_file_cache.so[0x2b8f1cdf00fb] ======= Memory map: ======== And finally a lot of these: [Sat Mar 13 19:20:27 2010] [error] could not make child process 733 exit, attempting to continue anyway [Sat Mar 13 19:20:27 2010] [error] could not make child process 24560 exit, attempting to continue anyway [Sat Mar 13 19:20:27 2010] [error] could not make child process 31384 exit, attempting to continue anyway I am also noticing one or two lines like this: [Mon Mar 15 01:17:26 2010] [notice] child pid 20765 exit signal Segmentation fault (11) Please help me shed some light on this. Thanks!

    Read the article

  • Loading/Displaying large amount of data on webpage.

    - by jb
    I have a webpage which contains a table for displaying a large amount of data (on average from 2,000 to 10,000 rows). This page takes a long time to load/render, which is understandable. The problem is that while the page is loading, the PC's memory usage skyrockets (500 MB on my test system is in use by Internet Explorer) and the whole PC grinds to a halt until it has finished, which can take a minute or two. IE hangs until it is complete, and switching to another running program is just as bad. I need to fix this, and ideally I want to accomplish 2 things: 1) load individual parts of the page separately, so the page can render initially without the large data table (a loading div will be placed there until it is ready); 2) not use up so much memory or local resources while rendering, so at least the user can use a different tab/application at the same time. How would I go about doing both or either of these? I'm an applications programmer by trade, so I am still a little fuzzy on the things I can do in a web environment. Cheers all.

    Read the article

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with TCP window size 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, and we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, links, or pointers to Indy code are welcome... Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy9/D2007 on Windows Server 2003 SP2. More gory details: The TCP zero-window cases happen on the middle tier talking to the DB server. They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when it happens and have a way to do something (logging at least) in our code. So where should we hook in (in Indy?) to know when that condition occurs?

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts); b uniquely determines ts. A = pd.DataFrame( [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5), ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)], columns=['a', 'b', 'ts']).set_index(['a', 'b']) AA = A.reset_index() Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A. B = pd.DataFrame( dict(a=list('aaaaabbcccccc'), ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a') The semantics of this is that B contains observed occurrences of events of the type indicated by the index. I would like to find in B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be: C: ('a', 'x') 4 ('a', 'y') 7 ('a', 'z') 5 ('b', 'x') 7 ('b', 'z') 7 ('c', 'y') 8 I have some working code, but it is terribly slow. C = AA.apply(lambda row: ( row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b']) Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1,000 a's, assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
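    One way to avoid the per-row lookups is to do a single searchsorted per group of A rather than one per row; a sketch of the idea using .loc in place of the older .ix, assuming (as stated) that B's ts values are sorted within each index value and always contain a match:

        import numpy as np
        import pandas as pd

        def first_ts_at_or_after(AA, B):
            pieces = []
            for a, grp in AA.groupby('a'):
                # One vectorised searchsorted per group instead of one per row.
                ts_b = np.atleast_1d(np.asarray(B.loc[a, 'ts']))
                idx = np.searchsorted(ts_b, grp['ts'].values, side='left')
                pieces.append(pd.Series(ts_b[idx], index=grp.index))
            out = AA.copy()
            out['ts'] = pd.concat(pieces).sort_index()
            return out.set_index(['a', 'b'])

        # C = first_ts_at_or_after(AA, B)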

    Read the article

  • algorithm - How to sort a 0/1 array with 2n/3 comparisons?

    - by Jackson Tale
    In the Algorithm Design Manual, there is this exercise: 4-26 Consider the problem of sorting a sequence of n 0's and 1's using comparisons. For each comparison of two values x and y, the algorithm learns which of x < y, x = y, or x > y holds. (a) Give an algorithm to sort in n - 1 comparisons in the worst case. Show that your algorithm is optimal. (b) Give an algorithm to sort in 2n/3 comparisons in the average case (assuming each of the n inputs is 0 or 1 with equal probability). Show that your algorithm is optimal. For (a), I think it is fairly easy: I can choose a[n-1] as a pivot, then do something like a quicksort partition - scan 0 to n - 2 and find the point where the left side is all 0 and the right side is all 1; this takes n - 1 comparisons. But for (b), I can't get a clue. It says "each of the n inputs is 0 or 1 with equal probability", so I guess I can assume the numbers of 0s and 1s are equal? But how can I get a result that is related to 1/3? Divide the whole array into 3 groups? Thanks
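    As a sanity check of part (a), a small sketch of the pivot idea: every other element is compared once against a[n-1], and those n - 1 three-way outcomes alone are enough to emit the sorted sequence (this does not address the 2n/3 average-case part):

        def sort01(a):
            # Exactly len(a) - 1 three-way comparisons, all against the last element.
            pivot = a[-1]
            less = equal = greater = 0
            for x in a[:-1]:
                if x < pivot:        # one three-way comparison per element
                    less += 1
                elif x > pivot:
                    greater += 1
                else:
                    equal += 1
            equal += 1               # the pivot itself
            if less:                 # something was smaller, so pivot and its equals are 1
                return [0] * less + [1] * equal
            if greater:              # something was larger, so pivot and its equals are 0
                return [0] * equal + [1] * greater
            return list(a)           # all elements equal: already sorted

        print(sort01([1, 0, 1, 0, 0, 1]))   # [0, 0, 0, 1, 1, 1]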

    Read the article

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using Linq to get the records and use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database. So the average amount of total Items that has to be created is around 2 million. while (objects.Count != 0) { using (dataContext = new LinqToSqlContext(new DataContext())) { foreach (Object objectRecord in objects) { // Create a list of 0 - 4 Random Items and add each Item to the Object for (int i = 0; i < Random.Next(0, 4); i++) { Item item = new Item(); item.Id = Guid.NewGuid(); item.Object = objectRecord.Id; item.Created = DateTime.Now; item.Changed = DateTime.Now; dataContext.InsertOnSubmit(item); } } dataContext.SubmitChanges(); } amountToSkip += 250; objects = objectCollection.Skip(amountToSkip).Take(250).ToList(); } Now the problem arises when creating the Items. When running the application (and not even using dataContext) the memory increases consistently. It's like the items are never getting disposed. Does anyone notice what I'm doing wrong? Thanks in advance!

    Read the article

  • Largest triangle from a set of points

    - by Faken
    I have a set of random points from which I want to find the largest triangle by area whose vertices each lie on one of those points. So far I have figured out that the largest triangle's vertices will only lie on the outside points of the cloud of points (the convex hull), so I have programmed a function to compute just that (using a Graham scan in n log n time). However, that's where I'm stuck. The only way I can figure out to find the largest triangle from these points is to use brute force at n^3 time, which is still acceptable in the average case as the convex hull algorithm usually kicks out the vast majority of points. However, in a worst-case scenario where the points are on a circle, this method would fail miserably. Does anyone know an algorithm to do this more efficiently? Note: I know that CGAL has this algorithm, but they do not go into any detail on how it is done. I don't want to use libraries; I want to learn this and program it myself (and also be able to tweak it to operate exactly the way I want - just like the Graham scan, where other implementations pick up collinear points that I don't want).
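    For reference, once the hull is available the brute force is tiny, with the area test done via the cross product; a baseline sketch (still O(h^3) in the hull size h, so it does not solve the points-on-a-circle worst case):

        from itertools import combinations

        def area2(p, q, r):
            # Twice the unsigned area of triangle pqr (cross product of pq and pr).
            return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

        def largest_triangle(hull):
            # hull: list of (x, y) convex-hull vertices, e.g. from the Graham scan.
            return max(combinations(hull, 3), key=lambda t: area2(*t))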

    Read the article

  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines the buffer size, so that smaller files are copied with a smaller buffer and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if statement that then leads to other if statements, creating four options, and they all just call the same function with different parameters.
        import os
        import sys

        def smartcopy(filestocopy, dest_path, show_progress = False):
            """Determines what buffer size to use with copy()
            Setting show_progress to True calls back display_progress()"""
            #filestocopy is a list of dictionaries for the files needed to be copied
            #dictionaries are used as the fullpath, st_mtime, and size are needed
            if len(filestocopy.keys()) == 0:
                return None
            #Determines average file size for which buffer to use
            average_size = 0
            for key in filestocopy.keys():
                average_size += int(filestocopy[key]['size'])
            average_size = average_size/len(filestocopy.keys())
            #Smaller buffer for smaller files
            if average_size < 1024*10000:
                #Buffer sizes determined by informal tests on my laptop
                if show_progress:
                    for key in filestocopy.keys():
                        #dest_path+key is the destination path, as the key is the relative path
                        #and the dest_path is the top level folder
                        copy(filestocopy[key]['fullpath'], dest_path+key,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, callback = None)
            #Bigger buffer for bigger files
            else:
                if show_progress:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600)

        def display_progress(pos, total, filename):
            percent = round(float(pos)/float(total)*100,2)
            if percent <= 100:
                sys.stdout.write(filename + ' - ' + str(percent)+'% \r')
            else:
                percent = 100
                sys.stdout.write(filename + ' - Completed \n')
    Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.
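    One way to collapse the four branches is to decide the buffer size and the callback once and then run a single copy loop; a sketch along those lines (it assumes copy() falls back to its default buffer when no size is passed, as in the original small-file branch):

        def smartcopy(filestocopy, dest_path, show_progress=False):
            """Same behaviour as above, but the buffer choice and the progress
            callback are each decided once instead of branching into four loops."""
            if not filestocopy:
                return None
            sizes = [int(f['size']) for f in filestocopy.values()]
            use_big_buffer = sum(sizes) / len(sizes) >= 1024 * 10000
            for key, info in filestocopy.items():
                callback = None
                if show_progress:
                    # the default argument pins the current key for this lambda
                    callback = lambda pos, total, name=key: display_progress(pos, total, name)
                if use_big_buffer:
                    copy(info['fullpath'], dest_path + key, 1024 * 2600, callback=callback)
                else:
                    copy(info['fullpath'], dest_path + key, callback=callback)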

    Read the article

  • Creating and appending a big DOM with javascript - most optimized way?

    - by fenderplayer
    I use the following code to append a big DOM on a mobile browser (WebKit): 1. while(someIndex--) // someIndex ranges from 10 to possibly 1000 2. { 3. var html01 = ['<div class="test">', someVal,'</div>', 4. '<div><p>', someTxt.txt1, someTxt.txt2, '</p></div>', 5. // lots of html snippets interspersed with variables 6. // on average ~40 to 50 elements in this array 7. ].join(''); 8. var fragment = document.createDocumentFragment(), 9. div = fragment.appendChild(document.createElement('div')); 10. div.appendChild(jQuery(html01)[0]); 11. jQuery('#screen1').append(fragment); 12. } //end while loop 13. // similarly i create 'html02' till 'html15' to append in other screen divs Is there a better or faster way to do the above? Do you see any problems with the code? I am a little worried about line 10, where I wrap in jQuery and then take it out.

    Read the article

  • How to improve Visual C++ compilation times?

    - by dtrosset
    I am compiling 2 C++ projects in a buildbot, on each commit. Both are around 1000 files; one is 100 kloc, the other 170 kloc. Compilation times are very different between gcc (4.4) and Visual C++ (2008). Visual C++ compilations for one project take around 20 minutes. They cannot take advantage of the multiple cores because one project depends on the other. In the end, a full compilation of both projects in Debug and Release, in 32 and 64 bits, takes more than 2 1/2 hours. gcc compilations for one project take around 4 minutes. They can be parallelized on the 4 cores and take around 1 min 10 secs. All 8 builds for the 4 versions (Debug/Release, 32/64 bits) of the 2 projects are compiled in less than 10 minutes. What is happening with Visual C++ compilation times? They are basically 5 times slower. What is the average time that can be expected to compile a C++ kloc? Mine are 7 s/kloc with VC++ and 1.4 s/kloc with gcc. Can anything be done to speed up compilation times on Visual C++?

    Read the article

  • Undesired Output of Crontab Job Using CURL

    - by Russell C.
    I have written a perl script that runs as a daily crontab job that uploads files to Amazon S3 via CURL. I want the output of the cron job emailed to me which works fine but I don't want that email to include messages related to the CURL upload (only those message my script is outputting). Here are the CURL related messages I'm seeing in the daily email right now: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 230M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 230M 0 0 0 544k 0 1519k 0:02:35 --:--:-- 0:02:35 1807k 0 230M 0 0 0 1744k 0 1286k 0:03:03 0:00:01 0:03:02 1342k 1 230M 0 0 1 2880k 0 1219k 0:03:13 0:00:02 0:03:11 1250k 1 230M 0 0 1 4016k 0 1198k 0:03:17 0:00:03 0:03:14 1218k 2 230M 0 0 2 5168k 0 1186k 0:03:19 0:00:04 0:03:15 1202k 2 230M 0 0 2 6336k 0 1181k 0:03:19 0:00:05 0:03:14 1157k 3 230M 0 0 3 7488k 0 1177k 0:03:20 0:00:06 0:03:14 1147k 3 230M 0 0 3 8592k 0 1167k 0:03:22 0:00:07 0:03:15 1142k 4 230M 0 0 4 9744k 0 1166k 0:03:22 0:00:08 0:03:14 1145k 4 230M 0 0 4 10.6M 0 1163k 0:03:23 0:00:09 0:03:14 1142k 5 230M 0 0 5 11.7M 0 1161k 0:03:23 0:00:10 0:03:13 1140k 5 230M 0 0 5 12.8M 0 1158k 0:03:23 0:00:11 0:03:12 1133k 6 230M 0 0 6 13.9M 0 1155k 0:03:24 0:00:12 0:03:12 1138k 6 230M 0 0 6 15.0M 0 1155k 0:03:24 0:00:13 0:03:11 1138k 7 230M 0 0 7 16.1M 0 1152k 0:03:25 0:00:14 0:03:11 1131k 7 230M 0 0 7 17.2M 0 1152k 0:03:25 0:00:15 0:03:10 1132k 7 230M 0 0 7 18.4M 0 1152k 0:03:24 0:00:16 0:03:08 1140k I am using a simple Perl system() call to invoke CURL. Does anyone know what command line argument I can supply CURL to turn off the reporting of the upload progress? Thanks in advance for your help!

    Read the article
