Search Results

Search found 2042 results on 82 pages for 'average'.

Page 69/82 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • Fastest way to develop data entry screens for a .NET backend?

    - by jay23
    I am a .NET / C# back end guy. I am working on an app that will have about 200 different data entry screens. For me, exposing DTOs as a collection for CRUD (IUpdatable and IQueryable) is the easy part; I can do it in my sleep :-). What I am trying to decide is what type of front end technology will allow me to develop these data entry screens fast. They don't have to be fancy, but they are not just plain grids either: on average they have about 15 form fields and some client side data validation (no db lookup). Options I am looking at are: ExtJS on the front and REST / JSON on the back; ASP.NET RIA, but I do not know SL (well, XAML); plain ASP.NET / MVC. One idea I had was that the DTO would contain the metadata about the form (as attributes) and the form could be dynamically generated, but I do not want to reinvent the wheel if there is an easy way. I have looked at RAD software, but all of them look at the DB and generate screens. I would rather have something that can look at my DTO and generate screens. Jay
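
    The attribute-driven idea in the last paragraph can be sketched outside the .NET stack too. Below is a rough Python illustration (PersonDto, build_form_spec and the metadata keys are invented for the example, not part of the question): per-field metadata on the DTO drives the form definition, which is the same shape a C# version would get from custom attributes read via reflection.

        # Hypothetical sketch: derive form-field descriptors from metadata attached to a DTO.
        from dataclasses import dataclass, field, fields

        @dataclass
        class PersonDto:
            # the metadata dict plays the role of .NET attributes on DTO properties
            name: str = field(default="", metadata={"label": "Full name", "required": True, "max_len": 50})
            age: int = field(default=0, metadata={"label": "Age", "min": 0, "max": 120})

        def build_form_spec(dto_type):
            """Walk the DTO's fields and return a list of form-field descriptors."""
            spec = []
            for f in fields(dto_type):
                entry = {"name": f.name, "type": f.type}
                entry.update(f.metadata)   # label, validation rules, etc.
                spec.append(entry)
            return spec

        for field_spec in build_form_spec(PersonDto):
            print(field_spec)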

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a j2ee application and the stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below. I have a j2ee application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server. The j2ee application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects; a person object and some with multiple instances for an average size in the range of 3000 to 20,000 bytes. Some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

    Read the article

  • Whether to put method code in a VB.Net data storage class, or put it in a separate class?

    - by Alan K
    TLDR summary: (a) Should I include (lengthy) method code in classes which may spawn multiple objects at runtime, (b) does doing so cause memory usage bloat, (c) if so should I "outsource" the code to a class that is loaded only once and have the class methods call that, or alternatively (d) does the code get loaded only once with the object definition anyway and I'm worrying about nothing? ........ I don't know whether there's a good answer to this but if there is I haven't found it yet by searching in the usual places. In my VB.Net (2010 if it matters) WinForms project I have about a dozen or so class objects in an object model. Some of these are pretty simple and do little more than act as data storage repositories. The ones further up the object model, however, have an increasing number of methods. There can be a significant number of higher level objects in use though the exact number will be runtime dependent so I can't be more precise than that. As I was writing the method code for one of the top level ones I noticed that it was starting to get quite lengthy. Memory optimisation is something of a lost art given how much memory the average PC has these days but I don't want to make my application a resource hog. So my questions for anyone who knows .Net way better than I do (of which there will be many) are: Is the code loaded into memory with each instance of the class that's created? Alternatively is it loaded only once with the definition of the class, and all derived objects just refer to that definition? (I'm not really sure how that could be possible given that, for example, event handlers can be assigned dynamically, but no harm asking.) If the answer to the first one is yes, would it be more efficient to write the code in a "utility" object which is loaded only once and called from the real class' methods? Any thoughts appreciated.

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as CREATE TABLE files ( id SERIAL PRIMARY KEY, name VARCHAR(255), filetype VARCHAR(255), ... ); and another table for holding file properties, such as CREATE TABLE properties ( id SERIAL PRIMARY KEY, file_id INTEGER CONSTRAINT fk_files REFERENCES files(id), size INTEGER, ... // other property fields ); The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for that query. I understand it needs a sequential scan, but still it seems too much. Here's the query SELECT f.filetype, avg(size), stddev(size) FROM files as f, properties as pr WHERE f.id = pr.file_id GROUP BY f.filetype; and the explain HashAggregate (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1) -> Hash Join (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1) Hash Cond: (f.id = pr.file_id) -> Seq Scan on files f (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1) -> Hash (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1) -> Seq Scan on properties pr (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1) Total runtime: 74014.520 ms Any ideas why it is so slow/how to make it faster?

    Read the article

  • Implementing a Clock Drift random number generator in PHP

    - by Excl
    Hi, I wrote the following code: function create_rand($time) { $counter = 0; $stop = microtime(true) + $time; while(microtime(true) < $stop) $counter++; return $counter; } function create_rand_max($time, $a, $b = false) { $rand_a = create_rand($time); $rand_b = create_rand($time); $rand_c = create_rand($time); $max = max($rand_a, $rand_b, $rand_c); $min = min($rand_a, $rand_b, $rand_c); if($max === $min) return create_rand_max($time, $a, $b); $middle = $rand_a + $rand_b + $rand_c - $max - $min; if($b !== false) return min($a, $b) + ($middle - $min) * (max($a, $b) - min($a, $b)) / ($max - $min); return ($middle - $min) * $a / ($max - $min); } $stop = 1000; $sum = 0; for($i = 0; $i < $stop; $i++) { $sum += create_rand_max(0.001, 100, 200); } echo $sum / $stop; The average is usually between 157 and 161, instead of ~150. Any ideas?

    Read the article

  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki] I am sorry for such an offensive question, but here are my arguments. Most of the progress in "computing" has come from non-programming sources, i.e. people invented faster microprocessors, better routers and novel memory devices. I don't think on average people are writing more efficient programs than those written 10 years ago, and the newer and popular languages are in fact slower than C, though speed is one of the lesser criteria. Most of the progress came from novel paradigms. The Web, the Internet, cloud computing and social networking are novel paradigms and did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. Though it did face scalability issues (same with Twitter), I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability. Even things like MapReduce, column-oriented databases and probabilistic algorithms (e.g. Bloom filters) came from hardcore algorithms research, rather than some programming convention. Thus my final point: why is programming skill so overstressed? To cite a recent example, supposedly only 10% of programmers can "write code" (a binary search) without debugging. Isn't it a bit hypocritical, considering your real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?

    Read the article

  • APC decreasing PHP performance? (PHP 5.3, Apache 2.2, Windows Vista 64-bit)

    - by M.M.
    Hi, I have Apache/2.2.15 (VC9) and PHP/5.3.2 (VC9 thread safe) running as an Apache module on a Vista 64-bit machine. All running fine. The project I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no db connection involved. The average (median) Apache response is about 0.15 seconds. After I installed APC (3.1.4-dev VC9 thread safe) with standard settings, the request response time suddenly rose to 1.3 seconds (!), which is unacceptable... All APC settings always looked good (through the apc.php script: enough shm memory, cache not full, fragmentation 0%). The only change that helped was disabling the stat lookup (apc.stat = 0); then the response dropped to 0.09 seconds, which was finally better than without APC. IIRC, it's expected and obvious that the stat lookup creates some overhead, but shouldn't it still be far more performant than running without the APC extension at all? Or, to put it differently, why does apc.stat create so much overhead? Apparently something is not working as it should, and I don't really know where to start looking. Thank you for your time/answers/direction in advance. Cheers, m.

    Read the article

  • PhoneGap: Will my mobile app 'feel' faster or slower once ported to PhoneGap?

    - by user15872
    So I'm designing everything in mobile Safari, and I know that PhoneGap is essentially a stripped webview, but... Question: Will my application run better in PhoneGap? (revised below) a) I imagine my navigation and core app will load faster since the scripts and images are on the device. Is this true? b) I assume, since they've been working on it for 2 years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true? (Assuming both html5/js/css code bases are pretty much the same and the app is running on iOS.) Update: Sorry, I meant to compare apples to slightly different apples. Question 1 revised: Will my app see any performance benefits running in a PhoneGap environment vs standard mobile Safari? (comparing mobile to mobile) 1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari? Follow-ups: 1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to suffer a serious performance hit vs standard mobile Safari? 2) Can I mix native and webview rendering? (i.e. the top half of my app rendered with html/css/js and the bottom half native)

    Read the article

  • A specific data structure

    - by user550413
    Well, this question is a bit specific, but I think there is some general idea in it that I can't get. Let's say I have K servers (a constant whose size I know). I have a program that gets requests, and every request has an id and the server id that will handle it. I have n requests - unknown size, can be any number. I need a data structure to support the following operations within the given complexity: GetServer - given the request ID, returns the server id that is supposed to handle this request in the current situation, not necessarily the original server (see below). Complexity: O(log K) on average. KillServer - given a server id that should be removed and another server id that all the requests of the removed server should be passed to. Complexity: O(1) in the worst case. -- Space complexity for the whole structure is O(K+n) -- The KillServer function made me think of using a Union-Find, as I can do the union in O(1) as requested, but my problem is the first operation. Why is it O(log K)? No matter how I "save" the requests, if I want to access any request (let's say it's an AVL tree) the complexity will be O(log n) in the worst case, and it was said that I can't assume K ≥ n (and probably K is much smaller than n). I tried thinking about it for a couple of hours but I can't find any solution. Known structures that can be used are: B+ tree, AVL tree, skip list, hash table, Union-Find, rank tree, and of course all the basics like arrays and such.
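
    One way to picture the structure being asked about is the rough Python sketch below (the class and method names are invented, not from the question). It pairs a hash table for the requests with a union-find-style forest for the servers: KillServer is a single pointer update, O(1) in the worst case, while GetServer walks to the live root, which is where the O(log K), rather than O(log n), bound has to come from.

        class ServerRouter:
            def __init__(self, k):
                self.parent = list(range(k))   # parent[i] == i  ->  server i is alive (a root)
                self.owner = {}                # request id -> original server id (hash table, O(1) expected)

            def add_request(self, request_id, server_id):
                self.owner[request_id] = server_id

            def get_server(self, request_id):
                # follow parent pointers from the original server to the current live root;
                # keeping this walk to O(log K) on average is the delicate part of the exercise
                s = self.owner[request_id]
                while self.parent[s] != s:
                    s = self.parent[s]
                return s

            def kill_server(self, dead_id, target_id):
                # O(1) worst case: repoint the killed root at the inheriting server
                self.parent[dead_id] = target_id

        # usage: request "r1" was assigned to server 2, which is then killed into server 0
        router = ServerRouter(4)
        router.add_request("r1", 2)
        router.kill_server(2, 0)
        print(router.get_server("r1"))   # -> 0

    Space is O(K + n): one parent slot per server and one hash entry per request.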

    Read the article

  • New to lists in Python

    - by user1762229
    This is my current code: while True: try: mylist = [0] * 7 for x in range(7): sales = float(input("Sales for day:")) mylist[x] = sales if sales < 0: print ("Sorry,invalid. Try again.") except: print ("Sorry, invalid. Try again.") else: break print (mylist) best = max(sales) worst = min(sales) print ("Your best day had", best, "in sales.") print ("Your worst day had", worst, "in sales.") When I run it I get this: Sales for day:-5 Sorry,invalid. Try again. Sales for day:-6 Sorry,invalid. Try again. Sales for day:-7 Sorry,invalid. Try again. Sales for day:-8 Sorry,invalid. Try again. Sales for day:-9 Sorry,invalid. Try again. Sales for day:-2 Sorry,invalid. Try again. Sales for day:-5 Sorry,invalid. Try again. [-5.0, -6.0, -7.0, -8.0, -9.0, -2.0, -5.0] Traceback (most recent call last): File "C:/Users/Si Hong/Desktop/HuangSiHong_assign9_part.py", line 45, in <module> best = max(sales) TypeError: 'float' object is not iterable I am not quite sure how to code it so that the list does NOT take in negative values, because I only want values 0 or greater. I am not sure how to solve the TypeError issue so that the min and max values will print as intended. My last issue is: if I want to find the average value of the seven inputs that a user puts in, how should I go about pulling those values out of the list? Thank you so much
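
    One possible cleaned-up version is sketched below (the prompts and messages are the asker's; the structure is one way to address the three issues described). It re-prompts on negative or non-numeric input instead of storing it, takes max/min of the list rather than of the last float entered, and computes the average with sum/len.

        mylist = []
        while len(mylist) < 7:
            try:
                sales = float(input("Sales for day: "))
            except ValueError:
                print("Sorry, invalid. Try again.")
                continue
            if sales < 0:
                print("Sorry, invalid. Try again.")   # reject negatives and re-prompt
                continue
            mylist.append(sales)

        print(mylist)
        best = max(mylist)     # max/min over the list, not over a single float
        worst = min(mylist)
        average = sum(mylist) / len(mylist)
        print("Your best day had", best, "in sales.")
        print("Your worst day had", worst, "in sales.")
        print("Your average day had", average, "in sales.")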

    Read the article

  • Dealing with large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of string text blocks (about 20K, and the largest I have seen is about 200K of them) in a short span of time and store them in a relational database. Each of the text blocks is relatively small; the average would be about 15 short lines (about 300 characters). The current implementation is in C# (VS2008), .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these: Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage? Do you know what would be the best compression algorithm/library to use in this context that gives me the best performance? Currently I just use the standard GZip in the .NET framework. Do you know of any best practices for dealing with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework (it is a big project and this requirement is only a small part of it). EDITED: I will keep adding to this to clarify points raised. I don't need text indexing or searching on these texts. I just need to be able to retrieve them at a later stage for display as a text block using the primary key. I have a working solution implemented as above and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.
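
    Whether compression pays off at ~300 characters per block is easy to measure up front. Below is a rough Python stand-in for the .NET GZip route (the sample text is made up; real blocks will compress differently): it simply compares raw and compressed sizes, keeping in mind that gzip adds a fixed per-block header and trailer that matter at these small sizes.

        import gzip, zlib

        sample = ("Example text line for one stored block.\n" * 8).encode("utf-8")  # ~320 bytes, made up

        gz = gzip.compress(sample)
        deflate = zlib.compress(sample, 6)

        print(len(sample), "raw bytes")
        print(len(gz), "bytes with gzip (includes fixed header/trailer overhead)")
        print(len(deflate), "bytes with raw deflate")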

    Read the article

  • Does anyone know of any good tutorials for using the APIs from Amazon Web Services, namely CloudWatch?

    - by undefined
    Hi, I have been wrestling with Amazon's CloudWatch API with limited success. Does anyone know of any good resources (other than Amazon's API docs) for using the APIs? I have tried to run them using the PHP library for CloudWatch but get nothing but error codes. I am configuring the GetMetricStatisticsSample.php file as follows: $request = array(); $endTime = date("Y-m-d G:i:s"); $yesterday = mktime (date("H"), date("i"), date("s"), date("m"), date("d")-1, date("Y")); $startTime = date("Y-m-d 00:00:00", $yesterday); $request["Statistics.member.1"] = "Average"; $request["EndTime"] = $endTime; $request["StartTime"] = $startTime; $request["MeasureName"] = "CPUUtilization"; $request["Unit"] = "Percent"; invokeGetMetricStatistics($service, $request); But this returns "Caught Exception: Internal Error Response Status Code: 400 Error Code: Error Type: Request ID: XML:" I have also tried from the command line as follows - set JAVA_HOME=C:\Program Files\Java\jre1.6.0_05 set AWS_CLOUDWATCH_HOME=C:\AmazonWebServices\API_tools\CloudWatch-1.0.0.24 set PATH=%AWS_CLOUDWATCH_HOME%\bin mon-list-metrics but get "'C:\Program' is not recognized as an internal or external command"... Any suggestions? Cheers
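
    For comparison, a more recent Python route to the same GetMetricStatistics call is sketched below with boto3 (region, instance id and time window are placeholders, and credentials are assumed to be configured separately). It is not the PHP library from the question, but the request fields map one-to-one onto the parameters shown above.

        import boto3
        from datetime import datetime, timedelta

        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

        end = datetime.utcnow()
        start = end - timedelta(days=1)

        response = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            StartTime=start,
            EndTime=end,
            Period=300,                 # 5-minute buckets
            Statistics=["Average"],
            Unit="Percent",
        )

        for point in response["Datapoints"]:
            print(point["Timestamp"], point["Average"])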

    Read the article

  • How do I remove rows/columns from this matrix using Python

    - by banditKing
    My matrix looks like this. ['Hotel', ' "excellent"', ' "very good"', ' "average"', ' "poor"', ' "terrible"', ' "cheapest"', ' "rank"', ' "total reviews"'] ['westin', ' 390', ' 291', ' 70', ' 43', ' 19', ' 215', ' 27', ' 813'] ['ramada', ' 136', ' 67', ' 53', ' 30', ' 24', ' 149', ' 49', ' 310 '] ['sutton place', '489', ' 293', ' 106', ' 39', ' 20', ' 299', ' 24', ' 947'] ['loden', ' 681', ' 134', ' 17', ' 5', ' 0', ' 199', ' 4', ' 837'] ['hampton inn downtown', ' 241', ' 166', ' 26', ' 5', ' 1', ' 159', ' 21', ' 439'] ['shangri la', ' 332', ' 45', ' 20', ' 8', ' 2', ' 325', ' 8', ' 407'] ['residence inn marriott', ' 22', ' 15', ' 5', ' 0', ' 0', ' 179', ' 35', ' 42'] ['pan pacific', ' 475', ' 262', ' 86', ' 29', ' 16', ' 249', ' 15', ' 868'] ['sheraton wall center', ' 277', ' 346', ' 150', ' 80', ' 26', ' 249', ' 45', ' 879'] ['westin bayshore', ' 390', ' 291', ' 70', ' 43', ' 19', ' 199', ' 813'] I want to remove the top row and the 0th column from this and create a new matrix. How do I do this? Normally in Java or similar I'd use the following code: for (int y; y< matrix[x].length; y++) for(int x; x < matrix[Y].length; x++) { if(x == 0 || y == 0) { continue } else { new_matrix[x][y] = matrix[x][y]; } } Is there a way like this in Python to iterate and selectively copy elements? Thanks EDIT: I'm also trying to convert each matrix element from a string to a float as I iterate over the matrix. This is my updated code based on the answer below. A = [] f = open("csv_test.csv",'rt') try: reader = csv.reader(f) for row in reader: A.append(row) finally: f.close() new_list = [row[1:] for row in A[1:]] l = np.array(new_list) l.astype(np.float32) print l However I'm getting an error --> l.astype(np.float32) print l ValueError: setting an array element with a sequence.
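
    A minimal sketch of the slicing plus conversion is below, under the assumption that every CSV row has the same number of columns (the last row in the sample data above is one value short, which is the usual cause of this particular ValueError); the file name is the one from the question.

        import csv
        import numpy as np

        with open("csv_test.csv", "rt") as f:
            rows = list(csv.reader(f))

        data = [row[1:] for row in rows[1:]]        # drop the header row and the hotel-name column

        # note: astype()/np.array() return new arrays rather than converting in place,
        # and the conversion only succeeds when all rows have equal length
        matrix = np.array(data, dtype=np.float32)
        print(matrix)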

    Read the article

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on: one string is wholly contained within the other, and is at least 1/4 of its length; or the strings have an edit distance less than 5% of the total length of the two strings. The pseudo-PHP code: foreach($values as $value){ foreach($values as $match){ if( ( $value['length'] < $match['length'] && $value['length'] * 4 > $match['length'] && stripos($match['value'], $value['value']) !== false ) || ( $match['length'] < $value['length'] && $match['length'] * 4 > $value['length'] && stripos($value['value'], $match['value']) !== false ) || ( abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length']) && 0 < ($match['changes'] = levenshtein($value['value'], $match['value'])) && $match['changes'] * 20 <= ($value['length'] + $match['length']) ) ){ $matches[] = &$match; } } } I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to the larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays. Some properties of a few sets of values being operated on:

        Total   | Strings      | # of matches per string |          |
        Strings | With Matches | Average | Median | Max  | Time (s) |
        --------+--------------+---------+--------+------+----------+
          844   |     413      |   1.8   |   1    |  58  |   140    |
          593   |     156      |   1.2   |   1    |   5  |    62    |
          272   |     168      |   3.2   |   2    |  26  |    10    |
          157   |      47      |   1.5   |   1    |   4  |   3.2    |
          106   |      48      |   1.8   |   1    |   8  |   1.3    |
           62   |      47      |   2.9   |   2    |  16  |   0.4    |

    Are there any other things I can do to reduce the time it takes to check the criteria, and more importantly, are there any ways for me to reduce the number of criteria checks required (for example, by pre-processing the input values), since there is such low selectivity?
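
    On the pre-processing question, one cheap cut is to exploit the fact that both criteria put a hard bound on how different the two lengths can be. A rough Python sketch of the idea (illustrative only, not the existing PHP code): sort by length once, then for each string visit only candidates whose length is still within the substring rule's 4x bound, breaking out as soon as it is exceeded.

        def candidate_pairs(values):
            """values: list of strings; yields only pairs worth running the expensive checks on."""
            ordered = sorted(values, key=len)
            for i, short in enumerate(ordered):
                for long in ordered[i + 1:]:
                    # substring rule needs len(long) < 4 * len(short);
                    # the 5% edit-distance rule is even tighter, so the 4x bound covers both
                    if len(long) >= 4 * len(short):
                        break               # lengths only grow from here on
                    yield short, long

        # usage: run the stripos/levenshtein-style checks only on these pairs
        for a, b in candidate_pairs(["alpha", "alphabet", "alphabetically", "zzz"]):
            print(a, "<->", b)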

    Read the article

  • MDX: Filtering a member set by a measure's table values

    - by oyvinro
    I have some numbers in a fact table, and have generated a measure which uses the SUM aggregator to summarize the numbers. The problem is that I only want to sum the numbers that are higher than, say, 10. I tried using a generic expression in the measure definition, and that works of course, but I need to be able to set that value dynamically, because it's not always 10, meaning users should be able to select it themselves. More specifically, my current MDX looks like this: WITH SET [Email Measures] AS '{[Measures].[Number Of Answered Cases], [Measures].[Max Expedition Time First In Case], [Measures].[Avg Expedition Times First In Case], [Measures].[Number Of Incoming Email Requests], [Measures].[Avg Number Of Emails In Cases], [Measures].[Avg Expedition Times Total],[Measures].[Number Of Answered Incoming Emails]}' SET [Organizations] AS '{[Organization.Id].[860]}' SET [Operators] AS '{[Operator.Id].[3379],[Operator.Id].[3181]}' SET [Email Accounts] AS '{[Email Account.Id].[6]}' MEMBER [Time.Date].[Date Period] AS Aggregate ({[Time.Date].[2008].[11].[11] :[Time.Date].[2009].[1].[2] }) MEMBER [Email.Type].[Email Types] AS Aggregate ({[Email.Type].[0]}) SELECT {[Email Measures]} ON columns, [Operators] ON rows FROM [Email_Fact] WHERE ( [Time.Date].[Date Period] ) Now, the member in question is the calculated member [Avg Expedition Times Total]. This member takes two measures, [Sum Expedition Times] and [Nr of Expedition Times], and divides one by the other to get the average; all of this presently works. However, I want [Sum Expedition Times] to only summarize values over or under a parameter of the user's choosing. How do I filter the numbers that [Sum Expedition Times] iterates through, rather than filtering on the sum that the measure gives me in the end?

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008 and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm I have. My goal for the load test is to find out the resource utilization on the web + app + db tiers of my farm for any given load scenario. An example of a load scenario is: Usage profile: Average collaboration (as defined by SCCP); User load: 500 (using a step load pattern = a step of 50 every 2 mins and a warm-up time of 2 mins for every step); Think time: 0; Load duration: 8hrs. Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources by the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So, if I run the load test for more than 8hrs, % processor may go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify? How is it different from tests/sec? Another question is: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time

    Read the article

  • Is there a Java data structure that is effectively an ArrayList with double indices and built-in interpolation?

    - by Bob Cross
    I am looking for a pre-built Java data structure with the following characteristics: It should look something like an ArrayList but should allow indexing via double-precision rather than integers. Note that this means that it's likely that you'll see indices that don't line up with the original data points (i.e., asking for the value that corresponds to key "1.5"). As a consequence, the value returned will likely be interpolated. For example, if the key is 1.5, the value returned could be the average of the value at key 1.0 and the value at key 2.0. The keys will be sorted but the values are not ensured to be monotonically increasing. In fact, there's no assurance that the first derivative of the values will be continuous (making it a poor fit for certain types of splines). Freely available code only, please. For clarity, I know how to write such a thing. In fact, we already have an implementation of this and some related data structures in legacy code that I want to replace due to some performance and coding issues. What I'm trying to avoid is spending a lot of time rolling my own solution when there might already be such a thing in the JDK, Apache Commons or another standard library. Frankly, that's exactly the approach that got this legacy code into the situation that it's in right now.... Is there such a thing out there in a freely available library?
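
    In the JDK itself, the closest sorted-key building block would be java.util.TreeMap with floorEntry/ceilingEntry, with the interpolation written on top rather than built in. The lookup logic that would sit on top of such a structure is sketched below in Python (names invented for the example; bisect stands in for the tree's floor/ceiling search).

        from bisect import bisect_left

        class InterpolatingSeries:
            def __init__(self, keys, values):
                # keys must already be sorted; values need not be monotonic
                self.keys = list(keys)
                self.values = list(values)

            def get(self, x):
                i = bisect_left(self.keys, x)
                if i < len(self.keys) and self.keys[i] == x:
                    return self.values[i]                      # exact hit
                if i == 0 or i == len(self.keys):
                    raise KeyError("x is outside the stored key range")
                k0, k1 = self.keys[i - 1], self.keys[i]
                v0, v1 = self.values[i - 1], self.values[i]
                return v0 + (v1 - v0) * (x - k0) / (k1 - k0)   # linear interpolation

        series = InterpolatingSeries([1.0, 2.0, 4.0], [10.0, 20.0, 5.0])
        print(series.get(1.5))   # 15.0, halfway between the values at keys 1.0 and 2.0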

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this: var myArray = new Array(); myArray[123] = "foo"; myArray[456] = "bar"; myArray[789] = "baz"; ... In other words, the array indexes do not start with zero and there will be "numeric gaps" between them. My questions are: Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values? When I delete myArray[456] from the example above, would items below this item be relocated? EDIT: Regarding my question/concern about relocation of items after insertion/deletion - I want to know what happens with the memory, not the indexes. More information from the Wikipedia article: Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.

    Read the article

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article here: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf In the conclusion section, it reads: Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a "managed" (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. latest Java VM's and .NET 4.0's) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.

    Read the article

  • How might the security of systems be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to data is performed through packages, procedures and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I even heard some devs call such approaches, and the architects driving them, database nazis :) because all the logic code resides in the database. I do know about the performance benefits of DB procedures, but now I'm interested in the "better security" claim when using a thick client model. I assume such a design is mostly used when Oracle (and maybe MS SQL Server) databases are used. I agree such an approach improves security, but only if there are not many users and every system user has a database account, so we can control and monitor data access through standard database user security. However, how does such an approach increase security for an average web system where thick clients are used: for example, one database user with DML grants on all tables, and the other users handled through "users" and "user_rights" tables? We could use DB procedures, save usernames into a context and use that for filtering, but the vulnerability resides at the root - if the main database account is compromised then nothing will help. Of course, in a real system we might consider at least several main users (for example frontend_db_user, backend_db_user).

    Read the article

  • Database (MySQL) structuring: pros and cons of multiple tables

    - by Gideon
    I am collecting data and storing it in MySQL, for: 75 variables, 55 countries, each year. At this stage, since I am building this tool, I have created a single table of variables / countries (storing 1 year's worth of data). Next year (and for several years after that) a new set of data will be input for each country. There are therefore 3 variables controlling the data returned to a user reviewing all collected data. The general form of any query would be: show me these specific variables, for these specific countries, for these specific years. (Show me average age and weight, for USA and Canada, for 2012 and 2009, for example.) My question is, it seems that I have two options for arranging this data: multiple tables, where I create a table of country / variable for each year data is collected, or a single table where I simply add a column (field) for the year the data relates to. As far as I can tell I could make these database calls with either structure, but is one more powerful / efficient / quicker, and why? Thanks for your consideration. It's a PDO / PHP interface if that is relevant.

    Read the article

  • MATLAB, time match filter

    - by Paul
    OK, I am still getting the hang of MATLAB. I have two files in different formats. One is an Excel file, data1.xls, size = 86400 X 62. It looks like: Date/Time par1 par2 par3 par4 par5 par6 par6 par7 par8 par9 08/02/09 00:06:45 0 3 27 9.9 -133.2 0 0 0 1 0 The other file, data2.csv, size = 144 X 27 (if nothing is missing), looks like: date time P01 P02 P03 P04 P05 P06 P07 P08 P09 P10 P11 8/16/2009 0:00 51 45 46 54 53 52 524 5 399 89 78 Now I am using Data10minAvg = mean(reshape(Data,300,144,62)); to get the 10-minute average of the first Excel file. I need to match up the file I am making above with the .csv file. The problem is that many timestamps are missing in the .csv file. How do I make data2.csv into a file of size 144 X 27, replacing the missing datestamps with rows of zeros? That will let me then compare the data1.xls file with newdata2.csv.
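
    The same idea, sketched outside MATLAB with Python/pandas (the file name, date and column names are taken from the sample above and are placeholders): build the full grid of 144 ten-minute timestamps for the day and reindex the CSV onto it, so missing timestamps become rows of zeros.

        import pandas as pd

        df = pd.read_csv("data2.csv")
        df["timestamp"] = pd.to_datetime(df["date"] + " " + df["time"])
        df = df.drop(columns=["date", "time"]).set_index("timestamp")

        full_index = pd.date_range("2009-08-16 00:00", periods=144, freq="10min")
        df_full = df.reindex(full_index, fill_value=0)   # missing 10-minute slots become rows of zeros
        print(df_full.shape)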

    Read the article

  • How important is it that models be consistent across project components?

    - by RonLugge
    I have a project with two components, a server-side component and a client-side component. For various reasons, the client-side device doesn't carry a fully copy of the database around. How important is it that my models have a 1:1 correlation between the two sides? And, to extend the question to my bigger concern, are there any time-bombs I'm going to run into down the line if they don't? I'm not talking about having different information on each side, but rather the way the information is encapsulated will vary. (Obviously, storage mechanisms will also vary) The server side will store each user, each review, each 'item' with seperate tables, and create links between them to gather data as necessary. The client side shouldn't have a complete user database, however, so rather than link against the user for gathering things like 'name', I'd store that on the review. In other words... --- Server Side --- Item: +id //Store stuff about the item User: +id +Name -Password Review: +id +itemId +rating +text +userId --- Device Side --- Item: +id +AverageRating Review: +id +rating +text +userId +name User: +id +Name //Stuff The basic idea is that certain 'critical' information gets moved one level 'up'. A user gets the list of 'items' relevant to their query, with certain review-orientation moved up (i. e. average rating). If they want more info, they query the detail view for the item, and the actual reviews get queried and added to the dataset (and displayed). If they query the actual review, the review gets queried and they pick up some additional user info along the way (maybe; I'm not sure if the user would have any use for any of the additional user information). My basic concern is that I don't wan't to glut the user's bandwidth or local storage with a huge variety of information that they just don't need, even if proper database normalizations suggests that information REALLY should be stored at a 'lower' level. I've phrased this as a fairly low-level conceptual issue because that's the level I'm trying to think / worry over, but if it matters I'm creating a PHP / MySQL server that provides data for a iOS / CoreData client.

    Read the article

  • Error in VB.Net code

    - by Michael
    I am getting the error "FormatException was unhandled" at "Do While objectReader.Peek <> -1" and after. Any help would be wonderful. 'Date: Class 03/20/2010 'Program Purpose: When code is executed data will be pulled from a text file 'that contains the named storms to find the average number of storms during the time 'period chosen by the user and to find the most active year. between the range of 'years 1990 and 2008 Option Strict On Public Class frmHurricane Private _intNumberOfHuricanes As Integer = 5 Public Shared _intSizeOfArray As Integer = 7 Public Shared _strHuricaneList(_intSizeOfArray) As String Private _strID(_intSizeOfArray) As String Private _decYears(_intSizeOfArray) As Decimal Private _decFinal(_intSizeOfArray) As Decimal Private _intNumber(_intSizeOfArray) As Integer Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load 'The frmHurricane load event reads the Hurricane text file and 'fill the combotBox object with the data. 'Initialize an instance of the StreamReader Object and declare variable page 675 on the book Dim objectReader As IO.StreamReader Dim strLocationAndNameOfFile As String = "C:\huricanes.txt" Dim intCount As Integer = 0 Dim intFill As Integer Dim strFileError As String = "The file is not available. Please restart application when available" 'This is where we code the file if it exist. If IO.File.Exists(strLocationAndNameOfFile) Then objectReader = IO.File.OpenText(strLocationAndNameOfFile) 'Read the file line by line until the file is complete Do While objectReader.Peek <> -1 **_strHuricaneList(intCount) = objectReader.ReadLine() _strID(intCount) = objectReader.ReadLine() _decYears(intCount) = Convert.ToDecimal(objectReader.ReadLine()) _intNumber(intCount) = Convert.ToInt32(objectReader.ReadLine()) intCount += 1** Loop objectReader.Close() 'With any luck the data will go to the Data Box For intFill = 0 To (_strID.Length - 1) Me.cboByYear.Items.Add(_strID(intFill)) Next Else MsgBox(strFileError, , "Error") Me.Close() End If End Sub

    Read the article

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then, due to lack of CPU, slows down. Problem: Profiling the application gives me the time taken by functions. This time increases under high load. The time consumed may be due to complex calculation or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated! Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds on 100 requests vs 1.25 sec on 1 request. More info: Configuration: Nginx + uWSGI with 4 workers. No database used; using responses from a REST API. On the 1st hit the response of the REST API gets cached, so it doesn't make a difference. Using ujson for JSON parsing. Curious to know: Python-Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
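
    A first cut at separating "CPU-bound" from "waiting" is to time the suspect code with both clocks the standard library already provides; the sketch below is generic Python, not tied to the Django app in question (cProfile can be pointed at the CPU clock the same way via its timer argument).

        # process_time() counts only CPU time used by this process, while perf_counter()
        # also includes time spent waiting (I/O, lock contention, being scheduled out).
        import time

        def measure(fn, *args, **kwargs):
            wall0, cpu0 = time.perf_counter(), time.process_time()
            result = fn(*args, **kwargs)
            wall1, cpu1 = time.perf_counter(), time.process_time()
            print(f"wall: {wall1 - wall0:.4f}s  cpu: {cpu1 - cpu0:.4f}s")
            return result

        # example: a busy loop is CPU-bound, a sleep is pure waiting
        measure(lambda: sum(i * i for i in range(10**6)))
        measure(lambda: time.sleep(0.5))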

    Read the article
