Search Results

Search found 14643 results on 586 pages for 'performance comparison'.

  • Many-to-many performance concerns with Fluent NHibernate

    - by Ciel
    I have a situation where I have several many-to-many associations, upwards of 12 to 15. Reading around, I've seen that many-to-many associations are generally considered atypical, yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario:

    ```csharp
    class Page {
        IList<Tag> Tags { get; set; }
        IList<Modification> Modifications { get; set; }
        IList<Aspect> Aspects { get; set; }
    }
    ```

    This is one of my 'core' classes and, coincidentally, one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have an IList<T> where T has its own IList<Page>. As you can see, from an object-oriented standpoint this is not really a problem, but from a database standpoint it begins to introduce a lot of junction tables. So far it has worked fine for me, but I am wondering if anyone has ideas on how I could improve this structure. I've spent a long time thinking, and in order to achieve the appropriate level of association I cannot find any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't do anything that HasManyToMany does not already do, except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance. Any thoughts?

    I am also concerned about primary key limits in this scenario. Almost everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided (that is, a Page has no knowledge of what owns it). Because of this, I also have no Inverse()-mapped HasManyToMany collections.

    I have also read the similar question "Usage of ORMs like NHibernate when there are many associations - performance concerns", but it did not answer my concerns.

  • PostgreSQL performance on EC2/EBS

    - by matija
    What gives the best performance for running PostgreSQL on EC2? EBS volumes in RAID? PGDATA on /mnt? Do you have any preferences or experiences? The main plus of running PostgreSQL on EBS is being able to switch from one instance to another. Could that be the reason it is slower than the /mnt partition? P.S. I'm running PostgreSQL 8.4 with a data size of about 50 GB on an Amazon EC2 xlarge (64-bit) instance.

  • Performance of SHA-1 Checksum from Android 2.2 to 2.3 and Higher

    - by sbrichards
    I am testing the performance of the following activity, which computes the SHA-1 checksum of an application's classes.dex and compares it against a stored value:

    ```java
    package com.srichards.sha;

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class SHAHashActivity extends Activity {

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            TextView tv = new TextView(this);
            String shaVal = this.getString(R.string.sha);

            long before = System.currentTimeMillis();
            String result = shaCheck(shaVal);
            long runtime = System.currentTimeMillis() - before;

            tv.setText("\nRunTime: " + runtime
                    + "\nHas been modified? | Hash Value: " + result);
            setContentView(tv);
        }

        public String shaCheck(String shaVal) {
            try {
                MessageDigest digest = MessageDigest.getInstance("SHA1");

                // Open the installed apk and locate its classes.dex entry.
                ZipFile zf = new ZipFile("/data/app/com.blah.android-1.apk");
                ZipEntry ze = zf.getEntry("classes.dex");
                InputStream file = zf.getInputStream(ze);

                // Feed the entry to the digest in 32 KB chunks.
                byte[] dataBytes = new byte[32768];
                int nread;
                while ((nread = file.read(dataBytes)) != -1) {
                    digest.update(dataBytes, 0, nread);
                }
                file.close();
                zf.close();

                // Convert the raw digest to a hex string.
                byte[] rbytes = digest.digest();
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < rbytes.length; i++) {
                    sb.append(Integer.toString((rbytes[i] & 0xff) + 0x100, 16).substring(1));
                }

                if (shaVal.equals(sb.toString())) {
                    return "\nFalse : \nFound:\n" + sb + "|\nHave:\n" + shaVal;
                } else {
                    return "\nTrue : \nFound:\n" + sb + "|\nHave:\n" + shaVal;
                }
            } catch (IOException e) {
                e.printStackTrace();
            } catch (NoSuchAlgorithmException e) {
                e.printStackTrace();
            }
            return null;
        }
    }
    ```

    On a 2.2 device I get an average runtime of ~350 ms, while on newer devices I get runtimes of 26-50 ms, which is substantially lower. I keep in mind that these devices are newer and have better hardware, but I am also wondering how much the platform and its implementation affect performance, and whether there is anything that could reduce runtimes on 2.2 devices. Note that the classes.dex of the .apk being accessed is roughly 4 MB. Thanks!
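
    One way to narrow down where the 2.2 time goes is to separate hashing cost from I/O cost. The helper below is a hypothetical sketch, not part of the original code (drop it into the activity above so the imports are available): it digests roughly the same volume of in-memory data as classes.dex, so it measures SHA-1 throughput alone. If it reproduces most of the ~350 ms, the gap comes from the VM and hardware rather than from file access.

    ```java
    // Rough micro-benchmark: hash ~4 MB of in-memory data so that disk I/O
    // is excluded and only SHA-1 digest throughput is measured.
    public static long timeDigestOnly() throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA1");
        byte[] block = new byte[32768];                      // same chunk size as shaCheck()
        int iterations = (4 * 1024 * 1024) / block.length;   // ~4 MB total, like classes.dex

        long before = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            digest.update(block, 0, block.length);
        }
        digest.digest();
        return System.currentTimeMillis() - before;
    }
    ```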

  • Performance-tune my SQL query by removing OR statements

    - by SmartestVEGA
    I need to performance-tune the following SQL query by removing the "OR" conditions. Please help ...

    ```sql
    SELECT a.id, a.fileType, a.uploadTime, a.filename, a.userId,
           a.CID, ui.username, company.name companyName, a.screenName
    FROM   TM_transactionLog a, TM_userInfo ui, TM_company company, TM_airlineCompany ac
    WHERE  ( a.CID = 3049 )
       OR  ( a.CID = company.ID AND ac.SERVICECID = 3049 AND company.SPECIFICCID = ac.ID )
       OR  ( a.USERID = ui.ID AND ui.CID = 3049 );
    ```

  • References for better performance of newer JSF specifications

    - by Pentius
    Dear fellows, I'm looking for a reference to cite which states that JSF 1.2 performs better than JSF 1.1, or JSF 2.0 better than JSF 1.2, respectively. I'm quite sure that I've read something like this before, but I can't find it anymore. Maybe you can help. Or is this a misconception, and there are no official statements regarding performance?

  • OpenCV performance in different languages

    - by h0b0
    I'm doing some prototyping with OpenCV for a hobby project that involves processing real-time camera data. I wonder whether it is worth the effort to reimplement this in C or C++ once I have it all figured out, or whether no significant performance boost can be expected. The program basically chains OpenCV functions, so the main part of the work should be done in native code anyway.

  • Retrieving all objects in code upfront for performance reasons

    - by ming yeow
    How do you folks retrieve all objects in code up front? I figure you can increase performance if you bundle all the model calls together. This makes an even bigger difference if your DB cannot keep everything in memory:

    ```
    def hitDBSeparately {
        get X users ... code
        get Y users ... code
        get Z users ... code
    }
    ```

    Versus:

    ```
    def hitDBInSingleCall {
        get X+Y+Z users in one call
        code for X
        code for Y ...
    }
    ```
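
    A minimal sketch of the single-call idea in plain JDBC (the users table, its columns, and the group names are hypothetical, just to make the shape concrete): one query with a combined predicate replaces three round trips, and the results are partitioned in memory afterwards.

    ```java
    import java.sql.*;
    import java.util.*;

    public class BatchedFetch {
        // One round trip for all three groups instead of three separate queries.
        // Table and column names are hypothetical; the point is the combined predicate.
        static Map<String, List<String>> fetchUsersByGroup(Connection conn) throws SQLException {
            Map<String, List<String>> byGroup = new HashMap<>();
            String sql = "SELECT group_name, user_name FROM users WHERE group_name IN ('X','Y','Z')";
            try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    byGroup.computeIfAbsent(rs.getString("group_name"), k -> new ArrayList<>())
                           .add(rs.getString("user_name"));
                }
            }
            return byGroup;  // partition once here, then run the per-group code
        }
    }
    ```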

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and to count how many times they repeat:

    ```
    code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
    code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
    code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
    ...
    codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);
    ```

    The lists are huge: there are about 100 parent codes and each has about 250 values. There will be no duplicates within a single code list. I am doing this in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0; then compare the rest of the values against this map, incrementing the count when a value is already present and adding it otherwise. The downside is getting the duplicates back out: another iteration over a very large map is needed. An alternative is to maintain a second hashmap just for the duplicates, duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. I hope one of you can help me with it.
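
    A minimal sketch of the single-pass counting approach described above (the sample lists are made up): one HashMap accumulates occurrence counts, and the duplicates are simply the entries whose count exceeds one, so neither a second full iteration nor a separate duplicate map is needed.

    ```java
    import java.util.*;

    public class DuplicateCounter {
        // Count how often each child value appears across all parent code lists.
        // Since no value repeats within a single list, a value's total count
        // equals the number of lists it appears in; counts above 1 are duplicates.
        static Map<String, Integer> countDuplicates(List<List<String>> codeLists) {
            Map<String, Integer> counts = new HashMap<>();
            for (List<String> codeList : codeLists) {
                for (String value : codeList) {
                    counts.merge(value, 1, Integer::sum);  // insert 1 or increment
                }
            }
            // Drop the singletons so the result holds only the true duplicates.
            counts.values().removeIf(c -> c == 1);
            return counts;
        }

        public static void main(String[] args) {
            List<List<String>> lists = Arrays.asList(
                Arrays.asList("a", "b", "c"),
                Arrays.asList("b", "c", "d"));
            System.out.println(countDuplicates(lists));  // prints {b=2, c=2}
        }
    }
    ```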

  • C++ performance, for versus while

    - by aaa
    Hello. In general (or from your experience), is there a difference in performance between for and while loops? What if they are doubly or triply nested? Is vectorization (SSE) affected by the loop variant in g++ or the Intel compilers? Thank you.

  • C vs. C++ for performance in memory allocation

    - by Andrei
    Hi, I am planning to participate in the development of a code base written in C for Monte Carlo analysis of complex problems. This code allocates huge data arrays in memory to speed up its performance, and the author chose C over C++, claiming that C makes for faster and more reliable code (with respect to memory leaks). Do you agree with that? What would your choice be if you needed to store 4-16 GB of data arrays in memory during a calculation?

  • Tracking down data load performance issues in SSIS package

    - by SteveC
    Are there any ways to determine what differences between databases affect an SSIS package's load performance? I've got a package which loads and does various bits of processing on ~100k records; on my laptop database it runs in about 5 minutes. I tried the same package with the same data on the test server, which is a reasonable box in both CPU and memory, and it's still running ... about 1 hour so far :-( I checked the package with a small set of data, and it ran through OK.

  • Does variable name length matter for performance in C#?

    - by MadBoy
    I've been wondering whether using long, descriptive variable names in WinForms C# matters for performance. I'm asking because in AutoIt v3 (an interpreted language) it was pointed out that variables with short names like aa are much faster than names like veryLongVariableName once a program grows beyond a few lines. I'm wondering if it's the same in C#?
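
    For a compiled language, identifier length is a compile-time artifact only. As a rough illustration (sketched in Java, but the same principle applies to compiled C#, where local variable names live in debug symbols rather than in the executed code): compile the class below and dump it with javap -c, and both methods show identical bytecode.

    ```java
    // Both methods compile to the same instruction sequence; local variable
    // names do not appear in the executed bytecode at all.
    public class NameLength {
        static int shortNames() {
            int aa = 1;
            int bb = 2;
            return aa + bb;
        }

        static int longNames() {
            int veryLongVariableName = 1;
            int anotherVeryLongVariableName = 2;
            return veryLongVariableName + anotherVeryLongVariableName;
        }
    }
    ```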

  • Symmetric Encryption: Performance Questions

    - by cam
    Does the performance of a symmetric encryption algorithm depend on the amount of data being encrypted? Suppose I have about 1000 bytes I need to send over the network rapidly: is it better to encrypt 50 bytes of data 20 times, or 1000 bytes at once? Which will be faster? Does it depend on the algorithm used? If so, what is the highest-performing, most secure algorithm for amounts of data under 512 bytes?
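
    One way to settle the 20x50 versus 1x1000 question empirically is to time both shapes with the same cipher. A rough Java sketch using AES follows (the algorithm, mode, and iteration counts are illustrative assumptions, not from the original question):

    ```java
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class ChunkedEncryptionBench {
        public static void main(String[] args) throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();
            // ECB is used only to keep the timing loop simple; prefer an
            // authenticated mode such as GCM for real traffic.
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");

            byte[] whole = new byte[1000];
            byte[] chunk = new byte[50];
            int rounds = 100_000;

            // Shape 1: one 1000-byte encryption per round.
            long t0 = System.nanoTime();
            for (int i = 0; i < rounds; i++) {
                cipher.init(Cipher.ENCRYPT_MODE, key);
                cipher.doFinal(whole);
            }
            long oneShot = System.nanoTime() - t0;

            // Shape 2: twenty 50-byte encryptions per round; same payload,
            // but 20x the per-call setup and padding overhead.
            long t1 = System.nanoTime();
            for (int i = 0; i < rounds; i++) {
                for (int j = 0; j < 20; j++) {
                    cipher.init(Cipher.ENCRYPT_MODE, key);
                    cipher.doFinal(chunk);
                }
            }
            long chunked = System.nanoTime() - t1;

            System.out.println("1 x 1000 bytes: " + oneShot / 1_000_000 + " ms");
            System.out.println("20 x 50 bytes:  " + chunked / 1_000_000 + " ms");
        }
    }
    ```

    Since block ciphers process data in fixed-size blocks, the chunked variant pays mostly for per-call setup and padding (each 50-byte call pads up to a block boundary), not for the raw byte count.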
