Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 371/837

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern is the performance of a clustered index on a reference table that sees many rapid inserts and deletes. Table 1 "Collection": collection_pk int (among other fields). Table 2 "Item": item_pk int (among other fields). Reference table "Collection_Items": collection_pk int, item_pk int (composite primary key). Because the primary key is composed of both pks, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections and adding and removing items from those collections very frequently, which affects the "Collection_Items" table and its clustered index. QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?
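
    A rough T-SQL sketch of the reference table as described (column lists abbreviated); by default the composite primary key becomes the clustered index, so rows in Collection_Items are kept physically ordered by (collection_pk, item_pk). The NONCLUSTERED variant in the trailing comment is one commonly suggested alternative, not something taken from the question:

        CREATE TABLE Collection_Items (
            collection_pk int NOT NULL,
            item_pk       int NOT NULL,
            CONSTRAINT PK_Collection_Items
                PRIMARY KEY CLUSTERED (collection_pk, item_pk)
        );
        -- If the churn proves expensive, the same key can instead be declared
        -- NONCLUSTERED (clustering on a narrow surrogate key, or leaving a heap):
        --   CONSTRAINT PK_Collection_Items
        --       PRIMARY KEY NONCLUSTERED (collection_pk, item_pk)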

    Read the article

  • Video chat application: Which technology to choose?

    - by WarDoGG
    I have to undertake a project to build a video chat application. The video has to be streamed from one location and viewable by multiple people spread out over the globe. Performance is really an issue, and a delay of more than 2-3 seconds is unacceptable. From what I gather, this can be done in Flex and also in Java. Are there any performance issues and caveats with a particular approach? I would really like the pros to comment on this and guide me through; it would be very helpful. Are there any open source libraries available for video recording in Flash / Java which I can integrate into my app and customize to my needs?

    Read the article

  • Controlling access to large files in Apache

    - by obeattie
    Hi there, I am looking to control access to some large files (we're talking many GB here) by the use of signed URLs. The files are currently restricted by LDAP Basic authentication (mod_auth_ldap), but I need to change this to verify a signature (passed as a query parameter in the URL). Basically, I just need to run a script to verify the signature and allow the request to proceed as if authentication had succeeded. My initial thought was just to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?"… and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?" If this makes any sense, help would be much appreciated :) P.S. I wasn't sure exactly what to search for here (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
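
    A minimal sketch, not the poster's code, of the signed-URL idea as a CGI script: an HMAC-SHA256 signature passed as ?sig=... is verified, and (assuming mod_xsendfile is installed) the actual transfer is handed back to Apache via an X-Sendfile header so the script never streams the gigabytes itself. The secret, parameter names and paths are illustrative:

        import hashlib
        import hmac
        import os
        import sys
        from urllib.parse import parse_qs

        SECRET = b"shared-secret"          # illustrative; store it securely
        FILE_ROOT = "/data/large-files"    # illustrative path

        def main():
            qs = parse_qs(os.environ.get("QUERY_STRING", ""))
            path = qs.get("path", [""])[0]   # should also be checked for ".."
            sig = qs.get("sig", [""])[0]
            expected = hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                sys.stdout.write("Status: 403 Forbidden\r\n\r\n")
                return
            # Let Apache stream the file; the CGI process exits immediately.
            sys.stdout.write("X-Sendfile: %s\r\n" % os.path.join(FILE_ROOT, path))
            sys.stdout.write("Content-Type: application/octet-stream\r\n\r\n")

        if __name__ == "__main__":
            main()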

    Read the article

  • Javascript large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big), and I am now ready to sacrifice some performance for compression. I was thinking of implementing base 62 encoding with number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are: compress large number arrays into strings for JSONP transfer (so I think UTF is out), and be relatively fast (I'm not expecting the same performance as I have now, but I also don't want gzip compression either). Any ideas would be greatly appreciated. Thanks, Guido Tapia
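
    A hand-rolled sketch of the base-62 idea (function names are illustrative). Worth noting: the native Number.prototype.toString(radix) and parseInt(value, radix) only accept radixes up to 36, so base 62 needs a custom alphabet like the one below:

        const ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

        function encodeBase62(n: number): string {
            if (n === 0) return "0";
            let out = "";
            while (n > 0) {
                out = ALPHABET[n % 62] + out;
                n = Math.floor(n / 62);
            }
            return out;
        }

        function decodeBase62(s: string): number {
            let n = 0;
            for (const ch of s) {
                n = n * 62 + ALPHABET.indexOf(ch);
            }
            return n;
        }

        // Pack an array of non-negative integers into one string for JSONP.
        const packed = [12345, 9876543, 42].map(encodeBase62).join(",");
        // packed === "3d7,Frl5,G"
        const unpacked = packed.split(",").map(decodeBase62);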

    Read the article

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that entering a catch block has some significant cost when executing a program; however, I was wondering whether entering a try{} block also had any impact, so I started looking for an answer on Google. There are many opinions, but no benchmarking at all. Some answers I found were: Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum? Try Catch Performance Java Java try catch blocks However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a csv file with this format: host;ip;number;date;status;email;uid;name;lastname;promo_code; where everything after status is optional and will not even have the corresponding ';', so when parsing, a validation has to be done to see whether the value is there; this is where the try/catch issue came to my mind. The current code that I inherited at my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times: Trying: 122 milliseconds, Checking: 33 milliseconds. However, here's what confused me and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version. So my question is: does the try block really have no performance impact on my code? The average times for this example are: Trying: 105 milliseconds, Checking: 43 milliseconds. Can somebody explain what's going on here? Thanks a lot
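
    Not from the question itself, but a self-contained sketch of the distinction the timings hint at: entering a try block that never throws is essentially free on HotSpot, while actually throwing and catching is expensive because each throw constructs the exception and fills in its stack trace. The loop sizes and timing method are illustrative only:

        public class TryCostSketch {
            public static void main(String[] args) {
                long sum = 0;

                long t0 = System.nanoTime();
                for (int i = 0; i < 10_000_000; i++) {
                    try {
                        sum += i;                     // never throws
                    } catch (RuntimeException e) {
                        sum -= i;
                    }
                }
                long t1 = System.nanoTime();

                for (int i = 0; i < 100_000; i++) {
                    try {
                        throw new RuntimeException(); // always throws
                    } catch (RuntimeException e) {
                        sum += 1;
                    }
                }
                long t2 = System.nanoTime();

                System.out.println("try only:      " + (t1 - t0) / 1_000_000 + " ms");
                System.out.println("throw + catch: " + (t2 - t1) / 1_000_000 + " ms");
                System.out.println(sum); // keep the JIT from removing the loops
            }
        }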

    Read the article

  • List filtering: list comprehension vs. lambda + filter

    - by Agos
    I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items. My code looked like this: list = [i for i in list if i.attribute == value] But then I thought, wouldn't it be better to write it like this? filter(lambda x: x.attribute == value, list) It's more readable, and if needed for performance the lambda could be taken out to gain something. The question is: are there any caveats in using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should do it in yet another way (such as using itemgetter instead of the lambda)? Thanks in advance
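
    A rough, hedged comparison of the alternatives mentioned (the names Item, items and value are illustrative, and operator.attrgetter rather than itemgetter is the helper that applies to attribute access):

        import timeit
        from operator import attrgetter

        class Item:
            def __init__(self, attribute):
                self.attribute = attribute

        items = [Item(i % 10) for i in range(10_000)]
        value = 3

        def with_comprehension():
            return [i for i in items if i.attribute == value]

        def with_filter_lambda():
            # In Python 3 filter() returns an iterator, hence the list() call.
            return list(filter(lambda x: x.attribute == value, items))

        def with_attrgetter():
            get = attrgetter("attribute")
            return [i for i in items if get(i) == value]

        for fn in (with_comprehension, with_filter_lambda, with_attrgetter):
            print(fn.__name__, timeit.timeit(fn, number=100))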

    Read the article

  • What is the best solution for swapping in a new memory allocator in existing code?

    - by O. Askari
    During the last few days I've gathered some information about memory allocators other than the standard malloc(). Some implementations seem to be much better than malloc() for applications with many threads; for example, tcmalloc and ptmalloc appear to have better performance. I have a C++ application that uses both malloc and the new operator in many places, and I thought replacing them with something like ptmalloc might improve its performance. But how does the new operator act when used in a C++ application that runs on Linux? Does it use the standard behavior of malloc or something else? What is the best way to replace the old memory allocator with the new one in the code? Is there any way to override the behavior of new and malloc, or do I need to replace all the calls to them one by one?
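
    A minimal sketch, assuming gperftools/tcmalloc is installed, of forcing both malloc-style and new-style allocations through one allocator. On Linux with glibc/libstdc++ the default operator new calls malloc underneath, so simply linking against -ltcmalloc (or LD_PRELOAD-ing libtcmalloc.so) usually redirects everything without code changes; the explicit override below is only needed if you want to pin a specific allocator in source:

        #include <cstdlib>
        #include <new>
        #include <gperftools/tcmalloc.h>   // tc_malloc / tc_free

        void* operator new(std::size_t size) {
            void* p = tc_malloc(size);
            if (!p) throw std::bad_alloc();
            return p;
        }

        void operator delete(void* p) noexcept {
            tc_free(p);
        }
        // operator new[] / operator delete[] would be overridden the same way.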

    Read the article

  • GetDiskFreeSpaceEx in winCE 5.0 emulator?

    - by vidhyarthi
    Hi, I am trying to use GetDiskFreeSpaceEx in the WinCE 5.0 emulator. This is the code I have written:

        ULARGE_INTEGER notused, totalBytes, freeBytes;
        GetDiskFreeSpaceEx(_T("\\Windows"), &notused, &totalBytes, &freeBytes);
        printf(" Error in disk %d ", GetLastError());
        printf(" values = notused %d, totalBytes %d, freeBytes %d", notused, totalBytes, freeBytes);

    Output:

        14540 PID:3db620e TID:3e5c83e Error in disk 0
        14540 PID:3db620e TID:3e5c83e values = notused 25987296, totalBytes 0, freeBytes 26234880

    The total bytes value that I get is zero. Am I missing something, or is that expected in the emulator?
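
    One likely culprit, offered as an assumption rather than a confirmed diagnosis: passing ULARGE_INTEGER structures to printf with %d is undefined behaviour, so the printed totals are unreliable regardless of what the API returned. A sketch of the same call printing the 64-bit QuadPart members (the %I64u specifier is assumed to be supported by the CE CRT):

        #include <windows.h>
        #include <tchar.h>
        #include <stdio.h>

        void PrintDiskSpace()
        {
            ULARGE_INTEGER freeToCaller = {0}, total = {0}, totalFree = {0};
            if (GetDiskFreeSpaceEx(_T("\\Windows"), &freeToCaller, &total, &totalFree))
            {
                printf("free to caller = %I64u, total = %I64u, free = %I64u\n",
                       freeToCaller.QuadPart, total.QuadPart, totalFree.QuadPart);
            }
            else
            {
                printf("GetDiskFreeSpaceEx failed, error %lu\n", GetLastError());
            }
        }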

    Read the article

  • Checking for an exception before using GetItemById()

    - by ps123
    Hello, I am getting an item with GetItemById, but I want to check whether the item exists before using it. I don't want to use a query, since the main reason for using GetItemById is performance. Any idea how to achieve this?

        itemid = Request.QueryString["loc"];
        SPList mylist = myweb.GetList(SPUrlUtility.CombineUrl(myweb.ServerRelativeUrl, "/Lists/Location"));
        // If itemid does not exist, the following statement throws an exception, so I want to check
        // beforehand that itemid exists. I know I can check through SPQuery, but as I said above,
        // I am using GetItemById precisely because of the performance issue.
        SPListItem myitem = mylist.GetItemById(Convert.ToInt32(itemid));
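
    A minimal sketch of one common workaround (not from the question): GetItemById is generally reported to throw an ArgumentException when the id does not exist, so catching that exception avoids issuing a separate SPQuery just to check for existence. Variable names follow the snippet above; the exception type is an assumption worth verifying:

        int idValue;
        SPListItem myitem = null;
        if (int.TryParse(Request.QueryString["loc"], out idValue))
        {
            try
            {
                myitem = mylist.GetItemById(idValue);
            }
            catch (ArgumentException)
            {
                // No item with this id exists (or it has been deleted).
                myitem = null;
            }
        }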

    Read the article

  • Marvell C++ compiler (for Windows CE) for building Qt

    - by vnm
    Hello, I was able to build Qt 4.5 for Windows CE (ARM4VI) using the MS Visual C++ compiler. Now I am trying to compile Qt 4.5 for Windows CE 5.0 using the Marvell C++ compiler v2.2 (formerly the Intel C++ compiler for the XScale architecture) to get some performance benefit. It seems that this compiler is not officially supported by Trolltech for building Qt (there is no appropriate mkspec in Qt's mkspecs folder). So, my questions: Is it worth trying to build Qt with this compiler for a performance enhancement? Is there any way to build Qt using the Marvell C++ compiler (by creating my own mkspec or something like that)?

    Read the article

  • Which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or a view that gathers all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a view, I am worried whether performance can really be greatly improved, because the data in the other tables changes very frequently; most likely the view would effectively be re-evaluated every time before selecting from it. Any ideas? For example, how could I cache the data, and what other measures could I take?
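
    A rough sketch of the two options (table and column names are illustrative). A plain MySQL view is not materialized, so it re-runs the underlying query on every SELECT and will not by itself cache anything; a summary table behaves like a cache but has to be kept in sync by extra writes or a periodic rebuild:

        -- Option 1: a view (no storage, re-evaluated on every SELECT)
        CREATE VIEW demanding_view AS
        SELECT a.id, a.col1, b.col2
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id;

        -- Option 2: a summary table (fast to read, but must be refreshed
        -- whenever table_a / table_b change, e.g. by triggers or a cron job)
        CREATE TABLE demanding_summary AS
        SELECT a.id, a.col1, b.col2
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id;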

    Read the article

  • MySQL Twitter/Facebook-like status feed

    - by barjonah
    Hi, I have two tables. One named status, like this:

        user_id | status
        --------+---------------------------
        1       | random status from user 1
        2       | random status from user 2
        3       | random message from user 3
        4       | status from user 4
        1       | second status for user 1

    and another named users_following, like this:

        user_id | is_following
        --------+--------------
        1       | 2
        1       | 3
        2       | 1
        3       | 2

    meaning that user 1 is following both users 2 and 3, etc. So, let's say I chose user 1. What is the best query (performance-wise) to show the status updates of the users that user 1 is following, in this case users 2 and 3? Currently I have something like:

        SELECT * FROM status
        WHERE user_id IN (SELECT is_following FROM users_following WHERE user_id = '1')
        LIMIT 0,5

    but I don't think this is good for performance if a user were following thousands of users.
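
    A hedged rewrite of the same query as a JOIN, which older MySQL versions typically optimise better than IN with a subquery; indexes on users_following (user_id, is_following) and status (user_id) are assumed, and the ORDER BY is omitted because the schema shown has no timestamp column to sort on:

        SELECT s.*
        FROM status AS s
        JOIN users_following AS f ON f.is_following = s.user_id
        WHERE f.user_id = 1
        LIMIT 0, 5;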

    Read the article

  • .resx file data inaccessible in Visual C#

    - by dsp_099
    What I'm trying to do: include some files along with the executable so that I can extract them later. I have two projects, both with a Resource1.resx file (and some resources included from disk). In one project, I can use File.WriteAllBytes(path, Resource1.Image); to dump the resource to disk. In the other, Resource1 does not exist. I've done this before, but all I can find when I search MSDN for how to work with resources is information about localization (?).
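
    A small hedged sketch (the base name, resource name and output path are all illustrative) of reading the same entry through ResourceManager directly, which can help narrow down whether the problem is a missing generated Resource1 wrapper class or the resource data itself:

        using System.IO;
        using System.Reflection;
        using System.Resources;

        class ResourceDump
        {
            static void Main()
            {
                // "MyApp.Resource1" must match <default namespace>.<resx name>.
                var rm = new ResourceManager("MyApp.Resource1", Assembly.GetExecutingAssembly());
                byte[] data = (byte[])rm.GetObject("Image");
                File.WriteAllBytes(@"C:\temp\image.bin", data);
            }
        }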

    Read the article

  • What is the cost of memory access?

    - by Jurily
    We like to think that a memory access is fast and constant, but on modern architectures/OSes, that's not necessarily true. Consider the following C code:

        int i = 34;
        int *p = &i;

        // do something that may or may not involve i and p
        {...}

        // 3 days later:
        *p = 643;

    What is the estimated cost of this last assignment in CPU instructions, if i is in L1 cache, i is in L2 cache, i is in L3 cache, i is in RAM proper, i is paged out to an SSD disk, i is paged out to a traditional disk? Where else can i be? Of course the numbers are not absolute, but I'm only interested in orders of magnitude. I tried searching the webs, but Google did not bless me this time.

    Read the article

  • Shared data in an ASP.NET application

    - by Barguast
    I have a basic ASP.NET application that is used to request data stored on disk. The data is loaded from files and sent as the response. I want to be able to keep the data loaded from these files in memory to reduce the number of reads from disk. All of the requests will be asking for the same data, so it makes sense to have a single in-memory cache that is accessible to all requests. What is the best way to create a single accessible object instance which I can use to store and access this cached data? I've looked into HttpApplication, but apparently a new instance of it is created for parallel requests, so it doesn't fit my needs.
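
    A minimal sketch of one common pattern, with the file path and type purely illustrative: a static, lazily initialised field is created once per application domain, so every request sees the same cached bytes (unlike HttpApplication instances, which are pooled per request pipeline):

        using System;
        using System.IO;
        using System.Web.Hosting;

        public static class DataCache
        {
            // Lazy<T> is thread-safe by default, so parallel first requests
            // will not load the file twice.
            private static readonly Lazy<byte[]> _data = new Lazy<byte[]>(
                () => File.ReadAllBytes(HostingEnvironment.MapPath("~/App_Data/data.bin")));

            public static byte[] Data
            {
                get { return _data.Value; }
            }
        }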

    Read the article

  • Why does C++ linking use virtually no CPU? (updated)

    - by John
    On a native C++ project, linking right now can take a minute or two, yet during this time CPU usage drops from 100% during compilation to virtually zero. Does this mean linking is primarily a disk-bound activity? If so, is this the main area where an SSD would make a big difference? But why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to save a lot of disk access and make it CPU-bound again, no? Update: the obvious follow-up is, can the VC++ compiler and linker cooperate better to streamline things and keep OBJ files in memory, similar to how Delphi does?

    Read the article

  • SQL Server for logging, best connection practice

    - by ozz
    I'm using SQL Server for logging. Yes, this is the wrong decision and there are better DBs for this requirement, but I have no other option for now. The logging rate is 3 logs per second. So I have a static Logger class with a static Log method. Keeping an open connection as a static member is better for performance, but what is the best implementation of it? This is the only one I know:

        public static class OzzLogger
        {
            static SqlConnection Con;

            static OzzLogger()
            {
                Con = new SqlConnection(....);
                Con.Open();
            }

            public static void Log(....)
            {
                Con.ExecuteSql(......);
            }
        }

    UPDATE: I asked because my information was out of date. People say connection pooling performance is enough. If there are no objections, I'm closing the issue :)
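
    A hedged alternative sketch (the table and column names are illustrative): open and close the connection per Log call and let ADO.NET connection pooling reuse the physical connection. At roughly 3 logs per second the pool absorbs this easily, and it avoids the thread-safety and broken-connection problems of a single long-lived static SqlConnection:

        using System.Data.SqlClient;

        public static class OzzLogger
        {
            private const string ConnectionString = "..."; // illustrative

            public static void Log(string message)
            {
                using (var con = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Log (Message) VALUES (@msg)", con))
                {
                    cmd.Parameters.AddWithValue("@msg", message);
                    con.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }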

    Read the article
