Search Results

Search found 13341 results on 534 pages for 'obiee performance tuning'.


  • Is a program compiled with the gcc -g flag slower than the same program compiled without -g?

    - by e271p314
    I'm compiling a program with -O3 for performance and -g for debug symbols (in case of a crash I can use the core dump). One thing bothers me a lot: does the -g option result in a performance penalty? When I look at the output of the compilation with and without -g, I see that the output without -g is 80% smaller than the output with -g. If the extra space goes to the debug symbols, I don't care about it (I guess), since this part is not used at runtime. But if for each instruction in the output without -g there are 4 more instructions in the output with -g, then I'd certainly prefer to stop using the -g option, even at the cost of not being able to process core dumps. How can I know the size of the debug symbols section inside the program, and in general, does compilation with -g create a program which runs slower than the same code compiled without -g?


  • Real pagination vs Next and Previous buttons

    - by Pablo
    By real pagination I mean something like this when on page 3: <<Previous 1 | 2 | {3} | 4 | 5 |...| 15 | Next>> By Next and Previous buttons I mean something like this when on page 3: <<Previous Next>> Performance-wise I'm sure the Previous and Next buttons are better, since unlike real pagination they don't require over-querying the database. By over-querying the database I mean getting more information from the database than what you will need to display on the page. My theory is that Previous and Next buttons can drastically improve a site's performance, as they only require the exact information you will need to display on a page; please correct me if I'm wrong on this. So, do users really have a preference when it comes to these two options? Is it just developer preference and convenience? Which one do you prefer, and why? *Note: Previous and Next buttons are usually labeled Newer and Older.


  • What is the optimal number of threads for performing IO operations in Java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes "For computational problems like this that do no I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is: when performing I/O operations such as file writing, file reading, file deleting, etc., are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into this. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-CPU machine?
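
    A quick way to probe the question on a given machine and disk is to time the same batch of writes under different pool sizes. A minimal sketch (the file count, payload size, and pool sizes are arbitrary illustrations, not recommendations):

        import java.io.IOException;
        import java.io.UncheckedIOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class IoThreadBenchmark {
            public static void main(String[] args) throws Exception {
                byte[] payload = new byte[64 * 1024];           // 64 KB of dummy data per file
                Path dir = Files.createTempDirectory("io-bench");
                for (int threads : new int[] {1, 4, 20}) {
                    ExecutorService pool = Executors.newFixedThreadPool(threads);
                    long start = System.nanoTime();
                    for (int i = 0; i < 1000; i++) {
                        Path target = dir.resolve("file-" + threads + "-" + i + ".bin");
                        pool.submit(() -> {
                            try {
                                Files.write(target, payload);   // one independent file per task
                            } catch (IOException e) {
                                throw new UncheckedIOException(e);
                            }
                        });
                    }
                    pool.shutdown();
                    pool.awaitTermination(10, TimeUnit.MINUTES);
                    System.out.printf("%2d threads: %d ms%n",
                            threads, (System.nanoTime() - start) / 1_000_000);
                }
            }
        }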


  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (the total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). The total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available); I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing, and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to the memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose of it, then read from the FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for its internal array allocation (i.e. a linked list of memory streams); this way, at least in 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
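
    The hybrid approach the poster asks about is essentially a spill-over buffer, and the idea is language-agnostic. A minimal sketch of it in Java (class name, threshold handling, and temp-file naming are illustrative; the .NET version would pair MemoryStream with FileStream the same way):

        import java.io.BufferedOutputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;

        // Keeps incoming data on the heap up to a threshold, then moves it all to a temp file.
        public class SpillOverOutputStream extends OutputStream {
            private final long threshold;
            private ByteArrayOutputStream memory = new ByteArrayOutputStream();
            private OutputStream sink;   // points at memory first, at the file after spilling
            private File spillFile;      // non-null once spilled; callers read back from here

            public SpillOverOutputStream(long threshold) {
                this.threshold = threshold;
                this.sink = memory;
            }

            @Override public void write(byte[] b, int off, int len) throws IOException {
                if (memory != null && memory.size() + len > threshold) spill();
                sink.write(b, off, len);
            }

            @Override public void write(int b) throws IOException {
                write(new byte[] {(byte) b}, 0, 1);
            }

            private void spill() throws IOException {
                spillFile = File.createTempFile("upload", ".buf");
                OutputStream file = new BufferedOutputStream(new FileOutputStream(spillFile));
                memory.writeTo(file);    // flush everything buffered so far
                memory = null;           // let the heap copy be collected
                sink = file;
            }

            public File spillFile() { return spillFile; }

            @Override public void close() throws IOException { sink.close(); }
        }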


  • iPhone app photo upload to server from app threads

    - by user290380
    I have an app that needs to upload at least 5 photos to a server, using an API call available on the server. For that I am planning to use threads, which will take care of the photo upload while the main process can go on with the navigation of views etc. What I can't decide is whether it is OK to spawn five separate threads on the iPhone or to use a single thread that will do the uploads; in the latter case it will obviously be quite slow. Basically an HTTP POST request will be made to the server with an NSMutableURLRequest object, using NSURLConnection. More threads mean more complexity and sync issues, but I can try to write the code as neatly as possible if it means better performance than a single thread, which is simple but a real stopper if performance is considered. Anybody with experience in this kind of app?


  • Single-page app with a high number of images working extremely slowly on iOS 8 Safari/WebView

    - by NikhilWanpal
    We are working on a WebView (not WKWebView, yet) app, and are observing that the app runs extremely slowly on iOS 8. The same app runs smoothly on lower OS versions like iOS 7 and iOS 6. So we tried it in Safari on iOS 8, and there the performance is similar to iOS 6 and 7. The app is filled with images, and many are high resolution. While trying to trace the issue (trial and error!) we reduced the sizes and resolutions of the images and the performance improved, but it is still not on par with versions 6 and 7. We are unable to find any such issues reported elsewhere and are stuck. It would be great if we could get some pointers on this one.


  • Archiving Database Tables using Java

    - by HonorGod
    My application demands archiving database tables between Sybase and DB2 and vice versa, and within (DB2 to DB2 and Sybase to Sybase), using Java. I am trying to understand the best strategies in terms of performance, implementation, ease of use and scalability. Here is my current process: source and destination tables with the acceptable parameters (from Java) are defined within XML; the application reads the source and destination configurations and executes them sequentially; the destination is sometimes optional, when the source is just deleting data from a specific table or just calling a stored procedure; and the dataset between source and destination is extremely large (millions of rows). Off the top of my head, it looks like I can define dependencies between multiple source and destination combinations and have them execute in parallel in multiple threads, as sketched below. But will this improve performance (I hope it will)? Are there any open-source frameworks for data archiving using Java? Any other thoughts on the implementation side will be really helpful. Thanks
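
    Fanning the independent source/destination pairs out to a thread pool is straightforward; whether it improves throughput depends on whether the databases, not the JVM, are the bottleneck. A minimal sketch (ArchiveTask is a hypothetical stand-in for one XML-defined source/destination configuration):

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelArchiver {
            // Hypothetical task type: one source/destination pair parsed from the XML config.
            interface ArchiveTask {
                String name();
                void run() throws Exception;  // copy, delete, or stored-procedure call
            }

            // Runs tasks that have no dependencies on each other; dependent tasks would be
            // grouped and submitted only after their predecessors finish.
            public static void runIndependent(List<ArchiveTask> tasks) throws InterruptedException {
                ExecutorService pool =
                        Executors.newFixedThreadPool(Math.max(1, Math.min(tasks.size(), 8)));
                for (ArchiveTask task : tasks) {
                    pool.submit(() -> {
                        try {
                            System.out.println("archiving: " + task.name());
                            task.run();
                        } catch (Exception e) {
                            System.err.println(task.name() + " failed: " + e);
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.DAYS);
            }
        }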


  • How have your coding values changed since graduating?

    - by Matt
    We all walked out of school with stars in our eyes and little experience in "real-world" programming. How have your opinions on programming as a craft changed since you've gained more experience away from academia? I've become more and more about design à la McConnell: wide use of encapsulation, quality code that gives you warm fuzzy feelings when you read it, maintainability over execution performance, etc., whereas many of my co-workers have followed a different path: fewer middleman layers getting in the way, code that is right out in the open and easier to locate, even if harder to read, and performance-centric designs. What have you learned about the craft of software design which has changed the way you approach coding since leaving the academic world?


  • MS Access vs SQL Server and others? Is it worth taking a DB server when less than 2 GB and only 20 users?

    - by asksuperuser
    After my experiment of MS Access vs MySQL, which showed MS Access hugely outperforming MySQL ODBC inserts by 1000%, and before doing the same experiment with SQL Server, I searched for other people's results and found this one: http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/ which says "As a side note, in this particular test, Access offers much better raw performance than SQL Server. In more complex scenarios it’s very likely that Access’ performance would degrade more than SQL Server, but it’s nice to see that Access isn’t a sloth." So is it worth bothering with a DB server when the data is less than 2 GB and there are about 20 users (knowing that MS Access theoretically supports up to 255 concurrent users, though practically it's only around a dozen)? Are there any real-world studies that really compare MS Access with other databases in this specific use case? Because, professionally speaking, I keep hearing people who have never used Access systematically recommend a DB server, just because they think a DB server can only perform better in every case, which I used to think myself, I confess.


  • Indexed key vs indexed separate columns, which one is faster?

    - by Jerry
    In MySQL, from a pure performance perspective, if I have a table with a large amount of data and a 10/1 read/write ratio, is it faster in read/write performance to have 4 search criteria in separate columns, all indexed, or to have them combined into one single string acting as a key, stored in one indexed column? E.g., take a table with 5 columns (first name, last name, sex, country and file), where the first four columns will ALWAYS be given as part of the search parameters; or a table with two columns, key and file, where the value of key can be john-smith-male-australia? I don't quite get the pros and cons. The point I'm trying to stress is the fact that all parameters will be given in a search.


  • Best way to merge two identical ASP.NET web sites?

    - by ase69s
    We have two websites whose only difference is in the design (different images, styles, layouts, etc.); the structure of the files and the code-behind (.cs) code is the same, so we want to simplify their maintenance... The current structure is: DefaultA.aspx, DefaultA.aspx.cs, DefaultB.aspx, DefaultB.aspx.cs, LoginA.aspx, LoginA.aspx.cs, LoginB.aspx, LoginB.aspx.cs. One idea would be changing the design differences at runtime depending on the originating website, but we don't much like this because of performance, abstraction in designing them, and URL confusion... Another is sharing the .cs file (both .aspx pages inheriting from and using the same code-behind), but we have never done this or seen it done on any website before, so we wonder if it's a good approach... What do you think? Any other way that is better in terms of performance vs development ease?


  • How to get Apache-log-like debugging in Android logcat

    - by Nav
    I am currently using a WebView to load a webpage which contains lots of JavaScript, and I'm having lots of trouble debugging what exactly gets loaded, and when, in the WebView. Then I saw this post, where the OP seems to be using an Apache-style log to monitor webpage load events in his WebView: Enhance webView performance (should be the same performance as native Web Browser). Can I get a similar utility, plugin or anything so that I can use it with logcat in the DDMS view? If possible, please provide some resource as to how to configure it for Android.
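
    For what it's worth, the stock Android WebView can already mirror page-load events and JavaScript console output into logcat through WebViewClient and WebChromeClient, with no plugin; a sketch (the log tags are arbitrary):

        import android.graphics.Bitmap;
        import android.util.Log;
        import android.webkit.ConsoleMessage;
        import android.webkit.WebChromeClient;
        import android.webkit.WebView;
        import android.webkit.WebViewClient;

        public final class WebViewLogging {
            // Call once after creating the WebView to mirror its activity into logcat.
            public static void attach(WebView webView) {
                webView.setWebViewClient(new WebViewClient() {
                    @Override public void onPageStarted(WebView view, String url, Bitmap favicon) {
                        Log.d("WV", "load started: " + url);
                    }
                    @Override public void onPageFinished(WebView view, String url) {
                        Log.d("WV", "load finished: " + url);
                    }
                });
                webView.setWebChromeClient(new WebChromeClient() {
                    @Override public boolean onConsoleMessage(ConsoleMessage msg) {
                        Log.d("WV-JS", msg.message() + " @ " + msg.sourceId() + ":" + msg.lineNumber());
                        return true;  // true = handled, suppress default logging
                    }
                });
            }
        }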


  • Do MySQL Locked Tables affect related Views?

    - by CogitoErgoSum
    So after reading http://stackoverflow.com/questions/1415602/performance-in-pdo-php-mysql-transaction-versus-direct-execution in regard to performance issues I was thinking about, I did some research on locking tables in MySQL. From http://dev.mysql.com/doc/refman/5.0/en/table-locking.html: "Table locking enables many sessions to read from a table at the same time, but if a session wants to write to a table, it must first get exclusive access. During the update, all other sessions that want to access this particular table must wait until the update is done." This part struck me particularly because most of our queries will be updates rather than inserts. I was wondering: if one created a table called foo, on which all updates/inserts were carried out, and then a view called foo_view (a copy of foo, or perhaps foo joined with several other tables) on which all selects occurred, would this locking issue still occur? That is, would SELECT queries on foo_view still have to wait for an update to finish on foo?
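
    One way to probe this empirically from Java is to open two connections, take a WRITE lock on foo from one, release it a few seconds later from a background thread, and time a SELECT on foo_view from the other; if the elapsed time tracks the lock's hold time, view reads do wait on the base table. A rough JDBC sketch (URL, credentials, and table/view names are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class ViewLockProbe {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:mysql://localhost/test";  // placeholder
                try (Connection writer = DriverManager.getConnection(url, "user", "pass");
                     Connection reader = DriverManager.getConnection(url, "user", "pass")) {
                    writer.createStatement().execute("LOCK TABLES foo WRITE");
                    new Thread(() -> {                        // release the lock after 3 seconds
                        try {
                            Thread.sleep(3000);
                            writer.createStatement().execute("UNLOCK TABLES");
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }).start();
                    long start = System.nanoTime();
                    try (ResultSet rs = reader.createStatement().executeQuery(
                            "SELECT COUNT(*) FROM foo_view")) {
                        rs.next();
                    }
                    // ~3000 ms here would mean the view SELECT waited for the base-table lock.
                    System.out.printf("SELECT returned after %d ms%n",
                            (System.nanoTime() - start) / 1_000_000);
                }
            }
        }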


  • How to change the size of an STL container in C++

    - by Jaime Pardos
    I have a piece of performance-critical code written with pointers and dynamic memory. I would like to rewrite it with STL containers, but I'm a bit concerned with performance. Is there a way to increase the size of a container without initializing the data? For example, instead of doing ptr = new BYTE[x]; I want to do something like vec.insert(vec.begin(), x, 0); However, this initializes every byte to 0. Isn't there a way to just make the vector grow? I know about reserve(), but it just allocates memory; it doesn't change the size of the vector, and doesn't allow me to access the elements until I have inserted valid data. Thank you everyone.


  • Adding more OR searches with CONTAINS Brings Query to Crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan omitted (I guess I need more reputation before I can post the image).


  • Using Lambda Statements for Event Handlers

    - by lush
    I currently have a page which is declared as follows:

        public partial class MyPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // snip
                MyButton.Click += (o, i) =>
                {
                    // snip
                };
            }
        }

    I've only recently moved to .NET 3.5 from 1.1, so I'm used to writing event handlers outside of Page_Load. My question is: are there any performance drawbacks or pitfalls I should watch out for when using the lambda method for this? I prefer it, as it's certainly more concise, but I do not want to sacrifice performance to use it. Thanks.


  • List filtering: list comprehension vs. lambda + filter

    - by Agos
    I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items. My code looked like this:

        list = [i for i in list if i.attribute == value]

    But then I thought, wouldn't it be better to write it like this?

        filter(lambda x: x.attribute == value, list)

    It's more readable, and if needed for performance the lambda could be taken out to gain something. The question is: are there any caveats in using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should I do it in yet another way (such as using itemgetter instead of the lambda)? Thanks in advance


  • When accessing ResultSets in JDBC, is there an elegant way to distinguish between nulls and actual zeros?

    - by Uri
    When using JDBC and accessing primitive types via a result set, is there a more elegant way to deal with the null/0 distinction than the following?

        int myInt = rs.getInt(columnNumber);
        if (rs.wasNull()) {
            // Treat as null
        } else {
            // Treat as 0
        }

    I personally cringe whenever I see this sort of code. I fail to see why ResultSet was not defined to return the boxed integer types (except, perhaps, for performance), or at least to provide both. Bonus points if anyone can convince me that the current API design is great :) My solution was to write a wrapper that returns an Integer (I care more about elegance of client code than performance), but I'm wondering if I'm missing a better way to do this.
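
    The wrapper the poster mentions can be a one-liner, and on JDBC 4.1+ drivers rs.getObject(columnNumber, Integer.class) does the same thing directly. A sketch of the hand-rolled version:

        import java.sql.ResultSet;
        import java.sql.SQLException;

        public final class ResultSets {
            // Returns null where the column was SQL NULL, otherwise the boxed int value.
            public static Integer getNullableInt(ResultSet rs, int columnNumber) throws SQLException {
                int value = rs.getInt(columnNumber);  // yields 0 for SQL NULL...
                return rs.wasNull() ? null : value;   // ...so disambiguate with wasNull()
            }
        }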


  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern is the performance of a clustered index on a reference table that has many rapid inserts and deletes.

        Table 1 "Collection": collection_pk int (among other fields)
        Table 2 "Item": item_pk int (among other fields)
        Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)

    Because the primary key is composed of both pks, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections, and adding and removing items from those collections, very frequently, affecting the "Collection_Items" table and its clustered index. QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?


  • JavaScript large number array compression

    - by gatapia
    Hi All, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing base-62 encoding: number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are: compress large number arrays into strings for JSONP transfer (so I think UTF is out), and be relatively fast; look, I'm not expecting the same performance as I have now, but I also don't want gzip compression either. Any ideas would be greatly appreciated. Thanks Guido Tapia
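
    One caveat: JavaScript's built-in Number#toString and parseInt only accept radices up to 36, so base 62 requires a hand-rolled alphabet. The codec itself is only a few lines; a sketch in Java (the same alphabet-indexing logic translates directly to JavaScript):

        public final class Base62 {
            private static final String ALPHABET =
                "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

            // Encodes a non-negative number as base-62 text.
            public static String encode(long n) {
                if (n == 0) return "0";
                StringBuilder sb = new StringBuilder();
                while (n > 0) {
                    sb.append(ALPHABET.charAt((int) (n % 62)));
                    n /= 62;
                }
                return sb.reverse().toString();
            }

            // Inverse of encode; assumes the input contains only alphabet characters.
            public static long decode(String s) {
                long n = 0;
                for (int i = 0; i < s.length(); i++) {
                    n = n * 62 + ALPHABET.indexOf(s.charAt(i));
                }
                return n;
            }
        }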


  • What is the best way to swap a new memory allocator into existing code?

    - by O. Askari
    During the last few days I've gained some information about memory allocators other than the standard malloc(). There are some implementations that seem to be much better than malloc() for applications with many threads; for example, tcmalloc and ptmalloc seem to have better performance. I have a C++ application that uses both malloc and the new operator in many places. I thought replacing them with something like ptmalloc might improve its performance. But I wonder: how does the new operator behave when used in a C++ application that runs on Linux? Does it use the standard behavior of malloc, or something else? What is the best way to swap the new allocator in for the old one in the code? Is there any way to override the behavior of new and malloc, or do I need to replace all the calls to them one by one?


  • Video chat application : Which technology to choose ?

    - by WarDoGG
    I have to undertake a project to make a video chat application. The video has to be streamed from one location and can be viewed by multiple people spread out over the globe. Performance is really an issue, and a delay of more than 2-3 seconds is unacceptable. From what I gather, this can be done in Flex and also in Java. Any performance issues and caveats with a particular approach? I would really like the pros to comment on this and guide me through; it will be very helpful. Are there any open-source libraries available for video recording in Flash/Java which I can integrate into my app and customize according to my needs?


  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has some significant cost when executing a program; however, I was wondering if entering a try{} block also had any impact, so I started looking for an answer in Google. There were many opinions, but no benchmarking at all. Some answers I found were: "Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?", "Try Catch Performance Java", and "Java try catch blocks". However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a csv file with this format: host;ip;number;date;status;email;uid;name;lastname;promo_code; where everything after status is optional and will not even have the corresponding ';', so when parsing, a validation has to be done to see if the value is there; here's where the try/catch issue came to my mind. The current code that I inherited at my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats what is done for email for uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times: Trying: 122 milliseconds; Checking: 33 milliseconds. However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really have no performance impact on my code? The average times for this example are: Trying: 105 milliseconds; Checking: 43 milliseconds. Can somebody explain what's going on here? Thanks a lot
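
    For anyone who wants to reproduce the comparison, here is a self-contained variant of the experiment (a rough System.nanoTime() sketch, not a rigorous benchmark; JIT warm-up, dead-code elimination, and the cost of filling in exception stack traces can all skew such numbers):

        import java.util.NoSuchElementException;
        import java.util.StringTokenizer;

        public class TryVsCheck {
            static final String LINE = "host;1.2.3.4;42;2013-01-01;OK";  // no optional fields

            static int tryVersion(int reps) {
                int hits = 0;
                for (int i = 0; i < reps; i++) {
                    StringTokenizer st = new StringTokenizer(LINE, ";");
                    for (int skip = 0; skip < 5; skip++) st.nextToken();
                    String email;
                    try {
                        email = st.nextToken();              // throws: no optional field present
                    } catch (NoSuchElementException e) {
                        email = "";
                    }
                    if (!email.isEmpty()) hits++;
                }
                return hits;
            }

            static int checkVersion(int reps) {
                int hits = 0;
                for (int i = 0; i < reps; i++) {
                    StringTokenizer st = new StringTokenizer(LINE, ";");
                    for (int skip = 0; skip < 5; skip++) st.nextToken();
                    String email = st.hasMoreTokens() ? st.nextToken() : "";
                    if (!email.isEmpty()) hits++;
                }
                return hits;
            }

            public static void main(String[] args) {
                int warmup = tryVersion(100_000) + checkVersion(100_000);  // JIT warm-up
                long t0 = System.nanoTime();
                int a = tryVersion(1_000_000);
                long t1 = System.nanoTime();
                int b = checkVersion(1_000_000);
                long t2 = System.nanoTime();
                System.out.printf("try: %d ms, check: %d ms (sink=%d)%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, warmup + a + b);
            }
        }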


  • Checking for existence before using GetItemById()

    - by ps123
    Hello, I am getting an item by GetItemById... but I want to check whether the item exists before using it. I don't want to use a query, as the main purpose of using GetItemById is performance.

        itemid = Request.QueryString["loc"];
        SPList mylist = myweb.GetList(SPUrlUtility.CombineUrl(myweb.ServerRelativeUrl, "/Lists/Location"));
        // Now if itemid does not exist, the next statement throws an exception, so I want to
        // check, before using it, that the item exists. I know I can check through SPQuery,
        // but as I said above, I'm using GetItemById only because of the performance issue.
        SPListItem myitem = mylist.GetItemById(Convert.ToInt32(itemid));

    Any idea how to achieve this?

