Search Results

Search found 31421 results on 1257 pages for 'software performance'.

  • Which book should I choose?

    - by sebastianlarsson
    Hi guys, I'm looking for a good read on object-oriented design. The two books I'm currently looking at are Head First Design Patterns and Head First Object-Oriented Analysis & Design. They seem very similar when looking at the contents and browsing through the available sample text. Which one would be the best choice? About myself: I have a bachelor's degree in computer science and I am currently studying for an MSc in Software Quality Engineering (read: Software Engineering with a focus on quality). I am already confident in object-oriented design and have a lot of programming courses under my belt. I have written games in C++ and taken courses in advanced Java programming (I am SCJP certified), but my preferred language is C#. I have also worked with Java for the last 7 months while studying, and I am currently also studying for certificates in C# (apart from my usual studies). So I believe I have the prerequisites to actually understand the contents of both books. Reason: I just want to get better and keep evolving as a programmer. I think it is fun. I believe Bert Bates and Kathy Sierra are involved in both of these books, and I have previously read their SCJP preparation book for Java. I really do enjoy their style of writing. Another book I am considering is Clean Code: A Handbook of Agile Software Craftsmanship. Thanks in advance, Sebastian

    Read the article

  • Assert parameters in a table-valued UDF

    - by Clay Lenhart
    Is there a way to create "asserts" on the parameters of a table-valued UDF? I'd like to use a table-valued UDF for performance reasons; however, I know that certain parameter combinations (like start and end dates that are more than a month apart) will cause performance issues on the server for all users. End users query the database via Excel using UDFs. UDFs (and table-valued UDFs in particular) are useful when the data is too large for Excel. Users write simple SQL queries that categorize the data into groups to reduce the number of rows. For example, a user interested in weekly rather than hourly aggregates writes a GROUP BY SELECT statement that reduces the row count by a factor of 24×7 = 168. I know I can write RAISERROR statements in multi-statement UDFs, but table-valued UDFs are integrated into the query optimizer, so these queries are more efficient with inline table-valued UDFs. So, can I define assertions on the parameters passed to a table-valued UDF?
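
    One workaround worth sketching (not from the question itself): an inline TVF cannot contain RAISERROR, but it can force a runtime conversion error when a parameter check fails. Table and column names below are hypothetical.

        CREATE FUNCTION dbo.fn_HourlyData (@start datetime, @end datetime)
        RETURNS TABLE
        AS
        RETURN
            SELECT d.LogTime, d.Value
            FROM dbo.HourlyData AS d
            WHERE d.LogTime >= @start
              AND d.LogTime <  @end
              -- "assert": fail with a conversion error if the range is too wide
              AND 1 = CASE WHEN DATEDIFF(day, @start, @end) <= 31 THEN 1
                           ELSE CAST('Assert failed: range exceeds one month' AS int)
                      END;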

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why start now?

    - The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
    - Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today.
    - It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why wait?

    - Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores that share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
    - Similarly, will the .NET Task Parallel Library change as it matures, requiring code modifications to take full advantage of it?

    Limitations: our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.
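
    For scale, a minimal TPL sketch (C#, .NET 4.0; the workload is made up) of the kind of data-parallel loop the library makes cheap to try:

        using System;
        using System.Threading.Tasks;

        class TplSketch
        {
            static void Main()
            {
                var data = new double[10000000];
                // Partitions the iteration range across the available cores.
                Parallel.For(0, data.Length, i =>
                {
                    data[i] = Math.Sqrt(i) * 2.0;
                });
                Console.WriteLine(data[data.Length - 1]);
            }
        }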

    Read the article

  • Insert a row number repeatedly in records in T-SQL

    - by jeff
    Hi, I want to insert a row number into the records, counting rows in repeating groups of a fixed size. Example output:

        RowNumber  ID  Name
        1          20  a
        2          21  b
        3          22  c
        1          23  d
        2          24  e
        3          25  f
        1          26  g
        2          27  h
        3          28  i
        1          29  j
        2          30  k

    I tried ROW_NUMBER() OVER (PARTITION BY ... ORDER BY column), but my real records do not contain a column that partitions them into groups of 1-3. I also tried looping over each record to insert a row count of 1-3, but the loop hurts the performance of the query. The query will be used for an RDL report, which is why its performance must be as good as possible. Any suggestions are welcome. Thanks
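
    One loop-free approach (a sketch; the table name and the ORDER BY column are assumptions) derives the 1-3 cycle from ROW_NUMBER() with modulo arithmetic:

        -- Cycle 1..3 down the result set without partitioning columns or loops.
        SELECT ((ROW_NUMBER() OVER (ORDER BY ID) - 1) % 3) + 1 AS RowNumber,
               ID,
               Name
        FROM dbo.MyRecords;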

    Read the article

  • Weaknesses of Hibernate

    - by Sinuhe
    I would like to know the weak points of Hibernate 3. This is not intended to be a thread against Hibernate; I think this knowledge is very useful for deciding whether Hibernate is the best option for a project, and for estimating the time it will take. A weakness can be:

    - A bug
    - A case where JDBC or PL/SQL is better
    - Performance issues
    - ...

    It would also be useful to know solutions for those problems - a better ORM or technique - or whether it will be corrected in Hibernate 4. For example, AFAIK, Hibernate will have very poor performance compared to JDBC when updating 10,000 rows with this query:

        update A set state=3 where state=2
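
    For that particular case, a sketch of what Hibernate 3.x itself offers (bulk HQL; the session setup and entity A are assumed): the update runs as a single SQL statement instead of loading and dirty-checking each entity.

        // Bulk HQL update: one statement, no entities loaded into the session.
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        int updated = session.createQuery(
                "update A set state = :newState where state = :oldState")
            .setInteger("newState", 3)
            .setInteger("oldState", 2)
            .executeUpdate();
        tx.commit();
        session.close();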

    Read the article

  • Linear feedback shift register?

    - by Mattia Gobbi
    Lately I have bumped repeatedly into the concept of LFSRs, which I find quite interesting because of their links with different fields, and also fascinating in themselves. It took me some effort to understand; the final help was this really good page, much better than the (at first) cryptic Wikipedia entry. So I wanted to write some small code for a program that worked like an LFSR - to be more precise, that somehow showed how an LFSR works. Here's the cleanest thing I could come up with after some lengthier attempts (Python):

        def lfsr(seed, taps):
            sr, xor = seed, 0
            while 1:
                for t in taps:
                    xor += int(sr[t - 1])
                if xor % 2 == 0.0:
                    xor = 0
                else:
                    xor = 1
                print xor
                sr, xor = str(xor) + sr[:-1], 0
                print sr
                if sr == seed:
                    break

        lfsr('11001001', (8, 7, 6, 1))  # example

    I named the output of the XOR function "xor", which is not very correct. However, this is just meant to show how the register cycles through its possible states; in fact, you'll notice the register is represented by a string. Not much logical coherence. This can easily be turned into a nice toy you can watch for hours (at least I could :-)

        def lfsr(seed, taps):
            import time
            sr, xor = seed, 0
            while 1:
                for t in taps:
                    xor += int(sr[t - 1])
                if xor % 2 == 0.0:
                    xor = 0
                else:
                    xor = 1
                print xor
                time.sleep(0.75)
                sr, xor = str(xor) + sr[:-1], 0
                print sr
                time.sleep(0.75)
                if sr == seed:
                    break

    Then it struck me: what use is this in writing software? I heard it can generate random numbers; is it true? How? So, it would be nice if someone could:

    - explain how to use such a device in software development
    - come up with some code to support the point above, or, just like mine, to show different ways to do it, in any language

    Also, as there's not much didactic stuff around about this piece of logic and digital circuitry, it would be nice if this could be a place for noobies (like me) to get a better understanding of this thing - or better, to understand what it is and how it can be useful when writing software. Should I have made it a community wiki? That said, if someone feels like golfing... you're welcome.
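
    On the random-number question, a minimal sketch (Python, matching the question's code; the taps (16, 14, 13, 11) are the standard maximal-length example for 16 bits): an LFSR over an integer makes a fast pseudo-random bit generator, cycling through all 2**16 - 1 nonzero states before repeating. It is statistically useful but not cryptographically secure.

        # Fibonacci LFSR with taps (16, 14, 13, 11) as a pseudo-random bit source.
        def lfsr_bits(state=0xACE1):
            while True:
                bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)  # shift in the feedback bit
                yield bit

        # Usage: collect 16 pseudo-random bits.
        g = lfsr_bits()
        print [next(g) for _ in range(16)]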

    Read the article

  • SQL Query to delete oldest rows over a certain row count?

    - by Casey
    I have a table that contains log entries for a program I'm writing. I'm looking for ideas for an SQL query (I'm using SQL Server Express 2005) that will keep the newest X number of records and delete the rest. I have a datetime column that is a timestamp for the log entry. I figure something like the following would work, but I'm not sure of the performance with the IN clause for larger numbers of records. Performance isn't critical, but I might as well do the best I can the first time.

        DELETE FROM MyTable
        WHERE PrimaryKey NOT IN (SELECT TOP 10000 PrimaryKey
                                 FROM MyTable
                                 ORDER BY TimeStamp DESC)
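
    An alternative worth sketching (SQL Server 2005+; same MyTable/TimeStamp names as above) avoids NOT IN entirely by deleting through a ranked CTE:

        WITH Ranked AS (
            SELECT PrimaryKey,
                   ROW_NUMBER() OVER (ORDER BY TimeStamp DESC) AS rn
            FROM MyTable
        )
        DELETE FROM Ranked   -- deletes from the underlying MyTable
        WHERE rn > 10000;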

    Read the article

  • Best way to encrypt certain fields in SQL Server 2008?

    - by Josh
    I'm writing a .NET web app that will read and write information to a SQL Server 2008 back-end database. Some of this information will be highly confidential in nature, so I want to encrypt certain data elements. I don't want to use TDE or any full-database encryption, for performance reasons. My main concern is protecting this sensitive data as a last resort against SQL injection or even a database server compromise. My question is: what is the best way to do this while preserving performance? Is it faster to use the SQL Server 2008 encryption functions such as EncryptByKey, or would it be faster to encrypt and decrypt the data in the .NET web app itself, using a symmetric key stored in the secure web.config, and store the encrypted values in the DB?
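
    For reference, the SQL-side variant looks roughly like this (a sketch; the key, table, and column names are made up):

        -- One-time setup: a symmetric key protected by a passphrase.
        CREATE SYMMETRIC KEY SsnKey
            WITH ALGORITHM = AES_256
            ENCRYPTION BY PASSWORD = 'use-a-strong-passphrase-here';

        -- Encrypt on write, decrypt on read; the key must be opened per session.
        OPEN SYMMETRIC KEY SsnKey DECRYPTION BY PASSWORD = 'use-a-strong-passphrase-here';

        UPDATE dbo.Customers
        SET SsnEncrypted = EncryptByKey(Key_GUID('SsnKey'), Ssn);

        SELECT CONVERT(varchar(11), DecryptByKey(SsnEncrypted)) AS Ssn
        FROM dbo.Customers;

        CLOSE SYMMETRIC KEY SsnKey;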

    Read the article

  • Fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need the fastest possible performance from XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable objects on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import works only with files). Alternatively, I can cache many XslCompiledTransform instances, one per template. Is that reasonable? Are there other ways? Which is best? And what other tips are there for improving the performance of the MS XSLT processor?
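
    One pattern worth sketching (C#; on .NET 2.0-3.5 replace ConcurrentDictionary with a Dictionary plus a lock): compile each stylesheet once, resolving imports at Load() time, and reuse the compiled transform, which is safe for concurrent Transform() calls once loaded.

        using System.Collections.Concurrent;
        using System.Xml;
        using System.Xml.Xsl;

        static class XsltCache
        {
            static readonly ConcurrentDictionary<string, XslCompiledTransform> cache =
                new ConcurrentDictionary<string, XslCompiledTransform>();

            public static XslCompiledTransform Get(string path)
            {
                return cache.GetOrAdd(path, p =>
                {
                    var xslt = new XslCompiledTransform();
                    // XmlUrlResolver resolves relative xsl:import/xsl:include URIs,
                    // so imports work even though the result lives in memory.
                    xslt.Load(p, XsltSettings.Default, new XmlUrlResolver());
                    return xslt;
                });
            }
        }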

    Read the article

  • ASMX Still slow after 'Generate serialization assembly'

    - by Buzzer
    This question is related to: http://stackoverflow.com/questions/784918/asmx-web-service-slow-first-request. I inherited a proxy to a legacy ASMX service. Basically, as the post above states, the first call is literally 10 times slower than the subsequent calls. I went ahead and turned on 'Generate serialization assembly' on the project that contains the proxy. The 'serializers' assembly is actually generated. However, I haven't seen any performance increase at all. Do I need to do anything else, other than make sure the 'serializers' assembly is in the client's bin directory? Do I have to 'link' the proxy to the 'serializers' assembly during proxy generation (wsdl.exe)? I guess I'm stuck at this point. J Saunders, where u at? :)
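
    One thing worth double-checking (an assumption, not from the post): the runtime only picks up a pre-generated serializer assembly if it follows the probing convention <assembly-name>.XmlSerializers.dll with a matching assembly version. You can regenerate and inspect it by hand with sgen:

        rem Regenerate the serializer assembly for the proxy assembly
        rem (MyProxies.dll is a placeholder name).
        sgen /assembly:MyProxies.dll /force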

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

    - Obtain a connection to the recent schema version (called "source").
    - Obtain a connection to the previous schema version (called "target").
    - Compare all schema objects between source and target.
    - Create a script to make the target schema equivalent to the source schema (an "upgrade script").
    - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
    - Create individual files for schema objects.

    The software must:

    - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
    - Work with Oracle 10g or greater.
    - Create scripts that can be batch executed (via command line).
    - Have a trivial installation process.
    - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from StackOverflow, ServerFault, and Google searches):

    - Change Manager
    - Oracle SQL Developer

    Software that does not meet the criteria, or cannot be evaluated, includes:

    - TOAD
    - PL/SQL Developer - invalid SQL*Plus statements; does not produce ALTER statements.
    - SQL Fairy - no installer; complex installation process; poorly documented.
    - DBDiff - crippled data-set evaluation; poor customer support.
    - OrbitDB - crippled data-set evaluation.
    - SchemaCrawler - no easily identifiable download version for Oracle databases.
    - SQL Compare - SQL Server, not Oracle.
    - LiquiBase - requires changing the development process; no installer; config files must be edited manually; does not recognize its own baseUrl parameter.

    The only acceptable crippling of an evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that only become visible when attempting to migrate hundreds of tables and views.

    Read the article

  • What are some best practices and "rules of thumb" for creating database indexes?

    - by Ash
    I have an app which cycles through a huge number of records in a database table and performs a number of SQL and .NET operations on records within that database (currently I am using Castle ActiveRecord on PostgreSQL). I added some basic btree indexes on a couple of the fields and, as you would expect, the performance of the SQL operations increased substantially. Wanting to make the most of DBMS performance, I want to make some better-educated choices about what I should index on all my projects. I understand that there is a detriment to performance when doing inserts (as the database needs to update the index as well as the data), but what suggestions and best practices should I consider when creating database indexes? How do I best select the fields/combinations of fields for a set of database indexes (rules of thumb)? Also, how do I best select which index to use as a clustered index? And when it comes to the access method, under what conditions should I use a btree over a hash, a GiST, or a GIN index (and what are they, anyway)?
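
    For concreteness, a sketch of two common cases in PostgreSQL (the table and column names are invented):

        -- Composite btree: put the equality-filtered column first,
        -- the range-filtered column second.
        CREATE INDEX idx_orders_customer_date
            ON orders (customer_id, created_at);

        -- GIN: suited to multi-valued data such as full-text vectors or arrays.
        CREATE INDEX idx_documents_fts
            ON documents USING gin (to_tsvector('english', body));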

    Read the article

  • Avoid implicit conversion from date to timestamp for selects with Oracle using Hibernate

    - by sapporo
    I'm using Hibernate 3.2.7.GA criteria queries to select rows from an Oracle Enterprise Edition 10.2.0.4.0 database, filtering by a timestamp field. The field in question is of type java.util.Date in Java and DATE in Oracle. It turns out that the field gets mapped to java.sql.Timestamp, and Oracle converts all rows to TIMESTAMP before comparing to the passed-in value, bypassing the index and thereby ruining performance. One solution would be to use Hibernate's sqlRestriction() along with Oracle's TO_DATE function. That would fix performance, but would require rewriting the application code (lots of queries). So is there a more elegant solution? Since Hibernate already does type mapping, could it be configured to do the right thing? Update: the problem occurs in a variety of configurations, but here's one specific example:

    - Oracle Enterprise Edition 10.2.0.4.0
    - Oracle JDBC Driver 11.1.0.7.0
    - Hibernate 3.2.7.GA
    - Hibernate's Oracle10gDialect
    - Java 1.6.0_16
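
    One commonly cited workaround (an assumption here, not something the question confirms for this setup) is the Oracle JDBC driver's V8Compatible flag, which makes the driver bind java.sql.Timestamp parameters as DATE so the index can be used:

        # JVM system property for the Oracle JDBC driver.
        java -Doracle.jdbc.V8Compatible=true -jar myapp.jar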

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java, they say don't concatenate Strings; instead you should create a StringBuffer, keep adding to that, and then, when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings makes lots of temporary objects. But if the goal were performance, then you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than it is to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both - use Java and StringBuffers - my question is: where is the flaw in the logic that you either use a faster chip or you spend extra time writing more efficient software?
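
    The gap in question, as a sketch (Java; n is a placeholder size):

        int n = 100000;  // placeholder

        // Each += allocates a new String and copies every previous character:
        // roughly O(n^2) work in total.
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i;
        }

        // StringBuffer appends into one growable buffer: amortized O(n).
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < n; i++) {
            sb.append(i);
        }
        String t = sb.toString();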

    Read the article

  • How to set that compiler flag?

    - by mystify
    Shark told me this: "This instruction is the start of a loop that is not aligned to a 16-byte address boundary. For optimal performance, you should align the start of a hot loop using a compiler directive. With gcc 3.3 or later, use the -falign-loops=16 compiler flag."

        for (int i = 0; i < 4; i++) {  // line with the info
            // ...code
        }

    How would I set that flag, and does it really improve performance?
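
    Setting it is just a matter of adding the flag to the compile line, or to your build's CFLAGS / Other C Flags setting (the file name here is a placeholder):

        gcc -O2 -falign-loops=16 -c hot_loop.c

    Whether it helps depends on the loop actually being hot; Shark's message targets hot loops, and a 4-iteration loop is unlikely to benefit measurably.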

    Read the article

  • Managing EntityConnection lifetime

    - by kervin
    There have been many questions on managing EntityContext lifetime, e.g. http://stackoverflow.com/questions/813457/instantiating-a-context-in-linq-to-entities. I've come to the conclusion that the entity context should be considered a unit of work and therefore not reused. Great. But while doing some research on speeding up my database access, I ran into this blog post: Improving Entity Framework Performance. The post argues that EF's poor performance compared to other frameworks is often due to the EntityConnection object being created each time a new EntityContext object is needed. To test this, I manually created a static EntityConnection in Global.asax.cs Application_Start(). I then converted all my context-using statements to:

        using (MyObjContext currContext = new MyObjContext(globalStaticEFConnection))
        {
            ....
        }

    This seems to have sped things up a bit, without any errors so far as far as I can tell. But is this safe? Does using an application-wide static EntityConnection introduce race conditions? Best regards, Kervin
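
    For context, the startup side of that experiment looks roughly like this (a sketch; the connection-string name is a placeholder, and note that EntityConnection is not documented as thread-safe, so sharing one opened connection across concurrent requests is at your own risk):

        // Global.asax.cs
        public static EntityConnection globalStaticEFConnection;

        protected void Application_Start()
        {
            globalStaticEFConnection = new EntityConnection("name=MyObjEntities");
            globalStaticEFConnection.Open();
        }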

    Read the article

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance but provides better performance (since the compiler can inline things, etc.), while virtual functions are very good for code maintenance but incur a performance penalty of two pointer indirections. Say I have a base class with 2 subclasses (X and Y) and one virtual function, so there will be two virtual tables. The object has a pointer, based on which it will choose a virtual table. So for the compiler, it is more like:

        switch (object's function ptr)
        {
            case 0x....: X->call(); break;
            case 0x....: Y->call(); break;
        };

    So why should a virtual function cost more, if it can be implemented this way? The compiler could do the same inlining and other stuff here. Or explain to me why it was decided not to implement virtual function execution this way. Thanks, Gokul.
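
    One way to see the difference (a C++ sketch): the set of subclasses is open - new ones can be compiled and linked later - so the compiler generally cannot enumerate the cases; it emits an indirect call through the object's vtable instead.

        struct Base {
            virtual void call() = 0;
            virtual ~Base() {}
        };
        struct X : Base { void call() { /* ... */ } };
        struct Y : Base { void call() { /* ... */ } };

        void invoke(Base* b) {
            // Roughly: load b's vptr, load the function slot, indirect call.
            // No switch is possible: a third subclass may live in another
            // translation unit or a plugin the compiler never sees.
            b->call();
        }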

    Read the article

  • Problem detecting installed application on Win Svr 2003 x64

    - by PD
    I have an x86 Windows application that consists of a couple of services and a client UI. Due to various issues with persuading the various MSIs to upgrade properly, the installation process is now governed by a wizard-style program that detects what is currently installed and handles upgrades by storing the user's current settings, uninstalling the existing software, and installing the new version(s). The basic process is:

    1. Look in HKLM\Software\Classes\Installer\Products.
    2. Loop through the GUID keys therein looking for ProductName="(my app name)".
    3. If not found, repeat starting from HKCU\Software\Microsoft\Installer\Products instead.
    4. If found, offer the user an upgrade (as described earlier), else a clean install (i.e. the user is asked various questions by the wizard).

    Now, this works just fine on pretty much any Windows platform you care to mention, from XP up. It fails only on Windows Server 2003 x64, in that an existing installation is not detected by the wizard - despite the exact same registry keys being present as on any other platform I test on. It's fine on:

    - XP x86
    - Vista x86, x64
    - Server 2003 x86
    - Server 2008 x86, x64
    - Server 2008 R2 x64
    - Windows 7 x86, x64

    It's only Server 2003 x64 that seems to exhibit this issue.
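
    One avenue worth ruling out (a hypothesis, not a confirmed diagnosis): WOW64 registry redirection. A 32-bit process on x64 Windows reads a redirected view of parts of HKLM\Software, so Installer data written by a 64-bit msiexec may be invisible to it unless the 64-bit view is requested explicitly. A C# probe sketch:

        using System;
        using System.Runtime.InteropServices;

        class RegProbe
        {
            const int KEY_READ = 0x20019;
            const int KEY_WOW64_64KEY = 0x0100;  // ask for the 64-bit view
            static readonly UIntPtr HKEY_LOCAL_MACHINE = new UIntPtr(0x80000002u);

            [DllImport("advapi32.dll", CharSet = CharSet.Unicode)]
            static extern int RegOpenKeyEx(UIntPtr hKey, string subKey,
                int ulOptions, int samDesired, out UIntPtr phkResult);

            static void Main()
            {
                UIntPtr hKey;
                int rc = RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                    @"Software\Classes\Installer\Products",
                    0, KEY_READ | KEY_WOW64_64KEY, out hKey);
                Console.WriteLine(rc == 0 ? "opened 64-bit view" : "error " + rc);
            }
        }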

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe), and the second executable first checks if such a file exists; if it does not exist, it checks again, and when it finds the file, it goes on to read its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable is successful on the first try itself. Now I have to clean this code up and was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
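
    For comparison, a minimal named-pipe sketch (POSIX side only; Windows has an analogous facility via CreateNamedPipe): the reader blocks in open()/read() instead of polling for a file to appear, and no data needs to touch the disk.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void) {
            mkfifo("/tmp/demo_fifo", 0666);             /* rendezvous point */
            int fd = open("/tmp/demo_fifo", O_RDONLY);  /* blocks for a writer */
            char buf[128];
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("got: %s\n", buf);
            }
            close(fd);
            return 0;
        }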

    Read the article

  • Are mathematical Algorithms protected by copyright?

    - by analogy
    I wish to implement an algorithm that I read about in a journal paper in my (commercial) software. I want to know if this is allowed or not. The algorithm in question is described in http://arxiv.org/abs/0709.2938. It is a very simple algorithm, and a number of implementations exist in Python (http://igraph.sourceforge.net/) and Java. One of them is under the GPL; another, which I got from a different researcher, had no license attached. There are significant differences between the two implementations; e.g., the second one uses threads and multiple cores. It is possible to rewrite (not translate) the algorithm. So can I use it in my software or on a server for a commercial purpose? Thanks. UPDATE: I am completely aware of the copyright on the text of the paper; it was published in Phys. Rev. E. I am concerned with use of the algorithm in commercial software. Also, the publication means the method has been disclosed publicly, so unless a patent has already been filed, that disclosure bars a patent in the future. Also, the GPL implementation is not by the authors themselves but comes from a third party. Finally, I am not using the GPL implementation but creating my own in C++.

    Read the article

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:

    - The class hierarchy is complex and not obvious.
    - Some classes don't have a well-defined purpose (they do a number of unrelated things).
    - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
    - Some classes leak implementation details (e.g., I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
    - My memory management/pooling system is kind of clunky and less than transparent.

    These look like excellent reasons to refactor and clean up the code, aiding future maintenance and extension, but it could be quite time consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does Stack Overflow think? Clean code, or work on features?

    Read the article

  • How to figure the read/write ratio in Sql Server?

    - by Bill Paetzke
    How can I query the read/write ratio in SQL Server 2005? Are there any caveats I should be aware of? Perhaps it can be found in a DMV query, a standard report, a custom report (i.e. the Performance Dashboard), or by examining a SQL Profiler trace; I'm not sure exactly. Why do I care? I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users. One of the points I'm examining is database concurrency. SQL Server uses pessimistic concurrency by default - good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (isolation level: read committed snapshot), like Jeff Atwood did with Stack Overflow.
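
    A rough cut is available from the index-usage DMV (a sketch; the counters reset when the instance restarts, so sample over a representative period):

        SELECT DB_NAME(database_id) AS database_name,
               SUM(user_seeks + user_scans + user_lookups) AS reads,
               SUM(user_updates)                           AS writes
        FROM sys.dm_db_index_usage_stats
        GROUP BY database_id;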

    Read the article
