Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 328/527 | < Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >

  • What design pattern should be used to create an emulator?

    - by Facon
    I have programmed an emulator, but I have some doubts about how to organize it properly, because I can see problems in how the classes are connected (CPU <- machine board): for example, I/O ports, interrupts, communication between two or more CPUs, etc. I need the emulator to have the best possible performance while keeping the code easy to understand. PS: Sorry for my bad English. (A minimal sketch of one possible structure follows this item.)

    Read the article
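
    One common way to address the organization problem described above is to hide the machine board behind a small bus/mediator interface, so that the CPU, the I/O ports and interrupt delivery only know about that interface rather than about each other. Below is a minimal C# sketch of the idea; the names (ISystemBus, Cpu, MachineBoard) and the port/interrupt details are illustrative assumptions, not something taken from the original post.

        using System;
        using System.Collections.Generic;

        // The bus is the only thing a CPU sees: it hides the board, the
        // I/O ports and the other CPUs behind one small interface.
        public interface ISystemBus
        {
            byte ReadPort(ushort port);
            void WritePort(ushort port, byte value);
            void RaiseInterrupt(int cpuId, byte vector);
        }

        public sealed class Cpu
        {
            private readonly ISystemBus _bus;
            private readonly Queue<byte> _pendingInterrupts = new Queue<byte>();

            public Cpu(ISystemBus bus) { _bus = bus; }

            public void Interrupt(byte vector) => _pendingInterrupts.Enqueue(vector);

            public void Step()
            {
                // Fetch/decode/execute would go here; all I/O goes through
                // the bus, so the CPU never references the board directly.
                byte status = _bus.ReadPort(0x10);
                if (_pendingInterrupts.Count > 0)
                {
                    byte vector = _pendingInterrupts.Dequeue();
                    // ...jump to the handler for 'vector'...
                }
            }
        }

        // The board owns the CPUs and the devices and implements the bus.
        public sealed class MachineBoard : ISystemBus
        {
            private readonly List<Cpu> _cpus = new List<Cpu>();
            private readonly byte[] _ports = new byte[ushort.MaxValue + 1];

            public Cpu AddCpu()
            {
                var cpu = new Cpu(this);
                _cpus.Add(cpu);
                return cpu;
            }

            public byte ReadPort(ushort port) => _ports[port];
            public void WritePort(ushort port, byte value) => _ports[port] = value;
            public void RaiseInterrupt(int cpuId, byte vector) => _cpus[cpuId].Interrupt(vector);

            public void RunFrame()
            {
                foreach (var cpu in _cpus) cpu.Step();
            }
        }

    Keeping the interface small keeps the indirection cheap, and it makes it straightforward to add a second CPU or to test a CPU against a fake bus.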

  • Using XmlSerializers.dll

    - by Erup
    I know the generated .XmlSerializers.dll is useful for improving the startup performance of an XmlSerializer when it serializes or deserializes objects. But how can clients use this assembly? (A sketch follows this item.)

    Read the article
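
    For what it's worth, client code normally does not reference the generated *.XmlSerializers.dll at all: if that assembly (produced by sgen.exe or the "Generate serialization assembly" build option) is deployed next to the assembly that contains the serialized types, with a matching name and version, the XmlSerializer(Type) and XmlSerializer(Type, String) constructors find and load it instead of generating serialization code at runtime. A minimal sketch, assuming a hypothetical Order type in MyApp.dll with MyApp.XmlSerializers.dll deployed alongside it:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        // 'Order' is a hypothetical type assumed to live in MyApp.dll;
        // MyApp.XmlSerializers.dll was generated with sgen.exe and copied
        // next to MyApp.dll (same version, same signing).
        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        public static class Program
        {
            public static void Main()
            {
                // This constructor overload is one of the ones that probes for
                // the pre-generated MyApp.XmlSerializers.dll; if it is found and
                // matches, no serialization code is emitted at runtime.
                var serializer = new XmlSerializer(typeof(Order));

                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, new Order { Id = 1, Total = 9.99m });
                    Console.WriteLine(writer.ToString());
                }
            }
        }

    Note that, as far as I know, the other XmlSerializer constructor overloads (for example the ones taking XmlAttributeOverrides) bypass the pre-generated assembly and still emit code at runtime, so they do not see the startup benefit.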

  • What was the most refreshingly honest non-technical comment you saw in the code?

    - by DVK
    OK, so we have all seen the lists of "funny" or "bad" comments. However, today, while maintaining an old stored proc, I stumbled upon a comment that I couldn't classify as anything other than "refreshingly brutally honest", left by a previous maintainer next to a really freakish (both performance- and readability-wise) page-long query:

        -- Feel free to optimize this if you can understand what it means

    So, in the first (and hopefully only) poll-type question in my Stack Overflow history, I'd like to hear some other "refreshingly brutally honest" code comments you have encountered or written.

    Read the article

  • '/usr/lib/mozilla/plugins/npPluginTest.so' is not an ELF executable for sh

    - by rakesh nair
    I have created an NPAPI plugin, which is working fine on the Linux box where I built the .so file, but when I deployed the plugin to our production device, which runs a Linux environment with limited resources (due to performance constraints), the following error is thrown: '/usr/lib/mozilla/plugins/npPluginTest.so' is not an ELF executable for sh. FYI: the .so file was created on a 32-bit Linux box. How can I resolve this issue?

    Read the article

  • Checking an empty Core Data relationship (SQLite)

    - by rwat
    I have a to-many relationship in my data model, and I'd like to get all the objects that have no corresponding objects in the relationship. For example: Customer - Purchases. I want to get all Customers that have 0 Purchases. I've read somewhere that I could use "Purchases[SIZE] = 0", but this gives me an unsupported function expression error, which I think means it doesn't work with a SQLite backing store (which I don't want to switch from, due to some performance constraints). Any ideas?

    Read the article

  • Any big difference between using contains() or looping through a list?

    - by Nazgulled
    Hi. Performance-wise, is there really a big difference between using:
    - ArrayList.contains(o) vs foreach|iterator
    - LinkedList.contains(o) vs foreach|iterator
    - HashMap.(containsKey|containsValue) vs foreach|iterator
    - TreeMap.(containsKey|containsValue) vs foreach|iterator
    Of course, for the foreach|iterator loops I'll have to explicitly compare the elements and return true or false accordingly. The object I'm comparing is an object where equals() and hashCode() are both properly overridden.

    Read the article

  • Pointcut matching methods with annotated parameters

    - by Sinuhe
    I need to create an aspect with a pointcut matching a method if:
    - it is public,
    - its class is annotated with @Controller, and
    - one of its parameters (it can have many) is annotated with @MyParamAnnotation.
    I think the first two conditions are easy, but I don't know if it's possible to accomplish the third with Spring. If it is not, maybe I can change it into:
    - one of its parameters is an instance of type com.me.MyType (or implements some interface).
    Do you think it's possible to achieve this? And will performance be good? Thanks.

    Read the article

  • Is there a suitable replacement for C++, when I would like to write video processing applications?

    - by Nisanio
    Hi. I want to write video editing software, and the "logical" conclusion is that the language I should use is C++... but I don't like it (sorry, C++ fans). I would like to write it in something cool, like Lisp or Haskell or Erlang... but I don't know whether the open-source implementations of those languages (I don't have money to buy licenses) would let me build competitive software in terms of performance. What do you think? What do you recommend?

    Read the article

  • Char and Chr in Delphi

    - by JamesB
    The difference between Chr and Char when used for converting types is that one is a function and the other is a cast, so: Char(66) = Chr(66). I don't think there is any performance difference (at least I've never noticed any; one probably calls the other)... I'm fairly sure someone will correct me on this! Which do you use in your code, and why?

    Read the article

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.)

    There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons, and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a high-level comparison of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from an in-house enterprise-level CRUD application. I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong. I am primarily focusing on the Microsoft approaches to keep this focused.

    ADO.NET Entity Framework
    - Database agnostic, which is good because it allows swapping backends in and out
    - Bad because it can hit performance, and database vendors are not too happy about it
    - Seems to be MS's preferred route for the future
    - Complicated to learn (though, see 267357)
    - It is accessed through LINQ to Entities, so it provides ORM, thus allowing abstraction in your code

    LINQ to SQL
    - Uncertain future (see Is LINQ to SQL truly dead?)
    - Easy to learn (?)
    - Only works with MS SQL Server
    - See also Pros and cons of LINQ

    "Standard" ADO.NET
    - No ORM
    - No abstraction, so you are back to "roll your own" and playing with dynamically generated SQL
    - Direct access, allows potentially better performance

    This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is", and since that is an unanswerable question, hopefully we don't have to go into that too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code; you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back-end. Whereas if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows, then ORM makes complete sense. So I guess my argument for good old-fashioned ADO.NET would be the case where you manipulate and modify large datasets, in which case you will benefit from the direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures.

    ASP.NET Data Source Controls
    - Are these something altogether different, or just a layer over standard ADO.NET? Would you really use these if you had a DAL or if you implemented LINQ or Entities?

    NHibernate
    - Seems to be a very powerful ORM?
    - Open source

    Some other relevant links: NHibernate or LINQ to SQL; Entity Framework vs LINQ to SQL. (A minimal code sketch contrasting "standard" ADO.NET with a LINQ-based ORM query follows this item.)

    Read the article
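
    To make the abstraction trade-off above concrete, here is a minimal C# sketch of the same lookup written against "standard" ADO.NET and against a LINQ-based ORM. The connection string, the Customers table and the NorthwindContext/Customer types are illustrative assumptions, not something from the original question.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        public static class DataAccessSketch
        {
            // "Standard" ADO.NET: no ORM, you write the SQL and map rows yourself.
            public static List<string> GetCustomerNamesAdoNet(string connectionString, string city)
            {
                var names = new List<string>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT Name FROM Customers WHERE City = @city", connection))
                {
                    command.Parameters.AddWithValue("@city", city);
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            names.Add(reader.GetString(0));
                    }
                }
                return names;
            }

            // LINQ-based ORM (LINQ to SQL / Entity Framework style): the provider
            // translates the expression into SQL and materializes objects.
            // 'NorthwindContext' and 'Customer' are hypothetical generated types,
            // so this variant is left as a commented illustration.
            /*
            public static List<string> GetCustomerNamesOrm(NorthwindContext db, string city)
            {
                return db.Customers
                         .Where(c => c.City == city)
                         .Select(c => c.Name)
                         .ToList();
            }
            */
        }

    The first version gives full control over the SQL (which suits the large-dataset scenario argued for above); the second buys abstraction and compile-time checking at the cost of whatever SQL the provider decides to generate.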

  • VB.Net Linq Datatable Exists

    - by LarsH
    I would like to use LINQ instead of the function below:

        Friend Function IsCollectionInTable2(ByVal apps As DataTable, ByVal collectionId As String) As Boolean
            For Each row As DataRow In apps.Rows
                If row("CollectionId").ToString = collectionId Then Return True
            Next
            Return False
        End Function

    The best I can do is the following:

        Friend Function IsCollectionInTable(ByVal apps As DataTable, ByVal collectionId As String) As Boolean
            Return (From row In apps.AsEnumerable() Where (row.Field(Of String)("CollectionId") = collectionId) Select row.Field(Of String)("CollectionId")).Count > 0
        End Function

    I would like to use Exists or Any in the above function. Performance could be an issue.

    Read the article

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction), whereas the best performance is obtained when data is added randomly. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree rebalances to be performed. I know very few databases use binary trees, but rather use n-order balanced trees, and I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as with IDENTITY(1,1)) would cause multiple rebalances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC),
            [Padding] [char](300) NULL -- column name assumed; the excerpt omits it, but the inserts below supply a second value
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC),
            [Padding] [char](300) NULL -- column name assumed, as above
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)

        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end
        set @t2 = GETDATE()

        WAITFOR delay '0:0:10'

        set @t3 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end

        select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID]

        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUIDs, nor for the considerably larger size of the row. The results on my machine were as follows: Int: 17,340 ms; GUID: 6,746 ms. This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? PS: I get that this isn't a question; it's an invite to discussion, and that is relevant to learning optimum programming.

    Read the article

  • Difference between DirectCast() and CType() in VB.Net

    - by Chapso
    I am an experienced C/C++/C# programmer who has just gotten into VB.NET. I generally use CType (and CInt, CBool, CStr) for casts because it takes fewer characters and was the first way of casting I was exposed to, but I am aware of DirectCast and TryCast as well. Simply put: are there any differences (effect of the cast, performance, etc.) between DirectCast and CType? I understand the idea of TryCast.

    Read the article

  • What IPC method should I use between Firefox extension and C# code running on the same machine?

    - by Rory
    I have a question about how to structure communication between a (new) Firefox extension and existing C# code. The Firefox extension will use configuration data and will produce other data, so it needs to get the config data from somewhere and save its output somewhere. The data is produced/consumed by existing C# code, so I need to decide how the extension should interact with that code. Some pertinent factors:
    - It's only running on Windows, in a relatively controlled corporate environment.
    - I have a Windows service running on the machine, built in C#.
    - Storing the data in a local datastore (like SQLite) would be useful for other reasons.
    - The volume of data is low, e.g. 10 kB of uncompressed XML every few minutes, and the exchange isn't very 'chatty'.
    - The data exchange can be asynchronous for the most part, if not completely.
    - As with all projects, I have limited resources, so I want an option that's relatively easy. It doesn't have to be ultra-high performance, but it shouldn't add significant overhead.
    - I'm planning on building the extension in JavaScript (although I could be convinced otherwise if really necessary).
    Some options I'm considering:
    1. Use an XPCOM-to-.NET/COM bridge.
    2. Use a SQLite db: the extension would read from and save to it. The C# code would run in the service, populating the db and then processing data created by the service.
    3. Use TCP sockets to communicate between the extension and the service, and let the service manage a local data store.
    My problem with (1) is that I think it will be tricky and not so easy. But I could be completely wrong? The main problem I see with (2) is the locking of SQLite: only a single process can write data at a time, so there'd be some blocking. However, it would generally be nice to have a local datastore, so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard, or what approach to take on the protocol: something custom or HTTP. Any comments on these ideas or other suggestions? UPDATE: I was planning on building the extension in JavaScript rather than C++. (A minimal C# sketch of the service side of option 3 follows this item.)

    Read the article
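
    If option 3 is chosen, the C# side can be as small as a TCP listener bound to localhost inside the existing Windows service; the extension then connects with whatever socket or HTTP facility the Firefox side offers. Below is a minimal sketch of the service side only; the port (9123), the line-delimited framing and the echo response are illustrative assumptions, not part of the original question.

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;

        public static class ExtensionIpcListener
        {
            // Listens on localhost only; each request is one line of XML,
            // each response is one line (e.g. an acknowledgement or config).
            public static void Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, 9123); // port is an arbitrary choice
                listener.Start();
                Console.WriteLine("Listening on 127.0.0.1:9123");

                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    new Thread(() => HandleClient(client)) { IsBackground = true }.Start();
                }
            }

            private static void HandleClient(TcpClient client)
            {
                using (client)
                using (var stream = client.GetStream())
                using (var reader = new StreamReader(stream, Encoding.UTF8))
                using (var writer = new StreamWriter(stream, Encoding.UTF8) { AutoFlush = true })
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // The real service would save 'line' to the local datastore
                        // or look up configuration; here it just acknowledges.
                        writer.WriteLine("OK " + line.Length);
                    }
                }
            }
        }

    Named pipes would be another Windows-friendly transport on the C# side, but reaching them from extension JavaScript is harder than opening a socket, which is partly why the TCP variant is sketched here.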

  • How does JSON compare to XML in terms of file size and serialisation/deserialisation time?

    - by nbolton
    I have an application that performs a little slowly over the internet due to bandwidth constraints. I have enabled GZip, which has improved download time by a significant amount, but I was also considering whether I could switch from XML to JSON in order to squeeze out that last bit of performance. Would using JSON make the message size significantly smaller, or just somewhat smaller? Let's say we're talking about 250 kB of XML data (which compresses to 30 kB). (A small measurement sketch follows this item.)

    Read the article
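
    Rather than reasoning about it in the abstract, this is easy to measure for your own payload: serialize the same object graph as XML and as JSON and compare the raw and gzipped sizes. A minimal C# sketch; the Payload type and the choice of DataContractJsonSerializer alongside XmlSerializer are illustrative assumptions:

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Xml.Serialization;

        [DataContract]
        public class Payload
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Description { get; set; }
        }

        public static class SizeComparison
        {
            public static void Main()
            {
                var item = new Payload { Id = 42, Description = new string('x', 500) };

                byte[] xml = SerializeXml(item);
                byte[] json = SerializeJson(item);

                Console.WriteLine("XML : {0} bytes raw, {1} bytes gzipped", xml.Length, Gzip(xml).Length);
                Console.WriteLine("JSON: {0} bytes raw, {1} bytes gzipped", json.Length, Gzip(json).Length);
            }

            private static byte[] SerializeXml(Payload item)
            {
                using (var ms = new MemoryStream())
                {
                    new XmlSerializer(typeof(Payload)).Serialize(ms, item);
                    return ms.ToArray();
                }
            }

            private static byte[] SerializeJson(Payload item)
            {
                using (var ms = new MemoryStream())
                {
                    new DataContractJsonSerializer(typeof(Payload)).WriteObject(ms, item);
                    return ms.ToArray();
                }
            }

            private static byte[] Gzip(byte[] data)
            {
                using (var ms = new MemoryStream())
                {
                    using (var gz = new GZipStream(ms, CompressionMode.Compress))
                        gz.Write(data, 0, data.Length);
                    return ms.ToArray();
                }
            }
        }

    In practice the gzipped sizes tend to be much closer together than the raw sizes, because most of what GZip removes is exactly the repetitive markup that JSON would have saved.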

  • Application Engineering and Number of Users

    - by Kramii
    Apart from performance concerns, should web-based applications be built differently according to the number of (concurrent) users? If so, what are the main differences for (say) 4, 40, 400 and 4000 users? I'm particularly interested in how logging, error handling, design patterns etc. would be used according to the number of concurrent users.

    Read the article

  • Block write access to a table from an application in MySQL

    - by hoberion
    Hello. We have a CMS plugin that writes statistics to one table, and this creates performance issues on the entire platform. We decided to use another statistics plugin which can connect to a different database server (the first plugin couldn't!), but we still need parts of the first plugin. I want to lock the statistics table to prevent misuse (the developer must not be allowed to drop it). So I was wondering whether a table lock could do this, or whether I can implement some sort of read-only table.

    Read the article
