Search Results

Search found 16126 results on 646 pages for 'wcf performance'.

Page 336/646 | < Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >

  • Is AMQP suitable as both an intra and inter-machine software bus?

    - by Bwooce
    I'm trying to get my head around AMQP. It looks great for inter-machine communication (cluster, LAN, WAN) between applications, but I'm not sure if it is suitable (in architectural and current-implementation terms) for use as a software bus within one machine. Would it be worth ripping out a current high-performance message-passing framework and replacing it with AMQP, or does this fall into the same trap as RPC by blurring the distinction between local and non-local communication? I'm also wary of the performance impact of using a WAN technology for intra-machine communication, although this may be more of an implementation concern than an architectural one. War stories would be appreciated.

    Read the article

  • Is there a way that I can hard code a const XmlNameTable to be reused by all of my XmlTextReader(s)?

    - by highone
    Before I continue I would just like to say I know that "Premature optimization is the root of all evil." However, this program is only a hobby project and I enjoy trying to find ways to optimize it. That being said, I was reading an article on improving XML performance and it recommended sharing "the XmlNameTable class that is used to store element and attribute names across multiple XML documents of the same type to improve performance." I wasn't able to find any information about doing this in my googling, so it is likely that this is either not possible, a no-no, or a stupid question - but what's the harm in asking?
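
    For illustration, a minimal sketch of the sharing the article describes. C#'s const only works for compile-time constants, so a static readonly field is the closest thing to a "hard coded" table; NameTable is the stock XmlNameTable implementation, and XmlTextReader has a constructor overload that accepts one. (NameTable is not documented as thread-safe, so share per thread or synchronize access.)

        using System.Xml;

        class SharedNameTableDemo
        {
            // 'const' is out: it only works for compile-time constants.
            // A static readonly NameTable is the practical equivalent.
            private static readonly NameTable SharedNames = new NameTable();

            static void ReadAll(string[] paths)
            {
                foreach (string path in paths)
                {
                    // Every reader atomizes element/attribute names into the
                    // same shared table instead of rebuilding its own.
                    using (var reader = new XmlTextReader(path, SharedNames))
                    {
                        while (reader.Read()) { /* process nodes */ }
                    }
                }
            }
        }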

    Read the article

  • WPF DataGrid Vs Windows Forms DataGridView

    - by Mrk Mnl
    I have experience in WPF and Windows Forms, but have only used the Windows Forms DataGridView and not the WPF DataGrid (which was only included in .NET 4, or could be added to .NET 3.5 from CodePlex, I understand). I am about to develop an app that uses one of these controls heavily for large amounts of data, and I have read that performance is an issue with the WPF DataGrid, so I may stick to the Windows Forms DataGridView. Is this the case? I do not want to use a 3rd-party control. Does the Windows Forms DataGridView offer significantly better performance than the WPF DataGrid for large amounts of data? If I were to use WPF I would prefer to target .NET 3.5 SP1, unless the DataGrid in .NET 4 is significantly better. Also, I want to use ADO.NET DataTables, which I feel are better suited to Windows Forms.
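
    In case it helps the evaluation: the .NET 4 WPF DataGrid supports UI virtualization, which is the main lever for large data sets. A hedged sketch using the .NET 4 DataGrid properties:

        using System.Windows.Controls;

        static void ConfigureForLargeData(DataGrid grid)
        {
            // Row virtualization is on by default in the .NET 4 DataGrid;
            // column virtualization is off by default.
            grid.EnableRowVirtualization = true;
            grid.EnableColumnVirtualization = true;
        }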

    Read the article

  • Optimising speed in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (ten 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in parallel in order to speed up my data handling. As for the PostgreSQL table, the overall record count is 140 million, and I have 5 tables referring to it via primary/foreign keys. I am not using joins, as they are not scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)

    (and similarly, five such queries). Each computer writes two HDF5 files, so the total count comes to around 20 files. Some calculations and statistics:

        Total number of records:      143,700,000
        Records per file:             143,700,000 / 20 = 7,185,000
        Total entries in each file:   7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB RAM and a 2nd-generation i7 processor. I made changes to the following in the PostgreSQL configuration file - shared_buffers: 2 GB, effective_cache_size: 4 GB. Note on current performance: I have run it for about ten hours, and the number of entries written per file so far is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions: 1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable? 2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process? 3. Should I use multithreading? In that case, any links or pointers would be of great help. Thanks, Sree Aurovindh V

    Read the article

  • How do we achieve "substring-match" under O(n) time?

    - by Pacerier
    I have an assignment that requires reading a huge file of random inputs, for example:

        Adana
        Izmir Adnan Menderes Apt
        Addis Ababa
        Aden
        ADIYAMAN
        ALDAN
        Amman Marka Intl Airport
        Adak Island
        Adelaide Airport
        ANURADHAPURA
        Kodiak Apt
        DALLAS/ADDISON
        Ardabil
        ANDREWS AFB

    etc. If I specify a search term, the program is supposed to find the lines in which that substring occurs. For example, if the search term is "uradha", the program is supposed to show ANURADHAPURA. If the search term is "airport", the program is supposed to show Amman Marka Intl Airport and Adelaide Airport. A quote from the assignment specs: "You are to program this application taking efficiency into account as though large amounts of data and processing is involved." I could easily achieve this functionality using a loop, but the performance would be O(n). I was thinking of using a trie, but it seems to only work if the substring starts from index 0. I was wondering what solutions there are that give performance better than O(n)?
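
    One hedged sketch of a sublinear-per-query approach: build a suffix array over the text once (naively here, O(n^2 log n) worst case to build), then answer each substring query by binary search in O(m log n), where m is the pattern length. To map hits back to lines, one would index the concatenation of all lines with a separator character and record line boundaries; that bookkeeping is omitted below.

        using System;

        class SubstringIndex
        {
            private readonly string _text;
            private readonly int[] _suffixes; // suffix start offsets, sorted lexicographically

            // For case-insensitive matching, build the index over
            // text.ToLowerInvariant() and lower-case each pattern too.
            public SubstringIndex(string text)
            {
                _text = text;
                _suffixes = new int[text.Length];
                for (int i = 0; i < text.Length; i++) _suffixes[i] = i;
                // Naive build: sort suffix offsets by ordinal comparison.
                Array.Sort(_suffixes, (a, b) =>
                    string.CompareOrdinal(_text, a, _text, b, _text.Length));
            }

            // True if pattern occurs anywhere in the text: O(m log n) per query.
            public bool Contains(string pattern)
            {
                int lo = 0, hi = _suffixes.Length - 1;
                while (lo <= hi)
                {
                    int mid = (lo + hi) / 2;
                    int cmp = string.CompareOrdinal(
                        _text, _suffixes[mid], pattern, 0, pattern.Length);
                    if (cmp == 0) return true;          // pattern is a prefix of this suffix
                    if (cmp < 0) lo = mid + 1; else hi = mid - 1;
                }
                return false;
            }
        }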

    Read the article

  • CgiModule and FastCgiModule in IIS7

    - by hari
    My web server is IIS7 running on Windows 2008 Web edition. Checking the pre-installed "Modules", there are nearly 40 of them, including CgiModule and FastCgiModule. All the websites installed on this server run purely on ASP.NET. Can I remove these two modules to improve performance? In the same way, my application uses "Forms Authentication" only; in that case, can I delete "Windows Authentication" and the WindowsAuthenticationModule? Please also suggest any other modules that can be deleted to improve performance.
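
    As a hedged sketch of the mechanics only (verify the module names against your server's actual module list, and note that native modules may need to be removed server-wide in applicationHost.config rather than per-site):

        <configuration>
          <system.webServer>
            <modules>
              <!-- Not needed when every site runs plain ASP.NET -->
              <remove name="CgiModule" />
              <remove name="FastCgiModule" />
              <!-- Only if Forms authentication is truly the only scheme in use -->
              <remove name="WindowsAuthenticationModule" />
            </modules>
          </system.webServer>
        </configuration>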

    Read the article

  • from ggplot2 to OOo workflow?

    - by Andreas
    This is not really a programming question, but I'll try here nonetheless. I once used LaTeX for my reports, but the people I work with need to make small edits and do not have LaTeX skills. OpenOffice is then the way to go. But saving ggplot images with dpi = 100 makes for really ugly graphs, and dpi = 600 is a no-go (e.g. huge legend). So what to do? I currently save (still via ggsave) to EPS, which OpenOffice can import, but performance is not good at all. Googling, I found a bug report for the poor EPS performance in OOo, and also talk about an unimplemented SVG feature, but neither helps me right now. If you work with ggplot2 and OOo - what do you do? I have been unsuccessful with PDF conversion for some reason.

    Read the article

  • How to reduce Java concurrent mode failures and excessive GC

    - by jimx
    In Java, a concurrent mode failure means that the concurrent collector failed to free up enough memory in the tenured and permanent generations, and has to give up and let a full stop-the-world GC kick in. The end result can be very expensive. I understand this concept, but I have never had a good, comprehensive understanding of (A) what can cause a concurrent mode failure, and (B) what the solution is. This uncertainty leads me to write and debug code without much to go on, and I often have to shop around among performance flags, from Foo to Bar, without particular reason - I just have to try them. I'd like to learn from developers here what your experience is. If you have encountered such performance issues before, what was the cause and how did you address it? If you have coding recommendations, please don't be too general. Thanks!
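
    For concreteness, these are the sorts of HotSpot flags people shop around with when tuning CMS - a hedged starting point, not a recipe (the occupancy threshold of 60 is an illustrative value, and MyApp stands in for your main class). Starting the concurrent cycle earlier gives CMS headroom to finish before the tenured generation fills, which is the usual way to avoid concurrent mode failures:

        java -Xms512m -Xmx512m \
             -XX:+UseConcMarkSweepGC \
             -XX:CMSInitiatingOccupancyFraction=60 \
             -XX:+UseCMSInitiatingOccupancyOnly \
             -XX:+PrintGCDetails -Xloggc:gc.log \
             MyApp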

    Read the article

  • Is there a difference between Perl's shift versus assignment from @_ for subroutine parameters?

    - by cowgod
    Let us ignore for a moment Damian Conway's best practice of no more than three positional parameters for any given subroutine. Is there any difference between the two examples below in regards to performance or functionality?

    Using shift:

        sub do_something_fantastical {
            my $foo   = shift;
            my $bar   = shift;
            my $baz   = shift;
            my $qux   = shift;
            my $quux  = shift;
            my $corge = shift;
        }

    Using @_:

        sub do_something_fantastical {
            my ($foo, $bar, $baz, $qux, $quux, $corge) = @_;
        }

    Provided that both examples are the same in terms of performance and functionality, what do people think about one format over the other? Obviously the example using @_ is fewer lines of code, but isn't it more legible to use shift as shown in the other example? Opinions with good reasoning are welcome.

    Read the article

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically affinity) when using TBB? Doing a high-level analysis of a simple parallel-for application, it seems like TBB is specifying the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even when a different core is left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.

    Read the article

  • Compressing an Array of Bytes

    - by Pedro Magalhaes
    Hi, my problem is this: I want to store an array of bytes in a compressed file and then read it back with good performance. So I create an array of bytes, pass it to a ZLIB algorithm, and store the result in the file. To my surprise, the algorithm doesn't compress well, probably because the array is a random sample. Using this approach, reading would be very easy: just copy the stream to memory, decompress it, and copy it into an array of bytes. But I need the file to be compressed. Do I have to use an algorithm like RLE to compress the byte array? I think I could store the byte array as a string and then compress it, but I suspect I would get poor read performance. Sorry for my poor English. Thanks
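
    One data point that may save some experimenting: truly random bytes are incompressible by any general-purpose codec (RLE included), so no algorithm choice will help there; compression only pays off when the array has structure. For reference, a minimal sketch of zlib-style deflate in .NET using System.IO.Compression.DeflateStream:

        using System.IO;
        using System.IO.Compression;

        static class ByteArrayCodec
        {
            public static byte[] Compress(byte[] data)
            {
                using (var output = new MemoryStream())
                {
                    using (var deflate = new DeflateStream(output, CompressionMode.Compress))
                        deflate.Write(data, 0, data.Length);
                    return output.ToArray();
                }
            }

            public static byte[] Decompress(byte[] compressed)
            {
                using (var input = new MemoryStream(compressed))
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    deflate.CopyTo(output); // .NET 4; loop over Read() on older frameworks
                    return output.ToArray();
                }
            }
        }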

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the coming of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for so long, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C#'s using common ad-hoc techniques.)

    Read the article

  • Comparison of collection datatypes in C#

    - by Joel in Gö
    Does anyone know of a good overview of the different C# collection types? I am looking for something showing which basic operations - such as Add, Remove, RemoveLast, etc. - are supported, and giving their relative performance. It would be particularly interesting for the various generic classes, and even better if it showed e.g. whether there is a difference in performance between a List<T> where T is a class and one where T is a struct. A start would be a nice cheat-sheet for the abstract data structures, comparing linked lists, hash tables, etc. Thanks!
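
    Until someone links a proper reference, here is a rough cheat-sheet drawn from the documented complexities of the generic collections (average cases; "w/ node" means you already hold the LinkedListNode):

        Collection              Add              Remove           Lookup
        List<T>                 O(1) amortized   O(n)             O(1) by index, O(n) search
        LinkedList<T>           O(1) at ends     O(1) w/ node     O(n)
        Dictionary<K,V>         O(1) average     O(1) average     O(1) average
        HashSet<T>              O(1) average     O(1) average     O(1) average
        SortedDictionary<K,V>   O(log n)         O(log n)         O(log n)
        SortedList<K,V>         O(n)             O(n)             O(log n)

    On the struct-versus-class question: a List<T> of structs stores its elements inline in the internal array, while a List<T> of reference types stores pointers to heap objects, so structs avoid one indirection and tend to be friendlier to the cache - at the cost of copying on access if the struct is large.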

    Read the article

  • Extract a sentence out of sentences separated by delimiters

    - by Laura
    Below is a sample line I have extracted from a website:

        below a satisfactory level; &quot;an off year for tennis&quot;; &quot;his performance was off&quot;

    The output displays as:

        below a satisfactory level; "an off year for tennis"; "his performance was off"

    I want to get only the first part, "below a satisfactory level". Here is the code I have tried after exploring many Stack Overflow posts:

        $data = explode('; ', $str);
        echo $data[0];

    But somehow it is not working. Thanks in advance.

    Read the article

  • What does "Application Server" in Windows Server 2008 mean?

    - by In Sane
    Looking at the Windows Server 2008 R2 edition comparison by role, I noticed that there is an entry for Application Server separate from that of IIS: http://www.microsoft.com/windowsserver2008/en/us/r2-compare-roles.aspx. What confuses me is that for the Web edition, "Application Server" is not ticked but IIS is. Isn't IIS both the web server and the application server in Windows? And if so, if I take the Web edition, can I not host my business components (WCF services) on it because it is not an "Application Server"?

    Read the article

  • How to sum up an array of integers in C#

    - by Filburt
    Is there a better, shorter way than iterating over the array?

        int[] arr = new int[] { 1, 2, 3 };
        int sum = 0;
        for (int i = 0; i < arr.Length; i++)
        {
            sum += arr[i];
        }

    Clarification: "better" primarily means cleaner code, but hints on performance improvement are also welcome (like the already-mentioned splitting of large arrays). It's not that I was looking for a killer performance improvement - I just wondered whether this very kind of syntactic sugar wasn't already available: "There's String.Join - what the heck about int[]?"
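
    That sugar does exist as of .NET 3.5: LINQ's Enumerable.Sum extension method.

        using System.Linq;

        int[] arr = { 1, 2, 3 };
        int sum = arr.Sum(); // the int[] counterpart to String.Join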

    Read the article

  • Remove items from SWT tables

    - by Dima
    This is more of an answer I'd like to share, for a problem I was chasing for some time in an RCP application using large SWT tables. The problem is the performance of the SWT Table.remove(int start, int end) method. It gives really bad performance - about 50 msec per 100 items on my Windows XP. But the real showstopper was on Vista and Windows 7, where deleting 100 items would take up to 5 seconds! Looking into the source code of Table shows that there is a huge number of windowing events flying around in this call, which brings the windowing system to its knees. The solution was to hide the thing during the call:

        table.setVisible(false);
        table.remove(from, to);
        table.setVisible(true);

    That does wonders: deleting 500 items on both XP and Windows 7 takes ~15 msec, which is just the overhead of the timestamp printing I used. Nice :)

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that:

        - Querying on a primary key or unique index will give you O(1) lookup time.
        - Querying on a non-unique index will also give O(1) time, albeit maybe the '1' is slower than for the unique index (?).
        - Querying on a column without an index will give O(N) lookup time (full table scan).

    Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is for SQLite, but I'd be interested in knowing to what extent this varies between different databases too.

    Read the article

  • which is better, creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanded columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a materialized view, I am worried whether performance can really be greatly improved, because the data in the other tables changes very frequently - most likely, the view would need to be rebuilt every time before selecting from it. Any ideas? E.g., how to cache? Any other extra measures I can take?

    Read the article

  • Is there a module that implements an efficient array type in Erlang?

    - by dsmith
    I have been looking for an array type with the following characteristics in Erlang:

        append(vector(), term())        O(1)
        nth(Idx, vector())              O(1)
        set(Idx, vector(), term())      O(1)
        insert(Idx, vector(), term())   O(N)
        remove(Idx, vector())           O(N)

    I normally use a tuple for this purpose, but the performance characteristics are not what I would want for large N. My testing shows the following performance characteristics:

        erlang:append_element/2   O(N)
        erlang:setelement/3       O(N)

    I have started on a module based on the clojure.lang.PersistentVector implementation, but if it's already been done I won't reinvent the wheel.

    Read the article

  • I need a tutorial for IIS/ASP.Net administration

    - by TheCodeMonk
    My technical support staff is only a little familiar with IIS, and that's only as far as installation goes. They need to install, update, and configure ASP.NET web applications and WCF services that we write. Does anyone know of any good tutorials/books/web sites that can help them understand some of the basic concepts of web applications and of maintaining them on IIS?

    Read the article
