Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 37 of 1148

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
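    One direction worth sketching, given the thread-per-client design described above, is moving the receive path onto asynchronous I/O so the thread count no longer tracks the client count. The snippet below is a minimal, hypothetical illustration using the .NET 2.0 Begin/EndReceive pattern; the ClientSession type, the buffer size, and the HandleMessage hook are assumptions, not the poster's actual code.

        using System;
        using System.Net.Sockets;

        class ClientSession
        {
            private readonly Socket _socket;
            private readonly byte[] _buffer = new byte[4096];

            public ClientSession(Socket socket)
            {
                _socket = socket;
                PostReceive();
            }

            private void PostReceive()
            {
                // Asynchronous receive: the callback runs on an I/O completion
                // port thread instead of a thread blocked in Socket.Receive.
                _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
            }

            private void OnReceive(IAsyncResult ar)
            {
                int bytesRead = _socket.EndReceive(ar);
                if (bytesRead == 0) { _socket.Close(); return; }

                // Parse the message(s) in _buffer and enqueue them for the core
                // server logic here (hypothetical hook):
                // HandleMessage(_buffer, bytesRead);

                PostReceive(); // immediately post the next receive
            }
        }

    With the receives posted asynchronously, the I/O completion port pool services all 50-150 connections with a handful of threads, which usually removes the kind of scheduling jitter that shows up as intermittent 100 ms gaps.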

    Read the article

  • Performance Testing a .NET Smart Client Application (.NET ClickOnce technology)

    - by jn29098
    Has anyone ever had to run performance tests on a ClickOnce application? I have engaged with a vendor who had trouble setting up their toolset with our software because it is Smart Client based. They are understandably more geared toward purely browser-based applications. I wonder if anyone has had to tackle this before and, if so, would you recommend any vendors who use industry-standard tools such as LoadRunner (which I assume can handle a smart client)? Thanks

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN query on a Solr index. I've got two XML files that I have indexed, person.xml and subject.xml.

    Person:

        <doc>
          <field name="id">P39126</field>
          <field name="family">Smith</field>
          <field name="given">John</field>
          <field name="subject">S1276</field>
          <field name="subject">S1312</field>
        </doc>

    Subject:

        <doc>
          <field name="id">S1276</field>
          <field name="topic">Abnormalities, Human</field>
        </doc>

    I need to display information from the person doc only, but each query should match fields in both person and subject. In the case where the query matches only the subject doc, I need to display all person docs that have a matching id. Is this possible without running two separate queries? Something like a JOIN query would do the job. Any help?

    Read the article

  • Jython webapp performance

    - by DrPep
    I'm currently building a Jython web app but am concerned about Jython application performance. I take some comfort in the fact that I can write any compute-intensive tasks in a separate Java jar and invoke them from Jython. Has anyone had problems doing this, or does anyone foresee issues with such a setup?

    Read the article

  • OTRS slow performance main queue listing

    - by mrml
    We have the problem of a very slow queue listing. If there are more than 15 tickets in a single queue, it takes up to 4-5 seconds to create the view. This problem has occurred since we moved to OTRS 3.1. We are running OTRS 3.1.4 with the KIX4OTRS extension on a virtualized Ubuntu 10.04 LTS. What we have tried so far:

    - the known performance tweaks provided in the manuals
    - creating extra database indexes
    - installation on physical machines with Ubuntu 12.04 / 12.04.1 (no positive effects)

    Any ideas?

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the optimizer to not optimize using STRAIGHT_JOIN. PostgreSQL offers no such option. Here is the explain:

    Here is the query:

        SELECT avg(d.amount) AS amount, y.year
        FROM station s, station_district sd, year_ref y, month_ref m,
             daily d LEFT JOIN city c ON c.id = 10663
        WHERE
          -- Find all the stations within a specific unit radius ... --
          6371.009 * SQRT(
            POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
            (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
             POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
          ) <= 50 AND
          -- Ignore stations outside the given elevations --
          s.elevation BETWEEN 0 AND 2000 AND
          sd.id = s.station_district_id AND
          -- Gather all known years for that station ... --
          y.station_district_id = sd.id AND
          -- The data before 1900 is shaky; insufficient after 2009. --
          y.year BETWEEN 1980 AND 2000 AND
          -- Filtered by all known months ... --
          m.year_ref_id = y.id AND
          m.month = 12 AND
          -- Whittled down by category ... --
          m.category_id = '001' AND
          -- Into the valid daily climate data. --
          m.id = d.month_ref_id AND
          d.daily_flag_id <> 'M'
        GROUP BY y.year

    It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!

    Read the article

  • Wpf: Performance Issue

    - by viky
    I am working on a WPF application in which I am working with a TreeView. Each node represents a different datatype; these datatypes have properties defined and use data templates to show their properties. My application reads from XML and creates the tree accordingly. My problem is that when I load it, it is too slow. I want to know about the tricks that will help me improve the performance of my (or any) WPF application.
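    A common first step for a slow, template-heavy TreeView is UI virtualization, so only the visible nodes get containers and data templates applied. The sketch below is a general WPF technique and an assumption about this scenario, not something taken from the question; the helper and parameter names are made up.

        using System.Windows.Controls;

        // Minimal sketch: enable UI virtualization and container recycling on a
        // TreeView so off-screen nodes are not realized up front.
        public static class TreeViewTuning
        {
            public static void EnableVirtualization(TreeView treeView)
            {
                VirtualizingStackPanel.SetIsVirtualizing(treeView, true);
                VirtualizingStackPanel.SetVirtualizationMode(treeView, VirtualizationMode.Recycling);
            }
        }

    The same two attached properties can also be set directly in XAML on the TreeView element.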

    Read the article

  • Star Schema vs Snowflake Schema performance

    - by Megawolt
    Hi... I'm beginning to develop a social sharing website, so I'm curious about the database design schema... In data mining the star schema is the best one, but what about a social sharing website? By the nature of such websites there will be (I hope :)) many users at the same time... Which is better for performance under heavy use?

    Read the article

  • High performance web (-services) applications

    - by User Friendly
    Hi, I'd like to become a guru in high-performance web and web-services applications. What technologies/patterns/skills do you recommend looking at? Basically, I have good skills in ASP.NET/.NET-based web development, but I'd like to know how big things are built (on any platform, not just the .NET technology stack). Thank you.

    Read the article

  • int, short, byte performance in back-to-back for-loops

    - by runrunraygun
    (Background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c) To satisfy my own curiosity about the pros and cons of using the "appropriate size" integer vs the "optimized" integer, I wrote the following code, which reinforced what I previously held true about int performance in .NET (and which is explained in the link above), namely that it is optimized for int performance rather than short or byte.

        DateTime t;
        long a, b, c;

        t = DateTime.Now;
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        a = DateTime.Now.Ticks - t.Ticks;

        t = DateTime.Now;
        for (short index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        b = DateTime.Now.Ticks - t.Ticks;

        t = DateTime.Now;
        for (byte index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        c = DateTime.Now.Ticks - t.Ticks;

        Console.WriteLine(a.ToString());
        Console.WriteLine(b.ToString());
        Console.WriteLine(c.ToString());

    This gives roughly consistent results in the area of...

        ~950000
        ~2000000
        ~1700000

    which is in line with what I would expect to see. However, when I try repeating the loops for each data type like this...

        t = DateTime.Now;
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        a = DateTime.Now.Ticks - t.Ticks;

    the numbers are more like...

        ~4500000
        ~3100000
        ~300000

    which I find puzzling. Can anyone offer an explanation? NOTE: In the interest of comparing like for like I've limited the loops to 127 because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.
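    Independent of the int/short/byte question, the measurements above are likely dominated by Console.WriteLine and by the coarse (roughly 10-15 ms) resolution of DateTime.Now, plus JIT cost on the first loop. Below is a minimal sketch of a tighter harness using Stopwatch that keeps I/O out of the timed region; the repetition counts are arbitrary assumptions, not values from the question.

        using System;
        using System.Diagnostics;

        class LoopTiming
        {
            static void Main()
            {
                const int reps = 100000;   // arbitrary repetition count
                long sink = 0;             // keeps the loops from being optimized away

                // Warm-up pass so JIT compilation is not charged to the first measurement.
                for (int i = 0; i < 127; i++) { sink += i; }

                Stopwatch sw = Stopwatch.StartNew();
                for (int r = 0; r < reps; r++)
                    for (int i = 0; i < 127; i++) { sink += i; }
                sw.Stop();
                Console.WriteLine("int:   {0} ticks", sw.ElapsedTicks);

                sw = Stopwatch.StartNew();
                for (int r = 0; r < reps; r++)
                    for (short i = 0; i < 127; i++) { sink += i; }
                sw.Stop();
                Console.WriteLine("short: {0} ticks", sw.ElapsedTicks);

                sw = Stopwatch.StartNew();
                for (int r = 0; r < reps; r++)
                    for (byte i = 0; i < 127; i++) { sink += i; }
                sw.Stop();
                Console.WriteLine("byte:  {0} ticks", sw.ElapsedTicks);

                Console.WriteLine(sink);   // observe the result so the work is not dead code
            }
        }

    On a harness like this, any remaining differences come from the loop code itself rather than from console output or timer granularity.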

    Read the article

  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real life performance measurements - "Real life" meaning you are not just looping over the code in question, but it is just a short snippet within a large application running in a typical user scenario? My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers - often with a multiple of the average time. (The behavior is understandable considering the factors contributing to first execution penalty). My goal is to model aggregate values that reasonably reflect this, and can be calculated from aggregate values (like for the Gaussian, calculate mu and sigma from N, sum of values and sum of squares). In other terms, number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately and will have the average biased strongly even by a very small percentage of outliers. I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distributions models, and I think I could work out something, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;) Other aspects / ideas: Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately. Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - haven't found anything, but with some assumptions about the distribution it should be possible in principle. Why (since it was asked) - For a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a gaussian deviation plus first run penalty is ok. That doesn't work anymore for the distributions found above. [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
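    As a concrete baseline for the aggregate-only bookkeeping mentioned above (mu and sigma recovered from N, the sum of values, and the sum of squares), the running-moments identity is the piece being relied on; a heavier-tailed model would need additional aggregates beyond these two sums:

        \mu = \frac{S_1}{N}, \qquad
        \sigma^2 = \frac{S_2}{N} - \mu^2,
        \qquad\text{where}\quad S_1 = \sum_{i=1}^{N} x_i,\quad S_2 = \sum_{i=1}^{N} x_i^2 .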

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about fixing it. The database itself has 44 tables, and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the IMDB flat files. Also, the SQL query that I am about to show comes from that program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc.

        "QUERY PLAN"
        "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)"
        " Output: public.movies.title, public.movies.movieid, public.movies.year"
        " - Append (cost=39094.17..46491.79 rows=98 width=46)"
        " - HashAggregate (cost=39094.17..39094.87 rows=70 width=46)"
        " Output: public.movies.title, public.movies.movieid, public.movies.year"
        " - Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)"
        " Output: public.movies.title, public.movies.movieid, public.movies.year"
        " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
        " - Nested Loop (cost=0.00..7395.94 rows=28 width=46)"
        " Output: public.movies.title, public.movies.movieid, public.movies.year"
        " - Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)"
        " Output: akatitles.movieid, akatitles.language, akatitles.title, "
        " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
        " - Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)"
        " Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid"
        " Index Cond: (public.movies.movieid = akatitles.movieid)"

        SELECT * FROM
          ((SELECT DISTINCT title, movieid, year
            FROM movies
            WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
           UNION
           (SELECT movies.title, movies.movieid, movies.year
            FROM movies
            INNER JOIN akatitles ON movies.movieid = akatitles.movieid
            WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))) AS union_tmp2;

    Returns 612 rows in 9078 ms. Database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni

    Read the article

  • LINQ compiled query DataBind issue

    - by Brian
    Hello all, I have a pretty extensive reporting page that uses LINQ. There is one main function that returns an IQueryable object. I then further filter / aggregate the returned query depending on the report the user needs. I changed this function to a compiled query, it worked great, and the speed increase was astonishing. The only problem comes when I want to databind the results. I am databinding to a standard ASP.NET GridView and it works fine, no problems. I am also databinding to an ASP.NET chart control, and this is where my page is throwing an error. This works well:

        GridView gv = new GridView();
        gv.DataSource = gridSource;

    But this does not:

        Series s1 = new Series("Series One");
        s1.Points.DataBindXY(gridSource, "Month", gridSource, "Success");

    The error I receive is this:

        System.NotSupportedException Specified method is not supported

    When I look into my gridSource var at run time, I see this using a typical LINQ query:

        SELECT [t33].[value2] AS [Year], [t33].[value22] AS [Month], [t33].[value3] AS [Calls]......

    I see this after I change the query to compiled:

        {System.Linq.OrderedEnumerable<<>f__AnonymousType15<string,int,int,int,int,int,int,int>,string>}

    This is obviously the reason why the DataBindXY is no longer working, but I am not sure how to get around it. Any help would be appreciated! Thanks
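    One common workaround, offered here only as a hedged sketch rather than a confirmed fix, is to materialize the compiled query's result before handing it to the chart, so that DataBindXY receives a plain in-memory list instead of the deferred OrderedEnumerable; gridSource and the field names below come from the question, everything else is assumed.

        using System.Linq;
        using System.Web.UI.DataVisualization.Charting;

        // Force execution of the (compiled) query once, then bind the resulting list.
        var rows = gridSource.ToList();

        Series s1 = new Series("Series One");
        s1.Points.DataBindXY(rows, "Month", rows, "Success");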

    Read the article

  • SQLite self-join performance

    - by Derk
    What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables:

        product_to_value (product_id, feature_id, value_id)
        features (id, value)  // for example Processor family, Storage size, etc.
        values (id, value)    // for example Intel, 60GB, etc.

    The simplified query I have now:

        SELECT features.name, featurevalues.name, featurevalues.value
        FROM products, products AS prod2,
             features, features AS feat2,
             values, values AS val2
        WHERE products.feature = features.id
          AND products.value = values.id
          AND products.product = prod2.product
          AND prod2.feature_id = feat2.id
          AND prod2.value_id = val2.id
          AND features.id = ?
          AND feat2.id = ?

    All columns have an index. I am using SQLite. The problem is that it's very slow (70 ms per query; without the self-join it's <1 ms). Is there a smarter way to fetch data like this? Or is this too much to ask of SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.

    Read the article

  • .Net concurrency performance on client side

    - by Yaron Naveh
    I am writing a client-side .NET application which is expected to use a lot of threads. I was warned that .NET performance is very bad when it comes to concurrency. While I am not writing a real-time application, I want to make sure my application is scalable (i.e. allows many threads) and somehow comparable to an equivalent C++ application. Can anyone share their experience? Can anyone refer me to a relevant benchmark?

    Read the article

  • Ruby Performance Profiling

    - by JustSmith
    I'm developing some code that calls another function and then sends out its response. If the said function takes too long, I want to record this. Are there any lightweight, FREE performance profiling tools for Ruby (not on Rails) that can do this? I'm even open to any solution that is accurate.

    Read the article

  • Does visibility affect DOM manipulation performance?

    - by Chetan Sastry
    IE7/Windows XP. I have a third-party component in my page that does a lot of DOM manipulation to adjust itself each time the browser window is resized. Unfortunately I have little control over what it does internally, and I have optimized everything else (such as callbacks and event handlers) as much as I can. I can't take the component out of the flow by setting display:none because it fails to measure itself if I do so. In general, does setting the container's visibility to hidden during the resize help improve DOM rendering performance?

    Read the article

  • Entity Framework query

    - by carter-boater
    Hi all, I have a piece of code that I don't know how to improve. I have two entities: EntityP and EntityC. EntityP is the parent of EntityC; it is a 1-to-many relationship. EntityP has a property that depends on a property of all its attached EntityC. I need to load a list of EntityP with the property set correctly. So I wrote a piece of code to get the EntityP list first; it's called entityP_List. Then, as I wrote below, I loop through entityP_List and, for each item, I query the database with an "Any" function, which will eventually be translated to a "NOT EXISTS" SQL query. The reason I use this is that I don't want to load all the attached EntityC from the database into memory, because I only need the aggregated value of their property. But the problem here is that the looping will query the database many times, once for each EntityP! So I am wondering if anybody can help me improve the code to query the database only once to get all the EntityP.IsAll_C_Complete values set, without loading EntityC into memory.

        foreach (EntityP p in entityP_List)
        {
            isAnyNotComplete = entities.entityC.Any(c => c.IsComplete == false && c.parent.ID == p.ID);
            p.IsAll_C_Complete = !isAnyNotComplete;
        }

    Thank you very much!
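    One way to do this in a single round trip, sketched here under the assumptions that the key is an int and that the navigation to the parent ID is the same c.parent.ID used in the question, is to pull back just the distinct parent IDs that still have an incomplete child and then set the flag in memory:

        using System.Collections.Generic;
        using System.Linq;

        // One database query: distinct IDs of parents that still have an
        // incomplete child (translates to a single grouped/EXISTS-style SQL query).
        HashSet<int> parentsWithIncomplete = new HashSet<int>(
            entities.entityC
                    .Where(c => c.IsComplete == false)
                    .Select(c => c.parent.ID)
                    .Distinct());

        // No further database access: set the flag from the in-memory set.
        foreach (EntityP p in entityP_List)
        {
            p.IsAll_C_Complete = !parentsWithIncomplete.Contains(p.ID);
        }

    The EntityC rows themselves are never loaded; only the distinct parent IDs cross the wire.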

    Read the article

  • A jscript variable in a Query

    - by lerac
    Maybe a very simple question. How can I put a variable from jscript into this code, in the place where mr. R. Sanches is written?

        <Query>
          <Where>
            <Eq>
              <FieldRef Name="Judge_x0020_1" />
              <Value Type="Text">mr. R. Sanches</Value>
            </Eq>
          </Where>
        </script>
        </Query>

    So my jScript contains a dynamic text variable I want to replace mr. R. Sanches with. See where it says THE JAVASCRIPT VAR underneath here:

        <Query>
          <Where>
            <Eq>
              <FieldRef Name="Judge_x0020_1" />
              <Value Type="Text">THE JAVASCRIPT VAR</Value>
            </Eq>
          </Where>
        </script>
        </Query>

    Read the article

  • Slow performance of System.Math library in .NET4/VS2010

    - by Niranjan
    My application compiled in .NET 4 seems to perform really slowly compared to .NET 3.5. When I did the performance analysis, I found that the System.Math library in VS2010/.NET 4 has slowed down considerably. Is there any explanation for this? Has anyone else come across this, or am I the only one seeing it? Thanks, Niranjan

    Read the article

  • GWT ScrollTable performance problem

    - by wolfi
    Hey all, I'm having a little performance problem with the GWT (incubator) ScrollTable: it's rendering really slowly. It's not only when I'm loading a lot of data - it happens already with just a few rows. Or is it possible that deserializing the data takes that long? I'm using GWT 2.0 and IE. Maybe someone has the same problem or a solution for it. Thx and Happy Easter!

    Read the article
