Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.


  • DBCC MEMUSAGE in 2005/8?

    - by steveh99999
    I used to like using the undocumented command DBCC MEMUSAGE in SQL 2000 to see which tables were using space in the SQL data cache. In SQL 2005, this command is no longer present. Instead a DMV – sys.dm_os_buffer_descriptors – can be used to display data cache contents, but this doesn’t quite give you the same output as DBCC MEMUSAGE. I’m also aware that you can use Quest’s Spotlight tool to view a summary of data cache contents. Using this post by Umachandar Jayachandran of Microsoft, I was able to create the following equivalent for SQL 2005/8. I’ve wrapped Umachandar’s original query in a CTE to produce summary information:

        ;WITH memusage_CTE AS
        (
            SELECT bd.database_id, bd.file_id, bd.page_id, bd.page_type,
                   COALESCE(p1.object_id, p2.object_id) AS object_id,
                   COALESCE(p1.index_id, p2.index_id) AS index_id,
                   bd.row_count, bd.free_space_in_bytes,
                   CONVERT(TINYINT, bd.is_modified) AS 'DirtyPage'
            FROM sys.dm_os_buffer_descriptors AS bd
            JOIN sys.allocation_units AS au
                ON au.allocation_unit_id = bd.allocation_unit_id
            OUTER APPLY
            (
                SELECT TOP(1) p.object_id, p.index_id
                FROM sys.partitions AS p
                WHERE p.hobt_id = au.container_id AND au.type IN (1, 3)
            ) AS p1
            OUTER APPLY
            (
                SELECT TOP(1) p.object_id, p.index_id
                FROM sys.partitions AS p
                WHERE p.partition_id = au.container_id AND au.type = 2
            ) AS p2
            WHERE bd.database_id = DB_ID()
              AND bd.page_type IN ('DATA_PAGE', 'INDEX_PAGE')
        )
        SELECT TOP 20
               DB_NAME(database_id) AS 'Database',
               OBJECT_NAME(object_id, database_id) AS 'Table Name',
               index_id,
               COUNT(*) AS 'Pages in Cache',
               SUM(dirtyPage) AS 'Dirty Pages'
        FROM memusage_CTE
        GROUP BY database_id, object_id, index_id
        ORDER BY COUNT(*) DESC

    I’m not 100% happy with the results of the above query, however. I’ve noticed that on a busy BizTalk MessageBox database it will return information on pages that contain GHOST rows, i.e. where data has already been deleted but has yet to be cleaned up by a background process. I need to investigate further why the cache on this server apparently contains so much GHOST data. For more information on the background ghost cleanup process, see this article by Paul Randal. However, I think the results of this query should still be of interest to a DBA. I have another post to come shortly regarding an example I encountered where this information proved useful to me. I notice that in SQL 2008, sys.dm_os_buffer_descriptors gained an extra column – numa_node – and I’m interested to see how this is populated and how useful this column can be on a NUMA-enabled system. I’m assuming that in theory you could use this column to help analyse how your tables are spread across the NUMA-enabled data cache?

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?

    Answer
    ------
    SOX compliance with regards to IT:
    1. Requires auditing of data changes: by whom, what, and when.
        a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports.
        b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties:
        a. With respect to TPM, you could have the deduction and financial analyst for settlement be different from the promotion creator, promotion approver or sales team.
        b. The budget approver for funds can be different from the funds consumer.
        c. The promotion creator can be different from the promotion approver.
        d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.
    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. Outside of Demantra - Consumer Goods Trade Funds Analytics.

    Read the article

  • Two Free Training Webcasts Open for Registration

    - by KKline
    We've got two sessions that you need to sign up for right away. The upcoming webcast for Oracle-oriented folks has huge registration numbers. So get in while you still can before we hit the limit of what LiveMeeting can handle.
    Pain of the Week: SQL Server for the Oracle DBA
    Webcast: SQL Server for the Oracle DBA
    Date: Thursday, May 27, 2010 (just a couple of days hence!)
    Time: 8 a.m. Pacific / 11 a.m. Eastern / 4 p.m. United Kingdom / 5 p.m. Central Europe
    Duration: 45-60 minutes
    Cost: FREE
    In enterprise...(read more)

    Read the article

  • Query Tuning Mastery at PASS Summit 2012: The Video

    - by Adam Machanic
    An especially clever community member was kind enough to reverse-engineer the video stream for me, and came up with a direct link to the PASS TV video stream for my Query Tuning Mastery: The Art and Science of Manhandling Parallelism talk, delivered at the PASS Summit last Thursday. I'm not sure how long this link will work, but I'd like to share it for my readers who were unable to see it in person or live on the stream. Start here. Skip past the keynote, to the 149-minute mark. Enjoy!...(read more)

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high-traffic websites that all share the same database and that are all heavily database driven. Our SQL Server is maxed out and, although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites, but the type of queries we use negates using SQL dependency caching. We tried SQL replication to try to load balance, but that didn't prove very successful because the replication process is quite demanding on the servers too, and it needed to be done frequently as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised based on the user we can only do so much.
    Anyway, the reason for this question: Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query then it gets recorded, and the result is requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time then the result gets retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications.
    This seems like a good idea in theory. There's still the same amount of data moving back and forth from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that sort of emulates requests and responses from SQL Server, whether SQL Server's own caching is doing enough of this already that this wouldn't be a benefit, or even whether someone has done this before and I haven't found it? I would welcome any feedback or any references to any relevant projects.
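    To make the idea concrete, here is a minimal, in-process C# sketch of the kind of caching layer described above: results are cached by the exact query text for a fixed time-to-live, so repeat queries never reach SQL Server. The QueryCache class, its method names and the TTL policy are hypothetical illustrations rather than an existing product; a real standalone caching server would also need parameter handling, memory limits and SqlDependency-style invalidation.

        using System;
        using System.Collections.Concurrent;
        using System.Data;
        using System.Data.SqlClient;

        // Hypothetical sketch of the proposed caching layer, kept in-process for brevity.
        // Results are keyed by the exact query text and served locally until they expire.
        public static class QueryCache
        {
            private static readonly ConcurrentDictionary<string, (DataTable Data, DateTime Expires)> Cache =
                new ConcurrentDictionary<string, (DataTable Data, DateTime Expires)>();

            public static DataTable GetDataTable(string connectionString, string sql, TimeSpan ttl)
            {
                // Repeat request within the TTL: return the local copy, no round trip to SQL Server.
                if (Cache.TryGetValue(sql, out var entry) && entry.Expires > DateTime.UtcNow)
                    return entry.Data;

                // First-time (or expired) query: fetch from SQL Server and record the result.
                var table = new DataTable();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(table); // Fill opens and closes the connection itself
                }

                Cache[sql] = (table, DateTime.UtcNow.Add(ttl));
                return table;
            }
        }

    Whether something like this beats SQL Server's own buffer pool and plan caching depends entirely on how expensive the repeated queries are to execute, which is exactly the question raised above.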

    Read the article

  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or web services warm and save users from seeing the long warm-up time after an application pool recycle, an IIS restart, a new code deployment or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the sites and services and keep them warm. Here's how: first get tinyget from here. Download and install the IIS 6.0 Resource Kit on some PC. Then copy tinyget.exe from “c:\program files…\IIS 6.0 ResourceKit\Tools\tinyget” to the server where your IIS 6.0 or IIS 7 is running. Then create a batch file that will hit the pages and web services. Something like this:

        SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which allows ASP.NET to compile the service if not already compiled, walk through all the operations and reflect on them, and thus load all related DLLs into memory and reduce the warm-up time when hit. Tinyget takes the server's name or IP in the –srv parameter and then the actual URI in –uri. I have specified the HTTP response code to expect in the –status parameter. It ensures the site is alive and is returning an HTTP 200 code. Besides just warming up a site, you can do some load testing on the site. Tinyget can run in multiple threads and run loops to hit some URL. You can literally blow up a site with commands like this:

        "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

    Tinyget is also pretty useful for running automated tests. You can record HTTP posts in a text file and then use it to make HTTP posts to some page. Then you can add a matching clause to check for a certain string in the output to ensure the correct response is given. Thus with some simple command line commands, you can warm up, do some transactions, validate that the site is giving the correct response, as well as run a load test to ensure the server is performing well. Very cheap way to get a lot done.
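    If you don't have the IIS Resource Kit handy, a small console program can perform the same warm-up hits. The sketch below is my own illustration, not part of the original post: it uses HttpClient to request each URL and checks for an HTTP 200, mirroring what the tinyget batch file does (the class name WarmUp is made up; the URLs are the ones from the post).

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        // Rough HttpClient equivalent of the tinyget warm-up batch file:
        // hit each URL and fail loudly if the response is not HTTP 200.
        class WarmUp
        {
            static async Task Main()
            {
                var urls = new[]
                {
                    "http://dropthings.omaralzabir.com/",
                    "http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL"
                };

                using var client = new HttpClient();
                foreach (var url in urls)
                {
                    var response = await client.GetAsync(url);
                    Console.WriteLine($"{(int)response.StatusCode} {url}");
                    if (response.StatusCode != System.Net.HttpStatusCode.OK)
                        throw new Exception($"Warm-up request failed for {url}");
                }
            }
        }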

    Read the article

  • Looking for WAMP Benchmarking (my current WAMP is very slow, so are other solutions)

    - by therobyouknow
    I'm running the ZWAMP WAMP stack on my local development machine. However, I have found it to be very slow at serving pages from a Drupal site I have set up. By contrast, my live production site on shared hosting is reasonably quick. For me the goal with a local WAMP stack was to develop offline and send completed work to the live production site. I liked ZWAMP because it didn't require adjustments to User Access Control or other permissions. I've looked at the Drupal Acquia development stack but found it too restrictive: only one site instance/doc root can be installed. I've looked at other DAMP stacks and heard reports of them being slow. My local development machine that I am running the WAMP stack on is a dual-core, 2.6 GHz, hyperthreaded Intel i7 with 4 GB RAM and a 7200 rpm hard disk, running 64-bit Windows Professional. Surely this is fast enough. So I'm looking for:
    - Causes of the slowness of the WAMP stack and how to improve the speed
    - Benchmark data for various WAMP stacks

    Read the article

  • Average SPA weight [on hold]

    - by Emmanuel Istace
    First, sorry for the noob questions, but I'm mainly a Windows developer and not a web developer :) I'm developing a single-page application with a lot of CSS & JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:
    Document: 10 KB
    Style: 60 KB
    Images: 450 KB (already compressed, including a big gallery of thumbnails)
    Javascript: 700 KB - 600 KB of "framework" (jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ...) and 100 KB of custom JS
    Fonts: 125 KB
    And the site is not finished yet. (It will include the Google Maps API, and some others...) My questions are:
    Do you have any statistics about the average weight of an SPA?
    As this is the whole website, do you think it's acceptable?
    Is lazy loading (for images) a solution? What will the impact be on SEO?
    Is the "200kb rule" of Google still relevant?
    Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and thus how much of those 700 KB of framework JS could be trimmed?
    Can a caching strategy be an answer?
    Thank you in advance for your help! Best regards

    Read the article

  • Multi-threaded game - updating, rendering, and how to split them

    - by CodeBunny
    From the StackOverflow post (it was recommended I move this): So, I'm working on a game engine, and I've made pretty good progress. However, my engine is single-threaded, and the advantages of splitting updating and rendering into separate threads sound like a very good idea. How should I do this? Single-threaded game engines are (conceptually) very easy to make: you have a loop where you update, render, sleep, repeat. However, I can't think of a good way to break updating and rendering apart, especially if I change their update rates (say I go through the update loop 25x a second, and have 60 fps for rendering) - what if I begin updating halfway through a render loop, or vice versa?
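    One common shape for an answer, offered here only as a hedged sketch with made-up names rather than the definitive pattern: let the update thread run at its fixed 25 Hz rate and publish an immutable snapshot of the game state, while the render thread runs at its own ~60 fps rate and always draws the most recently published snapshot. That way a render can never observe a half-finished update, no matter how the two rates drift.

        using System;
        using System.Threading;

        // Illustrative stand-in for the real game state; immutable so a published
        // snapshot can be read safely from the render thread.
        record GameState(double PlayerX);

        class Game
        {
            static volatile GameState current = new GameState(0);
            static volatile bool running = true;

            static void UpdateLoop()
            {
                var step = TimeSpan.FromMilliseconds(40);       // 25 updates per second
                while (running)
                {
                    var next = current with { PlayerX = current.PlayerX + 1 };
                    current = next;                             // atomic reference swap publishes the snapshot
                    Thread.Sleep(step);
                }
            }

            static void RenderLoop()
            {
                while (running)
                {
                    var snapshot = current;                     // whatever was last published
                    Console.WriteLine($"render PlayerX={snapshot.PlayerX}");
                    Thread.Sleep(16);                           // ~60 fps
                }
            }

            static void Main()
            {
                var update = new Thread(UpdateLoop);
                var render = new Thread(RenderLoop);
                update.Start();
                render.Start();
                Thread.Sleep(1000);                             // run for a second, then stop
                running = false;
                update.Join();
                render.Join();
            }
        }

    The same idea extends to double- or triple-buffering if building a full snapshot per tick is too expensive.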

    Read the article

  • Query Tuning Mastery at PASS Summit 2012: The Demos

    - by Adam Machanic
    For the second year in a row, I was asked to deliver a 500-level "Query Tuning Mastery" talk in room 6E of the Washington State Convention Center, for the PASS Summit. (Here's some information about last year's talk, on workspace memory.) And for the second year in a row, I had to deliver said talk at 10:15 in the morning, in a room used as overflow for the keynote, following a keynote speaker that didn't stop speaking on time. Frustrating! Last Thursday, after very, very quickly setting up and...(read more)

    Read the article

  • Customized Database Listener Names Now Supported for EBS

    - by sreelatha.mahendra(at)oracle.com
    The database listener name can now be configured using AutoConfig with the newly introduced context variable s_db_listener. Prior to this certification it was not possible to use AutoConfig-generated listener.ora files for managing listeners from SRVCTL when there were multiple RAC instances on the same server. To use this feature, E-Business Suite customers need to apply the following patch:
    11.5.10CU2 - Roll Up Patch 9535311 (RUP-U) or higher
    12.0.x - R12.TXK.A.delta.7 or higher
    12.1.x - R12.TXK.B.delta.3 or higher

    Read the article

  • T-SQL Tuesday #4: I/O, You Know

    - by Kalen Delaney
    It's time for the fourth T-SQL Tuesday , managed this time by Mike Walsh . I almost missed this deadline completely, since I didn't see the announcement at all. I wrote to Adam to ask if there even was an event this month, since I wasn't able to get into my own blog site ( www.SQLBlog.com ) for a week, and he pointed me to Mike's site. I'm wondering if it's this hit and miss for everyone. There is no single location where those people interested in T-SQL Tuesday can find out about it. Do you just...(read more)

    Read the article

  • Automating SQL Execution Plan analysis

    - by jchang
    Last year, I made my tool for automating execution plan analysis available on www.qdpma.com. The original version could parse execution plans from sys.dm_exec_query_stats or dm_exec_cached_plans and generate a cross-reference of which execution plans employed each index. The DMV sys.dm_db_index_usage_stats shows how often each index is used, but not where - that is, which particular stored procedure or ... My latest version can now also 1) use the DMV sys.dm_exec_procedure_stats, and 2) it can also get the...(read more)

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative [closed]

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that is taking too much time (for me) to load: about 2.5 seconds. Of course, this time is only taken on the first start; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking even more time before (about 5 seconds), and I found that this was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think that the process of starting the menu for the first time has some kind of "cache function" that tries to find which apps are present and need to be placed under the menu to be shown to the user. But I haven't found this function, so I can't analyse in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting "Alt-F2" also fires this "cache function", because after waiting for it to load, the process of opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but I found here that most of my icons are already 25x25. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know.
    PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!

    Read the article

  • Strategies for very fast delivery of webpages.

    - by Cherian
    I run a website, Cucumbertown, with an initial payload of nearly 9 KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my webpages are Nginx cached and only 10-15% of hits go to the backend proxy, and the cache is bypassed for logged-in users via proxy_cache_bypass. So for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place:
    default via ip dev eth0 initcwnd 15
    net.ipv4.tcp_slow_start_after_idle 0
    Despite everything being cached and a large initcwnd, my pages still take 2.5-3 seconds. My YSlow score and Page Speed score were shown as screenshots (omitted here). Are there strategies that can help deliver webpages even faster than this? Deliver pages in around 1 second for a 10 KB payload?
    Notes: My servers run out of a fairly good data center, from Linode at Fremont.

    Read the article

  • High CPU load - Ubuntu 14.04

    - by watt
    I noticed that sometimes when browsing (with other processes in the background), I get very high CPU load for the browser process (over 100%) and the computer becomes really slow. I tried switching from Firefox (with just a few extensions) to Chromium, but the same thing happens, without me visiting graphics-intensive sites, Flash sites or anything like that. I also noticed that python or node (when running "make") produce the same high CPU load from time to time, so this is not necessarily browser-related. When I only have a browser open, it doesn't seem to happen, and everything is fine in Windows 7. I switched from Unity to GNOME 3 with no effect. Specs: Lenovo W510 (4 GB RAM, i7 Q820 @ 1.73 GHz) with up-to-date Ubuntu 14.04 64-bit. Printscreen: http://imgur.com/8MZJNKC Do you guys have any idea why this might happen? Please let me know if there's other info you need. Thanks!

    Read the article

  • Speaking in Raleigh NC June 15th

    - by Andrew Kelly
    Just a heads-up to those in the area that I will be speaking at the Raleigh (TriPASS) SQL Server user group on the 15th of June 2010. The topic is Storage & I/O Best Practices. The abstract is listed below: SQL Server relies heavily on a well-configured storage subsystem to perform at its peak, but unfortunately this is one of the most neglected or misconfigured areas of a SQL Server instance. Here we will focus on the best practices related to how SQL Server works with the underlying storage...(read more)

    Read the article

  • T-SQL Tuesday #13: Clarifying Requirements

    - by Alexander Kuznetsov
    When we transform initial ideas into clear requirements for databases, we typically have to make the following choices: Frequent maintenance vs doing it once. As we are clarifying the requirements, we need to determine whether we want to continue spending considerable time maintaining the system, or if we want to finish it up and move on to other tasks. Race car maintenance vs installing electric wiring is my favorite analogy for this kind of choice. In some cases we need to squeeze every last bit...(read more)

    Read the article

  • Interesting links week #10

    - by erwin21
    Below is a list of interesting links that I found this week:
    Interaction: The Ultimate 20 Usability Tips for Your Website
    Frontend: Adobe Releases Flash-to-HTML5 Converter, Codenamed Wallaby
    Development: 10 Tips for Decreasing Web Page Load Times; Ten Things Every WordPress Plugin Developer Should Know; Progressive enhancement tutorial with ASP.NET MVC 3 and jQuery
    Marketing: 5 Tips for SEO & User-Friendly Copy
    Other: Interested in more interesting links? Follow me on Twitter: http://twitter.com/erwingriekspoor

    Read the article

  • Structuring database for multi-object "activity" and "following" functionalities

    - by romaninsh
    I am working on a web application which operates with different types of objects such as users, profiles, pages etc. All objects have a unique object_id. When objects interact they may produce "activity", such as a user posting on a page or profile. Activity may be related to multiple objects through their object_id. Users may also follow "objects", and they need to be able to see a stream of relevant activity. Could you provide me with some data structure suggestions which would be efficient and scalable? My goal is to show activity limited to the objects which the user is following. I am not limited to relational databases.
    Update: As I'm getting advice on ORMs and how to index things, I'd like to stress my question again. According to my current design model the database structure looks like this (diagram omitted here). As you can see, it's quite easy to implement a database like that. The Activity and Follower tables contain a much larger number of records than the upper level, but it's tolerable. But when it comes to creating a "timeline" table, it becomes a nightmare. For every user I need to reference all the activities on objects which he follows. In terms of records it easily gets out of control. Please suggest how I could change this structure to avoid timeline creation while still being able to quickly retrieve activity for any given user. Thanks.
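    For what it's worth, here is a small fan-out-on-write sketch (my own illustration with hypothetical names, not taken from the post): rather than a timeline table that grows without bound, each follower keeps only a capped list of recent activity ids, and anything older is recomputed on demand from the Activity and Follower data.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative fan-out-on-write feed: when an activity is published, its id is
        // pushed onto each follower's capped timeline; older items fall off the end.
        class ActivityFeed
        {
            const int MaxTimelineLength = 1000;

            // object_id -> follower user ids
            readonly Dictionary<long, List<long>> followers = new();
            // user_id -> recent activity ids, newest first
            readonly Dictionary<long, LinkedList<long>> timelines = new();

            public void Follow(long userId, long objectId)
            {
                if (!followers.TryGetValue(objectId, out var list))
                    followers[objectId] = list = new List<long>();
                list.Add(userId);
            }

            public void PublishActivity(long activityId, IEnumerable<long> relatedObjectIds)
            {
                // Fan out to every follower of every related object, capping timeline length.
                foreach (var objectId in relatedObjectIds)
                {
                    if (!followers.TryGetValue(objectId, out var userIds)) continue;
                    foreach (var userId in userIds)
                    {
                        if (!timelines.TryGetValue(userId, out var timeline))
                            timelines[userId] = timeline = new LinkedList<long>();
                        timeline.AddFirst(activityId);
                        if (timeline.Count > MaxTimelineLength) timeline.RemoveLast();
                    }
                }
            }

            public IEnumerable<long> RecentActivity(long userId, int count = 50) =>
                timelines.TryGetValue(userId, out var t) ? t.Take(count) : Enumerable.Empty<long>();
        }

    Keeping the timeline capped is what stops it from getting out of control; the trade-off is a slower fallback query for users who scroll far back into their history.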

    Read the article

  • Efficient personnel management: it can be done in the public sector too

    - by antonella.buonagurio(at)oracle.com
    In the past, public-sector organizations were often cited as examples of poor personnel management. The Brunetta reform is trying to bring order and clarity to these processes, but some organizations have already equipped themselves with tools that enable better personnel management, to the great satisfaction of both the organization and its employees. Among them is the Amministrazione Provinciale dei Servizi Sanitari (APSS) of Trento, which chose PeopleSoft solutions. The video explains how.

    Read the article

  • Finding the maximum value/date across columns

    - by AtulThakor
    While working on some code recently I discovered a neat little trick to find the maximum value across several columns. The starting point was finding the maximum date across several related tables and storing the maximum value against an aggregated record. Here's the sample setup code:

        USE TEMPDB
        IF OBJECT_ID('CUSTOMER') IS NOT NULL BEGIN DROP TABLE CUSTOMER END
        IF OBJECT_ID('ADDRESS') IS NOT NULL BEGIN DROP TABLE ADDRESS END
        IF OBJECT_ID('ORDERS') IS NOT NULL BEGIN DROP TABLE ORDERS END
        SELECT 1 AS CUSTOMERID, 'FREDDY KRUEGER' AS NAME, GETDATE() - 10 AS DATEUPDATED INTO CUSTOMER
        SELECT 100000 AS ADDRESSID, 1 AS CUSTOMERID, '1428 ELM STREET' AS ADDRESS, GETDATE() - 5 AS DATEUPDATED INTO ADDRESS
        SELECT 123456 AS ORDERID, 1 AS CUSTOMERID, GETDATE() + 1 AS DATEUPDATED INTO ORDERS

    Now, the code used a function to determine the maximum date, and this performed poorly. After considering pivoting the data I opted for a CASE statement; this seemed reasonable until I discovered other areas which needed to determine the maximum date between 5 or more tables, which didn't scale well. The final solution involved using the VALUES clause within a subquery, as follows:

        SELECT C.CUSTOMERID,
               A.ADDRESSID,
               (SELECT MAX(DT)
                FROM (VALUES (C.DATEUPDATED), (A.DATEUPDATED), (O.DATEUPDATED)) AS VALUE(DT))
        FROM CUSTOMER C
        INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
        INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

    As you can see the solution scales well and can take advantage of many of the aggregate functions!

    Read the article

  • Announcing Oracle-Demantra 12.2.1 Release

    - by user702295
    We are excited to announce that Oracle Demantra 12.2.1 is now available for new and existing customers. All customers who are not incorporating Demantra with other VCP products are welcome to upgrade without any restrictions. Customers who are using Demantra in conjunction with VCP products will need to upgrade VCP to 12.2.1, which requires application to and participation in the Oracle E-Business Suite early adopter program. Demantra 12.2.1 includes a wide array of new features driven by customer requirements and needs. Key features include:
    · Streamlined import and export from Microsoft Excel
    · Support for Gregorian month data aggregation in a weekly system
    · Multilanguage support for eleven languages
    · Promotion Calendar Optimization
    · Enhanced integration with Advanced Planning Command Center (VCP 12.2.1 required)
    Demantra 12.2.1 will work with JD Edwards EnterpriseOne 9.1 using the AIA 11.4 for the Value Chain Planning Base Integration Pack. Demantra 12.2.1 will only work with VCP 12.2.1. Demantra 12.2.1 and VCP 12.2.1 will work with EBS 12.1.3 or EBS 12.2.1.

    Read the article

  • Good approach for hundreds of consumers and big files

    - by ????? ???????
    I have several files (nearly 1 GB each) with data; the data is lines of text. I need to process each of these files with several hundred consumers. Each of these consumers does some processing that differs from the others. The consumers do not write anywhere concurrently; they only need the input string, and after processing they update their local buffers. Consumers can easily be executed in parallel.
    Important: with one specific file, each consumer has to process all lines (without skipping) in the correct order (as they appear in the file). The order of processing different files doesn't matter. Processing of a single line by one consumer is comparatively fast; I expect less than 50 microseconds on a Core i5.
    So now I'm looking for a good approach to this problem. This is going to be a part of a .NET project, so please let's stick with .NET only (C# is preferable). I know about TPL and TPL Dataflow. I guess the most relevant block would be BroadcastBlock, but I think the problem there is that with each line I'll have to wait for all consumers to finish in order to post the new one, and I guess that would not be very efficient. I think the ideal situation would be something like this:
    - One thread reads from the file and writes to a buffer.
    - Each consumer, when it is ready, reads a line from the buffer concurrently and processes it.
    - An entry shouldn't be deleted from the buffer as one consumer reads it; it can be deleted only when all consumers have processed it.
    - TPL schedules the consumer threads itself.
    - If one consumer outperforms the others, it shouldn't wait and can read more recent entries from the buffer.
    Am I right with this kind of approach? Either way, how can I implement a good solution? A bit of this was already discussed on StackOverflow: link
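    One possible shape for this, offered as a sketch under the assumptions above rather than a tuned solution: give each consumer its own bounded ActionBlock. Every consumer then sees every line in order, a fast consumer can run ahead of a slow one up to its buffer capacity, and the reader only blocks when some consumer's buffer is actually full, so nothing is ever dropped the way it can be with BroadcastBlock. The file name and consumer count below are placeholders.

        using System;
        using System.IO;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Threading.Tasks.Dataflow;

        class Pipeline
        {
            static async Task Main()
            {
                var consumerCount = 8;                  // stand-in for "several hundred"
                var consumers = Enumerable.Range(0, consumerCount)
                    .Select(i => new ActionBlock<string>(
                        line => Process(i, line),       // each consumer's own work
                        new ExecutionDataflowBlockOptions
                        {
                            BoundedCapacity = 10_000,   // per-consumer backpressure
                            MaxDegreeOfParallelism = 1  // preserve line order per consumer
                        }))
                    .ToArray();

                foreach (var line in File.ReadLines("data.txt")) // placeholder path
                {
                    // SendAsync (unlike Post) waits when a consumer's buffer is full,
                    // so no line is ever skipped or dropped.
                    await Task.WhenAll(consumers.Select(c => c.SendAsync(line)));
                }

                foreach (var c in consumers) c.Complete();
                await Task.WhenAll(consumers.Select(c => c.Completion));
            }

            static void Process(int consumerId, string line)
            {
                // placeholder for the real per-consumer processing
            }
        }

    The System.Threading.Tasks.Dataflow package is required. With several hundred consumers it would be worth measuring whether awaiting all the SendAsync calls per line is fast enough, or whether posting batches of lines (say 1,000 at a time) amortises the overhead better.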

    Read the article
