Search Results

Search found 8344 results on 334 pages for 'count'.


  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput. So flow control is in effect: when the congestion window is less than or equal to the number of bytes outstanding on the connection, which we can derive from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number; and when the window in the TCP segment received is advertised as 0. We time from these events until we send new data (i.e. until args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

tcp:::send
/ (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
{
    cwndclosed[args[1]->cs_cid] = timestamp;
    cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
    @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
{
    @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = avg(timestamp - cwndclosed[args[1]->cs_cid]);
    @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = stddev(timestamp - cwndclosed[args[1]->cs_cid]);
    @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    cwndclosed[args[1]->cs_cid] = 0;
    cwndsnxt[args[1]->cs_cid] = 0;
}

tcp:::receive
/ args[4]->tcp_window == 0 && (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
    swndclosed[args[1]->cs_cid] = timestamp;
    swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
    @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
{
    @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = avg(timestamp - swndclosed[args[1]->cs_cid]);
    @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = stddev(timestamp - swndclosed[args[1]->cs_cid]);
    swndclosed[args[1]->cs_cid] = 0;
    swndsnxt[args[1]->cs_cid] = 0;
}

END
{
    printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host", "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num");
    printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed, @stddevtimeclosed, @numclosed);
}

So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

^C
Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev    Num
cwnd   10.175.96.92         80       86064329                  77311705  125
cwnd   10.175.96.92         22       122068522                 151039669 81

So we see that in this case the congestion window closed 125 times for port 80 connections and 81 times for ssh. The average time the window was closed is 0.086 sec for port 80 and 0.12 sec for port 22.
So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful first step may be to see whether congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end user data updates, incremental data loads, many lock and send, and/or many calculations executed.  If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation.  See Chapter 54 Improving Essbase Performance in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html Fragmentation is likely to occur in the following situations: Read/write databases that users are constantly updating data Databases that execute calculations around the clock Databases that frequently update and recalculate dense members Data loads that are poorly designed Databases that contain a significant number of Dynamic Calc and Store members Databases that use an isolation level of uncommitted access with commit block set to zero There are two types of data block fragmentation Free space tracking, which is measured using the Average Fragmentation Quotient statistic. Block order on disk, which is measured using the Average Cluster Ratio statistic. Average Fragmentation Quotient The Average Fragmentation Quotient ratio measures free space in a given database.  As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either append at the end of the file or fit in another empty space that is large enough.  These empty spaces take up space in the .PAG files.  The higher the number the more empty spaces you have, therefore, the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record.  An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space. Average Cluster Ratio Average Cluster Ratio describes the order the blocks actually exist in the database. An Average Cluster Ratio number of 1 means all the blocks are ordered in the correct sequence in the order of the Outline.  As you load data and calculate data blocks, the sequence can start to be out of order.  This is because when you write to a block it may not be able to place back in the exact same spot in the database that it existed before.  The lower this number the more out of order it becomes and the more it affects performance.  An Average Cluster Ratio value of 1 means no fragmentation.  Any value lower than 1 i.e. 0.01032828 means the data blocks are getting further out of order from the outline order. Eliminating Data Block Fragmentation Both types of data block fragmentation can be removed by doing a dense restructure or export/clear/import of the data.  There are two types of dense restructure: 1. Implicit Restructures Implicit dense restructure happens when outline changes are done using EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures. 2. Explicit Restructures Explicit dense restructure happens when a manual initiation of the database restructure is executed. An explicit dense restructure is a full restructure which comprises of a dense restructure as outlined above plus the removal of empty blocks Empty Blocks vs. 
Fragmentation The existence of empty blocks is not considered fragmentation.  Empty blocks can be created through calc scripts or formulas.  An empty block will add to an existing database block count and will be included in the block counts of the database properties.  There are no statistics for empty blocks.  The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database then import the exported data.  If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • problem on my site [closed]

    - by ali
    Errors were found while checking this document as XHTML. How and from where can they be repaired? Please help. Hello, I have a problem on my site that was discovered by the program A1 Website Analyzer. Can you fix my site? I really do not know about the programming language, so if you can, please email me, and thank you. Please see this link: http://jigsaw.w3.org/css-validator/validator?uri=http://www.moneybillion.com/#errors Validation Output: 19 Errors Line 193, Column 261: there is no attribute "data-count" …ass="twitter-share-button" data-count="horizontal" data-via="elibeto1199"Twee…? You have used the attribute named above in your document, but the document type you are using does not support that attribute for this element. This error is often caused by incorrect use of the "Strict" document type with a document that uses frames (e.g. you must use the "Transitional" document type to get the "target" attribute), or by using vendor proprietary extensions such as "marginheight" (this is usually fixed by using CSS to achieve the desired effect instead). This error may also result if the element itself is not supported in the document type you are using, as an undefined element will have no supported attributes; in this case, see the element-undefined error message for further information. How to fix: check the spelling and case of the element and attribute, (Remember XHTML is all lower-case) and/or check that they are both allowed in the chosen document type, and/or use CSS instead of this attribute. If you received this error when using the element to incorporate flash media in a Web page, see the FAQ item on valid flash. Line 193, Column 283: there is no attribute "data-via" …ton" data-count="horizontal" data-via="elibeto1199"Tweet This error may also result if the element itself is not supported in the document type you are using, as an undefined element will have no supported attributes; in this case, see the element-undefined error message for further information. How to fix: check the spelling and case of the element and attribute, (Remember XHTML is all lower-case) and/or check that they are both allowed in the chosen document type, and/or use CSS instead of this attribute. If you received this error when using the element to incorporate flash media in a Web page, see the FAQ item on valid flash. Line 193, Column 393: required attribute "type" not specified …ript" src="//platform.twitter.com/widgets.js"var facebook = {? The attribute given above is required for an element that you've used, but you have omitted it. For instance, in most HTML and XHTML document types the "type" attribute is required on the "script" element and the "alt" attribute is required for the "img" element. Typical values for type are type="text/css" for <style> and type="text/javascript" for <script>. Line 194, Column 23: element "data:post.url" undefined url : "",? You have used the element named above in your document, but the document type you are using does not define an element of that name. This error is often caused by: incorrect use of the "Strict" document type with a document that uses frames (e.g. you must use the "Frameset" document type to get the "<frameset>" element), by using vendor proprietary extensions such as "<spacer>" or "<marquee>" (this is usually fixed by using CSS to achieve the desired effect instead), or by using upper-case tags in XHTML (in XHTML attributes and elements must be all lower-case). Line 197, Column 62: required attribute "type" not specified src="http://orkut-share.googlecode.com/svn/trunk/facebook.js"?

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 &ndash; CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible.  Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it you fill out a form for your usual contact information.  Sure enough within 2 hours, I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that setup a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gates friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also visual studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different dpi scales. 
The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set including options such as application arguments, operating path, record I/O performance performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .Net project types, and make it simple to do so.  In most cases of my using this application, I would be using the built in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options setup, and start your program, however RedGate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/gui to perform profiling after launching, that instead they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options that the profiler gives us, its time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development, is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, I decided that I would go ahead and to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes in my opinion.  
It checks each number against the straight definition of a prime – is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS. This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website. I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code's efficiency by profiling it. All I have to do is build the solution (surprised that initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance. I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed. I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too. One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc… The important thing to remember is that this itself takes a lot of time. When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with the profiler – roughly a 1300 percent increase. While this may seem like a lot, remember that there is a trade-off. It may be WAY more inefficient, however I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, and I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen is the real time performance bar. As your application is running, this will constantly update with the currently selected performance counter's status. A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after being reloaded at a later time. You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph, below the time ticks but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – every prime greater than 3 falls into one of the two buckets 6k-1 or 6k+1, and a number is prime if it is not divisible by any smaller prime: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divide this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divide this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without. One important thing I wanted to call out though was the performance graph now: Notice anything odd? The %Processor time is above 100%. This is because there is now more than 1 core in the operation. A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments. Odd. Profiling Web Applications The last thing I wanted to cover, one that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications. In my solution, I have a simple MVC4 application set up with one page and a single input form that will output prime values as my WPF app did. Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems… Pretty cool in my mind! Overview Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long way to go.
3 Biggest Pros: Ability to easily drill down from time graph, to method calls, to source code Wide variety of counters to choose from when profiling your application Excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: Issue regarding line details in source view Nit pick – Processor time vs. Core time Nit pick – Lack of full integration with Visual Studio Ratings Ease of Use (7/10) – I marked down here because of the problems with the line level details and the extra work that that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real time performance monitoring, and the drill downs that I’ve shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, with the line level details, the web request grouping, reflector integration, and various options to customize your profiling session I think create a great set of features! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend you trying the application and seeing if it would fit into your process, BUT, remember there are still some kinks in it to hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
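
For readers who want to reproduce the serial timing comparison without downloading the full solution, here is a minimal, hypothetical benchmark harness. It assumes only the IPrimes, DumbPrimes and SmartPrimes types shown earlier in this post; the class and variable names below are illustrative and are not part of the article's download.

    // Hypothetical harness: times both IPrimes implementations from the post and
    // checks that they agree. DumbPrimes/SmartPrimes are the classes shown above.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    static class PrimeBenchmark
    {
        static void Main()
        {
            const int count = 15000;

            var sw = Stopwatch.StartNew();
            List<int> dumb = new DumbPrimes().GetPrimes(count).ToList();
            sw.Stop();
            Console.WriteLine("DumbPrimes:  {0:F2} s", sw.Elapsed.TotalSeconds);

            sw.Restart();
            List<int> smart = new SmartPrimes().GetPrimes(count).ToList();
            sw.Stop();
            Console.WriteLine("SmartPrimes: {0:F2} s", sw.Elapsed.TotalSeconds);

            // Both implementations should return the same primes in the same order.
            Console.WriteLine(dumb.SequenceEqual(smart) ? "Results match" : "Results DIFFER");
        }
    }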

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are BITS (the section is not of type SHT_NOBITS) and NOBITS (the section is of type SHT_NOBITS). SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
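
Distilled down, the task-parallel change amounts to the following pattern; this is only a sketch of the Execute() method shown in the full listing below, reusing the post's own blogs list and ImportFeed(object) method.

    // Requires: using System.Threading.Tasks;
    var tasks = new Task[blogs.Count];
    for (var index = 0; index < blogs.Count; index++)
    {
        // Task(Action<object>, object) overload – the blog is handed to the task as its state object
        tasks[index] = new Task(ImportFeed, blogs[index]);
        tasks[index].Start();
    }
    Task.WaitAll(tasks); // block until every feed import has finished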
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();
            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            feed.Channel.Items.AsParallel().ForAll(a =>
            {
                SaveRssFeedItem(a, blog.Id, blog.CreatedById);
            });
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            feed.Entries.AsParallel().ForAll(a =>
            {
                SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
            });
        }
    }

    We made another small change, and as a result we parallelized the checking and saving of feed items. This change was data centric: we applied the same operation to all elements in a collection. On my machine I got better performance again – the time is now 11.22 seconds.

    Results

    Let's visualize our measurement results (numbers are given in seconds). As we can see, with task parallelism the feed aggregation takes about 30% less time than in the original case. When data parallelism is added on top of task parallelism, the aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely switch to parallel processing, and even then you have to measure the effects to find out whether the parallel code really performs better. If you are not careful, the troubles you face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 runs). Parallel programming is something that is hard to ignore: effective programs are able to use the multiple cores of modern processors. Using TPL you can also set the degree of parallelism, so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support it.
    Currently you can download Visual Studio Async to get some idea of what is coming.

    Conclusion

    Parallel programming is very challenging, but the tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. In two steps we parallelized the feed importing and entry inserting, gaining a 2.3-fold improvement in performance. Although this number is specific to my test environment, it clearly shows that parallel programming may raise the performance of your application significantly.
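    As a footnote to the degree-of-parallelism remark above, here is a minimal sketch of the two common ways to cap the number of cores used. The limit of 3 is an arbitrary value chosen for illustration, and the blogs, feed and ImportFeed names are the ones from the listings above; Parallel lives in System.Threading.Tasks and PLINQ in System.Linq.

    // Cap task parallelism: Parallel.ForEach with ParallelOptions.
    var options = new ParallelOptions { MaxDegreeOfParallelism = 3 };
    Parallel.ForEach(blogs, options, blog => ImportFeed(blog));

    // Cap data parallelism: the PLINQ query over the feed items.
    feed.Channel.Items
        .AsParallel()
        .WithDegreeOfParallelism(3)
        .ForAll(item => SaveRssFeedItem(item, blog.Id, blog.CreatedById));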

    Read the article

  • Serialize C# dynamic object to JSON object to be consumed by javascript

    - by Jeff Jin
    Based on the example c# dynamic with XML, I modified DynamicXml.cs and parsed my xml string. the modified part is as follows public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (binder.Name == "Controls") result = new DynamicXml(_elements.Elements()); else if (binder.Name == "Count") result = _elements.Count; else { var attr = _elements[0].Attribute( XName.Get(binder.Name)); if (attr != null) result = attr.Value; else { var items = _elements.Descendants( XName.Get(binder.Name)); if (items == null || items.Count() == 0) return false; result = new DynamicXml(items); } } return true; } The xml string to parse: "< View runat='server' Name='Doc111'>" + "< Caption Name='Document.ConvertToPdf' Value='Allow Conversion to PDF'></ Caption>" + "< Field For='Document.ConvertToPdf' ReadOnly='False' DisplayAs='checkbox' EditAs='checkbox'></ Field>" + "< Field For='Document.Abstract' ReadOnly='False' DisplayAs='label' EditAs='textinput'></ Field>" + "< Field For='Document.FileName' ReadOnly='False' DisplayAs='label' EditAs='textinput'></ Field>" + "< Field For='Document.KeyWords' ReadOnly='False' DisplayAs='label' EditAs='textinput'></ Field>" + "< FormButtons SaveCaption='Save' CancelCaption='Cancel'></ FormButtons>" + "</ View>"; dynamic form = new DynamicXml(markup_fieldsOnly); is there a way to serialize the content of this dynamic object(name value pairs inside dynamic) form as JSON object and sent to client side(browser)?
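    One direction that is sometimes suggested for this (a rough sketch, not a drop-in answer) is to bypass the dynamic wrapper when serializing and flatten the underlying XML attributes into dictionaries, which JavaScriptSerializer (System.Web.Script.Serialization) turns into plain JSON objects. Here markup_fieldsOnly is the XML string from the question, and the spaces after '<' would have to be removed for it to parse; System.Linq and System.Xml.Linq are assumed.

    var doc = XElement.Parse(markup_fieldsOnly);
    var controls = doc.Elements()
        .Select(e => e.Attributes()
            .ToDictionary(a => a.Name.LocalName, a => a.Value))
        .ToList();
    string json = new JavaScriptSerializer().Serialize(controls);
    // json looks like [{"Name":"Document.ConvertToPdf","Value":"Allow Conversion to PDF"}, ...]
    // and can be written to the response for the JavaScript side to consume.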

    Read the article

  • Negative ItemCount in SharePoint Document Library

    - by ccomet
    What can be done about negative numbers in library item counts? ItemCount is a read-only property, what are you supposed to do when it is drastically incorrect? Earlier last week, I was doing some testing involving the copying and moving of files and folders from one document library to another. I was transfering the items from our actual document library to a sandbox "Test" library that I used to run all sorts of object model and workflow testing in before migrating to the public lists and libraries. I noticed that with files, things worked correctly, but when I copied a folder that had a file inside it (using SPFolder.CopyTo()), the item count for the test library did not actually update. Since this testing was mostly playing around, I paid it little heed. Today I was back in the test library to test a different workflow (regarding PDF conversion). While I was there, I decided to delete the folder I left last week since I didn't need it anymore. And that's when I saw the item count for the list drop to -1 in the All Site Content View. When I deleted the new PDF I had just uploaded, it then dropped to -2! I even checked with the object model... getting an instance of the library I checked the ItemCount property... lo and behold it was also -2. Is there any process which runs in the background, kinda like the one that cleans up workflow history, which will correct this kind of issue? Or is a programmer expected to keep watch for this kind of situation and come up with calculations to compensate the "count penalty", as it were?

    Read the article

  • Problem displaying a 3D Google map.

    - by nemade-vipin
    hello friends, I have implemented one Flex application in which I want to display 3D google map.In which I am getting the error the NUll object reference during map display. Here is my code:- import com.google.maps.controls.MapTypeControl; import adobe.utils.XMLUI; import mx.rpc.events.FaultEvent; import mx.controls.Alert; import generated.webservices.*; import mx.collections.ArrayCollection; import mx.controls.*; import generated.webservices.*; import com.google.maps.LatLng; import com.google.maps.Map3D; import com.google.maps.MapEvent; import com.google.maps.MapMouseEvent; import com.google.maps.services.GeocodingEvent; import com.google.maps.services.ClientGeocoder; import com.google.maps.overlays.Marker; import com.google.maps.InfoWindowOptions; import com.google.maps.MapOptions; import com.google.maps.MapType; import com.google.maps.View; import com.google.maps.controls.NavigationControl; import com.google.maps.geom.Attitude; import mx.controls.Alert; [Bindable] private var childName:ArrayCollection; [Bindable] private var childId:ArrayCollection; private var photoFeed:ArrayCollection; private var trackinginfochild:TrackingInfo; private var arrayOfchild:Array; private var newEntry:GetSBTSMobileAuthentication; private var trackinginfo:GetSBTSTrackingInfo; private var childObj:Child; private var UserId:int; private var lat:int; private var long:int; private var latlong:LatLng; public var user:SBTSWebService; public function authentication():void { // Instantiate a new Entry object. user = new SBTSWebService(); if(user!=null) { user.addSBTSWebServiceFaultEventListener(handleFaults); user.addgetSBTSMobileAuthenticationEventListener(authenticationResult); newEntry = new GetSBTSMobileAuthentication(); if(newEntry!=null) { newEntry.mobile=mobileno.text; newEntry.password=password.text; user.getSBTSMobileAuthentication(newEntry); } } } public function handleFaults(event:FaultEvent):void { Alert.show("A fault occured contacting the server. 
Fault message is: " + event.fault.faultString); } public function authenticationResult(event:GetSBTSMobileAuthenticationResultEvent):void { if(event.result != null && event.result._return>0) { if(event.result._return > 0) { UserId = event.result._return; loginform.enabled = false; getChildList(UserId); viewstack2.selectedIndex=1; } else { Alert.show("Authentication fail"); } } } public function getChildList(userId:int):void { var childEntry:GetSBTSMobileChildrenInfo = new GetSBTSMobileChildrenInfo(); childEntry.UserId = userId; user.addgetSBTSMobileChildrenInfoEventListener(sbtsChildrenInfoResult); user.getSBTSMobileChildrenInfo(childEntry); } public function sbtsChildrenInfoResult(event:GetSBTSMobileChildrenInfoResultEvent):void { if( event.result._return!=null) { arrayOfchild = event.result._return as Array; photoFeed = new ArrayCollection(arrayOfchild); childName = new ArrayCollection(); for( var count:int=0;count<photoFeed.length;count++) { childObj = photoFeed.getItemAt(count,0) as Child; childName.addItem(childObj.strName); } } } private function trackingInfo():void { for( var count:int=0;count user.getSBTSTrackingInfo(trackinginfo); } } } } private function getTrackingInfo(event:GetSBTSTrackingInfoResultEvent):void { if(event.result._return != null) { trackinginfochild = event.result._return as TrackingInfo; lat = trackinginfochild.dblLatitude; long = trackinginfochild.dblLongitude; latlong = new LatLng(lat,long); } } private function onMapPreinitialize(event:MapEvent):void { var myMapOptions:MapOptions = new MapOptions(); myMapOptions.zoom = 12; myMapOptions.center = latlong; myMapOptions.mapType = MapType.NORMAL_MAP_TYPE; myMapOptions.viewMode = View.VIEWMODE_PERSPECTIVE; myMapOptions.attitude = new Attitude(20,30,0); buslocation.setInitOptions(myMapOptions); buspath.setInitOptions(myMapOptions); } private function onMapReady(event:MapEvent):void { this.buslocation.addControl(new NavigationControl()); this.buslocation.addControl( new MapTypeControl()); } ]]> <mx:Move id="hideEffect" yTo="-500" /> <mx:Move id="showEffect" xFrom="500"/> <mx:Panel width="100%" height="100%" headerColors="[#000000,#FFFFFF]" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:TabNavigator id="viewstack2" selectedIndex="0" creationPolicy="all" width="100%" height="100%"> <mx:Form label="Login Form" id="loginform" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:FormItem label="Mobile NO:" creationPolicy="all"> <mx:TextInput id="mobileno"/> </mx:FormItem> <mx:FormItem label="Password:" creationPolicy="all"> <mx:TextInput displayAsPassword="true" id="password" /> </mx:FormItem> <mx:FormItem> <mx:Button label="Login" click="authentication()"/> </mx:FormItem> </mx:Form> <mx:Form label="Child List" id="childForm" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:Label width="100%" color="blue" text="Select Child."/> <mx:RadioButtonGroup id="radioGroup" itemClick="trackingInfo()"/> <mx:Repeater id="fieldRepeater" dataProvider="{childName}"> <mx:RadioButton groupName="radioGroup" label="{fieldRepeater.currentItem}" value="{fieldRepeater.currentItem}"/> </mx:Repeater> </mx:Form> <mx:Form label="Child Information" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:FormItem> <mx:DataGrid id="childinfo"> <mx:columns> <mx:DataGridColumn headerText="Child Name" dataField="strName"/> <mx:DataGridColumn headerText="School Name" dataField="strSchoolName"/> <mx:DataGridColumn headerText="Pick Up Point" dataField="strPickUpPointName"/> <mx:DataGridColumn headerText="Drop Down Point" 
dataField="strDropDownPointName"/> </mx:columns> </mx:DataGrid> </mx:FormItem> <mx:FormItem> <mx:Button label="click here to see bus location"/> </mx:FormItem> </mx:Form> <mx:Form label="Bus Location" hideEffect="{hideEffect}" showEffect="{showEffect}"> <maps:Map3D xmlns:maps="com.google.maps.*" mapevent_mappreinitialize="onMapPreinitialize(event)" id="buslocation" width="100%" height="100%" url="http://code.google.com/apis/maps/" key="ABQIAAAAXuX6aG-r_N0-tQNxUEV-vRSE8al1BQssMxLXJiP75kIjR3ssLxT3D52_u94hI-dMIkD72FmnK-P4og"/> </mx:Form> <mx:Form label="Bus path" hideEffect="{hideEffect}" showEffect="{showEffect}"> <maps:Map3D xmlns:maps="com.google.maps.*" mapevent_mappreinitialize="onMapPreinitialize(event)" id="buspath" width="100%" height="100%" url="http://code.google.com/apis/maps/" key="ABQIAAAAXuX6aG-r_N0-tQNxUEV-vRSE8al1BQssMxLXJiP75kIjR3ssLxT3D52_u94hI-dMIkD72FmnK-P4og"/> </mx:Form> </mx:TabNavigator> </mx:Panel> I am getting this error :- TypeError: Error #1009: Cannot access a property or method of a null object reference. at com.google.maps.geom::Geometry$/computeSeparatingAxes() at com.google.maps.geom::TileEnumerator/enumerateTiles() at com.google.maps.core::PerspectiveTilePane/computeOptimalSet() at com.google.maps.core::PerspectiveTilePane/render() at com.google.maps.core::PerspectiveTilePane/configure() at com.google.maps.managers::PerspectiveTileManager/updateView() at com.google.maps.managers::PerspectiveTileManager/initializePanes() at com.google.maps.managers::PerspectiveTileManager/onInitialize() at MethodInfo-190() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at mx.core::UIComponent/dispatchEvent()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:9298] at com.google.maps.wrappers::BaseEventDispatcher/dispatchEvent() at com.google.maps.wrappers::EventDispatcherWrapper/dispatchEvent() at com.google.maps.core::MapImpl/size() at com.google.maps.core::MapImpl/setSize() at com.google.maps.wrappers::IMapWrapper/setSize() at com.google.maps::Map/internalSetSize() at com.google.maps::Map/onUIComponentResized() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at mx.core::UIComponent/dispatchEvent()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:9298] at mx.core::UIComponent/dispatchResizeEvent()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:7077] at mx.core::UIComponent/setActualSize()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:6706] at mx.containers.utilityClasses::BoxLayout/updateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\containers\utilityClasses\BoxLayout.as:219] at mx.containers::Form/updateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\containers\Form.as:375] at mx.core::UIComponent/validateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:6351] at mx.core::Container/validateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\Container.as:2677] at mx.managers::LayoutManager/validateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\managers\LayoutManager.as:622] at mx.managers::LayoutManager/doPhasedInstantiation()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\managers\LayoutManager.as:695] at Function/http://adobe.com/AS3/2006/builtin::apply() at 
mx.core::UIComponent/callLaterDispatcher2()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:8628] at mx.core::UIComponent/callLaterDispatcher()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:8568] Please resolve my issue.

    Read the article

  • PHPExcel: write Excel files and save?

    - by boss
    code: <?php /** PHPExcel */ require_once '../Classes/PHPExcel.php'; /** PHPExcel_IOFactory */ require_once '../Classes/PHPExcel/IOFactory.php'; // Create new PHPExcel object $objPHPExcel = new PHPExcel(); // Set properties $objPHPExcel->getProperties()->setCreator("Maarten Balliauw") ->setLastModifiedBy("Maarten Balliauw") ->setTitle("Office 2007 XLSX Test Document") ->setSubject("Office 2007 XLSX Test Document") ->setDescription("Test document for Office 2007 XLSX, generated using PHP classes.") ->setKeywords("office 2007 openxml php") ->setCategory("Test result file"); $result = 'select * from table1'; for($i=0;$i<count($result);$i++){ $result1 = 'select * from table2 where table1_id = ' . $result[$i]['table1_id']; for ($j=0;$j<count($result1);$j++) { $objPHPExcel->setActiveSheetIndex(0)->setCellValue('A' . $j, $result1[$j]['name']); } // Set active sheet index to the first sheet, so Excel opens this as the first sheet $objPHPExcel->setActiveSheetIndex(0); // Save Excel 2007 file $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel2007'); $objWriter->save(str_replace('.php', '.xlsx', __FILE__)); // Echo done echo date('H:i:s') . " Done writing file.\r\n"; } ?> The above code executes and save n no of .xlsx files in the folder, but the problem im getting is biggest count(result1) in the for loop executing in all saved excel files. Please give me the solution.

    Read the article

  • Mutation Problem - Clojure

    - by Silanglaya Valerio
    having trouble changing an element of my function represented as a list. code for random function: (defn makerandomtree-10 [pc maxdepth maxwidth fpx ppx] (if-let [output (if (and (< (rand) fpx) (> maxdepth 0)) (let [head (nth operations (rand-int (count operations))) children (doall (loop[function (list) width maxwidth] (if (pos? width) (recur (concat function (list (makerandomtree-10 pc (dec maxdepth) (+ 2 (rand-int (- maxwidth 1))) fpx ppx))) (dec width)) function)))] (concat (list head) children)) (if (and (< (rand) ppx) (>= pc 0)) (nth parameters (rand-int (count parameters))) (rand-int 100)))] output )) I will provide also a mutation function, which is still not good enough. I need to be able to eval my statement, so the following is still insufficient. (defn mutate-5 "chooses a node changes that" [function pc maxwidth pchange] (if (< (rand) pchange) (let [output (makerandomtree-10 pc 3 maxwidth 0.5 0.6)] (if (seq? output) output (list output))) ;mutate the children of root ;declare an empty accumulator list, with root as its head (let [head (list (first function)) children (loop [acc(list) walker (next function)] (println "----------") (println walker) (println "-----ACC-----") (println acc) (if (not walker) acc (if (or (seq? (first function)) (contains? (set operations) (first function))) (recur (concat acc (mutate-5 walker pc maxwidth pchange)) (next walker)) (if (< (rand) pchange) (if (some (set parameters) walker) (recur (concat acc (list (nth parameters (rand-int (count parameters))))) (if (seq? walker) (next walker) nil)) (recur (concat acc (list (rand-int 100))) (if (seq? walker) (next walker) nil))) (recur acc (if (seq? walker) (next walker) nil)))) ))] (concat head (list children))))) (side note: do you have any links/books for learning clojure?)

    Read the article

  • Parsing concatenated, non-delimited XML messages from TCP-stream using C#

    - by thaller
    I am trying to parse XML messages which are send to my C# application over TCP. Unfortunately, the protocol can not be changed and the XML messages are not delimited and no length prefix is used. Moreover the character encoding is not fixed but each message starts with an XML declaration <?xml>. The question is, how can i read one XML message at a time, using C#. Up to now, I tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is, the buffer might contain more than one XML messages or the first message may be incomplete. In these cases, I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except parsing the localized error string). I tried using XmlReader.Read and count the number of Element and EndElement nodes. That way I know when I am finished reading the first, entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish the XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition, however it is not straight forward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber,LinePosition and convert that back to the byte offset. I try to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seams very inelegant and non robust. I wonder if you have ideas for a better solution. Thank you. EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome): I found no method so that the XmlReader can continue parsing a second XML message (at least not, if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefor I could not connect the XmlReader directly to the TcpStream. After closing the XmlReader, the buffer is not positioned at the readers last position. So it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is, that the reader could not successfully seek on every possible input stream. When XmlReader throws an exception it can not be determined whether it happened because of an premature EOF or because of a non-wellformed XML. XmlReader.EOF is not set in case of an exception. As workaround I derived my own MemoryBuffer, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte and the following exception is likely due to a truncated message (this is kinda sloppy, in that it might not detect every non-wellformed message. However, after appending more bytes to the buffer, sooner or later the error will be detected. I could cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and the LinePosition of the current node. 
So after reading the first message I remember these positions and use it to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases for the code below where it breaks (e.g. internal elements with mixed encoding). But up to now it worked for all my tests. The parser class follows here -- may it be useful (I know, its very far from perfect...) class XmlParser { private byte[] buffer = new byte[0]; public int Length { get { return buffer.Length; } } // Append new binary data to the internal data buffer... public XmlParser Append(byte[] buffer2) { if (buffer2 != null && buffer2.Length > 0) { // I know, its not an efficient way to do this. // The EofMemoryStream should handle a List<byte[]> ... byte[] new_buffer = new byte[buffer.Length + buffer2.Length]; buffer.CopyTo(new_buffer, 0); buffer2.CopyTo(new_buffer, buffer.Length); buffer = new_buffer; } return this; } // MemoryStream which returns the last byte of the buffer individually, // so that we know that the buffering XmlReader really locked at the last // byte of the stream. // Moreover there is an EOF marker. private class EofMemoryStream: Stream { public bool EOF { get; private set; } private MemoryStream mem_; public override bool CanSeek { get { return false; } } public override bool CanWrite { get { return false; } } public override bool CanRead { get { return true; } } public override long Length { get { return mem_.Length; } } public override long Position { get { return mem_.Position; } set { throw new NotSupportedException(); } } public override void Flush() { mem_.Flush(); } public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); } public override void SetLength(long value) { throw new NotSupportedException(); } public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); } public override int Read(byte[] buffer, int offset, int count) { count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1))); int nread = mem_.Read(buffer, offset, count); if (nread == 0) { EOF = true; } return nread; } public EofMemoryStream(byte[] buffer) { mem_ = new MemoryStream(buffer, false); EOF = false; } protected override void Dispose(bool disposing) { mem_.Dispose(); } } // Parses the first xml message from the stream. // If the first message is not yet complete, it returns null. // If the buffer contains non-wellformed xml, it ~should~ throw an exception. // After reading an xml message, it pops the data from the byte array. public Message deserialize() { if (buffer.Length == 0) { return null; } Message message = null; Encoding encoding = Message.default_encoding; //string xml = encoding.GetString(buffer); using (EofMemoryStream sbuffer = new EofMemoryStream (buffer)) { XmlDocument xmlDocument = null; XmlReaderSettings settings = new XmlReaderSettings(); int LineNumber = -1; int LinePosition = -1; bool truncate_buffer = false; using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings)) { try { // Read to the first node (skipping over some element-types. // Don't use MoveToContent here, because it would skip the // XmlDeclaration too... while (xmlReader.Read() && (xmlReader.NodeType==XmlNodeType.Whitespace || xmlReader.NodeType==XmlNodeType.Comment)) { }; // Check for XML declaration. // If the message has an XmlDeclaration, extract the encoding. 
switch (xmlReader.NodeType) { case XmlNodeType.XmlDeclaration: while (xmlReader.MoveToNextAttribute()) { if (xmlReader.Name == "encoding") { encoding = Encoding.GetEncoding(xmlReader.Value); } } xmlReader.MoveToContent(); xmlReader.Read(); break; } // Move to the first element. xmlReader.MoveToContent(); // Read the entire document. xmlDocument = new XmlDocument(); xmlDocument.Load(xmlReader.ReadSubtree()); } catch (XmlException e) { // The parsing of the xml failed. If the XmlReader did // not yet look at the last byte, it is assumed that the // XML is invalid and the exception is re-thrown. if (sbuffer.EOF) { return null; } throw e; } { // Try to serialize an internal data structure using XmlSerializer. Type type = null; try { type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name); } catch (Exception e) { // No specialized data container for this class found... } if (type == null) { message = new Message(); } else { // TODO: reuse the serializer... System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(type); message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument)); } message.doc = xmlDocument; } // At this point, the first XML message was sucessfully parsed. // Remember the lineposition of the current end element. IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo; if (xmlLineInfo != null && xmlLineInfo.HasLineInfo()) { LineNumber = xmlLineInfo.LineNumber; LinePosition = xmlLineInfo.LinePosition; } // Try to read the rest of the buffer. // If an exception is thrown, another xml message appears. // This way the xml parser could tell us that the message is finished here. // This would be prefered as truncating the buffer using the line info is sloppy. try { while (xmlReader.Read()) { } } catch { // There comes a second message. Needs workaround for trunkating. truncate_buffer = true; } } if (truncate_buffer) { if (LineNumber < 0) { throw new Exception("LineNumber not given. Cannot truncate xml buffer"); } // Convert the buffer to a string using the encoding found before // (or the default encoding). string s = encoding.GetString(buffer); // Seek to the line. int char_index = 0; while (--LineNumber > 0) { // Recognize \r , \n , \r\n as newlines... char_index = s.IndexOfAny(new char[] {'\r', '\n'}, char_index); // char_index should not be -1 because LineNumber>0, otherwise an RangeException is // thrown, which is appropriate. char_index++; if (s[char_index-1]=='\r' && s.Length>char_index && s[char_index]=='\n') { char_index++; } } char_index += LinePosition - 1; var rgx = new System.Text.RegularExpressions.Regex(xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>"); System.Text.RegularExpressions.Match match = rgx.Match(s, char_index); if (!match.Success || match.Index != char_index) { throw new Exception("could not find EndElement to truncate the xml buffer."); } char_index += match.Value.Length; // Convert the character offset back to the byte offset (for the given encoding). int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index)); // remove the bytes from the buffer. buffer = buffer.Skip(line1_boffset).ToArray(); } else { buffer = new byte[0]; } } return message; } }
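    For what it's worth, here is a rough sketch of an alternative framing approach for this kind of protocol: since every message begins with an XML declaration, the start of the next "<?xml" marks the end of the current message. It assumes the declaration bytes are ASCII-compatible (true for UTF-8 and Latin-1; a UTF-16 stream would need a second, wide-character pattern).

    static readonly byte[] Marker = Encoding.ASCII.GetBytes("<?xml");

    // Returns the index of the next marker at or after startIndex, or -1 if none is found.
    static int FindMarker(byte[] buffer, int startIndex)
    {
        for (int i = startIndex; i <= buffer.Length - Marker.Length; i++)
        {
            bool hit = true;
            for (int j = 0; j < Marker.Length; j++)
            {
                if (buffer[i + j] != Marker[j]) { hit = false; break; }
            }
            if (hit) return i;
        }
        return -1;
    }

    // Usage: if FindMarker(buffer, 1) returns a position, the bytes before it form one
    // complete document that can be handed to XmlDocument.Load as usual; if it returns -1,
    // more data has to be read from the socket before parsing.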

    Read the article

  • Using UIScreen to drive a VGA display - doesn't seem to show the UIWindow?

    - by Peter Hajas
    HI there, I'm trying to use UIScreen to drive a separate screen with the VGA dongle on my iPad. Here's what I've got in my root view controller's viewDidLoad: //Code to detect if an external display is connected to the iPad. NSLog(@"Number of screens: %d", [[UIScreen screens]count]); //Now, if there's an external screen, we need to find its modes, itereate through them and find the highest one. Once we have that mode, break out, and set the UIWindow. if([[UIScreen screens]count] > 1) //if there are more than 1 screens connected to the device { CGSize max; UIScreenMode *maxScreenMode; for(int i = 0; i < [[[[UIScreen screens] objectAtIndex:1] availableModes]count]; i++) { UIScreenMode *current = [[[[UIScreen screens]objectAtIndex:1]availableModes]objectAtIndex:i]; if(current.size.width > max.width); { max = current.size; maxScreenMode = current; } } //Now we have the highest mode. Turn the external display to use that mode. UIScreen *external = [[UIScreen screens] objectAtIndex:1]; external.currentMode = maxScreenMode; //Boom! Now the external display is set to the proper mode. We need to now set the screen of a new UIWindow to the external screen external_disp = [externalDisplay alloc]; external_disp.drawImage = drawViewController.drawImage; UIWindow *newwindow = [UIWindow alloc]; [newwindow addSubview:external_disp.view]; newwindow.screen = external; }

    Read the article

  • Ruby - Feedzirra and updates

    - by mplacona
    Hi, trying to get my head around Feedzirra here. I have it all setup and everything, and can even get results and updates, but something odd is going on. I came up with the following code: def initialize(feed_url) @feed_url = feed_url @rssObject = Feedzirra::Feed.fetch_and_parse(@feed_url) end def update_from_feed_continuously() @rssObject = Feedzirra::Feed.update(@rssObject) if @rssObject.updated? puts @rssObject.new_entries.count else puts "nil" end end Right, what I'm doing above, is starting with the big feed, and then only getting updates. I'm sure I must be doing something stupid, as even though I'm able to get the updates, and store them on the same instance variable, after the first time, I'm never able to get those again. Obviously this happens because I'm overwriting my instance variable with only updates, and lose the full feed object. I then thought about changing my code to this: def update_from_feed_continuously() feed = Feedzirra::Feed.update(@rssObject) if feed.updated? puts feed.new_entries.count else puts "nil" end end Well, I'm not overwriting anything and that should be the way to go right? WRONG, this means I'm doomed to always try to get updates to the same static feed object, as although I get the updates on a variable, I'm never actually updating my "static feed object", and newly added items will be appended to my "feed.new_entries" as they in theory are new. I'm sure I;m missing a step here, but I'd really appreciate if someone could shed me a light on it. I've been going through this code for hours, and can't get to grips with it. Obviously it should work fine, if I did something like: if feed.updated? puts feed.new_entries.count @rssObject = initialize(@feed_url) else Because that would reinitialize my instance variable with a brand new feed object, and the updates would come again. But that also means that any new update added on that exact moment would be lost, as well as massive overkill, as I'd have to load the thing again. Thanks in advance!

    Read the article

  • Weird RecordStore behavior when RecordStoreFullException occurs

    - by Michael P
    Hello everyone! I'm developing a small J2ME application that displays bus stops timetables - they are stored as records in MIDP RecordStores. Sometimes records can not fit a single RecordStore, especially on record update - using setRecord method - a RecordStoreFullException occurs. I catch the exception, and try to write the record to a new RecordStore along with deleting the previous one in the old RecordStore. Everything works fine except of deleting record from RecordStore where the RecordStoreFullException occurs. If I make an attempt to delete record that could not be updated, another Exception of type InvalidRecordIDException is thrown. This is weird and undocumented in MIDP javadoc. I have tested it on Sun WTK 2.5.2, MicroEdition SDK 3.0 and Nokia Series 40 SDK. Furthermore I created a code that reproduces this strange behaviour: RecordStore rms = null; int id = 0; try { rms = RecordStore.openRecordStore("Test", true); byte[] raw = new byte[192*10024]; //Big enough to cause RecordStoreFullException id = rms.addRecord(raw, 0, 160); rms.setRecord(id, raw, 0, raw.length); } catch (Exception e) { try { int count = rms.getNumRecords(); RecordEnumeration en = rms.enumerateRecords(null, null, true); count = en.numRecords(); while(en.hasNextElement()){ System.out.println("NextID: "+en.nextRecordId()); } rms.deleteRecord(id); //this won't work! rms.setRecord(id, new byte[5], 0, 5); //this won't work too! } catch (Exception ex) { ex.printStackTrace(); } } I added extra enumeration code to produce other weird behavior - when RecordStoreFullException occurs, count variable will be set to 1 (if RMS was empty) by both methods - getNumRecords and numRecords. System.out.println will produce NextID: 0! It is not acceptable because record ID can not be 0! Could someone explain this strange behavior? Sorry for my bad English.

    Read the article

  • UIImagePickerController does nothing when using camera after I hit "Use" button

    - by wgpubs
    Code below. When I hit the "Use" button after taking a picture ... the application becomes totally unresponsive. Any ideas what I'm doing wrong? The "addPlayer:" method is called when a button is pressed on the UIViewController's view. Thanks - (IBAction) addPlayers: (id)sender{ // Show ImagePicker UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init]; imagePicker.delegate = self; // If camera is available use it and display custom overlay view so that user can add as many pics // as they want without having to go back to parent view if([UIImagePickerController isSourceTypeAvailable: UIImagePickerControllerSourceTypeCamera]) { imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera; } else { imagePicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; } [self presentModalViewController:imagePicker animated:YES]; [imagePicker release]; } - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { // Grab original image UIImage *photo = [info objectForKey:UIImagePickerControllerOriginalImage]; // Resize photo first to reduce memory consumption [self.photos addObject:[photo scaleToSize:CGSizeMake(200.0f, 300.0f)]]; // Enable *PLAY* button if photos > 1 if([self.photos count] > 1) btnStartGame.enabled = YES; // Update player count label lblPlayerCount.text = [NSString stringWithFormat:@"%d", [self.photos count]]; // Dismiss picker if not using camera picker dismissModalViewControllerAnimated:YES]; }

    Read the article

  • EF4 querying from parent to grandchildren

    - by Hans Kesting
    I have a model withs Parents, Children and Grandchildren, in a many-to-many relationship. Using this article I created POCO classes that work fine, except for one thing I can't yet figure out. When I query the Parents or Children directly using LINQ, the SQL reflects the LINQ query (a .Count() executes a COUNT in the database and so on) - fine. The Parent class has a Children property, to access it's children. But (and now for the problem) this doesn't expose an IQueryable interface but an ICollection. So when I access the Children property on a particular parent all the Parent's Children are read. Even worse, when I access the Grandchildren (theParent.Children.SelectMany(child => child.GrandChildren).Count()) then for each and every child a separate request is issued to select all data of it's grandchildren. That's a lot of separate queries! Changing the type of the Children property from ICollection to IQueryable doesn't solve this. Apart from missing methods I need, like Add() and Remove(), EF just doesn't recognize the navigation property then. Are there correct ways (as in: low database interaction) of querying through children (and what are they)? Or is this just not possible?
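    For comparison, a minimal sketch of the kind of query that stays in the database; the context, Parents, Id and parentId names are illustrative, assuming the ObjectContext exposes Parents as an IQueryable.

    // Composes into a single SQL statement with a COUNT, instead of loading all
    // children and then issuing one query per child for its grandchildren.
    int grandchildCount = context.Parents
        .Where(p => p.Id == parentId)
        .SelectMany(p => p.Children)
        .SelectMany(c => c.GrandChildren)
        .Count();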

    Read the article

  • Android addTextChangedListener onTextChanged not fired when Backspace is pressed?

    - by tsil
    I use addTextChangeListener to filter a list items. When the user enters a character on the editText, items are filtered based on user input. For example, if the user enters "stackoverflow", all items that contains "stackoverflow" are displayed. It works fine except that once the user press backspace to delete character(s), items are no longer filtered until he deletes all characters. For example, my items are: "stackoverflow 1", "stackoverflow 2", "stackoverflow 3", "stackoverflow 4". If user input is "stackoverflow", all items are displayed. If user input is "stackoverflow 1", only "stackoverflow 1" is displayed. Then user deletes the last 2 characters (1 and the space). User input is now "stackoverflow" but "stackoverflow 1" is still displayed and the other items are not displayed. This is my custom filter: private class ServiceFilter extends Filter { @Override protected FilterResults performFiltering(CharSequence constraint) { FilterResults results = new FilterResults(); if (constraint == null || constraint.length() == 0) { // No filter implemented we return all the list results.values = origServiceList; results.count = origServiceList.size(); } else { List<ServiceModel> nServiceList = new ArrayList<ServiceModel>(); for (ServiceModel s : serviceList) { if (s.getName().toLowerCase().contains(constraint.toString().toLowerCase())) { nServiceList.add(s); } } results.values = nServiceList; results.count = nServiceList.size(); } return results; } @Override protected void publishResults(CharSequence constraint, FilterResults results) { if (results.count == 0) { notifyDataSetInvalidated(); } else { serviceList = (List<ServiceModel>) results.values; notifyDataSetChanged(); } } } And how I handle text change event: inputSearch.addTextChangedListener(new TextWatcher() { @Override public void afterTextChanged(Editable arg0) { // TODO Auto-generated method stub } @Override public void beforeTextChanged(CharSequence arg0, int arg1, int arg2, int arg3) { // TODO Auto-generated method stub } @Override public void onTextChanged(CharSequence cs, int arg1, int arg2, int arg3) { if (cs.length() >= 1) { mAdapter.getFilter().filter(cs); } else { mAdapter = new ServiceAdapter(MyActivity.this, R.layout.ussd_list_item, serviceList); listView.setAdapter(mAdapter); } } });

    Read the article

  • XML deserialization doubling up on entities

    - by Nathan Loding
    I have an XML file that I am attempting to deserialize into it's respective objects. It works great on most of these objects, except for one item that is being doubled up on. Here's the relevant portion of the XML: <Clients> <Client Name="My Company" SiteID="1" GUID="xxx-xxx-xxx-xxx"> <Reports> <Report Name="First Report" Path="/Custom/FirstReport"> <Generate>true</Generate> </Report> </Reports> </Client> </Clients> "Clients" is a List<Client> object. Each Client object has a List<Report> object within it. The issue is that when this XML is deserialized, the List<Report> object has a count of 2 -- the "First Report" Report object is in there twice. Why? Here's the C#: public class Client { [System.Xml.Serialization.XmlArray("Reports"), System.Xml.Serialization.XmlArrayItem(typeof(Report))] public List<Report> Reports; } public class Report { [System.Xml.Serialization.XmlAttribute("Name")] public string Name; public bool Generate; [System.Xml.Serialization.XmlAttribute("Path")] public string Path; } class Program { static void Main(string[] args) { List<Client> _clients = new List<Client>(); string xmlFile = "myxmlfile.xml"; System.Xml.Serialization.XmlSerializer xmlSerializer = new System.Xml.Serialization.XmlSerializer(typeof(List<Client>), new System.Xml.Serialization.XmlRootAttribute("Clients")); using (FileStream stream = new FileStream(xmlFile, FileMode.Open)) { _clients = xmlSerializer.Deserialize(stream) as List<Client>; } foreach(Client _client in _clients) { Console.WriteLine("Count: " + _client.Reports.Count); // This write "2" foreach(Report _report in _client.Reports) { Console.WriteLine("Name: " + _report.Name); // Writes "First Report" twice } } } }

    Read the article

  • How to find the cause of a SocketException with the message that an established connection was aborted

    - by cdpnet
    Hi All, I know the similar question may have been asked many times, but I want to represent the behavior I'm seeing and find if somebody can help predict the cause of this. I am writing a windows service which connects to other windows service over TCP. There are 100 user entities of this, and 5 connections per each. These users perform their tasks using their individual connections. The application goes on withough seeing this problem for 1 or 2 days. Or sometimes show the problem right after starting (-rarely). The best run I had was like 4 to 5 days without showing this exception. And after that application died or I had to stop it for various reasons. I want to know what can be causing this? Here is the stacktrace. System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)

    Read the article

  • Read binary file into a struct C#

    - by Robert Höglund
    I'm trying to read binary data using C#. I have all information about the layout of the data in the files I want to read. I'm able to read the data "chunk by chunk", i.e. getting the first 40 bytes of data converting it to a string, get the next 40 bytes, ... Since there are at least three slighlty different version of the data, I would like to read the data directly into a struct. It just feels so much more right than by reading it "line by line". I have tried the following approach but to no avail:StructType aStruct; int count = Marshal.SizeOf(typeof(StructType)); byte[] readBuffer = new byte[count]; BinaryReader reader = new BinaryReader(stream); readBuffer = reader.ReadBytes(count); GCHandle handle = GCHandle.Alloc(readBuffer, GCHandleType.Pinned); aStruct = (StructType) Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(StructType)); handle.Free(); The stream is an opened FileStream from which I have began to read from. I get an AccessViolationException when using Marshal.PtrToStructure. The stream contains more information than I'm trying to read since I'm not interested in data at the end of the file. The struct is defined like:[StructLayout(LayoutKind.Explicit)] struct StructType { [FieldOffset(0)] public string FileDate; [FieldOffset(8)] public string FileTime; [FieldOffset(16)] public int Id1; [FieldOffset(20)] public string Id2; } The examples code is changed from original to make this question shorter. How would I read binary data from a file into a struct?
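    For what it's worth, a commonly used variant of this pattern tells the marshaler how wide each text field is with MarshalAs, so the string fields have a defined size. The SizeConst values below are guesses derived from the offsets in the question, and the helper is a generic sketch rather than the original code.

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)]
    struct StructType
    {
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
        public string FileDate;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
        public string FileTime;
        public int Id1;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 20)]
        public string Id2;
    }

    static T ReadStruct<T>(BinaryReader reader) where T : struct
    {
        // Read exactly as many bytes as the marshaled struct occupies.
        byte[] bytes = reader.ReadBytes(Marshal.SizeOf(typeof(T)));
        GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        try
        {
            return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
        }
        finally
        {
            handle.Free();
        }
    }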

    Read the article

  • Making Pascal's triangle with mpz_t's

    - by SDLFunTimes
    Hey, I'm trying to convert a function I wrote to generate an array of longs that respresents Pascal's triangles into a function that returns an array of mpz_t's. However with the following code: mpz_t* make_triangle(int rows, int* count) { //compute triangle size using 1 + 2 + 3 + ... n = n(n + 1) / 2 *count = (rows * (rows + 1)) / 2; mpz_t* triangle = malloc((*count) * sizeof(mpz_t)); //fill in first two rows mpz_t one; mpz_init(one); mpz_set_si(one, 1); triangle[0] = one; triangle[1] = one; triangle[2] = one; int nums_to_fill = 1; int position = 3; int last_row_pos; int r, i; for(r = 3; r <= rows; r++) { //left most side triangle[position] = one; position++; //inner numbers mpz_t new_num; mpz_init(new_num); last_row_pos = ((r - 1) * (r - 2)) / 2; for(i = 0; i < nums_to_fill; i++) { mpz_add(new_num, triangle[last_row_pos + i], triangle[last_row_pos + i + 1]); triangle[position] = new_num; mpz_clear(new_num); position++; } nums_to_fill++; //right most side triangle[position] = one; position++; } return triangle; } I'm getting errors saying: incompatible types in assignment for all lines where a position in the triangle is being set (i.e.: triangle[position] = one;). Does anyone know what I might be doing wrong?

    Read the article

  • Problem with WCF Streaming

    - by H4mm3rHead
    Hi, I was looking at this thread: http://stackoverflow.com/questions/1935040/how-to-handle-large-file-uploads-via-wcf I need to have a web service hosted at my provider where i need to upload and download files to. We are talking videos from 1Mb to 100Mb hence the streaming approach. I cant get it to work, i declared an Interface: [ServiceContract] public interface IFileTransferService { [OperationContract] void UploadFile(Stream stream); } and all is fine, i implement it like this: public string FileName = "test"; public void UploadFile(Stream stream) { try { FileStream outStream = File.Open(FileName, FileMode.Create, FileAccess.Write); const int bufferLength = 4096; byte[] buffer = new byte[bufferLength]; int count = 0; while((count = stream.Read(buffer, 0, bufferLength)) > 0) { //progress outStream.Write(buffer, 0, count); } outStream.Close(); stream.Close(); //saved } catch(Exception ex) { throw new Exception("error: "+ex.Message); } } Still no problem, its published to my webserver out on the interweb. So far so good. Now i make a reference to it and will pass it a FileStream, but the argument is now a byte[] - why is that and how do i get it the proper way for streaming? Edit My binding look like this: <bindings> <basicHttpBinding> <binding name="StreamingFileTransferServicesBinding" transferMode="StreamedRequest" maxBufferSize="65536" maxReceivedMessageSize="204003200" /> </basicHttpBinding> </bindings> I can consume it without problems, and get no errors - other than my input parameter has changed from a stream to a byte[]
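    One workaround that is often suggested, sketched here under the assumption that the client can reference the assembly containing IFileTransferService (the address is a placeholder), is to skip the generated proxy and open a channel directly, so the parameter stays a Stream instead of being mapped to byte[].

    // Client-side binding mirrors the service: streamed transfer, large messages.
    var binding = new BasicHttpBinding
    {
        TransferMode = TransferMode.StreamedRequest,
        MaxReceivedMessageSize = 204003200
    };
    var factory = new ChannelFactory<IFileTransferService>(
        binding, new EndpointAddress("http://example.com/FileTransferService.svc"));
    IFileTransferService proxy = factory.CreateChannel();

    using (FileStream video = File.OpenRead(@"C:\videos\clip.wmv"))
    {
        proxy.UploadFile(video);   // no byte[] conversion, the stream is sent as-is
    }

    ((IClientChannel)proxy).Close();
    factory.Close();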

    Read the article

  • Converting PDF to images using ImageMagick.NET - how to set the DPI

    - by Avi Pinto
    Hi, I'm trying to convert pdf files to images. ImageMagick is a great tool, and using the command line tool gets me desired result. but i need to do this in my code, So added a reference to http://imagemagick.codeplex.com/ And the following code sample renders each page of the pdf as an image: MagickNet.InitializeMagick(); using (ImageList im = new ImageList()) { im.ReadImages(@"E:\Test\" + fileName + ".pdf"); int count = 0; foreach (Image image in im) { image.Quality = 100; image.CompressType = mageMagickNET.CompressionType.LosslessJPEGCompression; image.Write(@"E:\Test\" + fileName + "-" + count.ToString() + ".jpg"); ++count; } } The problem: IT LOOKS LIKE CRAP the rendered image is hardly readable. the problem i realized is it uses the default 72 DPI of ImageMagick. and i can't find a way to set it(96dpi or 120dpi gives good results) via the .Net wrapper. Am I missing something , or there is really no way to set it via this wrapper? Thanks

    Read the article
