Search Results

Search found 8354 results on 335 pages for 'count boxer'.

Page 37/335 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • How to count how many rows there are for each distinct value in MySQL?

    - by Vincent Duprez
    Imagine I have a table with a column named status:

        status
        ------
        A
        A
        A
        B
        C
        C
        D
        D
        D

    How can I count how many rows have A, how many rows have B, and so on? I want this kind of output:

        A  | B  | C  | D  | E
        ---------------------
        3  | 1  | 2  | 3  | 0

    As for E = 0: the output columns will always be A, B, C, D and E, even when a value has no rows. The output should be a single row (thus one query). A distinct count (the most common answer in my searches) only returns how many different values there are, 4 in this case, not the count per value.
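
    A minimal sketch of one common approach, conditional aggregation, assuming a table named mytable with the status column from the question (the table name is a placeholder):

        -- In MySQL a boolean expression evaluates to 1 or 0, so SUM() counts matches.
        SELECT
            SUM(status = 'A') AS A,
            SUM(status = 'B') AS B,
            SUM(status = 'C') AS C,
            SUM(status = 'D') AS D,
            SUM(status = 'E') AS E
        FROM mytable;

    Because every column is written out explicitly, E comes back as 0 even when no row has that status, and the whole result is a single row from a single query.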

    Read the article

  • How can I show a count of rows from an ng-repeat?

    - by Anne
    I have a table on my web page that is populated with data like this:

        <tr data-ng-repeat="row in grid.data | filter:isQuestionInRange">
            <td>{{ row.problemId }}</td>
        </tr>

    Is there a way that I can put a count of the rows displayed in the table footer? Note that I want to show the count of the rows after they have been filtered, not just the row count from the grid.data array.
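
    A widely used sketch for this in AngularJS 1.x: assign the filtered collection to a new scope property inside the ng-repeat expression, then read its length elsewhere. filteredRows is a hypothetical name introduced here:

        <tr data-ng-repeat="row in filteredRows = (grid.data | filter:isQuestionInRange)">
            <td>{{ row.problemId }}</td>
        </tr>
        ...
        <tfoot>
            <tr><td>Rows shown: {{ filteredRows.length }}</td></tr>
        </tfoot>

    The assignment is re-evaluated whenever the filter runs, so the footer always reflects the currently filtered set. AngularJS 1.3+ also offers the equivalent alias syntax "row in grid.data | filter:isQuestionInRange as filteredRows".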

    Read the article

  • Mysql select most frequent and sort alphabetically

    - by user2605793
    I am trying to select the 100 most common names from a table and then display the list showing the names and counts. I want the user to be able to re-sort the list alphabetically rather than by count. I thought the following code would do it. It works for the default sort by count but fails on the alphabetical sort: the line "$count = mysql_num_rows($table);" gives an error: mysql_num_rows() expects parameter 1 to be resource, boolean given. Any help would be greatly appreciated.

        // Get most popular surnames
        echo '<h2>Most Common Surnames</h2>';
        if ($sort == "") {
            // default sort by count
            echo '<a href="http://mysite/names.php?id='.$id.'&sort=name">Sort by name</a><br>';
            $query = "SELECT family_name, COUNT(*) as count FROM namefile
                      WHERE record_key = $id
                      GROUP BY family_name
                      ORDER BY count DESC
                      LIMIT 100";
        } else {
            // sort alphabetically
            echo '<a href="http://mysite/names.php?id='.$id.'">Sort by count</a><br>';
            $query = "SELECT * FROM (
                          SELECT family_name, COUNT(*) as count FROM namefile
                          WHERE record_key = $id
                          GROUP BY family_name
                          ORDER BY count DESC
                          LIMIT 100) AS alpha
                      ORDER BY family_name";
        }
        $table = mysql_query($query);
        $count = mysql_num_rows($table);
        $tot = 0;
        $i = 0;
        echo '<table><tr>';
        while ($tot < $count) {
            $rec = mysql_fetch_array($table);
            echo '<td>'.$rec[0].'</td><td>'.$rec[1].'</td><td width="40">&nbsp;</td><td>';
            if ($i++ == 6) {
                echo '</tr><tr>';
                $i = 0;
            }
            $tot++;
        }
        echo '</tr></table><br>';

    UPDATE: I needed to add "AS alpha" to give the derived table (the inner select) a name; alpha is just a name I made up. It now works perfectly. The code above has been updated for the benefit of anyone who needs something similar.
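
    As an aside, the mysql_* functions used above are deprecated and removed in PHP 7. A rough sketch of the same "top 100 by count" query using mysqli with a prepared statement; connection details are placeholders, and get_result() requires the mysqlnd driver:

        $db = new mysqli('localhost', 'user', 'pass', 'dbname');  // placeholder credentials
        $stmt = $db->prepare(
            "SELECT family_name, COUNT(*) AS cnt
             FROM namefile
             WHERE record_key = ?
             GROUP BY family_name
             ORDER BY cnt DESC
             LIMIT 100");
        if ($stmt === false) {
            die('Query failed: ' . $db->error);  // surfaces problems such as a missing derived-table alias
        }
        $stmt->bind_param('i', $id);
        $stmt->execute();
        $result = $stmt->get_result();
        while ($row = $result->fetch_assoc()) {
            echo $row['family_name'] . ': ' . $row['cnt'] . '<br>';
        }

    Checking the return value of the query call is what would have pointed the original "boolean given" error straight at the failing SQL.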

    Read the article

  • How to count time securely in a Flash game?

    - by user352353
    Hello. I'm developing a Flash game in ActionScript 2, and the issue is that this game has to count time securely. It can't take the time from the Date class, because the Flash Player reads the time from the local computer and the user can change the local clock, so the reported time would be fake. I haven't considered taking the time from the server, because the 3WH (three-way handshake) delay would make that impractical. What do you suggest?
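
    One direction that is often suggested (an assumption on my part, not something stated in the question): measure elapsed time with getTimer(), which returns milliseconds since the Flash Player started and is unaffected by changes to the local clock, and validate the reported values on the server. A minimal ActionScript 2 sketch:

        // Elapsed-time measurement that ignores the local date/time settings.
        // Note: this does not stop tools that slow down or speed up the player
        // itself, so server-side sanity checks are still needed.
        var startMs:Number;

        function startRound():Void {
            startMs = getTimer();
        }

        function elapsedSeconds():Number {
            return (getTimer() - startMs) / 1000;
        }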

    Read the article

  • Using BPEL Performance Statistics to Diagnose Performance Bottlenecks

    - by fip
    Tuning the performance of Oracle SOA 11G applications can be challenging. Because SOA is a platform for building composite applications that connect many applications and "services", when the overall performance is slow the bottleneck could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself. Quickly identifying the bottleneck becomes crucial in tuning the overall performance. Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low-level BPEL engine activities. These statistics can make it a bit easier for you to identify the performance bottleneck. Although the BPEL engine performance statistics are always available, access to and interpretation of them is somewhat obscure in the early and current (PS5) 11G versions. This blog attempts to offer instructions that help you enable, retrieve and interpret the performance statistics, until future versions provide a more pleasant user experience.

    Overview of BPEL Engine Performance Statistics

    The SOA BPEL engine can collect performance statistics and store them in memory. One MBean attribute, StatLastN, configures the size of the memory buffer that stores the statistics. This memory buffer is a "moving window": old statistics are flushed out by new ones once the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, the impact of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data. Once the statistics are collected in the memory buffer, they can be retrieved via another MBean, oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine.

    My friend in Oracle SOA development wrote a simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human-readable form. It does not have a beautiful UI but it is fairly useful. Although from Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request breakdown" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing the simple JSP does do well is that you can save the page and send it to someone for further analysis.

    What follows are the instructions for installing and invoking the BPEL statistics JSP. My friend in SOA development will soon blog about interpreting the statistics. Stay tuned.

    Step 1: Enable BPEL Engine Statistics for Each SOA Server via Enterprise Manager

    First you need to set StatLastN to some number as a way of enabling the collection of BPEL Engine Performance Statistics:

    - EM Console -> soa-infra (Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties
    - Click on "More BPEL Configuration Properties"
    - Click on the attribute "StatLastN" and set its value to some integer number. Typically you want to set it to 1000 or more.

    Step 2: Download and Deploy the bpelstat.war File to the Admin Server

    Note: the WAR file contains a JSP that does NOT have any security restriction. You do NOT want to keep it on your production server for long, as it is a security hazard. Deactivate the war once you are done.
    - Download the bpelstat.war to your local PC
    - At the WebLogic Console, go to Deployments -> Install
    - Click on "upload your file(s)"
    - Click the "Browse" button to upload the deployment to the Admin Server
    - Accept the uploaded file as the path, click Next
    - Check the default option "Install this deployment as an application"
    - Check "AdminServer" as the target server
    - Finish the rest of the deployment with default settings
    - Console -> Deployments
    - Check the box next to the "bpelstat" application
    - Click on the "Start" button. It will change the state of the app from "prepared" to "active"

    Step 3: Invoke the BPEL Statistics Tool

    The bpelstat tool merely calls the MBean of the BPEL server, then collects and displays the in-memory performance statistics. You usually want to do that after some peak load.

    - Go to http://<admin-server-host>:<admin-server-port>/bpelstat
    - Enter the correct admin hostname, port, username and password
    - Enter the SOA server name from which you want to collect the performance statistics, for example SOA_MS1
    - Click Submit
    - Repeat for all SOA servers

    Step 4: Interpret the BPEL Engine Statistics

    You will see a few categories of BPEL statistics on the JSP page. It starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. Then it provides a further breakdown of the measurements through the lifetime of a BPEL request, which is called the "request breakdown".

    1. Overall latency of BPEL processes

    The top of the page shows that the elapsed time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages about 1543.21 ms, while the elapsed time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages about 1765.43 ms. The maximum and minimum latencies are also shown.

    Synchronous process statistics:

        <statistics>
            <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">
            </stats>
        </statistics>

    Asynchronous process statistics:

        <statistics>
            <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">
            </stats>
        </statistics>

    2. Request breakdown

    Under the overall latency categorized by synchronous and asynchronous processes is the "request breakdown". Organized by statistic keys, the request breakdown gives finer-grained performance statistics through the lifetime of the BPEL requests. It uses indentation to show the hierarchy of the statistics.

    Request breakdown:

        <statistics>
            <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">
                <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">
                    <stats key="populate-context" min="0" max="0" average="0.0" count="248">

    Please note that in SOA 11.1.1.6 the statistics under the request breakdown are aggregated across all BPEL processes based on statistic keys; they do not differentiate between BPEL processes. If two BPEL processes happen to have statistics that share the same statistic key, the statistics from the two processes are aggregated together. Keep this in mind as we go through more details below.

    2.1 BPEL process activity latencies

    A very useful measurement in the request breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc.
    The names of the measurements on the JSP page come directly from the names you assign to each BPEL activity. These measurements appear under the statistic key "actual-perform".

    Example 1: Following is the measurement for the BPEL activity "AssignInvokeCreditProvider_Input", which looks like an Assign activity in a BPEL process that assigns an input variable before passing it to the invocation:

        <stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">
            <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">
            </stats>
            <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
            </stats>
        </stats>

    Note: because, as previously mentioned, statistics across all BPEL processes are aggregated based on statistic keys, if two BPEL processes happen to give their Invoke activities the same name, they will show up as one measurement (i.e. one statistic key).

    Example 2: Following is the measurement of the BPEL activity called "InvokeCreditProvider". You can see not only that on average it takes 3.31 ms to finish this call (pretty fast), but also, from the further breakdown, that most of this 3.31 ms was spent on "invoke-service".

        <stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">
            <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="invoke-service" min="1" max="13" average="3.08" count="153">
                <stats key="prep-call" min="0" max="1" average="0.04" count="153">
                </stats>
            </stats>
            <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">
            </stats>
            <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
            </stats>
            <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">
            </stats>
        </stats>

    2.2 BPEL engine activity latency

    Another type of measurement under the request breakdown is the latency of underlying system-level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in the overall engine performance. They include the latency of saving asynchronous requests to the database, and the latency of process dehydration.
    My friend Malkit Bhasin is working on providing more information about interpreting the statistics on engine activities on his blog (https://blogs.oracle.com/malkit/). I will update this blog once the information becomes available.

    Update on 2012-10-02: My friend Malkit Bhasin has published a detailed interpretation of the BPEL service engine statistics on his blog: http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.

    Read the article

  • Do you count a Masters in CS as a negative?

    - by Pete Hodgson
    In my experience interviewing developers, I feel like candidates who've achieved a Masters in Comp Sci tend to be worse programmers on average than those who don't have a Masters. Is that just me, or have others noticed this phenomenon? If so, why would that be the case?

    UPDATE: I appreciate the thoughtful comments. I think I should have been clearer in the comparison I'm making. Given two candidates who graduated from college around the same time, someone who went on to gain a Masters seems on average to be a worse programmer than someone who spent all their time in industry.

    Read the article

  • "Not provided" count has reached 80%! But what about the remaining 20%?

    - by Rajesh Magar
    I am up to date on all the "not provided" reasons, as Google has encrypted all of its searches, but here is the little question banging again and again in my head: if all searches are encrypted with the HTTPS protocol, how is Google Analytics still able to track some (20%) of the organic keyword details? I mean, there are still some keywords appearing in my organic keywords section. So how does Google Analytics track or bypass that HTTPS thing? Thanks in advance!

    Read the article

  • One site being on a subdirectory of another. Does google count this against you?

    - by Mick
    I have created two similar websites (relating to monetary systems). So far, one appears to be loved by Google and the other hated. I'm struggling to work out why. This is a mystery to me because both sites were created by me with the same design philosophy, both in pure HTML. Both are packed to the rafters with references to, and information about, their respective subjects. One issue I'm worried may be the cause has to do with the location of the sites. I got a web hosting package from hostmonster.com for the successful one, but the less liked one is just an "add-on" which sits in a subdirectory of the successful one. I wonder if Google somehow detects this and treats it as a less significant website?

    EDIT: Just to clarify, even though one site is an add-on that sits in a subdirectory of the other, the URL is arranged to look like it is a root. I.e. the unpopular site can be accessed directly with a simple www.myunpopularsite.com name, without specifying any subdirectory.

    EDIT: Just in case it's important... say the popular site is called pop.com and the unpopular one unpop.com. In the webspace I've purchased, there is a directory called public_html. This is where I put the index.htm and all the other files of my popular site. When I purchased the add-on unpop.com, I made a subdirectory of public_html called unpop. It is within this "public_html/unpop/" that I place the index.htm and all the other files of my unpopular site. Typing www.unpop.com into the address bar of a browser links directly to the contents of "public_html/unpop/" and the user is not aware that this site is sitting in a subdirectory of another site. BUT if you type "www.pop.com/unpop" into the address bar of a browser you DO see the unpopular site.

    Read the article

  • Does SEO optimisation count on the responsive side of a site?

    - by Rick Donohoe
    I'm looking at making some SEO optimisation fixes, and at this point I'm sorting out the heading structure and keywords: H1s, H2s, etc. We have a site where there are a number of similar blocks; one is always visible and one is hidden depending on the screen size. This is our method of making a single site responsive. Firstly, how does this technique affect SEO, and in general does the responsive side of a site matter at all to search engines? What I mean by this is: if the site has different content depending on screen size, which content would the search spider crawl?

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and second I will give some answers I have already found.

    Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package (which I could have used some day, yes). Of course, package dependencies could force me to have programs installed which I will never use.

    Notes:

    - Find the packages which consume much space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use Thunderbird), aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.

    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems to be down. There is also a program called Kleen (http://code.google.com/p/kleen/) but it won't compile on 11.10. I hacked it to compile, but the results are unusable; for example the g++ package was marked as not used for 203, but I had actually used it seconds before to compile Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable popularity-contest, so I can't find this log file.
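
    A rough sketch of one way to approximate "have I ever run this?", not a definitive tool: look at the last-access times of a package's executables. This assumes access times are being recorded at all; with the default relatime mount option they are only updated occasionally, so treat the output as a hint (totem is just the example package from the question):

        # List each executable shipped by the package with its last access time.
        for f in $(dpkg -L totem | grep '^/usr/bin/'); do
            stat -c '%x  %n' "$f"
        done

    An executable whose access time is months old is at least a candidate for removal; one touched recently clearly is not.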

    Read the article

  • When calculating how many days between 2 dates, should you include both dates in the count, or neither, or 1?

    - by Andy
    I hope this question is alright to ask here. I am trying to make an algorithm that counts how many days there are between two dates, for example 3/1/2012 and 3/2/2012. What's the correct answer, or the most popular choice, and which should I use? In this case, if I include neither of the dates I am comparing, it's 0. If I include one of them (say, the start date), it's 1. Lastly, if I include both, it's 2. Thanks.
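
    For what it's worth, the convention most date libraries follow is the middle one: the difference counts one endpoint (the number of "nights between"), so consecutive dates differ by 1. A small C# illustration of that behaviour, with an adjustment if both endpoints should count:

        using System;

        class DayDiffDemo
        {
            static void Main()
            {
                var start = new DateTime(2012, 3, 1);
                var end = new DateTime(2012, 3, 2);

                Console.WriteLine((end - start).Days);      // 1: excludes the start date
                Console.WriteLine((end - start).Days + 1);  // 2: counts both endpoints
            }
        }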

    Read the article

  • Does using a PHP framework count as experience using PHP to a company that doesn't use that framework?

    - by sq1020
    I've started working at a company that uses the Yii PHP framework. I'm mostly using Yii but also some frontend stuff like jQuery and Ajax. What I'm worried about is limiting my skill set to a framework that isn't very popular. I mean, if the company I worked for was using Ruby on Rails or even Django, I wouldn't have this feeling of concern for the future. My first question is then, in regards to being able to find a job in the future somewhere else, is my feeling of concern warranted? Secondly, I see a lot of PHP jobs out there but do you think experience using a PHP framework counts as valuable experience to a company that doesn't use that particular framework or any framework at all?

    Read the article

  • Encapsulating code in F# (Part 1)

    - by MarkPearl
    I have been looking at F# for a while now and have seen a few really interesting samples and how-to snippets. This has been great for seeing the basic outline of the language and the possibilities; however, a nagging question in the back of my mind has been: what does an F# project look like? How do I group code in F# so that it can be modularized and brought in and out of a project easily? My Expert F# book has an entire chapter (7) dedicated to this, and after browsing the other chapters of the book I decided that this topic was something I really wanted to know more about now! Because of my C# background I keep trying to think in F# in terms of objects. So to try and get a clearer idea of how to do things the F# way, I am first going to take a very simplified C# example and try to "translate" it.

        using System;

        namespace ConsoleApplication1
        {
            namespace ExampleOfEncapsulationInCSharp
            {
                class Program
                {
                    static void EncapsulatedVariableInAMethod()
                    {
                        int count = 10;
                        Console.WriteLine(count);
                    }

                    static void Main(string[] args)
                    {
                        EncapsulatedVariableInAMethod();
                        Console.ReadLine();
                    }
                }
            }
        }

    In the above example the count integer is encapsulated within the EncapsulatedVariableInAMethod method. You couldn't access the count variable from outside the scope of its parent method, but you have full access to it within the method. Let's look at my F# equivalent…

        open System

        let EncapsulatedVariableInAMethod =
            let count = 10
            Console.WriteLine(count)
            ()

        EncapsulatedVariableInAMethod
        Console.ReadLine()

    Now, when I first attempted to write the F# code I got stuck… I didn't have the Console.WriteLine calls but had the following…

        open System

        let EncapsulatedVariableInAMethod =
            let count = 10

        EncapsulatedVariableInAMethod
        Console.ReadLine()

    The compiler didn't like the let before count = 10. This is because every F# expression must evaluate to a value. If I did not want to make the Console call, I would still need to evaluate the expression to something, and for this reason the unit type is provided. I could have done something like…

        open System

        let EncapsulatedVariableInAMethod =
            let count = 10
            ()

        EncapsulatedVariableInAMethod
        Console.ReadLine()

    …which the compiler would be happy with.
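
    As a small aside that is not part of the original post: the closest F# analogue of a parameterless C# method is a function that takes unit, which also defers the Console call until the function is actually invoked. A sketch in the same spirit as the code above:

        open System

        // A function of type unit -> unit: nothing runs until it is called.
        let encapsulatedVariableInAMethod () =
            let count = 10
            Console.WriteLine(count)

        encapsulatedVariableInAMethod ()
        Console.ReadLine() |> ignore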

    Read the article

  • Does a large (hidden) submenu count towards site content in terms of determining page similarities?

    - by Name
    Basically, I have this site that recently lost a lot of traffic after I optimized the HTML; the exact reasons for this are uncertain. The graph of impressions (the number of times a page appears on search listings) is continuously going down like an e^-x function. Because the content, which previously occupied five pages of tables, now fits within a few paragraph tags, the menu now occupies about 80% of the live HTML code, and I am starting to have doubts about whether this affects the "similar pages" factor that Google punishes. Questions: As far as I know, Google ignores invisible material, and the submenus are only visible when hovered over. Has anything at all changed in this area? If I ajax in the submenus, leaving only the main eight menu items to load, will I be punished for "hiding" information? Is the idea worth testing or is it frankly retarded?

    Read the article

  • Why does calling IEnumerable<string>.Count() create an additional assembly dependency?

    - by Gishu
    Assume this chain of DLL references:

        Tests.dll >> Automation.dll >> White.Core.dll

    With the following line of code in Tests.dll, everything builds:

        result.MissingPaths

    Now when I change this to:

        result.MissingPaths.Count()

    I get the following build error for Tests.dll: "White.UIItem is defined in an assembly that is not referenced. You must add a reference to White.Core.dll." And I don't want to do that because it breaks my layering. Here is the type definition for result, which is in Automation.dll:

        public class HasResult
        {
            public HasResult(IEnumerable<string> missingPaths)
            {
                MissingPaths = missingPaths;
            }

            public IEnumerable<string> MissingPaths { get; set; }

            public bool AllExist
            {
                get { return !MissingPaths.Any(); }
            }
        }

    Down the call chain, the input parameter to this ctor is created via (the TreeNode class is in White.Core.dll):

        assetPaths.Where(assetPath => !FindTreeNodeUsingCache(treeHandle, assetPath));

    Why does this dependency leak when calling Count() on IEnumerable? I then suspected that lazy evaluation was causing this (for some reason), so I slotted in a ToArray() in the above line, but that didn't work.

    Update 2011-01-07: Curiouser and curiouser! It won't build until I add a White.Core reference. So I add a reference and build it (in order to find the elusive dependency source). Open it up in Reflector and the only references listed are Automation, mscorlib, System.Core and NUnit. So the compiler threw away the White reference as it was not needed. ILDASM also confirms that there is no White AssemblyRef entry. Any ideas on how to get to the bottom of this thing (primarily for 'now I wanna know why' reasons)? What are the chances that this is a VS2010/MSBuild bug?

    Update 2011-01-07 #2: As per Shimmy's suggestion, I tried calling the method explicitly as an extension method, Enumerable.Count(result.MissingPaths), and it stops cribbing (not sure why). However I moved some code around after that and now I'm getting the same issue at a different location using IEnumerable, this time reading and filtering lines out of a file on disk (totally unrelated to White). Seems like it's a 'symptom-fix'.

        var lines = File.ReadLines(aFilePath).ToArray();

    Once again, if I remove the ToArray() it compiles again; it seems that any method that causes the enumerable to be evaluated (ToArray, Count, ToList, etc.) causes this. Let me try and get a working tiny-app to demo this issue...

    Update 2011-01-07 #3: Phew! More information... It turns out the problem is just in one source file; this file is LINQ-phobic. Any call to an Enumerable extension method has to be explicitly called out. The refactorings that I did caused a new method, which had some LINQ, to be moved into this source file :) Still no clue as to why this class dislikes LINQ.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using G.S.OurAutomation.Constants;
        using G.S.OurAutomation.Framework;
        using NUnit.Framework;

        namespace G.S.AcceptanceTests
        {
            public abstract class ConfigureThingBase : OurTestFixture
            {
                ....

                private static IEnumerable<string> GetExpectedThingsFor(string param)
                {
                    // even this won't compile - although it compiles fine in an
                    // adjoining source file in the same assembly
                    //IEnumerable<string> s = new string[0];
                    //Console.WriteLine(s.Count());

                    // this is the line that is now causing a build failure
                    // var expectedInfo = File.ReadLines(someCsvFilePath)
                    //     .Where(line => !line.StartsWith("REM", StringComparison.InvariantCultureIgnoreCase))
                    //     .Select(line => line.Replace("%PLACEHOLDER%", param))
                    //     .ToArray();

                    // Unrolling the LINQ above removes the build error
                    var expectedInfo = Enumerable.ToArray(
                        Enumerable.Select(
                            Enumerable.Where(
                                File.ReadLines(someCsvFilePath),
                                line => !line.StartsWith("REM", StringComparison.InvariantCultureIgnoreCase)),
                            line => line.Replace("%PLACEHOLDER%", param)));
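
    Not part of the original question, but one workaround sometimes suggested for this kind of layering problem is to materialize the sequence inside Automation.dll and expose an array, so the test project can read a count without needing any Enumerable extension methods at all. A sketch under that assumption (whether it sidesteps the White.UIItem error depends on what actually drags that type in):

        // Hypothetical variation of the HasResult class from the question.
        using System.Collections.Generic;
        using System.Linq;

        public class HasResult
        {
            public HasResult(IEnumerable<string> missingPaths)
            {
                // Evaluate the (possibly lazy) sequence once, inside Automation.dll.
                MissingPaths = missingPaths.ToArray();
            }

            public string[] MissingPaths { get; private set; }

            public bool AllExist
            {
                get { return MissingPaths.Length == 0; }
            }
        }

    Tests.dll can then use result.MissingPaths.Length, which is a plain property access rather than an extension-method call.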

    Read the article

  • Refresh page isn't working in ASP.NET when using a TreeView

    - by Greg
    Hi, I am trying to refresh an ASP.NET page using this command:

        <meta http-equiv="Refresh" content="10"/>

    On that page I have two TreeViews. The refresh works fine when I just open the page, but when I click on one of the TreeViews and expand it, the refresh stops working and the page isn't being refreshed. Any ideas why this can happen? Is there any connection to the TreeView being expanded? Here is the full code of the page:

        public partial class Results : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            // Function that moves reviewed yellow card to reviewed tree
            protected void ycActiveTree_SelectedNodeChanged(object sender, EventArgs e)
            {
                ycActiveTree.SelectedNode.Text = "Move To Active";
                ycReviewedTree.PopulateNodesFromClient = false;
                ycReviewedTree.Nodes[ycReviewedTree.Nodes.Count - 1].ChildNodes.Add(ycActiveTree.SelectedNode.Parent);
                Application["reviewedTree"] = new ArrayList();
                int count = ((ArrayList)Application["activeTree"]).Count;

                // Move all the nodes from activeTree application to reviewedTree application
                for (int i = 0; Application["activeTree"] != null && i < count; i++)
                {
                    ((ArrayList)Application["reviewedTree"]).Add(((ArrayList)Application["activeTree"])[i]);
                    ((ArrayList)Application["activeTree"]).RemoveAt(0);
                }
            }

            protected void ycActiveTree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
            {
                if (Application["idList"] != null && e.Node.Depth == 0)
                {
                    string[] words = ((String)Application["idList"]).Split(' ');

                    // Yellow Card details
                    TreeNode child = new TreeNode("");

                    // Go over all the yellow card details and populate the treeview
                    for (int i = 1; i < words.Length; i++)
                    {
                        child.SelectAction = TreeNodeSelectAction.None;

                        // Same yellow card
                        if (words[i] != "*")
                        {
                            // End of details and start of point ip's
                            if (words[i] == "$")
                            {
                                // Add the yellow card node
                                TreeNode yellowCardNode = new TreeNode(child.Text);
                                yellowCardNode.SelectAction = TreeNodeSelectAction.Expand;
                                e.Node.ChildNodes.Add(yellowCardNode);
                                child.Text = "";
                            }
                            // yellow card details
                            else
                            {
                                child.Text = child.Text + words[i] + " ";
                            }
                        }
                        // End of yellow card
                        else
                        {
                            child.PopulateOnDemand = false;
                            child.SelectAction = TreeNodeSelectAction.None;

                            // Populate the yellow card node
                            e.Node.ChildNodes[e.Node.ChildNodes.Count - 1].ChildNodes.Add(child);
                            TreeNode moveChild = new TreeNode("Move To Reviewed");
                            moveChild.PopulateOnDemand = false;
                            moveChild.SelectAction = TreeNodeSelectAction.Select;
                            e.Node.ChildNodes[e.Node.ChildNodes.Count - 1].ChildNodes.Add(moveChild);
                            child = new TreeNode("");
                            Application["activeTree"] = new ArrayList();
                            ((ArrayList)Application["activeTree"]).Add(e.Node.ChildNodes[e.Node.ChildNodes.Count - 1]);
                        }
                    }
                }
                // If there arent new yellow cards
                else if (Application["activeTree"] != null)
                {
                    // Populate the active tree
                    for (int i = 0; i < ((ArrayList)Application["activeTree"]).Count; i++)
                    {
                        e.Node.ChildNodes.Add((TreeNode)((ArrayList)Application["activeTree"])[i]);
                    }
                }

                // If there were new yellow cards and nodes that moved from reviewed tree to active tree
                if (Application["idList"] != null && Application["activeTree"] != null && e.Node.ChildNodes.Count != ((ArrayList)Application["activeTree"]).Count)
                {
                    for (int i = e.Node.ChildNodes.Count; i < ((ArrayList)Application["activeTree"]).Count; i++)
                    {
                        e.Node.ChildNodes.Add((TreeNode)((ArrayList)Application["activeTree"])[i]);
                    }
                }

                // Nullify the yellow card id's
                Application["idList"] = null;
            }

            protected void ycReviewedTree_SelectedNodeChanged(object sender, EventArgs e)
            {
                ycActiveTree.PopulateNodesFromClient = false;
                ycReviewedTree.SelectedNode.Text = "Move To Reviewed";
                ycActiveTree.Nodes[ycActiveTree.Nodes.Count - 1].ChildNodes.Add(ycReviewedTree.SelectedNode.Parent);
                int count = ((ArrayList)Application["reviewedTree"]).Count;

                // Move all the nodes from reviewedTree application to activeTree application
                for (int i = 0; Application["reviewedTree"] != null && i < count; i++)
                {
                    ((ArrayList)Application["activeTree"]).Add(((ArrayList)Application["reviewedTree"])[i]);
                    ((ArrayList)Application["reviewedTree"]).RemoveAt(0);
                }
            }

            protected void ycReviewedTree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
            {
                if (Application["reviewedTree"] != null)
                {
                    // Populate the reviewed tree
                    for (int i = 0; i < ((ArrayList)Application["reviewedTree"]).Count; i++)
                    {
                        e.Node.ChildNodes.Add((TreeNode)((ArrayList)Application["reviewedTree"])[i]);
                    }
                }
            }
        }

    Thanks, Greg

    Read the article

  • Program that can count the most frequently used keys pressed on keyboard?

    - by sunpech
    Any recommendations on a program that I can manually start/stop to run in the background and that will measure (count) how many times each key is pressed on the keyboard? As a software developer, I'd like to know which keys are used the most when I develop, from project to project and/or from language to language. I'd like to be able to manually start and stop the program. This is for a Windows PC (XP, Vista, 7).

    Read the article
