Search Results

Search found 8250 results on 330 pages for 'dunn less'.

  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.

    Requirements. There are a couple of things we want to test, in the following priority:

    - Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile?
    - Code quality: do we have code flaws like duplicates, too-complex methods or other violations of a defined set of rules?

    Also, the tool must run headless (command line, Ant, ...), and we want to do analysis on a partial code base (changed sources only).

    Tools. We did a little research and found the following tools that could potentially help:

    - Cast Application Intelligence Platform (AIP): seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format.
    - Toad for Oracle: the Professional version is said to include something called Xpert that validates a set of rules against a code base.
    - Sonar + PL/SQL plugin: uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    - Semantic Designs DMSToolkit: quite general analysis of a source code base. Command line available?
    - Semantic Designs Clones Detector: detects clones. But also via command line?
    - Fortify Source Code Analyzer: seems to be focused on security issues. But maybe it is extensible?
    - more...

    So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences?

    Related questions on SO:
    http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql
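
    For the first requirement (would the sources compile against a given database), one low-tech sketch, assuming an Oracle schema and a build step that can run a SQL script: recompile the schema, then fail the build if user_errors returns any rows.

        -- Hedged sketch (Oracle assumed): recompile invalid objects, then
        -- list any compilation errors; a non-empty result fails the build.
        BEGIN
          DBMS_UTILITY.COMPILE_SCHEMA(schema => USER, compile_all => FALSE);
        END;
        /
        SELECT name, type, line, position, text
        FROM   user_errors
        ORDER  BY name, sequence;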

  • How to find maximal frequent itemsets from a large transactional data file

    - by ANIL MANE
    Hi, I have an input file that contains a large number of transactions, like:

        Transaction ID   Items
        T1    Bread, milk, coffee, juice
        T2    Juice, milk, coffee
        T3    Bread, juice
        T4    Coffee, milk
        T5    Bread, milk
        T6    Coffee, bread
        T7    Coffee, bread, juice
        T8    Bread, milk, juice
        T9    Milk, bread, coffee
        T10   Bread
        T11   Milk
        T12   Milk, coffee, bread, juice

    I want the occurrence count of every unique item, like:

        Item Name   Count
        Bread       9
        Milk        8
        Coffee      7
        Juice       6

    and from that I want to build an FP-tree. By traversing this tree I then want the maximal frequent itemsets, as follows.

    The basic idea of the method is to dispose of nodes in each "layer" from bottom to top. The concept of a "layer" here is different from the common concept of a layer in a tree: the nodes in a "layer" are the nodes that correspond to the same item and sit in a linked list from the "head table". For the nodes in a "layer", the NBN method is used to dispose of the nodes from left to right along the linked list. To use the NBN method, two extra fields are added to each node in the ordered FP-tree: the tag field of node N stores whether N is a maximal frequent itemset, and the count' field stores the support count information of the nodes to its left.

    In the figure, the first node to be disposed of is "juice:2". If min_sup is less than or equal to 2, then "bread, milk, coffee, juice" is a maximal frequent itemset: first output juice:2 and set the tag field of "coffee:3" to false (the tag field of each node is true initially). Next, check whether each of the four "juice:1" nodes to the right is a subset of juice:2; if the itemset that a "juice:1" node corresponds to is a subset of juice:2, set that node's tag field to false. In the following process, when the tag field of the node being disposed of is false, we can omit the node after the same tagging. If min_sup is greater than 2, then check whether each of the four "juice:1" nodes to the right is a subset of juice:2; if so, set the count' field of the node to the sum of the former count' and 2. After all the "juice" nodes are disposed of, begin to dispose of the node "coffee:3".

    Any suggestions or available source code welcome. Thanks in advance.
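
    As a concrete starting point, here is a rough sketch of just the first step (the per-item occurrence counts) in Java; the class and method names and the comma-separated input format are assumptions, and this is not an FP-tree implementation:

        import java.util.*;

        class ItemCounter {
            // Counts item occurrences across transactions; one
            // comma-separated transaction per input line is assumed.
            static Map<String, Integer> countItems(List<String> lines) {
                Map<String, Integer> counts = new HashMap<String, Integer>();
                for (String line : lines) {
                    for (String item : line.split(",")) {
                        String name = item.trim().toLowerCase();
                        if (name.isEmpty()) continue; // tolerate trailing commas (see T9)
                        Integer c = counts.get(name);
                        counts.put(name, c == null ? 1 : c + 1);
                    }
                }
                return counts;
            }
        }

    Building the ordered FP-tree and running the layer-by-layer NBN pass described above would sit on top of these counts.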

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company that has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we are still working on is finding what works best for us in the automated acceptance-test arena. We want to take our well-formed user stories and use these to drive the code in a test-driven manner. This will also give us acceptance-level tests for each user story, which we can then automate.

    To date, we've tried Fit, FitNesse and Selenium. Each has its advantages, but we've also had real issues with them. With Fit and FitNesse, we can't help but feel they overcomplicate things, and we've had many technical issues using them. The business hasn't fully bought into these tools and isn't particularly keen on maintaining the scripts all the time (and they aren't big fans of the table style). Selenium is really good, but slow, and it relies on real-time data and resources.

    One approach we are now considering is using the JUnit framework to provide similar functionality. Rather than testing just a small unit of work with JUnit, why not use it to write a test covering an acceptance-level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a JUnit test that starts executing application code at the point of entry for the policy-details link, but covers all code and logic down to the stubbed data-access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page.

    This seems to me to have the following advantages:

    - Simplicity (no additional frameworks required)
    - Zero effort to integrate with our continuous-integration build server (since it already handles our JUnit tests)
    - Full skill set already present in the team (it's just a JUnit test, after all)

    And the downsides being:

    - Less customer involvement (though they are heavily involved in writing the user stories in the first place, from which the acceptance tests will be written)
    - Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class versus a free-text specification a la Fit or FitNesse

    So, my question really is: have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.
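
    For illustration, this is roughly the shape such a test would take. Every class and method name below is hypothetical, a sketch of the idea rather than working project code:

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class ViewPolicyDetailsAcceptanceTest {

            // "As a user I would like to see basic details of my policy..."
            @Test
            public void userSeesBasicDetailsOfTheirPolicy() {
                // stubbed data-access layer, as described above
                PolicyController controller =
                        new PolicyController(new StubPolicyRepository());

                PageModel page = controller.showPolicyDetails("POL-123");

                assertEquals("policyDetails", page.nextView());
                assertEquals("POL-123", page.get("policyNumber"));
                assertEquals("Active", page.get("status"));
            }
        }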

  • Can't make my WCF extension work

    - by Sergio Romero
    I have a WCF solution that consists of the following class libraries:

    1. Exercise.Services: contains the implementation classes for the services.
    2. Exercise.ServiceProxy: contains the classes that are instantiated in the client.
    3. Exercise.HttpHost: contains the services (*.svc files).

    I'm calling the service from a console application, and the "first version" works really well, so I took the next step, which is to create a custom ServiceHostFactory, ServiceHost, and InstanceProvider so I can use constructor injection in my services, as it is explained in this article. These classes are implemented in yet another class library:

    4. Exercise.StructureMapWcfExtension

    Now, even though I've modified my service like this:

        <%@ ServiceHost Language="C#" Debug="true"
            Factory="Exercise.StructureMapWcfExtension.StructureMapServiceHostFactory"
            Service="Exercise.Services.PurchaseOrderService" %>

    I always get the following exception:

        System.ServiceModel.CommunicationException: Security negotiation failed
        because the remote party did not send back a reply in a timely manner.
        This may be because the underlying transport connection was aborted.

    It fails on this line of code:

        public class PurchaseOrderProxy : ClientBase<IPurchaseOrderService>, IPurchaseOrderService
        {
            public PurchaseOrderResponse CreatePurchaseOrder(PurchaseOrderRequest purchaseOrderRequest)
            {
                return base.Channel.CreatePurchaseOrder(purchaseOrderRequest); // Fails here
            }
        }

    But that is not all. I added a trace to the web.config file, and this is the error that appears in the log file:

        System.InvalidOperationException: The service type provided could not be
        loaded as a service because it does not have a default (parameter-less)
        constructor. To fix the problem, add a default constructor to the type,
        or pass an instance of the type to the host.

    So this means that my ServiceHostFactory is never being hit; I even set a breakpoint in both its constructor and its method, and they never get hit. I've added a reference to the StructureMapWcfExtension library from all the other ones (even the console client), one by one, to no avail. I also tried to use the option in the host's web.config file to configure the factory, like so:

        <serviceHostingEnvironment>
          <serviceActivations>
            <add service="Exercise.Services.PurchaseOrderService"
                 relativeAddress="PurchaseOrderService.svc"
                 factory="Exercise.StructureMapWcfExtension.StructureMapServiceHostFactory"/>
          </serviceActivations>
        </serviceHostingEnvironment>

    That didn't work either. Please, I need help in getting this to work so I can incorporate it to our project. Thank you.

    UPDATE: Here's the service host factory's code:

        namespace Exercise.StructureMapWcfExtension
        {
            public class StructureMapServiceHostFactory : ServiceHostFactory
            {
                private readonly Container Container;

                public StructureMapServiceHostFactory()
                {
                    Container = new Container();
                    new ContainerConfigurer().Configure(Container);
                }

                protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
                {
                    return new StructureMapServiceHost(Container, serviceType, baseAddresses);
                }
            }

            public class ContainerConfigurer
            {
                public void Configure(Container container)
                {
                    container.Configure(r => r.For<IPurchaseOrderFacade>().Use<PurchaseOrderFacade>());
                }
            }
        }

  • jQuery Validation plugin: checkbox groups and error message issues

    - by boomturn
    I've put together a form using the jQuery Validation plugin, and all inputs are fine, with working validation and error messages, except for checkboxes. I have two checkbox problems.

    The first is that the Validation plugin API doesn't seem to handle checkboxes in grouped contexts (I'm using fieldsets for grouping). Found several approaches to the issue here, including reference to a post by Rebecca Murphey for a more general case using a custom method and class. Adapting that to this situation:

        jQuery.validator.addMethod('required_group', function(val, el) {
            var fieldParent = $(el).closest('fieldset');
            return fieldParent.find('.required_group:checked').length;
        });
        jQuery.validator.addClassRules('required_group', {
            'required_group': true
        });
        jQuery.validator.messages.required_group = 'Please check at least one box.';

    This sort of works, but produces error messages on every checkbox, and only removes them as each box is clicked. This is not an acceptable situation for the user, who can only get rid of them by clicking false positives. Ideally, I guess what's needed is something to prevent or eliminate the extra messages before they are displayed, and to use errorPlacement to display a single error message in the parent fieldset that would then be removed by a click on any checkbox. Less ideally, maybe they would all display, but an event handler could turn off the full set of redundant messages with a click, which is what the approach offered by tvanfosson appears to do. (Another customized approach here, but I couldn't get it to work.) I guess I should also note this form requires the checkboxes to have different names.

    My second problem is that one of the fieldsets with checkboxes in the form also contains a nested fieldset of checkboxes under one of the outer checkboxes. So in addition to the first-level one-box-checked requirement, if the particular checkbox containing the second-level checkboxes is checked, then at least one of the second-level boxes must be checked. I'm not sure about the right approach; I'm guessing what needs to happen (following the above scheme) is that the trigger checkbox would use toggleClass to add/remove the 'required_group' class on all the checkboxes in the subfield, which would then (hopefully) behave the same as the parent field:

        $("#triggerCheckbox").click(function () {
            $(this).find(":checkbox").toggleClass("required_group");
        });

    Any suggestions or ideas welcome. I'm well beyond my limited jQuery skills on this one and would be happy to hear that I missed simple, elegant and/or obvious ways to do this!
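
    One direction worth sketching for the single-message requirement: the plugin also has a groups option that folds several fields' messages into one, which can be combined with errorPlacement to park that one message on the fieldset. The form and field names below are made up, and this sketch is untested against the nested-fieldset case:

        $("#myForm").validate({
            // one combined message for the whole checkbox set
            groups: {
                checkboxSet: "box_a box_b box_c"
            },
            errorPlacement: function (error, element) {
                if (element.is(":checkbox")) {
                    error.appendTo(element.closest("fieldset"));
                } else {
                    error.insertAfter(element);
                }
            }
        });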

  • Algorithm to Render a Horizontal Binary-ish Tree in Text/ASCII form

    - by Justin L.
    It's a pretty normal binary tree, except for the fact that one of the nodes may be empty. I'd like to find a way to output it in a horizontal way (that is, the root node is on the left and it expands to the right). I've had some experience expanding trees vertically (root node at the top, expanding downwards), but I'm not sure where to start in this case. Preferably, it would follow these couple of rules:

    1. If a node has only one child, it can be skipped as redundant (an "end node", with no children, is always displayed).
    2. All nodes of the same depth must be aligned vertically; all nodes must be to the right of all less-deep nodes and to the left of all deeper nodes.
    3. Nodes have a string representation which includes their depth.
    4. Each "end node" has its own unique line; that is, the number of lines is the number of end nodes in the tree, and when an end node is on a line, there may be nothing else on that line after that end node.
    5. As a consequence of the last rule, the root node should be in either the top left or the bottom left corner; top left is preferred.

    For example, this is a valid tree, with six end nodes (a node is represented by a name and its depth):

        [a0]------------[b3]------[c5]------[d8]
           \               \         \----------[e9]
            \               \----[f5]
             \--[g1]--------[h4]------[i6]
                  \            \--------------------[j10]
                   \-[k3]

    Which represents the horizontal, explicit binary tree:

         0                      a
                               / \
         1                    g   *
                             / \   \
         2                  *   *   *
                           /     \   \
         3                k       *   b
                                 /   / \
         4                      h   *   *
                               / \   \   \
         5                    *   *   f   c
                             /     \     / \
         6                  *       i   *   *
                           /           /     \
         7                *           *       *
                         /           /         \
         8              *           *           d
                       /           /
         9            *           e
                     /
        10          j

    (branches folded for compactness; * representing redundant, one-child nodes; note that *'s are actual nodes, storing one child each, just with names omitted here for presentation's sake)

    (also, to clarify, I'd like to generate the first, horizontal tree; not this vertical tree)

    I say language-agnostic because I'm just looking for an algorithm; I say ruby because I'm eventually going to have to implement it in Ruby anyway. Assume that each Node data structure stores only its id, a left node, and a right node. A master Tree class keeps track of all nodes and has adequate algorithms to find:

    - A node's nth ancestor
    - A node's nth descendant
    - The generation of a node
    - The lowest common ancestor of two given nodes

    Anyone have any ideas of where I could start? Should I go for the recursive approach? Iterative?
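
    Since Ruby is the eventual target, here is a tiny sketch of one sub-step only (not the layout itself): a depth-first walk that yields the end nodes in the top-to-bottom order their lines would appear, left subtree first. Node#left and Node#right returning nil for a missing child is an assumption:

        # Sketch: list end nodes in line order (Ruby 1.9+ for flat_map).
        def end_nodes(node)
          return [node] if node.left.nil? && node.right.nil?
          [node.left, node.right].compact.flat_map { |child| end_nodes(child) }
        end

    The remaining work, assigning each depth a column and drawing the backslash runs between a node's line and its children's lines, would be driven off this ordering.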

  • DOM memory issue with IE8 (inserting lots of JSON data)

    - by okie.floyd
    I am developing a small web utility that displays some data from some database tables. I have the utility running fine on FF, Safari, Chrome..., but the memory management on IE8 is horrendous. The largest JSON request I do will return information to create around 5,000 or so rows in a table within the browser (3 columns in the table).

    I'm using jQuery to get the data (via getJSON). To remove the old/existing table, I'm just doing a $('#my_table_tbody').empty(). To add the new info to the table, within the getJSON callback, I am just appending each table row that I am creating to a variable, and then once I have them all, I am using $('#my_table_tbody').append(myVar) to add it to the existing tbody. I don't add the table rows as they are created because that seems to be a lot slower than just adding them all at once.

    Does anyone have any recommendation on what someone should do who is trying to add thousands of rows of data to the DOM? I would like to stay away from pagination, but I'm wondering if I don't have a choice.

    Update 1

    Here is the code I was trying after the innerHTML suggestion (the tr/td tags were stripped when I first posted; they are restored here):

        /* Assuming a div called 'main_area' holds the table */
        document.getElementById('main_area').innerHTML = '';

        $.getJSON("my_server", {my: JSON, args: are, in: here}, function(j) {
            var mylength = j.length;
            var k = 0;
            var tmpText = '';
            tmpText += /* the table, thead stuff, and tbody tags go here */;
            for (k = mylength - 1; k >= 0; k--) {
                tmpText += '<tr class="' + j[k].row_class + '">'
                         + '<td class="col1_class">' + j[k].col1 + '</td>'
                         + '<td class="col2_class">' + j[k].col2 + '</td>'
                         + '<td class="col3_class">' + j[k].col3 + '</td>'
                         + '</tr>';
            }
            document.getElementById('main_area').innerHTML = tmpText;
        });

    That is the gist of it. I've also tried using just a $.get request, having the server send the formatted HTML, and just setting that in the innerHTML (i.e. document.getElementById('main_area').innerHTML = j;).

    Thanks for all of the replies. I'm floored by the fact that you all are willing to help.
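
    One widely used refinement of the same idea, sketched here for comparison: push the fragments into an array and join once at the end. On IE8 especially, repeated string += in a long loop tends to be much slower than a single join. The field names are the same illustrative ones as above:

        var parts = [];
        for (var k = j.length - 1; k >= 0; k--) {
            parts.push('<tr class="', j[k].row_class, '"><td class="col1_class">',
                       j[k].col1, '</td><td class="col2_class">', j[k].col2,
                       '</td><td class="col3_class">', j[k].col3, '</td></tr>');
        }
        document.getElementById('main_area').innerHTML = parts.join('');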

  • Optimizing spacing of mesh containing a given set of points

    - by Feynman
    I tried to summarize this as best as possible in the title. I am writing an initial-value-problem solver in the most general way possible. I start with an arbitrary number of initial values at arbitrary locations (inside a boundary). The first part of my program creates a mesh/grid (I am not sure which is the correct nuance), with N points total, that contains all the initial values. My goal is to optimize the mesh such that the spacing is as uniform as possible. My solver seems to work half decently (it needs some more obscure debugging that is not relevant here).

    I am starting with one dimension. I intend to generalize the algorithm to an arbitrary number of dimensions once I get it working consistently. I am writing my code in Fortran, but feel free to reply with pseudocode or the language of your choice.

    Allow me to elaborate with an example. Say I am working on a closed interval [1,10]:

        xmin=1
        xmax=10

    Say I have 3 initial points: xmin, 5 and xmax:

        num_ivc=3
        known(num_ivc)=[xmin,5,xmax]  //my arrays start at 1. Assume "known" starts sorted

    I store my mesh/grid points in an array called coord. Say I want 10 points total in my mesh/grid:

        N=10
        coord(10)

    Remember, all this is arbitrary (except the variable names, of course). The algorithm should set coord to {1,2,3,4,5,6,7,8,9,10}.

    Now for a less trivial example:

        num_ivc=3
        known(num_ivc)=[xmin,5.5,xmax]

    or just

        num_ivc=1
        known(num_ivc)=[5.5]

    Now, would you have 5 evenly spaced points on the interval [1, 5.5] and 5 evenly spaced points on the interval (5.5, 10]? But there is more space between 1 and 5.5 than between 5.5 and 10. So would you have 6 points on [1, 5.5] followed by 4 on (5.5, 10]? The key is to minimize the difference in spacing.

    I have been working on this for 2 days straight, and I can assure you it is a lot trickier than it sounds. I have written code that:

    - only works if N is large
    - only works if N is small
    - only works if the known points are close together
    - only works if the known points are far apart
    - only works if at least one of the known points is near a boundary
    - only works if none of the known points are near a boundary

    So as you can see, I have coded the gamut of almost-solutions. I cannot figure out a way to get it to perform equally well in all possible scenarios (that is, create the optimum spacing).
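
    Since any language was invited, here is one way to think about it, sketched in Python: give each gap between consecutive known points a share of the remaining N - num_ivc points proportional to its width, repair the rounding drift, then space each gap's points evenly. It assumes known[] is sorted, contains both boundaries, and N >= num_ivc; truly minimizing the spacing difference may need a smarter allocation step (e.g. largest remainder), so treat this as a starting shape only:

        def build_mesh(known, n):
            gaps = list(zip(known, known[1:]))
            total = known[-1] - known[0]
            free = n - len(known)                   # points still to place
            alloc = [int(round(free * (b - a) / total)) for a, b in gaps]
            while sum(alloc) > free:                # fix rounding drift
                alloc[alloc.index(max(alloc))] -= 1
            while sum(alloc) < free:
                alloc[alloc.index(min(alloc))] += 1
            coords = []
            for (a, b), k in zip(gaps, alloc):
                step = (b - a) / (k + 1)
                coords.append(a)
                coords.extend(a + i * step for i in range(1, k + 1))
            coords.append(known[-1])
            return coords

        print(build_mesh([1.0, 5.5, 10.0], 10))
        # -> 4 points on [1, 5.5) spaced 1.125, then 5 on [5.5, 10) spaced 0.9, then 10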

  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that act as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered on a single shared CGBitmapContext guarded by locks by NSOperation subclasses (executed one at once in an NSOperationQueue), wrapped up in a UIImage and then used by the main thread to update the content of the views.

        - (void)main {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            if ([self isCancelled]) {
                return;
            }
            if (nil == data) {
                return;
            }
            // Buffer is the shared instance of a CG Bitmap Context wrapper class
            // data is a dictionary
            CGImageRef img = [buffer imageCreateWithData:data];
            UIImage *image = [[UIImage alloc] initWithCGImage:img];
            CGImageRelease(img);
            if ([self isCancelled]) {
                [image release];
                return;
            }
            NSDictionary *result = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        image, @"image", id, @"id", nil];
            // target is the instance of the UIView subclass that will use
            // the image
            [target performSelectorOnMainThread:@selector(updateContentWithData:)
                                     withObject:result
                                  waitUntilDone:NO];
            [result release];
            [image release];
            [pool release];
        }

    The updateContentWithData: of the UIView subclass, performed on the main thread, is just as simple:

        - (void)updateContentWithData:(NSDictionary *)someData {
            NSDictionary *data = [someData retain];
            if ([[data valueForKey:@"id"] isEqualToString:[self pendingRequestId]]) {
                UIImage *image = [data valueForKey:@"image"];
                [self setCurrentImage:image];
                [self setNeedsDisplay];
            }
            // If the image has not been retained, it should be released together
            // with the dictionary retaining it
            [data release];
        }

    The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process.

    The problem is that after a while I run out of memory. The number of UIViews is static. CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks, just the free memory available dipping constantly, rising a few times, and then dipping even lower until the application is terminated. The app cycles through about 2-300 of the aforementioned pages before that, but I would expect memory usage to reach a more or less stable level after a bunch of pages have already been skimmed at fast speed or, since the images are up to 3MB in size, to deplete way earlier. Any suggestion will be greatly appreciated.

  • Annoying CAPTCHA >> How to program a form that can smell the difference between human and robot?

    - by Sam
    Hi folks. On the comment that my old form needs a CAPTCHA, I felt I'd share my problem; perhaps you recognize it and find it's time we had better solutions.

    FACTUAL PROBLEM

    I know most of my clients (typical age 40-60) hate CAPTCHA things. Now, I myself always feel like a robot when I have to squeeze my eyes and fill in the strange letters from the CAPTCHA... Sometimes I fail! Go back, etc. Turn-off. I mean, come on, it's 2011; shouldn't the forms have better AI by now?

    MY NEW IDEA (please don't laugh)

    I've thought about it, and these are my ideas for telling the difference between human and robot. The idea is to give credibility points: 100 points = human, 0 = robot.

    - require real human mouse movements
    - require mouse movements that don't follow any mathematical pattern
    - require non-instantaneous reading delays between page load and first input in the form
    - when typing in the form, measure the delays between letters and words
    - approve as human when typical human behaviour is measured (deleting, rephrasing, etc.)
    - don't allow instant pasting into all fields
    - give points for real keyboard pressures
    - retract credibility points when there are hyperlinks in the form
    - test whether a fake email field (invisible to a human) is populated (suggested by Tomalak)
    - when more than 75% human credibility, allow the form to be sent without a CAPTCHA
    - when less than 25% human credibility, force a CAPTCHA puzzle to be sure

    Could we write a PHP AI that replaces the human-annoying CAPTCHAs, meanwhile stopping most spam servers from filling in the data? Not only for the fun of it, but also to actually provide a 99% better alternative to CAPTCHAs. Imagine the user-friendliness of your forms! Your site distinguishing itself from others, showing your audience your site KNOWS the difference between a robot and a human. Imagine the advantage. I am trying to capture the essence of that distinguishing edge.

    PROGRAMMING QUESTION:

    1) Are such things possible to program?
    2) If so, how would you start such a program?
    3) Are there already very good working solutions available elsewhere?
    4) If it isn't so hard, you are welcome to share your answers/solutions below.
    5) Upon completion of hints and new ideas, could this page be the start of a new AI captcha, OR should I forget about it, just go with the flow, forget about the whole AI dream, and use CAPTCHA like everyone else?
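
    Of the list above, the fake-email-field idea (a "honeypot") is the easiest to sketch concretely. Assuming PHP and an illustrative field name, the input is hidden from humans with CSS, so only form-filling robots populate it:

        <?php
        // In the form markup (humans never see or fill this field):
        //   <input type="text" name="website" style="display:none" autocomplete="off">

        if (!empty($_POST['website'])) {
            // field was filled in: almost certainly a robot
            header('HTTP/1.1 400 Bad Request');
            exit;
        }
        // ...otherwise add to the "human credibility" score and continue.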

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why start now?

    - The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs.
    - I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
    - Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today.
    - It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why wait?

    - Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level-3 cache today, and most likely a 6-core CPU sharing a single level-3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache?
    - One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
    - Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations

    Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

  • Algorithm to retrieve every possible combination of sublists of two lists

    - by sgmoore
    Suppose I have two lists. How do I iterate through every possible combination of every sublist, such that each item appears once and only once?

    I guess an example could be: you have employees and jobs, and you want to split them into teams, where each employee can only be in one team and each job can only be in one team. E.g.:

        List<string> employees = new List<string>() { "Adam", "Bob" };
        List<string> jobs      = new List<string>() { "1", "2", "3" };

    I want:

        Adam : 1        Bob : 2, 3
        Adam : 1, 2     Bob : 3
        Adam : 1, 3     Bob : 2
        Adam : 2        Bob : 1, 3
        Adam : 2, 3     Bob : 1
        Adam : 3        Bob : 1, 2
        Adam, Bob : 1, 2, 3

    I tried using the answer to this stackoverflow question to generate a list of every possible combination of employees and every possible combination of jobs and then select one item from each from each list, but that's about as far as I got. I don't know the maximum size of the lists, but it would certainly be less than 100, and there may be other limiting factors (such as each team can have no more than 5 employees).

    Update

    Not sure whether this can be tidied up more and/or simplified, but this is what I have ended up with so far. It uses the Group algorithm supplied by Yorye (see his answer below), but I removed the orderby, which I don't need and which caused problems if the keys are not comparable.

        var employees = new List<string>() { "Adam", "Bob" };
        var jobs = new List<string>() { "1", "2", "3" };
        int c = 0;

        foreach (int noOfTeams in Enumerable.Range(1, employees.Count))
        {
            var hs = new HashSet<string>();

            foreach (var grouping in Group(Enumerable.Range(1, noOfTeams).ToList(), employees))
            {
                // Generate a unique key for each group to detect duplicates.
                var key = string.Join(":",
                    grouping.Select(sub => string.Join(",", sub)));

                if (!hs.Add(key))
                    continue;

                List<List<string>> teams =
                    (from r in grouping select r.ToList()).ToList();

                foreach (var group in Group(teams, jobs))
                {
                    foreach (var sub in group)
                    {
                        Console.WriteLine(String.Join(", ", sub.Key)
                            + " : " + string.Join(", ", sub));
                    }
                    Console.WriteLine();
                    c++;
                }
            }
        }

        Console.WriteLine(String.Format(
            "{0:n0} combinations for {1} employees and {2} jobs",
            c, employees.Count, jobs.Count));

    Since I'm not worried about the order of the results, this seems to give me what I need.

  • html horizontal scrolling

    - by mp
    Hi, I have a simple CSS example, and I can't understand the behavior of one of my divs when the horizontal scrollbar is displayed. When my browser window needs to display the horizontal scrollbar (when the window width is less than my "content" div's width of 1024px), my "footer" div (which has the same parent as "content" and 100% width) seems to get an extra blank space on the right side. This space grows when I reduce the width of the window. Any ideas about how I can get rid of it, or why it happens? Thanks! Here's my code:

    CSS:

        <style type="text/css">
        html, body {
            height: 100%;
            width: 100%;
            font-family: "Arial Black", Gadget, sans-serif;
            font-size: 11px;
            font-variant: normal;
        }
        * {
            margin: 0;
        }
        .wrapper {
            min-height: 100%;
            height: auto !important;
            height: 100%;
            margin: 0 auto -4em;
        }
        #content {
            width: 1024px;
            margin: 0px auto;
            background-color: #990;
            height: 780px;
        }
        .footer, .push {
            height: 4em;
            width: 100%;
        }
        #footer-content {
            height: 10px;
            background-color: #09F;
            width: 100%;
        }
        </style>

    HTML:

        <body>
        <div class="wrapper">
            <div id="content">
                <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
                scelerisque varius tortor vitae pretium. Quisque magna ipsum,
                accumsan sit amet pretium sed, iaculis feugiat nibh. Donec vitae
                dui eros, eu ultricies nulla. Morbi aliquet, nisi in tincidunt
                rutrum, nisl justo sagittis nisi, nec dignissim orci elit vitae
                tortor.</p>
            </div>
            <div class="push"></div>
        </div>
        <div class="footer" style="background-color:#900; width:100%;">
            <div id="footer-content"></div>
        </div>
        </body>

  • How hard is programming? Really. [closed]

    - by Bubba88
    Hi! This question is about your perception of the activity of programming. How hard/exacting a task is it? There is much buzz about programming nowadays; people say that programmers are smart, very technical and abstract at the same time, know much about the world, psychology, etc. They say that programmers have really powerful brains, because there is much to keep in consideration simultaneously, together with much information folded into itself associatively (up to 10 levels of folding, they say). Still, there are some terms to specify on our own. So here is the question:

    - What do you think about programming in general? Is it hard? Is it "for everyone" or only for a particular kind of people?
    - How much non-CS background do you need to program (just to program, really; enterprise applications, for example)?
    - How long is the learning curve? (again, for programming in general)

    And another bunch of random questions:

    - If you were not to like/love programming, would that be a serious trouble bothering your current employment?
    - If you were to start from the beginning, would you choose this direction again?
    - What other areas (jobs or maybe hobbies) are comparable to programming in the way they can explode someone's lovely brain?
    - Is "non-Turing-complete programming" (SQL, XML, etc.) comparable to what we do, or is it really way easier, less demanding, cheap and akin to cooking? :)

    Well, the essence is: how would you describe the difficulty of the activity of programming? Or, on the other hand: did you ever catch yourself thinking at some point, "OMG, it's sooo hard! I don't know how I would ever program," even while carried away and doing programming just for fun? It's very interesting to know your opinion; you're the programmers, after all. I mean, many people must be exaggerating/speculating about a thing they do not really know about. But that mustn't be the case here on SO. :)

    P.S.: I'll try my best to update this post later, and please edit it too. At least I'll get decent English in my question text. :)

  • bin_at in dlmalloc

    - by chunhui
    In glibc's malloc.c (dlmalloc) there is a comment about "repositioning tricks", quoted below, and this trick is used in bin_at. bins is an array whose space is allocated when av (struct malloc_state) is allocated, isn't it? Is sizeof(bin[i]) less than sizeof(struct malloc_chunk*)? Who can describe this trick for me? I can't understand the bin_at macro: why do they get the bin's address using this method? How does it work? Many thanks, and sorry for my poor English.

        /*
          To simplify use in double-linked lists, each bin header acts
          as a malloc_chunk. This avoids special-casing for headers.
          But to conserve space and improve locality, we allocate
          only the fd/bk pointers of bins, and then use repositioning
          tricks to treat these as the fields of a malloc_chunk*.
        */

        typedef struct malloc_chunk* mbinptr;

        /* addressing -- note that bin_at(0) does not exist */
        #define bin_at(m, i) \
          (mbinptr) (((char *) &((m)->bins[((i) - 1) * 2]))             \
                     - offsetof (struct malloc_chunk, fd))

    The malloc_chunk struct looks like this:

        struct malloc_chunk {
          INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
          INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */

          struct malloc_chunk* fd;         /* double links -- used only if free. */
          struct malloc_chunk* bk;

          /* Only used for large blocks: pointer to next larger size.  */
          struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
          struct malloc_chunk* bk_nextsize;
        };

    And the bins are declared like this:

        typedef struct malloc_chunk* mbinptr;

        struct malloc_state {
          /* Serialize access.  */
          mutex_t mutex;

          /* Flags (formerly in max_fast).  */
          int flags;

        #if THREAD_STATS
          /* Statistics for locking.  Only used if THREAD_STATS is defined.  */
          long stat_lock_direct, stat_lock_loop, stat_lock_wait;
        #endif

          /* Fastbins */
          mfastbinptr fastbinsY[NFASTBINS];

          /* Base of the topmost chunk -- not otherwise kept in a bin */
          mchunkptr top;

          /* The remainder from the most recent split of a small request */
          mchunkptr last_remainder;

          /* Normal bins packed as described above */
          mchunkptr bins[NBINS * 2 - 2];

          /* Bitmap of bins */
          unsigned int binmap[BINMAPSIZE];

          /* Linked list */
          struct malloc_state *next;

        #ifdef PER_THREAD
          /* Linked list for free arenas.  */
          struct malloc_state *next_free;
        #endif

          /* Memory allocated from the system in this arena.  */
          INTERNAL_SIZE_T system_mem;
          INTERNAL_SIZE_T max_system_mem;
        };
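
    To make the pointer arithmetic in the macro concrete, here is an illustration (not dlmalloc code): subtracting offsetof(struct malloc_chunk, fd) positions the pretend chunk header before the array, so that its fd/bk members land exactly on two adjacent bins[] slots. Only fd and bk of this pseudo-chunk may ever be touched; prev_size and size would point into memory belonging to other malloc_state fields.

        #include <assert.h>

        /* given some struct malloc_state *m and a bin index i: */
        mbinptr b = bin_at(m, i);

        assert((void *) &b->fd == (void *) &m->bins[(i - 1) * 2]);
        assert((void *) &b->bk == (void *) &m->bins[(i - 1) * 2 + 1]);

        /* So each bin costs two pointers instead of a whole malloc_chunk,
           yet the linked-list code can still write b->fd and b->bk as if
           b were a real chunk header. */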

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(); others are tailored for OLAP. Is it possible to know more detailed logic about how the optimizers of Informix IDS and SE decide the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks first. What are the rest of them?

    If I'm not mistaken, Informix's ROWID is a SERIAL(INT), which allows for a maximum of 2G rows, or maybe it uses INT8 for terabytes' worth of rows? However, I think Oracle uses hex values for ROWID. Too bad ROWID can't often be used, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query-progress idea I mentioned in my "Begin viewing query results before query completes" question?

    For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time, and start displaying the qualifying rows as they are being put into the current list while the search continues?.. This is just one example; perhaps we could think of other neat/useful features, as the ingredients are more or less there.

    Perhaps we could fine-tune each query with more granularity than is currently available? OLTP queries tend to be mostly static and pre-defined; the "what-ifs" are more OLAP, so let's try to add more control and intelligence there. So, therefore, being able to precisely control a query, not just "hint-influence" it, is what's needed, and it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc., etc.

  • select option in Safari?

    - by Karandeep Singh
        <select size="2" multiple>
          <option value="1">1</option>
          <option value="2">2</option>
          <option value="3">3</option>
          <option value="4">4</option>
          <option value="5">5</option>
          <option value="6">6</option>
          <option value="7">7</option>
          <option value="8">8</option>
          <option value="9">9</option>
          <option value="10">10</option>
          <option value="11">11</option>
          <option value="12">12</option>
          <option value="13">13</option>
        </select>

    The size attribute of the select tag is not working properly in Safari. If the size attribute's value is greater than four, there's no problem; it works. But when I set size to a value less than four, it still shows four values. The example above displays four options in Safari, but I want two options. What is the reason for this?

  • CakePHP HABTM: Editing one item causes HABTM row to get recreated, destroys extra data

    - by leo-the-manic
    I'm having trouble with my HABTM relationship in CakePHP. I have two models, like so: Department HABTM Location. One large company has many buildings, and each building provides a limited number of services. Each building also has its own webpage, so in addition to the HABTM relationship itself, each HABTM row also has a url field where the user can visit to find additional information about the service they're interested in and how it operates at the building they're interested in.

    I've set up the models like so:

        <?php
        class Location extends AppModel {
            var $name = 'Location';
            var $hasAndBelongsToMany = array(
                'Department' => array(
                    'with' => 'DepartmentsLocation',
                    'unique' => true
                )
            );
        }
        ?>

        <?php
        class Department extends AppModel {
            var $name = 'Department';
            var $hasAndBelongsToMany = array(
                'Location' => array(
                    'with' => 'DepartmentsLocation',
                    'unique' => true
                )
            );
        }
        ?>

        <?php
        class DepartmentsLocation extends AppModel {
            var $name = 'DepartmentsLocation';
            var $belongsTo = array(
                'Department',
                'Location'
            );

            // I'm pretty sure this method is unrelated. It's not being called when
            // this error occurs. Its purpose is to prevent having two HABTM rows
            // with the same location and department.
            function beforeSave() {
                // kill any existing rows with same associations
                $this->log(__FILE__ . ": killing existing HABTM rows", LOG_DEBUG);
                $result = $this->find('all', array("conditions" => array(
                    "location_id" => $this->data['DepartmentsLocation']['location_id'],
                    "department_id" => $this->data['DepartmentsLocation']['department_id']
                )));
                foreach ($result as $row) {
                    $this->delete($row['DepartmentsLocation']['id']);
                }
                return true;
            }
        }
        ?>

    The controllers are completely uninteresting. The problem: if I edit the name of a Location, all of the DepartmentsLocations that were linked to that Location are re-created with empty URLs. Since the models specify that unique is true, this also causes all of the newer rows to overwrite the older rows, which essentially destroys all of the URLs.

    I would like to know two things. Can I stop this? If so, how? And, on a less technical and more whiny note: why does this even happen? It seems bizarre to me that editing a field through Cake should cause so much trouble, when I can easily go through phpMyAdmin, edit the Location name there, and get exactly the result I would expect. Why does CakePHP touch the HABTM data when I'm just editing a field on a row? It's not even a foreign key!

  • Exception in thread "main" java.lang.StackOverflowError

    - by Ray.R.Chua
    I have a piece of code and I cannot figure out why it is giving me "Exception in thread "main" java.lang.StackOverflowError".

    This is the question: given a positive integer n, print the sum of the lengths of the Syracuse sequences starting in the range of 1 to n inclusive. So, for example, the call lengths(3) will return the combined length of the sequences

        1
        2 1
        3 10 5 16 8 4 2 1

    which is the value 11. lengths must throw an IllegalArgumentException if its input value is less than one.

    My code:

        import java.util.HashMap;

        public class Test {

            HashMap<Integer, Integer> syraSumHashTable = new HashMap<Integer, Integer>();

            public Test() {
            }

            public int lengths(int n) throws IllegalArgumentException {
                int sum = 0;
                if (n < 1) {
                    throw new IllegalArgumentException("Error!! Invalid Input!");
                } else {
                    for (int i = 1; i <= n; i++) {
                        if (syraSumHashTable.get(i) == null) {
                            syraSumHashTable.put(i, printSyra(i, 1));
                            sum += (Integer) syraSumHashTable.get(i);
                        } else {
                            sum += (Integer) syraSumHashTable.get(i);
                        }
                    }
                    return sum;
                }
            }

            private int printSyra(int num, int count) {
                int n = num;
                if (n == 1) {
                    return count;
                } else {
                    if (n % 2 == 0) {
                        return printSyra(n / 2, ++count);
                    } else {
                        return printSyra((n * 3) + 1, ++count);
                    }
                }
            }
        }

    Driver code:

        public static void main(String[] args) {
            // TODO Auto-generated method stub
            Test s1 = new Test();
            System.out.println(s1.lengths(90090249));
            // System.out.println(s1.lengths(5));
        }

    I know the problem lies with the recursion. The error does not occur if the input is a small value, for example 5. But when the number is huge, like 90090249, I get the Exception in thread "main" java.lang.StackOverflowError. Thanks all for your help. :) I almost forgot the error message:

        Exception in thread "main" java.lang.StackOverflowError
            at Test.printSyra(Test.java:60)
            at Test.printSyra(Test.java:65)
            at Test.printSyra(Test.java:60)
            at Test.printSyra(Test.java:65)
            at Test.printSyra(Test.java:60)
            at Test.printSyra(Test.java:60)
            at Test.printSyra(Test.java:60)
            at Test.printSyra(Test.java:60)
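
    Since each recursive call here only carries a running count, the same walk can be expressed as a loop, which removes the stack growth entirely. A sketch of that rewrite (it also uses long, since 3n+1 can exceed Integer.MAX_VALUE for starting values this large; this is a suggestion, not the original assignment code):

        private int syraLength(long num) {
            int count = 1;
            long n = num;
            while (n != 1) {
                n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                count++;
            }
            return count;
        }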

  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files, and have subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible.

    Goals:

    - Find pairs of directories that have many similar files between them.
    - Generate a short list of directory pairs that can be synchronized with a 2-way merge to eliminate duplicates.
    - Should operate recursively (there may be nested duplicates of higher-level directories).
    - Run time and storage should be O(n log n) in the number of directories and files.
    - Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+).
    - Optional: generate an ancestry and change-set between folders.
    - Optional: sort the merge operations by how many duplicates they can eliminate.

    I know how to use hashes to find duplicate files in roughly O(n) space, but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children.

    EDIT: some clarification. The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts as possible. It's effectively constructing an ancestry tree showing which folders are derived from each other.

    The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on it. Then I might back up an intermediate version to a flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can choose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open-source util. I think the same tricks might be useful for comparing XML trees, though.
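
    One building block for the "many similar files" scoring, sketched in Java under the assumption that each directory has already been reduced to the set of its files' content hashes: the Jaccard index of the two hash sets gives a 0-to-1 similarity score that can rank candidate pairs for 2-way merging.

        import java.util.HashSet;
        import java.util.Set;

        final class DirSimilarity {
            // |intersection| / |union| over file-content hashes (illustrative only)
            static double jaccard(Set<String> hashesA, Set<String> hashesB) {
                Set<String> inter = new HashSet<String>(hashesA);
                inter.retainAll(hashesB);
                Set<String> union = new HashSet<String>(hashesA);
                union.addAll(hashesB);
                return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
            }
        }

    Keeping the ranking within the stated O(n log n) budget would still need something like locality-sensitive hashing over these sets, rather than scoring all pairs.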

  • DataTable Filter mystery

    - by user283897
    Hi, could you please help me find the reason for the mystery I've found? In the code below, I create a DataTable and filter it. When I use filter1, everything works as expected. When I use filter2, everything works as expected only if the SubsectionAmount variable is less than 10. As soon as I set SubsectionAmount=10, the dr2 array returns Nothing. I can't find what is wrong. Here is the code:

        Imports System.Data

        Partial Class FilterTest
            Inherits System.Web.UI.Page

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                Call FilterTable()
            End Sub

            Sub FilterTable()
                Dim dtSubsections As New DataTable
                Dim SectionID As Integer, SubsectionID As Integer
                Dim SubsectionAmount As Integer
                Dim filter1 As String, filter2 As String
                Dim rowID As Integer
                Dim dr1() As DataRow, dr2() As DataRow

                With dtSubsections
                    .Columns.Add("Section")
                    .Columns.Add("Subsection")
                    .Columns.Add("FieldString")

                    SectionID = 1
                    SubsectionAmount = 10 ' 9
                    For SubsectionID = 1 To SubsectionAmount
                        .Rows.Add(SectionID, SubsectionID, "abcd" & CStr(SubsectionID))
                    Next SubsectionID

                    For rowID = 0 To .Rows.Count - 1
                        Response.Write(.Rows(rowID).Item(0).ToString & " " _
                            & .Rows(rowID).Item(1).ToString & " " _
                            & .Rows(rowID).Item(2).ToString & "<BR>")
                    Next

                    SubsectionID = 1
                    filter1 = "Section=" & SectionID & " AND " & "Subsection=" & SubsectionID
                    filter2 = "Section=" & SectionID & " AND " & "Subsection=" & SubsectionID + 1
                    dr1 = .Select(filter1)
                    dr2 = .Select(filter2)
                    Response.Write(dr1.Length & "<BR>")
                    Response.Write(dr2.Length & "<BR>")
                    If dr1.Length > 0 Then
                        Response.Write(dr1(0).Item("FieldString").ToString & "<BR>")
                    End If
                    If dr2.Length > 0 Then
                        Response.Write(dr2(0).Item("FieldString").ToString & "<BR>")
                    End If
                End With
            End Sub
        End Class
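
    One thing worth ruling out (it may or may not be the culprit here): Columns.Add without a type argument creates String columns, so Select() compares values as text ("10" sorts before "2", for example), which is a classic source of filter surprises once two-digit values appear. Pinning the column types makes the comparison numeric; this is a hedged suggestion, not a confirmed fix:

        .Columns.Add("Section", GetType(Integer))
        .Columns.Add("Subsection", GetType(Integer))
        .Columns.Add("FieldString", GetType(String))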

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an I/O-intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read in a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this?

    The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ASCII), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort.

    I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, though it is generally much less. Is there a better way of doing this?

    As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code:

        FILE *fp;
        unsigned char *buff;
        unsigned char *realbuff;
        FILE *inputFiles[NUM_INPUT_FILES];

        buff = (unsigned char *) malloc(2048);
        realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE);

        fp = fopen("postings0.txt", "r");
        if (fp)
        {
            fread(buff, 1, 2048, fp);
            /* for(int i=0; i<30; i++)
                   cout << buff[i] << endl; */
            int y = 0;
            int recordcounter = 0;
            // cout << buff;
            for (int i = 0; i < 100; i++)
            {
                if (buff[i] != char(10))
                {
                    realbuff[y] = buff[i];
                    y++;
                    recordcounter++;
                }
                else
                {
                    if (recordcounter < RECORD_SIZE)
                        for (int j = recordcounter; j < RECORD_SIZE; j++)
                        {
                            realbuff[y] = char(0);
                            y++;
                        }
                    recordcounter = 0;
                }
            }
            cout << realbuff << endl;
            cout << buff;
        }
        else
            cout << "sorry";

    Thank you very much, bsg
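
    Two hedged observations on the fixed-size-record plan, with a sketch. First, cout << realbuff explains the "only the first record" symptom: streaming a char pointer prints up to the first '\0', and the null padding guarantees one right after record one, so the buffer has to be printed record by record instead. Second, once records are null-padded to RECORD_SIZE, a bytewise comparator is enough for qsort. The names reuse the ones above:

        #include <cstdio>
        #include <cstring>
        #include <cstdlib>

        // bytewise ordering over fixed-size, null-padded records
        static int compare_records(const void *a, const void *b)
        {
            return memcmp(a, b, RECORD_SIZE);
        }

        // usage sketch:
        //   qsort(realbuff, record_count, RECORD_SIZE, compare_records);
        //   for (int r = 0; r < record_count; r++)
        //       printf("%.*s\n", RECORD_SIZE,
        //              (const char *) realbuff + r * RECORD_SIZE);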

  • Is this scenario in compliance with GPLv3?

    - by Sean Kinsey
    For argument's sake, say that we create a web application that depends on a GPLv3-licensed component, let's say Ext JS. Based on Section 0 of the license, the common notion is that the entire web application (the client-side JavaScript) falls under the definition of a covered work:

        A "covered work" means either the unmodified Program or a work based
        on the Program.

    and that it will therefore have to be distributed under the same license.

    OK, so here comes the fun part. This is a short "program" that is based on Ext JS:

        var myPanel = new Ext.Panel();

    The question that arises is: have I now violated the GPL by not including the source of Ext JS and its license?

    OK, so let's take another example:

        <!doctype html>
        <html>
        <head>
            <title>my title</title>
            <script type="text/javascript"
                    src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js">
            </script>
            <link rel="stylesheet" type="text/css"
                  href="http://extjs.cachefly.net/ext-3.2.1/resources/css/ext-all.css" />
            <script type="text/javascript">
                var myPanel = new Ext.Panel();
            </script>
        </head>
        <body>
        </body>
        </html>

    Have I now violated the terms of the GPL? The code conveyed by me to you is in a non-functional state; it will have to be combined with the actual source of Ext JS, which you (your browser) will have to retrieve, from a source made public by someone else, to be usable.

    Now, if the answer to the above is no, how does my conveying this code in visible form differ from the "invisible" form conveyed by my web server? As a side note, a very similar thing is done in Linux with many projects that depend on less permissive licenses: the user has to retrieve these on his own and make them available for the primary lib/executable. How is this not the same if the user is informed beforehand that he (the browser) will have to retrieve the needed resources from a different source?

    Just to make it clear, I'm pro-FLOSS, and I have also published a number of projects licensed under more permissive licenses. The reason I'm asking this is that I still haven't found anyone offering a definitive answer to it.

  • How to find the relation between the change in latitude at the centre of the map and at the top/bottom

    - by Imran
    Hi, I'm having a little trouble finding the relation between movement at the centre and at the edge of the map. I'm panning a world map; its extent is 180,89 : -180,-89. My map pans by adding the change (dX, dY) to its extents, not to its centre. Now a situation has arisen where I have to move the map to a specific centre. Calculating the change in longitude is easy and simple, but it's the change in latitude that has caused a problem.

    It seems the change at the centre Y of the map is more than the change at the edges of the map's Y. Simply put: if I have to move the map centre from 0 long, 0 lat to 73 long, 33 lat, for dX I simply get 73, but for dY it apparently looks like 33; yet if I add 33 to the top of the map, which is 89, it will be 122, which is incorrect, since latitudes are between 90 and -90. It seems to be a case of the projection of a circle onto a 2D plane, where the edge of the circle, since it is curving backward due to the angle, experiences less change and the centre experiences more change. Now, is there a relation between these two factors?

    I tried converting the difference between originY and destinationY into radians and then adding it to the top and bottom of the map, but it didn't really work for me. Please note that the map is projected on a virtual canvas whose width starts from 256 and increases by 256*2^z; z=0 is the default, and the whole world is visible at that extent of the canvas.

    Code:

        public void moveMapTo(double destinationLongitude, double destinationLattitude) // moves map to the new centre
        {
            double dXLong = destinationLongitude - centreLongitude;

            double atanhsinO = atanh(Math.sin(destinationLattitude * Math.PI / 180.00));
            double atanhsinD = atanh(Math.sin(centreLatitude * Math.PI / 180.00));
            double atanhCentre = (atanhsinD + atanhsinO) / 2;
            double latitudeSpan = destinationLattitude - centreLatitude;
            double radianOfCentreLatitude = Math.atan(Math.sinh(atanhCentre));
            double dXLat = latitudeSpan / Math.cos(radianOfCentreLatitude);
            dXLat *= getLattitudeSpan() * (Math.PI / 180); // <--- HERE IS THE PROBLEM

            System.out.println("dxLong:" + dXLong + "_dxLat:" + dXLat);

            mapLeft += dXLong;
            mapRight += dXLong;
            mapTop += dXLat;
            mapBottom += dXLat;
        }

        // latitude span function
        private double getLattitudeSpan()
        {
            double latitudeSpan = mapTop - mapBottom;
            latitudeSpan = latitudeSpan / Math.cos(radianOfCentreLatitude);
            return Math.abs(latitudeSpan);
        }
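
    Assuming the projection is Mercator (which the atanh(sin lat) in the code suggests), the usual way around this is to pan in projected y-space, where movement is uniform, and only convert back to latitude at the end. A hedged sketch, reusing the field names above:

        static double latToY(double latDeg) {            // y = atanh(sin(lat))
            return atanh(Math.sin(Math.toRadians(latDeg)));
        }

        static double yToLat(double y) {                 // lat = asin(tanh(y))
            return Math.toDegrees(Math.asin(Math.tanh(y)));
        }

        static double atanh(double x) {
            return 0.5 * Math.log((1 + x) / (1 - x));
        }

        // moving the centre from centreLatitude to destinationLattitude:
        double dy = latToY(destinationLattitude) - latToY(centreLatitude);
        mapTop    = yToLat(latToY(mapTop) + dy);
        mapBottom = yToLat(latToY(mapBottom) + dy);

    The centre then moves by the requested 33 degrees while the top and bottom move by smaller amounts, which is exactly the centre-versus-edge asymmetry described above.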

  • Nice, clean, simple way of getting a dataset from ASP.NET to a plain-HTML jQuery or JavaScript library

    - by David S
    I know this is probably an open-ended question, and I have tried looking around so much over the last year or two... maybe I am looking for a perfect place that doesn't exist! Of course, it's all about perception, no less. Anyway, just to clarify what I am trying to do and why:

    I want to be able to use (primarily for the moment) ASP.NET, or services thereof, to get a dataset; whatever the source data, I can obviously get a dataset of rows/columns. I want to be able to, as simply as possible, get that data over to the client via XML/JSON/whatever, to then use in a "variety" of ways. "Variety" of ways meaning I would like to "easily" bind that data to, say, a grid, or a combo dropdown, or just simply render it to a textbox, BUT by referencing the dataset as I would on the server side.

    Now, I know this all sounds simplistic, and I know there are lots of complications, so I have tried the following over the last year or so:

    - ExtJS: very good, nice solid framework, but I just found it a bit too much to use in everyday basic apps; great if I was building a whole application with it.
    - Yahoo YUI: not looked at recently, but I guess some of the concepts are similar to ExtJS?
    - jQuery: of course, for getting data etc. it was OK, and I guess there are so many third-party plugins that a mix-and-match might work?
    - Adobe Spry: ironically, this was the closest to getting a dataset-style structure over to JavaScript/the client, although it seemed to drop off/go quiet..? I may be wrong.
    - I did have a very cursory play with TIBCO GI, and another one whose name I cannot remember! But again, it felt like it was great if I was building a whole app, perhaps?

    Anyway, I am very amazed by all of the technologies coming out, and really not biased one way or the other. I really just want a very simple way of getting data from the server, and having a basic/very flexible way of working with that data in the client without using server technologies. I need to keep the server flexible, as I may need to use PHP or Java technologies, not just .NET.

    So again, sorry for the rambles, but if anyone out there has had a simple experience, or would like to share some ideas, it would be very welcome!! David.
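
    For what it's worth, the minimal shape of that round trip looks something like the sketch below, with jQuery on the client and any server (ASP.NET, PHP, Java) exposing rows as JSON; the endpoint URL and field names are made up for illustration:

        // fetch rows as JSON and bind them to a plain HTML table
        $.getJSON("/data/customers", function (rows) {
            var html = $.map(rows, function (r) {
                return "<tr><td>" + r.name + "</td><td>" + r.city + "</td></tr>";
            }).join("");
            $("#grid tbody").html(html);
        });

    Anything fancier (two-way binding, combo dropdowns) would layer on top of the same JSON payload.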
