Search Results

Search found 4140 results on 166 pages for 'alias analysis'.

Page 146/166 | < Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >

  • advanced winform framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that will basically take input from the user across about 30 screens and save it to a database. I would like to find a framework that provides the maximum number of these features out of the box:
    - .NET C#, Windows Forms
    - unit testing and continuous integration
    - screens with lists, combo boxes, text boxes, add/delete/save/cancel that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find their way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows
    - search forms
    - ability to grant access to some functionality according to user profiles
    - MVP/MVVM or whatever design patterns
    - either some code generation from the database to C# classes, or generation of the database schema from C# classes
    - some kind of database versioning/upgrade to easily update the database when I release patches to the application once it is in production
    - code metrics analysis
    - some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
    - a code documentation generator
    - ...
    Any ideas? I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. Do you have any suggestions to add to the list before starting? What open source tools would you use to achieve these?

    Read the article

  • Is there any algorithm that can solve ANY traditional sudoku puzzle, WITHOUT guessing (or similar techniques)?

    - by justin
    Is there any algorithm that solves ANY traditional sudoku puzzle, WITHOUT guessing? Here guessing means trying a candidate and seeing how far it goes; if a contradiction is found with the guess, backtracking to the guessing step and trying another candidate; when all candidates are exhausted without success, backtracking to the previous guessing step (if there is one; otherwise the puzzle proves invalid), etc. EDIT1: Thank you for your replies. Traditional sudoku means 81-box sudoku, without any other constraints. Let us say that we know the solution is unique; is there any algorithm that can GUARANTEE to solve it without backtracking? Backtracking is a universal tool and I have nothing against it, but using a universal tool to solve sudoku decreases the value and fun in deciphering (manually, or by computer) sudoku puzzles. How can a human being solve the so-called "hardest sudoku in the world"; do they need to guess? I heard some researchers accidentally found that their algorithm for some data analysis could solve all sudokus. Is that true, and do they have to guess too?
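
    A minimal sketch (not from the original post) of what one no-guessing step looks like in code: repeatedly filling "naked singles", i.e. cells whose row, column and box already rule out every value but one. The grid encoding and function name are illustrative assumptions; harder puzzles need stronger deductions than this, which is exactly what the question is probing.

      # Pure-deduction step: fill cells that have exactly one remaining candidate.
      # Repeats until no more progress; returns True only if the grid got solved
      # by these forced moves alone (hard puzzles generally won't).
      def solve_by_singles(grid):
          """grid: 9x9 list of lists, 0 = empty. Mutates grid in place."""
          progress = True
          while progress:
              progress = False
              for r in range(9):
                  for c in range(9):
                      if grid[r][c] != 0:
                          continue
                      used = set(grid[r]) | {grid[i][c] for i in range(9)}
                      br, bc = 3 * (r // 3), 3 * (c // 3)
                      used |= {grid[i][j] for i in range(br, br + 3)
                                          for j in range(bc, bc + 3)}
                      candidates = set(range(1, 10)) - used
                      if len(candidates) == 1:   # forced move, no guessing
                          grid[r][c] = candidates.pop()
                          progress = True
          return all(all(v != 0 for v in row) for row in grid)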

    Read the article

  • How do I ensure that a regex does not match an empty string?

    - by Dancrumb
    I'm using the Jison parser generator for JavaScript and am having problems with my language specification. The program I'm writing will be a calculator that can handle feet, inches and sixteenths. In order to do this, I have the following specification: %% ([0-9]+\s*"'")?\s*([0-9]+\s*"\"")?\s*([0-9]+\s*"s")? {return 'FIS';} [0-9]+("."[0-9]+)?\b {return 'NUMBER';} \s+ {/* skip whitespace */} "*" {return '*';} "/" {return '/';} "-" {return '-';} "+" {return '+';} "(" {return '(';} ")" {return ')';} <<EOF>> {return 'EOF';} Most of these lines come from a basic calculator specification. I simply added the first line. The regex correctly matches feet, inches and sixteenths, such as 6'4" (six feet, 4 inches) or 4"5s (4 inches, 5 sixteenths), with any kind of whitespace between the numbers and indicators. The problem is that the regex also matches a null string. As a result, the lexical analysis always records a FIS at the start of the line and then the parsing fails. Here is my question: is there a way to modify this regex to guarantee that it will only match a non-zero-length string?
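
    A small illustration (in Python's re module rather than Jison, so the syntax is only analogous and the patterns here are simplified stand-ins) of why three optional groups match the empty string, and one common fix: spell the token out as alternatives so that at least one component is required.

      import re

      # Simplified stand-in for the feet/inches/sixteenths token: three optional groups.
      loose = re.compile(r"(\d+\s*')?\s*(\d+\s*\")?\s*(\d+\s*s)?")
      print(bool(loose.match("")))          # True -- every group is optional, so "" matches

      # One common fix: list the alternatives so at least one part must be present.
      strict = re.compile(
          r"\d+\s*'(\s*\d+\s*\")?(\s*\d+\s*s)?"   # starts with feet
          r"|\d+\s*\"(\s*\d+\s*s)?"               # or starts with inches
          r"|\d+\s*s"                             # or sixteenths only
      )
      print(bool(strict.match("")))         # False
      print(strict.match('6\'4"').group())  # 6'4"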

    Read the article

  • How do I set the Eclipse build path and class path from an Ant build file?

    - by Nels Beckman
    Hey folks, There's a lot of discussion about Ant and Eclipse, but none of the previous answers seems to help me. Here's the deal: I am trying to build a Java program that compiles successfully with Ant from the command line. (To confuse matters further, the program I am attempting to compile is Ant itself.) What I really want to do is to bring this project into Eclipse and have it compile in Eclipse such that the type bindings and variable bindings (nomenclature from Eclipse JDT) are correctly resolved. I need this because I need to run a static analysis on the code that is built on top of Eclipse JDT. The normal way I bring a Java project into Eclipse so that Eclipse will build it and resolve all the bindings is to just import the source directories into a Java project, and then tell it to use the src/main/ directory as a "source directory." Unfortunately, doing that with Ant causes the build to fail with numerous compile errors. It seems to me that the Ant build file is setting up the class path and build path correctly (possibly by excluding certain source files) and Eclipse does not have this information. Is there any way to take the class path & build path information embedded in an Ant build file, and give that information to Eclipse to put into its .project and .classpath files? I've tried creating a new project from an existing build file (an option in the File menu), but this does not help. The project still has the same compile errors. Thanks, Nels

    Read the article

  • The mathematics of Schelling's segregation model

    - by Bruce
    For those who don't know the model, you can read this pdf. I want to find the probability that 2 nodes are each other's neighbors when the algorithm converges (i.e. when all nodes are happy). Here's the gist of the model. You have a grid (say 10x10). You have nodes of two kinds (red and green), 45 of each. So we have 10 empty spaces. We randomly place the nodes on the grid. Now we scan through this grid (the exact order does not matter, according to Schelling). Each node wants a specific percentage of people of the same kind in its Moore neighborhood (say b = 50% for each of red and green). We calculate the happiness of each node (a = number of neighbors of the same kind / number of neighbors of a different kind). If a node is unhappy (a < b) it moves to an empty cell where it knows it will be happy. This movement can change the dynamics of the old as well as the new neighborhood. The algorithm converges when all nodes are happy. PS - I am looking for links to any mathematical analysis of Schelling's model.
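
    A minimal sketch of the per-node happiness check described above (assumptions: 0 = empty, 1 = red, 2 = green, grid edges clipped, empty cells ignored). It follows the post's literal ratio of same-kind to different-kind neighbors; note that Schelling's model is more commonly stated in terms of the fraction of same-kind neighbors among all occupied neighbors.

      # Happiness test for one cell of the grid, using the Moore neighborhood.
      def is_happy(grid, r, c, b=0.5):
          kind = grid[r][c]
          same = diff = 0
          for dr in (-1, 0, 1):
              for dc in (-1, 0, 1):
                  if dr == dc == 0:
                      continue
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc] != 0:
                      if grid[rr][cc] == kind:
                          same += 1
                      else:
                          diff += 1
          if diff == 0:          # no unlike neighbors: trivially happy
              return True
          return same / diff >= b   # the post's ratio a compared to the threshold b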

    Read the article

  • Shortest acyclic path on directed cyclic graph with negative weights/cycles

    - by Janathan
    I have a directed graph which has cycles. All edges are weighted, and the weights can be negative. There can be negative cycles. I want to find a path from s to t which minimizes the total weight on the path. Sure, it can go to negative infinity when negative cycles exist. But what if I disallow cycles in the path (not in the original graph)? That is, once the path leaves a node, it cannot enter the node again. This surely avoids the negative-infinity problem, but surprisingly a search on Google turns up no known algorithm. The closest is the Floyd–Warshall algorithm, but it does not allow negative cycles. Thanks a lot in advance. Edit: I may have generalized my original problem too much. Indeed, I am given a cyclic directed graph with nonnegative edge weights. But in addition, each node has a positive reward too. I want to find a simple path which minimizes (sum of edge weights on the path) - (sum of node rewards covered by the path). This can surely be converted to the question that I posted, but some structure is lost. And a hint from submodular analysis suggests this motivating problem is not NP-hard. Thanks a lot

    Read the article

  • Significance in R

    - by Gemsie
    Ok, this is quite hard to explain, but I'm at a complete loss as to what to do. I'm a relative newcomer to R and although I can completely admire how powerful it is, I'm not too good at actually using it.... Basically, I have some very contrived data that I need to analyse (it wasn't me who chose this, I can assure you!). I have the right and left hand lengths of lots of people, as well as some numeric data that shows their sociability. Now I would like to know if people who have significantly different lengths of hand are more or less sociable than those whose hands are the same length (leading into the research that 'symmetrical' people are more sociable and intelligent, etc.). I have got as far as loading the data into R, but I have no idea where to go from there. How on Earth do I start to separate those who are close to symmetrical from those who aren't, so I can then start the analysis? Ok, using Sasha's great advice, I did the cor.test and got the following: Pearson's product-moment correlation data: measurements$l.hand - measurements$r.hand and measurements$sociable t = 0.2148, df = 150, p-value = 0.8302 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval: -0.1420623 0.1762437 sample estimates: cor 0.01753501 I have never used this test before, so I am unsure how to interpret it... you wouldn't think I was on my fourth scientific degree, would you?! :(

    Read the article

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSowl, fetching directly from the social mention API. The RSS feeds are categorized into the following major categories: blogs, news, twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSowl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is utilized for my own research needs and has helped me considerably. However, I find this RSS mashup scheme kinda clumsy; it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data as well as eventually be able to rank the collected list of URLs into some order of media prominence. Right now I don't want to pay the ridiculous Radian6 web analytics fees when my intuition is telling me that with a bit of 'elbow grease' I can maybe leverage some resources available online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track? So, I guess I am imagining an application that allows me to query in a refined manner: one that allows me to search for keyword combinations, apply AND/OR operators, selectively focus my queries on particular sources (like a collection of blogs, Twitter, or social networking communities), and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

    Read the article

  • Cannot call a C++ DLL entry point from C#: EntryPointNotFoundException

    - by kriau
    I'm trying to call from C# a function in a custom DLL written in C++. However, I'm getting the following warning during code analysis and error at runtime: Warning: CA1400 : Microsoft.Interoperability : Correct the declaration of 'SafeNativeMethods.SetHook()' so that it correctly points to an existing entry point in 'wi.dll'. The unmanaged entry point name currently linked to is SetHook. Error: System.EntryPointNotFoundException was unhandled. Unable to find an entry point named 'SetHook' in DLL 'wi.dll'. Both projects, wi.dll and the C# exe, have been compiled into the same DEBUG folder; both files reside there. There is only one file with the name wi.dll in the whole file system. The C++ function definition looks like: #define WI_API __declspec(dllexport) bool WI_API SetHook(); I can see the exported function using Dependency Walker: as decorated: bool SetHook(void) as undecorated: ?SetHook@@YA_NXZ The C# DLL import looks like this (I defined these lines using CLRInsideOut from MSDN Magazine): [DllImport("wi.dll", EntryPoint = "SetHook", CallingConvention = CallingConvention.Cdecl)] [return: MarshalAsAttribute(UnmanagedType.I1)] internal static extern bool SetHook(); I've tried without the EntryPoint and CallingConvention definitions as well. Both projects are 32-bit; I'm using Windows 7 64-bit, VS 2010 RC. I believe that I simply have overlooked something.... Thanks in advance.

    Read the article

  • Why can't an RB-tree be a list?

    - by Alex
    Hey everyone. I have a problem with RB-trees. According to Wikipedia, an RB-tree needs to satisfy the following: A node is either red or black. The root is black. (This rule is used in some definitions and not others. Since the root can always be changed from red to black but not necessarily vice-versa, this rule has little effect on analysis.) All leaves are black. Both children of every red node are black. Every simple path from a given node to any of its descendant leaves contains the same number of black nodes. As we know, an RB-tree needs to be balanced and has a height of O(log(n)). But if we insert an increasing series of numbers (1, 2, 3, 4, 5, ...), then theoretically we will get a tree that looks like a list and has a height of O(n) with all its nodes black, which doesn't contradict the RB-tree properties mentioned above. So, where am I wrong? Thanks.

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for the new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 [edit: I'm keeping this updated based on the better suggestions]:
    - Source control access
    - Issue/bug/project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including the build document/environment covered by this question and design documents / technical notes
    - Coding style guidelines
    - Deploy-for-review/testing/QA/staging/production procedures
    - Licensing details for your tools and your product
    - Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    - Compiler/IDE
    - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - Communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools
    What else goes in this list, and how do you document it and vet changes?

    Read the article

  • Constructor being called again?

    - by Halo
    I have this constructor: public UmlDiagramEntity(ReportElement reportElement, int pageIndex, Controller controller) { super(reportElement.getX1(), reportElement.getY1(), reportElement.getX2(), reportElement.getY2()); setLayout(null); this.pageIndex = pageIndex; this.controller = controller; reportElements = reportElement.getInternalReportElements(); components = new ArrayList<AbstractEntity>(); changedComponentIndex = -1; PageListener p = new PageListener(); this.addMouseMotionListener(p); this.addMouseListener(p); setPage(); } And I have an update method in the same class: @Override public void update(ReportElement reportElement) { if (changedComponentIndex == -1) { super.update(reportElement); } else { reportElements = reportElement.getInternalReportElements(); if (components.size() == reportElements.size()) { if (!isCommitted) { if (reportElement.getType() == ReportElementType.UmlRelation) { if (checkInvolvementAndSet(changedComponentIndex)) { anchorEntity(changedComponentIndex); } else { resistChanges(changedComponentIndex); } return; } } ..................goes on When I follow the flow in the debugger, I see that when update is called, somewhere in the method, the program goes into the constructor and executes it all over again (super, pageIndex, etc.). Why does it go into the constructor? :D I didn't tell it to go there. I can make a deeper analysis and see where it goes into the constructor if you want. By the way, changedComponentIndex is a static variable.

    Read the article

  • Is there a way to force ContourPlot to re-check all the points at each stage of its recursion algorithm?

    - by Alexey Popkov
    Hello, Thanks to this excellent analysis of the Plot algorithm by Yaroslav Bulatov, I now understand the reason why Plot3D and ContourPlot fail to draw functions with breaks and discontinuities smoothly. For example, in the following case ContourPlot fails to draw the contour x^2 + y^2 = 1 at all: ContourPlot[Abs[x^2 + y^2 - 1], {x, -1, 1}, {y, -1, 1}, Contours -> {0}] It is because the algorithm does not go deeply into the region near x^2 + y^2 = 1. It "drops" this region at an initial stage and does not try to investigate it further. Increasing MaxRecursion does nothing in this sense. And even the undocumented option Method -> {Refinement -> {ControlValue -> .01 \[Degree]}} does not help (but makes Plot3D a little bit smoother). The above function is just a simple example. In real life I'm working with very complicated implicit functions that cannot be solved analytically. Is there a way to get ContourPlot to go deeply into such regions near breaks and discontinuities?

    Read the article

  • How do I generate reports in R?

    - by Maiasaura
    I've been struggling for a week now trying to figure out how to generate reports in R using either Sweave or Brew. I should say right at the beginning that I have never used TeX before, but I understand the logic of it. I have read this document several times. However, I cannot even get a simple example to work. Brew successfully converts a simple markup file (just a title and some text) to a .tex file (no error). But it never converts the .tex file to a PDF. > library(tools) > library(brew) > brew("population.brew", "population.tex") > texi2dvi("population.tex", pdf = TRUE) The last step always fails with: Error in texi2dvi("population.tex", pdf = TRUE) : Running 'texi2dvi' on 'population.tex' failed. What am I doing wrong? The report I am trying to build is fairly simple. I have 157 different analyses to summarize. Each one has 4 plots, 1 table and a summary. I just want output plot 1,2,3,4 output table \pagebreak ... that's it. Can anyone help me get further? I use OS X and don't have TeX installed. Thanks

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app running under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint with the hope of locating potential deadlocks, but have not had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies where possible instead. What other tests would you carry out?
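
    The locking object described (locked in the constructor, freed in the destructor, with a timeout warning) is a scoped-lock idiom. A rough Python analogue of that pattern, not the original C++/MFC code, just to make the timeout-warning idea concrete:

      import threading, warnings
      from contextlib import contextmanager

      # Rough analogue of the described scoped lock: acquire on entry, always
      # release on exit, and warn if the lock could not be obtained within a
      # timeout -- a cheap way to surface suspected deadlocks during automated runs.
      @contextmanager
      def scoped_lock(lock, timeout=5.0, name="resource"):
          acquired = lock.acquire(timeout=timeout)
          if not acquired:
              warnings.warn(f"possible deadlock: {name} not acquired in {timeout}s")
              lock.acquire()            # keep waiting, but the warning is logged
          try:
              yield
          finally:
              lock.release()

      doc_lock = threading.Lock()
      with scoped_lock(doc_lock, timeout=2.0, name="document"):
          pass  # touch the shared document state here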

    Read the article

  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact, in something in our professional careers, something we learned from others, lessons learned the hard way by ourselves, or by others who came before us. Unfortuntely, as these truisms spread throughout the community, the details—why they came about and the caveats that affect when they apply—tend to not spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we sometimes misapply them, we pay a penalty that could be avoided by understooding the details behind them. For example, when user-defined functions were first introduced in SQL Server it became "common knowledge" within a year or so that they had extremely bad performance (because it required a re-compilation for each use) and should be avoided. This "trusim" still increases many database developers' aversion to using UDFs, even though Microsoft's introduction of InLine UDFs, which do not suffer from this issue at all, mitigates this issue substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs, because of this. What other common not-so-"trusims" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start off with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but—in the edge cases, or because of not understanding the principle thoroughly, because technology has changed since it first spread, or applying the rule today without understanding the details behind the rule—can easily backfire or cause the opposite of the intended effect.

    Read the article

  • How do I use Spreadsheet::WriteExcel to create a chart from numeric log data?

    - by yaohung
    I used csv2xls.pl to convert a text log into .xls format, and then I create a chart as in the following: my $chart3 = $workbook->add_chart( type => 'line' , embedded => 1); # Configure the series. $chart3->add_series( categories => '=Sheet1!$B$2:$B$64', values => '=Sheet1!$C$2:$C$64', name => 'Test data series 1', ); # Add some labels. $chart3->set_title( name => 'Bridge Rate Analysis' ); $chart3->set_x_axis( name => 'Packet Size ' ); $chart3->set_y_axis( name => 'BVI Rate' ); # Insert the chart into the main worksheet. $worksheet->insert_chart( 'G2', $chart3 ); I can see the chart in the .xls file. However, all the data are in text format, not numeric, so the chart looks wrong. How do I convert text into number before applying this create-chart function? Also, how do I sort the .xls file before creating the chart?

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real-time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet alone since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will result in making it easier to crack the key). Also, since the encryption key is derived from a passphrase, this makes the key space much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data, and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is, what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
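
    A minimal sketch of the scheme the post describes (a 128-bit key derived from the passphrase via PBKDF2, AES-CBC with a fresh random IV prepended to each packet), using the Python cryptography package. The salt handling, iteration count and packet layout are assumptions, and note this provides no integrity protection; real code would add an HMAC or use an AEAD mode such as AES-GCM.

      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
      from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
      from cryptography.hazmat.primitives import hashes, padding

      # 128-bit key from a passphrase via PBKDF2 (salt/iterations are assumptions).
      def derive_key(passphrase: bytes, salt: bytes) -> bytes:
          kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16,
                           salt=salt, iterations=200_000)
          return kdf.derive(passphrase)

      # AES-CBC per packet, fresh random IV prepended so the receiver can decrypt.
      # No authentication here -- add an HMAC or switch to AES-GCM in real use.
      def encrypt_packet(key: bytes, payload: bytes) -> bytes:
          iv = os.urandom(16)
          padder = padding.PKCS7(128).padder()
          padded = padder.update(payload) + padder.finalize()
          enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
          return iv + enc.update(padded) + enc.finalize()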

    Read the article

  • Dynamically add data stored in php to nested json

    - by HoGo
    I am trying to dynamically generate data in JSON for a jQuery gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format: var data = [{ name: "Sprint 0", desc: "Analysis", values: [{ from: "/Date(1320192000000)/", to: "/Date(1322401600000)/", label: "Requirement Gathering", customClass: "ganttRed" }] },{ name: " ", desc: "Scoping", values: [{ from: "/Date(1322611200000)/", to: "/Date(1323302400000)/", label: "Scoping", customClass: "ganttRed" }] }, <!-- Somoe more data--> }]; Now I have all the data in a PHP DB result. Here it goes: $rows=$db->fetchAllRows($result); $rowsNum=count($rows); And this is how I wanted to create the JSON out of it: var data=''; <?php foreach ($rows as $row){ ?> data['name']="<?php echo $row['name'];?>"; data['desc']="<?php echo $row['desc'];?>"; data['values'] = {"from" : "/Date(<?php echo $row['from'];?>)/", "to" : "/Date(<?php echo $row['to'];?>)/", "label" : "<?php echo $row['label'];?>", "customClass" : "ganttOrange"}; } However, this does not work. I have tried it without the loop and with the PHP variables replaced by plain text just to check, but it did not work either. The chart displays without the added items. If I add a new item by adding it to the list of values, it works. So there is no problem with the Gantt itself or the paths. Based on all of the above, I assume the problem is with how I am adding plain data to the JSON. Can anyone please help me fix it?
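
    The usual approach to this kind of problem, whatever the language, is to build a native data structure row by row and serialize it in one step rather than concatenating JSON text by hand (in PHP that role is played by an array passed to json_encode). A language-agnostic sketch in Python, where the field names mirror the post and the rows variable is an assumed stand-in for the fetched DB rows:

      import json

      # Assumed shape of the fetched rows.
      rows = [
          {"name": "Sprint 0", "desc": "Analysis", "from": 1320192000000,
           "to": 1322401600000, "label": "Requirement Gathering"},
      ]

      # Build a list of dicts mirroring the chart's expected structure...
      data = [{
          "name": row["name"],
          "desc": row["desc"],
          "values": [{
              "from": f"/Date({row['from']})/",
              "to": f"/Date({row['to']})/",
              "label": row["label"],
              "customClass": "ganttOrange",
          }],
      } for row in rows]

      # ...then serialize once and emit it into the page.
      print("var data = " + json.dumps(data) + ";")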

    Read the article

  • problem with implementing a simple work queue

    - by John Deerikio
    Hi all, I am having trouble implementing a simple work queue. Doing some analysis, I am facing a subtle problem. The work queue is backed by a regular linked list. The code looks like this (simplified): 0. while (true) 1. while (enabled == true) 2. acquire lock on the list and get the next action to be executed (blocking operation) (store it in a local variable) 3. execute the action (outside the lock on the list on the previous line) 4. get lock on this work queue 5. wait until this work queue has been notified (triggered when setEnabled(true) has been called) The setEnabled(e) operation looks like this (simplified): enabled = e if (enabled == true) acquire lock on this work queue and do notify() Although this works, there is a condition in which a deadlock occurs. It happens in the following rare situation: while an action is being executed, setEnabled(false) is called; just before step (4) is entered, setEnabled(true) is called; now step (5) keeps waiting forever, because this work queue has already been notified. How do I solve this? I have been looking at this for some time, but I cannot come up with a solution. Please note I am fairly new to thread synchronization. Thanks a lot.
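
    The lost-notification race described here is usually fixed by re-checking the condition (the enabled flag) while holding the same lock that the notifier uses, so a notify that happens before the wait is never lost. A minimal sketch using Python's threading.Condition; the class and method names are illustrative, not the original code:

      import threading

      class WorkQueue:
          def __init__(self):
              self._cond = threading.Condition()
              self._enabled = True

          def set_enabled(self, enabled):
              # Flag change and notify happen under the same lock the waiter uses.
              with self._cond:
                  self._enabled = enabled
                  if enabled:
                      self._cond.notify_all()

          def wait_until_enabled(self):        # replaces steps (4)-(5) of the post
              with self._cond:
                  while not self._enabled:     # predicate re-checked under the lock
                      self._cond.wait()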

    Read the article

  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can, most certainly, be used on any kind of dataset that contains a mixture of categorical (A,B,C,..), continuous (1.2,3,881..) and identifier (XXX1,XXX2...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place and its name/type. And this is where I need some enlightenment. I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are: Create a configuration file in XML or similar with a prespecified format. Pass the information during initialization of the module. Use the first row of the data as headers and try to guess the types (ouch). Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks p.

    Read the article

  • Accurately and securely measure the time spent viewing a web page

    - by balpha
    Suppose the following: You have a web page that presents a simple game to a user (e.g. a quiz, a puzzle, etc). The user solves the puzzle, submits the result, and you want to measure as precisely as possible how long they took to solve it. Assume it's quite simple, so we're talking seconds, not hours. Also assume JavaScript is required anyway, so there's no need to think of JS-disabled browsers. Finally, assume we don't want to use anything like Flash, Silverlight, or the like. I can think of several techniques: Simply take the time between the points when the data was sent from the server and when the submission arrives. Since this is exclusively server-side, there's no chance for cheating. However, issues like network latency and page rendering time might make this unfair for users with slow computers / browsers / internet connections. On the first request, just send the page without the actual game data. When everything is loaded so far, retrieve the game data through an AJAX call and populate it into the page. This is similar to 1., but reduces some of the caveats introduced through time spent on overhead. Have the time measured on the client side using JavaScript and submitted alongside with the solution. This would theoretically be the most accurate, but it introduces the possibility of cheating, because you're relying on client data. Use the request time headers of a "ready to play" AJAX call and the result submission request. Same caveat as 3., as it is still client data. A combination of server side and client side measuring with some kind of plausibility analysis. I can't think of a good way, but maybe you can. Thoughts? Other ideas?
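
    A minimal sketch of the purely server-side measurement in options 1/2: the server remembers when it handed out the game data, keyed by an opaque token, and computes the elapsed time itself when the solution comes back, so the client cannot forge the duration. The names, in-memory storage and framework-free style are assumptions; a real app would keep the start time in its session store and expire stale tokens. The latency caveat from option 1 still applies.

      import secrets, time

      _started = {}   # token -> monotonic start time (assumed in-memory store)

      def start_game():
          # Called when the (ideally AJAX-delivered) game data is handed out.
          token = secrets.token_urlsafe(16)
          _started[token] = time.monotonic()
          return token

      def submit_solution(token):
          # Called when the answer arrives; elapsed time is measured server-side.
          start = _started.pop(token, None)
          if start is None:
              raise ValueError("unknown or already-used token")
          return time.monotonic() - start   # seconds; still includes network latency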

    Read the article

  • design an extendable database model

    - by wishi_
    Hi! Currently I'm doing a project whose specifications are unclear - well, who doesn't? I wonder what the best development strategy is for designing a DB that's going to be extended sooner or later with additional tables and relations. I want to include "changeability". My main concern is that I want to apply design patterns (it's a university project) and I want to separate the constant factors from those that change by choosing appropriate design patterns - in my case MVC and a set of sub-patterns at the model level. When it comes to the DB, however, I may have to redesign the model in my MVC approach, because my domain model at a later stage may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between the DB and the application. Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think that at the DB level my design patterns end, so there's a breach whose impact I'd like to minimize with a smart strategy.

    Read the article

  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository like the Subversion book suggests, and this is also how my previous companies have done it. It looks something like this: /trunk /branches /tags /extlibs /docs where the first three are pretty obvious, and extlibs is for 3rd party assemblies that we wouldn't typically recompile ourselves. All of this works great for the daily development stuff. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except for the fact that this code structure results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs. But I can only specify one repo folder to download in the TeamCity VCS settings. So then the other possibility is to put the /extlibs folder into /trunk, but in order to compile branches, /extlibs would have to go into all of those as well (since I usually branch the trunk, and not individual subfolders)... and this would seem infinitely more evil, since /extlibs could actually be larger than /trunk and /branches, with all of the binaries stored there... Do you guys have any suggestions for me? Thanks!

    Read the article
