Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.

Page 258/300

  • html body inside if condition

    - by gautam
    Hi, I have two different <body> tags for different conditions. Can I do something like this?

        <% if (yes) { %>
        <body onload="JavaScript:timedRefresh(120000);">
        <% } else { %>
        <body>
        <% } %>

    I want to compare my title in the if condition. How can I do that? Currently I am doing it like this:

        <head>
            <title> <tiles:getAsString name="title"/> </title>
        </head>
        <% if(%><tiles:getAsString name="title"/><%.equals("Trade Confirmation Analysis Screen")) %>
        <body onload="JavaScript:timedRefresh(120000);">
        <%else { %>
        <body>
        <% } %>

    It's giving me the error "Unable to compile class for JSP". Please help me. Thanks in advance.
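
    One way that should work, as a hedged sketch (assuming Struts 1 Tiles; the taglib URI may differ by version): <tiles:useAttribute> exposes the tiles attribute as a scripting variable, which an ordinary scriptlet can then compare, instead of splicing tag output into the middle of an <% if %> expression:

        <%@ taglib uri="http://struts.apache.org/tags-tiles" prefix="tiles" %>
        <tiles:useAttribute id="title" name="title" classname="java.lang.String"/>
        <% if ("Trade Confirmation Analysis Screen".equals(title)) { %>
        <body onload="timedRefresh(120000);">
        <% } else { %>
        <body>
        <% } %>

    The original fails to compile because <% if(%>...<%.equals(...)) %> is not a complete Java expression once the JSP is translated; custom-tag output cannot be embedded inside a scriptlet.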

    Read the article

  • Which to use, XMP or RDF?

    - by zotty
    What's the difference between RDF and XMP? From what I can tell, XMP is derived from RDF... so what does it offer that RDF doesn't? My particular situation is this: I've got some images which need tagging with details of how an experiment was performed, and what sort of data analysis has been performed on the images. A colleague of mine is pushing for XMP, but he's thinking of the images as photos. They're not really; they're just bits of data. From what I've seen (mainly by opening images in Notepad++), the XMP data looks very similar to RDF, even so far as using RDF in the tag names (e.g. <rdf:Seq>). I'd like this data to be usable by other people who use similar instruments for similar experiments, so creating a mini standard (schema?) seems like the way to go. Apologies for the lack of fundamental understanding; I'm a doctor, not a programmer! If it makes any difference, the language of choice will be C#.

    Edit for more information: First off, thanks for the excellent replies; thinking of XMP as a vocabulary for RDF makes things a lot clearer. The sort of data I'll be storing won't be available in any of the pre-defined sets. It'll detail experimental set-ups, locations and results. I think using RDF is the way to go.
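
    For a sense of what the "mini standard" amounts to in practice: an XMP packet is RDF/XML wrapped in an x:xmpmeta element, and a custom vocabulary is just a namespace of your own. An illustrative sketch (the lab: namespace URI and property names are invented for the example):

        <x:xmpmeta xmlns:x="adobe:ns:meta/">
          <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
            <rdf:Description rdf:about=""
                xmlns:lab="http://example.org/ns/lab-imaging/1.0/">
              <lab:instrument>Confocal microscope 2</lab:instrument>
              <lab:analysis>
                <rdf:Seq>
                  <rdf:li>background subtraction</rdf:li>
                  <rdf:li>particle counting</rdf:li>
                </rdf:Seq>
              </lab:analysis>
            </rdf:Description>
          </rdf:RDF>
        </x:xmpmeta>

    Publishing the namespace document is what makes the data usable by other groups with similar instruments.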

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me, from my experimenting with Haskell, Erlang and Scheme, that functional programming languages are a fantastic way to answer scientific questions: for example, taking a small set of data and performing some extensive analysis on it to return a significant answer. They're great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time, it seems that by their very nature they are more suited to finding analytical solutions than to actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-Polish-notation version of Python. So I am just curious whether I have missed something in these languages, or whether I am just way off the ball in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks, or of practical tasks that are best performed by functional languages?
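
    For what it's worth, the "accept a request, process data, return it formatted" shape fits Haskell more directly than the question suggests; a minimal sketch, with a hard-coded map standing in for a real data source:

        import qualified Data.Map as Map

        -- stand-in for a real data source
        db :: Map.Map String Int
        db = Map.fromList [("widgets", 42), ("gadgets", 7)]

        -- one request in, one formatted reply out
        handle :: String -> String
        handle key = case Map.lookup key db of
            Just v  -> key ++ " = " ++ show v
            Nothing -> "no such item: " ++ key

        -- serve requests line by line over stdin/stdout
        main :: IO ()
        main = interact (unlines . map handle . lines)

    Laziness is an asset here rather than a liability: interact streams, so each request is answered as it arrives instead of after all input has been read.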

    Read the article

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class:

        class SomeClass<T> { }

    We can write the line:

        SomeClass s = new SomeClass<String>();

    That is fine, because the raw type is a supertype of the generic type. But

        SomeClass<String> s = new SomeClass();

    is correct too. Why is it correct? I thought that type erasure happened before type checking, but that is wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy, it performs the following passes:

    1. parse: reads a set of *.java source files and maps the resulting token sequence into AST nodes.
    2. enter: enters symbols for the definitions into the symbol table.
    3. process annotations: if requested, processes annotations found in the specified compilation units.
    4. attribute: attributes the syntax trees. This step includes name resolution, type checking and constant folding.
    5. flow: performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability.
    6. desugar: rewrites the AST and translates away some syntactic sugar.
    7. generate: generates source files or class files.

    Generics are syntactic sugar, hence type erasure happens in pass 6 (desugar), after type checking, which happens in pass 4 (attribute). I'm confused.
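
    The missing piece is that the second assignment is permitted by an "unchecked conversion" (JLS §5.1.9), not by erasure: the type checker deliberately accepts a raw type where a parameterized type is expected, but flags it. A quick sketch:

        class SomeClass<T> { }

        public class RawDemo {
            public static void main(String[] args) {
                // widening to the raw supertype: compiles with no warning
                SomeClass a = new SomeClass<String>();

                // unchecked conversion: compiles, but javac -Xlint:unchecked
                // reports "unchecked conversion ... required: SomeClass<String>"
                SomeClass<String> b = new SomeClass();
            }
        }

    So the behaviour is consistent with erasure running after type checking: the raw-to-parameterized direction is a rule the type checker itself applies (with a warning) to keep pre-generics code compiling.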

    Read the article

  • How to classify NN/NNP/NNS obtained from POS tagged document as a product feature

    - by Shweta .......
    I'm planning to perform sentiment analysis on reviews of product features (collected from an Amazon dataset). I have extracted the review text from the dataset and performed POS tagging on it. I'm able to extract NN/NNP words as well. But my doubt is: how do I determine which of the extracted words qualify as features of the products? I know there are classifiers in NLTK, but I don't know how I should use them for my project. I'm assuming there are two ways of finding whether an extracted word is a product feature or not. One is to compare it with a bag of words and find out whether my word exists in it. Doubt: how do I create/get the bag of words? The second way is to implement some kind of apriori algorithm to find frequently occurring words as features. I would like to know which method is good and how to go about implementing it. Some pointers to available software or code snippets would be helpful! Thanks!
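
    As a starting point for the frequency-based (apriori-style) route, a rough Python/NLTK sketch (it assumes the NLTK tokenizer and tagger models are downloaded; the threshold and toy reviews are arbitrary):

        import nltk
        from collections import Counter

        reviews = [
            "The battery life is great.",
            "Battery drains too fast.",
            "Love the screen, hate the battery.",
        ]

        counts = Counter()
        for sentence in reviews:
            for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
                if tag.startswith("NN"):            # NN, NNS, NNP, NNPS
                    counts[word.lower()] += 1

        # nouns mentioned across several reviews become candidate features
        features = [w for w, c in counts.items() if c >= 2]
        print(features)   # ['battery'] for this toy input

    A frequency list built this way from a large review corpus is also one common way to bootstrap the bag of words for the first method; pruning with a stopword list and a manual pass usually follows.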

    Read the article

  • How do I set the Eclipse build path and class path from an Ant build file?

    - by Nels Beckman
    Hey folks, there's a lot of discussion about Ant and Eclipse, but nothing previously answered seems to help me. Here's the deal: I am trying to build a Java program that compiles successfully with Ant from the command line. (To confuse matters further, the program I am attempting to compile is Ant itself.) What I really want to do is to bring this project into Eclipse and have it compile in Eclipse such that the type bindings and variable bindings (nomenclature from the Eclipse JDT) are correctly resolved. I need this because I need to run a static analysis on the code that is built on top of the Eclipse JDT. The normal way I bring a Java project into Eclipse so that Eclipse will build it and resolve all the bindings is to just import the source directories into a Java project, and then tell it to use the src/main/ directory as a "source directory." Unfortunately, doing that with Ant causes the build to fail with numerous compile errors. It seems to me that the Ant build file is setting up the class path and build path correctly (possibly by excluding certain source files) and Eclipse does not have this information. Is there any way to take the class path and build path information embedded in an Ant build file, and give that information to Eclipse to put in its .project and .classpath files? I've tried creating a new project from an existing build file (an option in the File menu), but this does not help; the project still has the same compile errors. Thanks, Nels
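
    There is no built-in bridge that reads javac classpaths out of an arbitrary build.xml, but the destination format is simple: Eclipse keeps everything in the project's .classpath file, which can be written by hand (or emitted by a small Ant target) from the same paths and excludes the Ant build uses. A hedged sketch of the format, with illustrative paths; for Ant itself, mirroring the build's excludes for optional tasks whose dependencies are absent is typically what clears the compile errors:

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <!-- mirror <javac srcdir="..."> and its excludes -->
            <classpathentry kind="src" path="src/main"
                excluding="org/apache/tools/ant/taskdefs/optional/**"/>
            <!-- one entry per jar on the Ant compile classpath -->
            <classpathentry kind="lib" path="lib/optional/junit.jar"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
            <classpathentry kind="output" path="build/classes"/>
        </classpath>

    The excluding attribute on a kind="src" entry is the Eclipse equivalent of the excludes the build file applies, which is usually the information Eclipse is missing after a plain import.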

    Read the article

  • Building a control-flow graph from an AST with a visitor pattern using Java

    - by omegatai
    Hi guys, I'm trying to figure out how to implement my LEParserCfgVisitor class so as to build a control-flow graph from an abstract syntax tree already generated with JavaCC. I know there are tools that already exist, but I'm trying to do it in preparation for my compilers final. I know I need a data structure that keeps the graph in memory, and I want to be able to keep attributes like IN, OUT, GEN and KILL in each node, so as to be able to do a control-flow analysis later on. My main problem is that I haven't figured out how to connect the different blocks together, so as to have the right edge between blocks depending on their nature: branches, loops, etc. In other words, I haven't found an explicit algorithm that could help me build my visitor. Here is my empty visitor. You can see it works on basic language expressions, like if, while and basic operations (+,-,x,^,...):

        public class LEParserCfgVisitor implements LEParserVisitor {

            public Object visit(SimpleNode node, Object data) { return data; }

            public Object visit(ASTProgram node, Object data) {
                data = node.childrenAccept(this, data);
                return data;
            }

            public Object visit(ASTBlock node, Object data) { }
            public Object visit(ASTStmt node, Object data) { }
            public Object visit(ASTAssignStmt node, Object data) { }
            public Object visit(ASTIOStmt node, Object data) { }
            public Object visit(ASTIfStmt node, Object data) { }
            public Object visit(ASTWhileStmt node, Object data) { }
            public Object visit(ASTExpr node, Object data) { }
            public Object visit(ASTAddExpr node, Object data) { }
            public Object visit(ASTFactExpr node, Object data) { }
            public Object visit(ASTMultExpr node, Object data) { }
            public Object visit(ASTPowerExpr node, Object data) { }
            public Object visit(ASTUnaryExpr node, Object data) { }
            public Object visit(ASTBasicExpr node, Object data) { }
            public Object visit(ASTFctExpr node, Object data) { }
            public Object visit(ASTRealValue node, Object data) { }
            public Object visit(ASTIntValue node, Object data) { }
            public Object visit(ASTIdentifier node, Object data) { }
        }

    Can anyone give me a hand? Thanks!
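
    A common scheme (a sketch, not the only way): keep a pointer to the "current" basic block in the visitor, append simple statements to it, and only create new blocks and edges at branch points. For an if-statement it might look like this; BasicBlock and addSuccessor() are hypothetical helpers, and the child layout (0 = condition, 1 = then, 2 = else) is assumed:

        public Object visit(ASTIfStmt node, Object data) {
            BasicBlock cond = current;                  // block ending with the condition
            BasicBlock thenBlock = new BasicBlock();
            BasicBlock joinBlock = new BasicBlock();

            cond.addSuccessor(thenBlock);               // true edge
            current = thenBlock;
            node.jjtGetChild(1).jjtAccept(this, data);  // build the 'then' subgraph
            current.addSuccessor(joinBlock);            // fall through to the join

            if (node.jjtGetNumChildren() > 2) {         // there is an 'else' branch
                BasicBlock elseBlock = new BasicBlock();
                cond.addSuccessor(elseBlock);           // false edge
                current = elseBlock;
                node.jjtGetChild(2).jjtAccept(this, data);
                current.addSuccessor(joinBlock);
            } else {
                cond.addSuccessor(joinBlock);           // false edge skips to the join
            }
            current = joinBlock;                        // statements after the if go here
            return data;
        }

    A while-statement is the same idea with a back edge (end of body block to condition block) plus an exit edge from the condition. The IN/OUT/GEN/KILL sets can then live as fields on BasicBlock for the later data-flow pass.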

    Read the article

  • C# Late Binding for Parameterized Property

    - by optim
    I'm trying to use late binding to connect to a COM automation API provided by a program called Amibroker, using a C# WinForms project. So far I've been able to connect to everything in the API except one item, which I believe to be a "parameterized property" based on extensive Googling. Here's what the API specification looks like according to the docs (full version here: http://www.amibroker.com/guide/objects.html):

        Property Filter(ByVal nType As Integer, ByVal pszCategory As String) As Long [r/w]

    A JavaScript snippet to update the value looks like this:

        AB = new ActiveXObject("Broker.Application");
        AA = AB.Analysis;
        AA.Filter( 0, "market" ) = 0;

    Using the following C# late-binding code, I can get the value of the property, but I can't for the life of me figure out how to set the value:

        object[] parameter = new object[2];
        parameter[0] = Number;
        parameter[1] = Type;
        object filters = _analysis.GetType().InvokeMember("Filter",
            BindingFlags.GetProperty, null, _analysis, parameter);

    So far I have tried:

    - using BindingFlags.SetProperty and BindingFlags.SetField
    - casting the returned object to a PropertyInfo object and trying to update the value using it
    - adding an extra object containing the value to the parameters array
    - various other things as last-ditch efforts

    From what I can see, this should be straightforward, but I'm finding late binding in C# cumbersome at best. The property looks like a method call to me, which is what is throwing me off. How does one assign a value to a method, and what would the prototype for late-binding C# code look like for it? Hopefully that explains it well enough, but feel free to ask if I've left anything unclear. Thanks in advance for any help! Daniel
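
    For COM objects reached through IDispatch, Type.InvokeMember treats a parameterized property put as a property set whose argument array is the index arguments followed by the new value as the last element. A hedged sketch (the 0 stands in for whatever value you want to assign):

        object[] args = new object[3];
        args[0] = Number;   // nType
        args[1] = Type;     // pszCategory
        args[2] = 0;        // the value being assigned goes last

        _analysis.GetType().InvokeMember(
            "Filter",
            BindingFlags.SetProperty,   // property put, not GetProperty
            null,
            _analysis,
            args);

    The likely reason the earlier SetProperty attempts failed is that the value was not appended as the final element of the args array, so the argument count no longer matched the property's signature.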

    Read the article

  • How do I ensure that a regex does not match an empty string?

    - by Dancrumb
    I'm using the Jison parser generator for JavaScript and am having problems with my language specification. The program I'm writing is a calculator that can handle feet, inches and sixteenths. To do this, I have the following specification:

        %%
        ([0-9]+\s*"'")?\s*([0-9]+\s*"\"")?\s*([0-9]+\s*"s")?   {return 'FIS';}
        [0-9]+("."[0-9]+)?\b                                   {return 'NUMBER';}
        \s+                                                    {/* skip whitespace */}
        "*"                                                    {return '*';}
        "/"                                                    {return '/';}
        "-"                                                    {return '-';}
        "+"                                                    {return '+';}
        "("                                                    {return '(';}
        ")"                                                    {return ')';}
        <<EOF>>                                                {return 'EOF';}

    Most of these lines come from a basic calculator specification; I simply added the first rule. The regex correctly matches feet-inches-sixteenths values such as 6'4" (six feet, four inches) or 4"5s (four inches, five sixteenths), with any kind of whitespace between the numbers and the indicators. The problem is that the regex also matches the empty string. As a result, the lexical analysis always records a FIS token at the start of the line, and the parse fails. Here is my question: is there a way to modify this regex to guarantee that it will only match a non-zero-length string?
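
    One way to rule out the empty match (an untested sketch): since all three groups are optional, spell out the legal combinations as alternatives, each anchored by a mandatory leading component:

        ([0-9]+\s*"'"(\s*[0-9]+\s*"\"")?(\s*[0-9]+\s*"s")?|[0-9]+\s*"\""(\s*[0-9]+\s*"s")?|[0-9]+\s*"s")   {return 'FIS';}

    The three alternatives require feet, inches, or sixteenths respectively as their first component, with the later components optional, so the rule can never match a zero-length string while still accepting 6'4", 4"5s, and the rest.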

    Read the article

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code. As time passed, the compilers matured. They became very smart in that their flow analysis allowed them to make better decisions about what values to hold in registers than you could possibly do. The register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations, due to aliasing issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor the code that will remove these prohibitions and allow the optimizer to generate even faster code? [Edit] Offset related link
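
    As one concrete instance of the aliasing point: C99's restrict qualifier hands the compiler the same no-overlap guarantee FORTRAN gets for free. A small sketch (behaviour is undefined if the arrays really do overlap):

        /* without 'restrict' the compiler must assume dst and src may alias,
           forcing it to reload src[i] after every store; with it, the loop
           can be vectorized and unrolled far more aggressively */
        void scale(double * restrict dst, const double * restrict src,
                   int n, double k)
        {
            for (int i = 0; i < n; i++)
                dst[i] = k * src[i];
        }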

    Read the article

  • advanced winform framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from users and save it to a database, across about 30 screens, and I would like to find a framework that will provide the maximum number of these features out of the box:

    - .NET C#, Windows Forms
    - unit testing, continuous integration
    - screens with lists, combo boxes, text boxes, add, delete, save, cancel that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find their way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows
    - search forms
    - ability to grant access to some functionalities according to user profiles
    - MVP/MVVM or whatever design patterns
    - either code generation from the database to C# classes, or generation of the database schema from C# classes
    - some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production
    - code metrics analysis
    - a code generator I can use against my entities that would generate some rough forms I can rearrange afterwards
    - a code documentation generator
    - ...

    Any ideas? I know it's a lot, but I really would like to use existing code to build upon so I can focus on business rules. Do you have any suggestions to add to the list before starting? What open source tools would you use to achieve these?

    Read the article

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably. However, I find this RSS mashup scheme kind of clumsy: it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing to the data, as well as eventually rank the collected list of URLs into some order of media prominence. Right now I don't want to pay the ridiculous Radian6 web analytics fees, and my intuition is telling me that with a bit of elbow grease I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science; my background is in physical science/statistics, so is my thinking on the right track? I guess I am imagining an application that allows me to query in a refined manner: a manner that allows me to search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources (like a collection of blogs, or Twitter, or social networking communities), then save the results of my queries in a structured format that can then be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 ([edit]: I'm keeping this updated based on the better suggestions):

    - Source control access
    - Issue/bug/project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including:
      - build document/environment (covered by this question)
      - design documents / technical notes
      - coding style guidelines
      - deploy procedures for review/testing/QA/staging/production
      - licensing details for your tools and your product
    - Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    - Compiler/IDE
    - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - Communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools

    What else goes in this list, and how do you document it and vet changes?

    Read the article

  • Cannot call DLL import entry in C# from C++ project. EntryPointNotFoundException

    - by kriau
    I'm trying to call from C# a function in a custom DLL written in C++. However, I'm getting a warning during code analysis and an error at runtime:

        Warning: CA1400 : Microsoft.Interoperability : Correct the declaration of
        'SafeNativeMethods.SetHook()' so that it correctly points to an existing
        entry point in 'wi.dll'. The unmanaged entry point name currently linked
        to is SetHook.

        Error: System.EntryPointNotFoundException was unhandled. Unable to find
        an entry point named 'SetHook' in DLL 'wi.dll'.

    Both the wi.dll project and the C# exe have been compiled into the same DEBUG folder, and both files reside there. There is only one file with the name wi.dll in the whole file system. The C++ function definition looks like:

        #define WI_API __declspec(dllexport)

        bool WI_API SetHook();

    I can see the exported function using Dependency Walker: decorated, it is bool SetHook(void); undecorated, it is ?SetHook@@YA_NXZ. The C# DLL import looks like this (I defined these lines using CLRInsideOut from MSDN magazine):

        [DllImport("wi.dll", EntryPoint = "SetHook", CallingConvention = CallingConvention.Cdecl)]
        [return: MarshalAsAttribute(UnmanagedType.I1)]
        internal static extern bool SetHook();

    I've tried without the EntryPoint and CallingConvention definitions as well. Both projects are 32-bit; I'm using Windows 7 64-bit and VS 2010 RC. I believe I have simply overlooked something... Thanks in advance.
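
    The symptom (Dependency Walker shows the decorated name ?SetHook@@YA_NXZ while DllImport looks for plain SetHook) points at C++ name mangling. A likely fix, as a sketch: export with C linkage, or alternatively keep the C++ export and name the decorated entry point on the C# side:

        // C++ side: unmangled export, so EntryPoint = "SetHook" resolves
        extern "C" __declspec(dllexport) bool SetHook();

        // C# alternative: keep the mangled C++ export and reference it directly
        [DllImport("wi.dll", EntryPoint = "?SetHook@@YA_NXZ",
                   CallingConvention = CallingConvention.Cdecl)]
        [return: MarshalAs(UnmanagedType.I1)]
        internal static extern bool SetHook();

    extern "C" is the more maintainable of the two, since the decorated name changes whenever the signature (or compiler) does.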

    Read the article

  • Significance in R

    - by Gemsie
    Ok, this is quite hard to explain, but I'm at a complete loss as to what to do. I'm a relative newcomer to R, and although I can completely admire how powerful it is, I'm not too good at actually using it... Basically, I have some very contrived data that I need to analyse (it wasn't me who chose this, I can assure you!). I have the right and left hand lengths of lots of people, as well as some numeric data that shows their sociability. Now I would like to know if people who have significantly different lengths of hand are more or less sociable than those whose hands are the same length (leading into the research that 'symmetrical' people are more sociable and intelligent, etc.). I have got as far as loading the data into R; then I have no idea where to go from there. How on Earth do I start to separate those who are close to symmetrical from those who aren't, to then start the analysis?

    Ok, using Sasha's great advice, I did cor.test and got the following:

        Pearson's product-moment correlation

        data:  measurements$l.hand - measurements$r.hand and measurements$sociable
        t = 0.2148, df = 150, p-value = 0.8302
        alternative hypothesis: true correlation is not equal to 0
        95 percent confidence interval:
         -0.1420623  0.1762437
        sample estimates:
               cor
        0.01753501

    I have never used this test before, so am unsure how to interpret it... you wouldn't think I was on my fourth scientific degree, would you?! :(
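
    One thing worth noting about the test above: the signed difference l.hand - r.hand lets left-longer and right-longer people cancel out, so the absolute difference is usually the quantity of interest for asymmetry. A sketch, assuming the column names shown above:

        # degree of asymmetry, ignoring which hand is longer
        asym <- abs(measurements$l.hand - measurements$r.hand)

        # option 1: keep asymmetry continuous and correlate with sociability
        cor.test(asym, measurements$sociable)

        # option 2: split at the median and compare the two groups
        group <- ifelse(asym <= median(asym), "symmetric", "asymmetric")
        t.test(measurements$sociable ~ group)

    In either case, a p-value like the 0.83 above would mean no detectable relationship at conventional significance levels.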

    Read the article

  • Constructor being called again?

    - by Halo
    I have this constructor:

        public UmlDiagramEntity(ReportElement reportElement, int pageIndex, Controller controller) {
            super(reportElement.getX1(), reportElement.getY1(),
                  reportElement.getX2(), reportElement.getY2());
            setLayout(null);
            this.pageIndex = pageIndex;
            this.controller = controller;
            reportElements = reportElement.getInternalReportElements();
            components = new ArrayList<AbstractEntity>();
            changedComponentIndex = -1;
            PageListener p = new PageListener();
            this.addMouseMotionListener(p);
            this.addMouseListener(p);
            setPage();
        }

    And I have an update method in the same class:

        @Override
        public void update(ReportElement reportElement) {
            if (changedComponentIndex == -1) {
                super.update(reportElement);
            } else {
                reportElements = reportElement.getInternalReportElements();
                if (components.size() == reportElements.size()) {
                    if (!isCommitted) {
                        if (reportElement.getType() == ReportElementType.UmlRelation) {
                            if (checkInvolvementAndSet(changedComponentIndex)) {
                                anchorEntity(changedComponentIndex);
                            } else {
                                resistChanges(changedComponentIndex);
                            }
                            return;
                        }
                    }
        ...and it goes on.

    When I follow the flow in the debugger, I see that when update is called, somewhere in the method the program goes into the constructor and executes it all over again (super, pageIndex, etc.). Why does it go into the constructor? :D I didn't tell it to go there. I can do a deeper analysis and see where it goes into the constructor if you want. By the way, changedComponentIndex is a static variable.

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint in the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per document. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies instead. What other tests would you carry out?
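
    For reference, a scoped-lock object of the kind described reads roughly like this (a sketch; the class name and warning text are illustrative, not from the codebase):

        // acquires the CMutex in the ctor and releases it in the dtor, so every
        // exit path (including exceptions) unlocks; the bounded wait turns a
        // silent deadlock into a visible warning
        class ScopedLock
        {
        public:
            explicit ScopedLock(CMutex* mutex, DWORD timeoutMs = 5000)
                : m_mutex(mutex)
            {
                if (!m_mutex->Lock(timeoutMs))      // CMutex::Lock takes a timeout
                    TRACE(_T("ScopedLock: timeout - possible deadlock\n"));
            }
            ~ScopedLock() { m_mutex->Unlock(); }

        private:
            CMutex* m_mutex;
        };

    One cheap addition to the existing suite is a stress build in which each lock acquisition is preceded by a randomized Sleep(0) or Sleep(1): reshuffling the thread interleavings this way makes latent races fire far more often under the same GUI automation.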

    Read the article

  • How do I generate reports in R?

    - by Maiasaura
    I've been struggling for a week now trying to figure out how to generate reports in R using either Sweave or brew. I should say right at the beginning that I have never used TeX before, but I understand the logic of it. I have read this document several times. However, I cannot even get a simple example to parse. Brew successfully converts a simple markup file (just a title and some text) to a .tex file (no error). But it never converts the tex to a pdf:

        > library(tools)
        > library(brew)
        > brew("population.brew", "population.tex")
        > texi2dvi("population.tex", pdf = TRUE)

    The last step always fails with:

        Error in texi2dvi("population.tex", pdf = TRUE) :
          Running 'texi2dvi' on 'population.tex' failed.

    What am I doing wrong? The report I am trying to build is fairly simple: I have 157 different analyses to summarize, and each one has 4 plots, 1 table and a summary. I just want "output plots 1,2,3,4; output table; \pagebreak; ..." and that's it. Can anyone help me get further? I use OS X and don't have TeX installed. Thanks.
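
    Note that brew only does the templating: texi2dvi shells out to the system TeX toolchain, so with no TeX distribution installed (MacTeX on OS X) the last step cannot succeed regardless of the template. For the template itself, a rough brew sketch of the per-analysis loop (file and variable names invented; one plot shown where the real report would emit four):

        \documentclass{article}
        \usepackage{graphicx}
        \begin{document}
        <% for (i in 1:157) { %>
        \section*{Analysis <%= i %>}
        <%
           png(sprintf("plot-%03d.png", i))
           plot(results[[i]])          # 'results' assumed to hold the analyses
           dev.off()
        %>
        \includegraphics{plot-<%= sprintf("%03d", i) %>.png}
        \pagebreak
        <% } %>
        \end{document}

    In brew templates, <% %> runs R code and <%= %> prints a value into the document, which is all the looping machinery this report needs.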

    Read the article

  • How do I use Spreadsheet::WriteExcel to create a chart from numeric log data?

    - by yaohung
    I used csv2xls.pl to convert a text log into .xls format, and then I create a chart as in the following:

        my $chart3 = $workbook->add_chart( type => 'line', embedded => 1 );

        # Configure the series.
        $chart3->add_series(
            categories => '=Sheet1!$B$2:$B$64',
            values     => '=Sheet1!$C$2:$C$64',
            name       => 'Test data series 1',
        );

        # Add some labels.
        $chart3->set_title( name => 'Bridge Rate Analysis' );
        $chart3->set_x_axis( name => 'Packet Size' );
        $chart3->set_y_axis( name => 'BVI Rate' );

        # Insert the chart into the main worksheet.
        $worksheet->insert_chart( 'G2', $chart3 );

    I can see the chart in the .xls file. However, all the data are in text format, not numeric, so the chart looks wrong. How do I convert the text into numbers before applying this create-chart function? Also, how do I sort the .xls data before creating the chart?
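
    A sketch of one approach (variable names assumed): sort the parsed log rows in Perl before writing, and write real numbers with write_number() so the chart plots values instead of text:

        # @parsed_log assumed to hold [packet_size, rate] pairs from the log
        my @rows = sort { $a->[0] <=> $b->[0] } @parsed_log;   # sort by packet size

        my $r = 1;                                             # row 0 holds headers
        for my $row (@rows) {
            my ($size, $rate) = @$row;
            $worksheet->write_number( $r, 1, 0 + $size );      # column B, forced numeric
            $worksheet->write_number( $r, 2, 0 + $rate );      # column C, forced numeric
            $r++;
        }

    Spreadsheet::WriteExcel stores whatever does not look numeric as a string, so forcing the conversion (0 + $value) and using write_number removes the ambiguity; sorting has to happen on the Perl side, since the module writes cells but does not reorder them.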

    Read the article

  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact: in something in our professional careers, something we learned from others, or lessons learned the hard way by ourselves or by those who came before us. Unfortunately, as these truisms spread throughout the community, the details (why they came about, and the caveats that affect when they apply) tend not to spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we sometimes misapply them, we pay a penalty that could be avoided by understanding the details behind them. For example, when user-defined functions were first introduced in SQL Server, it became "common knowledge" within a year or so that they had extremely bad performance (because they required a re-compilation for each use) and should be avoided. This "truism" still increases many database developers' aversion to using UDFs, even though Microsoft's introduction of inline UDFs, which do not suffer from this issue at all, mitigates it substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs because of this. What other common not-so-true "truisms" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a rule or principle is in fact correct most of the time, or was correct back when it was first elucidated, but (in the edge cases, because the principle was never thoroughly understood, because technology has changed since it first spread, or because the rule is applied today without understanding the details behind it) can easily backfire or cause the opposite of the intended effect.

    Read the article

  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can, most certainly, be used on any kind of dataset that contains a mixture of categorical (A, B, C, ...), continuous (1.2, 3,881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should go, and the structure of the data. By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment: I am baffled how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are:

    - Create a configuration file in XML or similar, with a prespecified format.
    - Pass the information during initialization of the module.
    - Use the first row of the data as headers and try to guess the types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.
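
    One possible shape for the interface, as a sketch (the package and argument names are invented, not canonical): take an explicit schema at construction time, as plain Perl data, so a separate XML/YAML schema file becomes an optional convenience rather than a requirement:

        package BioDataSet;   # hypothetical module
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = {
                file       => $args{file},
                sep        => $args{sep}        // "\t",
                report_dir => $args{report_dir} // '.',
                # each column: { name => ..., type => 'id'|'categorical'|'continuous' }
                schema     => $args{schema}     // [],
            };
            return bless $self, $class;
        }

        1;

    A caller then describes the layout inline, and the same hash could just as easily be loaded from a schema file for those who prefer one:

        my $ds = BioDataSet->new(
            file       => 'patients.tsv',
            report_dir => 'reports',
            schema     => [
                { name => 'patient_id', type => 'id' },
                { name => 'group',      type => 'categorical' },
                { name => 'dose',       type => 'continuous' },
            ],
        );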

    Read the article

  • Dynamically add data stored in php to nested json

    - by HoGo
    I am trying to dynamically generate the data in JSON for a jQuery Gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format:

        var data = [{
            name: "Sprint 0",
            desc: "Analysis",
            values: [{
                from: "/Date(1320192000000)/",
                to: "/Date(1322401600000)/",
                label: "Requirement Gathering",
                customClass: "ganttRed"
            }]
        },{
            name: " ",
            desc: "Scoping",
            values: [{
                from: "/Date(1322611200000)/",
                to: "/Date(1323302400000)/",
                label: "Scoping",
                customClass: "ganttRed"
            }]
        },
        // ... some more data ...
        }];

    Now I have all the data in a PHP DB result:

        $rows = $db->fetchAllRows($result);
        $rowsNum = count($rows);

    And this is how I wanted to create the JSON out of it:

        var data = '';
        <?php foreach ($rows as $row) { ?>
            data['name'] = "<?php echo $row['name']; ?>";
            data['desc'] = "<?php echo $row['desc']; ?>";
            data['values'] = {"from" : "/Date(<?php echo $row['from']; ?>)/",
                              "to" : "/Date(<?php echo $row['to']; ?>)/",
                              "label" : "<?php echo $row['label']; ?>",
                              "customClass" : "ganttOrange"};
        <?php } ?>

    However this does not work. I have tried without the loop, and replacing the PHP variables with plain text just to check, but that did not work either; the chart displays without the added items. If I add a new item by hand to the list of values, it works, so there is no problem with the Gantt itself or with paths. Based on all of the above, I assume the problem is with adding plain data to the JSON. Can anyone please help me fix it?
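
    Rather than assembling the JavaScript by hand, one approach (a sketch; the column names follow the snippet above) is to build the whole structure as a PHP array and let json_encode emit it:

        <?php
        $rows = $db->fetchAllRows($result);

        $data = array();
        foreach ($rows as $row) {
            $data[] = array(
                'name'   => $row['name'],
                'desc'   => $row['desc'],
                'values' => array(array(          // 'values' is a list of objects
                    'from'        => '/Date(' . $row['from'] . ')/',
                    'to'          => '/Date(' . $row['to'] . ')/',
                    'label'       => $row['label'],
                    'customClass' => 'ganttOrange',
                )),
            );
        }
        ?>
        var data = <?php echo json_encode($data); ?>;

    json_encode turns the associative arrays into JSON objects and the numeric arrays into JSON lists, so the quoting and escaping problems of hand-built string concatenation disappear.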

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However I am concerned about a few things. Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will make it easier to crack the key). Also, since the encryption key is derived from a passphrase, this makes the key space much smaller (I know the salt will help, but I have to send the salt through the network once, and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers

    Read the article

  • Building static (but complicated) lookup table using templates.

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double> >) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values in the table are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated is: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table can be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead during runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best: 2D C array, vector of vectors, etc.) at compile time. You have a function defined:

        double f(int row, int col);

    where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
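
    A sketch of how this looks with constexpr, which on modern compilers has largely replaced template metaprogramming for this kind of table (C++17 is assumed, for std::array's constexpr operator[]); it works provided f() and the secant iteration can be expressed as constexpr functions:

        #include <array>

        // stand-in for the real secant-method computation
        constexpr double f(int row, int col)
        {
            return row * 150 + col;
        }

        template <std::size_t Rows, std::size_t Cols>
        constexpr std::array<std::array<double, Cols>, Rows> makeTable()
        {
            std::array<std::array<double, Cols>, Rows> t{};
            for (std::size_t r = 0; r < Rows; ++r)
                for (std::size_t c = 0; c < Cols; ++c)
                    t[r][c] = f(static_cast<int>(r), static_cast<int>(c));
            return t;
        }

        // evaluated entirely at compile time; the table lands in the binary
        constexpr auto table = makeTable<200, 150>();
        static_assert(table[10][5] == 1505.0, "sanity check at compile time");

    The equations stay in ordinary (constexpr) C++, so maintainability is preserved, and the construction cost at run time drops to zero.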

    Read the article
