Search Results

Search found 16948 results on 678 pages for 'static analysis'.


  • Is there a way to force ContourPlot to re-check all the points at each stage of its recursion algorithm?

    - by Alexey Popkov
    Hello, Thanks to this excellent analysis of the Plot algorithm by Yaroslav Bulatov, I now understand why Plot3D and ContourPlot fail to draw functions with breaks and discontinuities smoothly. For example, in the following case ContourPlot fails to draw the contour x^2 + y^2 = 1 at all: ContourPlot[Abs[x^2 + y^2 - 1], {x, -1, 1}, {y, -1, 1}, Contours -> {0}] This is because the algorithm does not go deeply into the region near x^2 + y^2 = 1. It "drops" this region at an initial stage and does not try to investigate it further. Increasing MaxRecursion does nothing in this respect. Even the undocumented option Method -> {Refinement -> {ControlValue -> .01 \[Degree]}} does not help (although it makes Plot3D a little smoother). The above function is just a simple example. In real life I'm working with very complicated implicit functions that cannot be solved analytically. Is there a way to get ContourPlot to go deeply into such regions near breaks and discontinuities?


  • How do I generate reports in R?

    - by Maiasaura
    I've been struggling for a week now trying to figure out how to generate reports in R using either Sweave or Brew. I should say right at the beginning that I have never used TeX before, but I understand the logic of it. I have read this document several times. However, I cannot even get a simple example to work. Brew successfully converts a simple markup file (just a title and some text) to a .tex file (no error), but it never converts the .tex file to a PDF. > library(tools) > library(brew) > brew("population.brew", "population.tex") > texi2dvi("population.tex", pdf = TRUE) The last step always fails with: Error in texi2dvi("population.tex", pdf = TRUE) : Running 'texi2dvi' on 'population.tex' failed. What am I doing wrong? The report I am trying to build is fairly simple. I have 157 different analyses to summarize. Each one has 4 plots, 1 table and a summary. I just want output plot 1,2,3,4 output table \pagebreak ... that's it. Can anyone help me get further? I use OS X and don't have TeX installed. Thanks


  • How do I use Spreadsheet::WriteExcel to create a chart from numeric log data?

    - by yaohung
    I used csv2xls.pl to convert a text log into .xls format, and then I create a chart as follows: my $chart3 = $workbook->add_chart( type => 'line' , embedded => 1); # Configure the series. $chart3->add_series( categories => '=Sheet1!$B$2:$B$64', values => '=Sheet1!$C$2:$C$64', name => 'Test data series 1', ); # Add some labels. $chart3->set_title( name => 'Bridge Rate Analysis' ); $chart3->set_x_axis( name => 'Packet Size ' ); $chart3->set_y_axis( name => 'BVI Rate' ); # Insert the chart into the main worksheet. $worksheet->insert_chart( 'G2', $chart3 ); I can see the chart in the .xls file. However, all the data are in text format, not numeric, so the chart looks wrong. How do I convert the text into numbers before applying this chart-creation step? Also, how do I sort the .xls file before creating the chart?


  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact: in something from our professional careers, something we learned from others, or lessons learned the hard way, by ourselves or by those who came before us. Unfortunately, as these truisms spread throughout the community, the details (why they came about, and the caveats that affect when they apply) tend not to spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we misapply them we pay a penalty that could have been avoided by understanding the details behind them. For example, when user-defined functions were first introduced in SQL Server, it became "common knowledge" within a year or so that they had extremely bad performance (because they required a recompilation for each use) and should be avoided. This "truism" still feeds many database developers' aversion to using UDFs, even though Microsoft's introduction of inline UDFs, which do not suffer from this issue at all, mitigates it substantially. In recent years I have run into numerous DBAs who, because of this, still believe you should "never" use UDFs. What other common not-so-true "truisms" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start off with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but can easily backfire or cause the opposite of the intended effect in edge cases, when the principle is not thoroughly understood, when technology has changed since the rule first spread, or when the rule is applied today without understanding the details behind it.


  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can almost certainly be used on any kind of dataset that contains a mixture of categorical (A, B, C, ...), continuous (1.2, 3, 881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should go, and the structure of the data. By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment. I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are: Create a configuration file in XML or similar with a prespecified format. Pass the information during initialization of the module. Use the first row of the data as headers and try to guess the types (ouch). Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks p.


  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (making it easier to crack the key). Also, since the encryption key is derived from a passphrase, this makes the key space much smaller (I know the salt will help, but I have to send the salt through the network once, and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
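
    [Editor's note] For reference, a minimal Java sketch of the scheme the question describes (PBKDF2 key derivation from passphrase and salt, then AES-CBC with a fresh random IV prepended to each datagram payload) might look like the following; the class and method names are purely illustrative, not an existing API:

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        public class PacketCrypto {
            private final SecretKeySpec key;
            private final SecureRandom random = new SecureRandom();

            // Derive a 128-bit AES key from the passphrase and a random salt via PBKDF2.
            public PacketCrypto(char[] passphrase, byte[] salt) throws Exception {
                SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                byte[] keyBytes = factory.generateSecret(
                        new PBEKeySpec(passphrase, salt, 10000, 128)).getEncoded();
                this.key = new SecretKeySpec(keyBytes, "AES");
            }

            // Encrypt one datagram payload. The fresh IV is prepended so the
            // receiver can decrypt every packet independently of the others.
            public byte[] encryptPacket(byte[] plaintext) throws Exception {
                byte[] iv = new byte[16];
                random.nextBytes(iv);
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                byte[] ciphertext = cipher.doFinal(plaintext);
                byte[] packet = new byte[iv.length + ciphertext.length];
                System.arraycopy(iv, 0, packet, 0, iv.length);
                System.arraycopy(ciphertext, 0, packet, iv.length, ciphertext.length);
                return packet;
            }
        }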


  • Dynamically add data stored in PHP to nested JSON

    - by HoGo
    I am trying to dynamically generate JSON data for a jQuery Gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format: var data = [{ name: "Sprint 0", desc: "Analysis", values: [{ from: "/Date(1320192000000)/", to: "/Date(1322401600000)/", label: "Requirement Gathering", customClass: "ganttRed" }] },{ name: " ", desc: "Scoping", values: [{ from: "/Date(1322611200000)/", to: "/Date(1323302400000)/", label: "Scoping", customClass: "ganttRed" }] }, <!-- Some more data --> }]; Now I have all the data in a PHP DB result: $rows=$db->fetchAllRows($result); $rowsNum=count($rows); And this is how I wanted to create the JSON out of it: var data=''; <?php foreach ($rows as $row){ ?> data['name']="<?php echo $row['name'];?>"; data['desc']="<?php echo $row['desc'];?>"; data['values'] = {"from" : "/Date(<?php echo $row['from'];?>)/", "to" : "/Date(<?php echo $row['to'];?>)/", "label" : "<?php echo $row['label'];?>", "customClass" : "ganttOrange"}; } However, this does not work. I have tried it without the loop, and with the PHP variables replaced by plain text just to check, but it did not work either. The chart displays without the added items. If I add a new item directly to the list of values, it works, so there is no problem with the Gantt chart itself or the paths. Based on all of the above, I assume the problem is with how I am adding the plain data to the JSON. Can anyone please help me fix it?


  • Accurately and securely measure the time spent viewing a web page

    - by balpha
    Suppose the following: You have a web page that presents a simple game to a user (e.g. a quiz, a puzzle, etc). The user solves the puzzle, submits the result, and you want to measure as precisely as possible how long they took to solve it. Assume it's quite simple, so we're talking seconds, not hours. Also assume JavaScript is required anyway, so there's no need to think of JS-disabled browsers. Finally, assume we don't want to use anything like Flash, Silverlight, or the like. I can think of several techniques: Simply take the time between the points when the data was sent from the server and when the submission arrives. Since this is exclusively server-side, there's no chance for cheating. However, issues like network latency and page rendering time might make this unfair for users with slow computers / browsers / internet connections. On the first request, just send the page without the actual game data. When everything is loaded so far, retrieve the game data through an AJAX call and populate it into the page. This is similar to 1., but reduces some of the caveats introduced through time spent on overhead. Have the time measured on the client side using JavaScript and submitted alongside with the solution. This would theoretically be the most accurate, but it introduces the possibility of cheating, because you're relying on client data. Use the request time headers of a "ready to play" AJAX call and the result submission request. Same caveat as 3., as it is still client data. A combination of server side and client side measuring with some kind of plausibility analysis. I can't think of a good way, but maybe you can. Thoughts? Other ideas?
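
    [Editor's note] For what it's worth, a minimal Java sketch of the server-side variant (techniques 1/5) that avoids per-user server state: sign the issue timestamp with an HMAC, hand the token to the client along with the puzzle, and verify it on submission. All names here are illustrative, and the secret is assumed to live only on the server:

        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class PlayTimer {
            private final SecretKeySpec key;

            public PlayTimer(byte[] serverSecret) {
                this.key = new SecretKeySpec(serverSecret, "HmacSHA256");
            }

            private String sign(String payload) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(key);
                return Base64.getEncoder().encodeToString(
                        mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
            }

            // Sent to the client with the puzzle: "issuedAtMillis:signature".
            public String issueToken(long nowMillis) throws Exception {
                return nowMillis + ":" + sign(Long.toString(nowMillis));
            }

            // On submission: verify the signature, then compute the elapsed
            // time entirely server-side, so the client cannot forge it.
            public long elapsedMillis(String token, long nowMillis) throws Exception {
                String[] parts = token.split(":", 2);
                if (!sign(parts[0]).equals(parts[1])) {
                    throw new IllegalArgumentException("tampered token");
                }
                return nowMillis - Long.parseLong(parts[0]);
            }
        }

    This still shares caveat 1 above (latency and rendering time count against the user); it only removes the need to keep the start time in a server-side session.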


  • design an extendable database model

    - by wishi_
    Hi! Currently I'm doing a project whose specifications are unclear (well, whose aren't?). I wonder what the best development strategy is for designing a DB that's going to be extended, sooner or later, with additional tables and relations. I want to include "changeability". My main concern is that I want to apply design patterns (it's a university project) and to separate the constant factors from those that change, by choosing appropriate design patterns; in my case MVC and a set of sub-patterns at the model level. When it comes to the DB, however, I may have to redesign the model in my MVC approach, because at a later stage my domain model may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between the DB and the application. Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think my design patterns end at the DB level, so there's a breach whose impact I'd like to minimize with a smart strategy.


  • problem with implementing a simple work queue

    - by John Deerikio
    Hi all, I am having trouble implementing a simple work queue. Doing some analysis, I am facing a subtle problem. The work queue is backed by a regular linked list. The code looks like this (simplified): 0. while (true) 1. while (enabled == true) 2. acquire lock on the list and get the next action to be executed (blocking operation) (store it in a local variable) 3. execute the action (outside the lock on the list on the previous line) 4. get lock on this work queue 5. wait until this work queue has been notified (triggered when setEnabled(true) has been called) The setEnabled(e) operation looks like this (simplified): enabled = e if (enabled == true) acquire lock on this work queue and do notify() Although this works, there is a condition in which a deadlock occurs. It happens in the following rare situation: while an action is being executed, setEnabled(false) is called; just before step (4) is entered, setEnabled(true) is called; now step (5) keeps waiting forever, because this work queue has already been notified. How do I solve this? I have been looking at this for some time, but I cannot come up with a solution. Please note I am fairly new to thread synchronization. Thanks a lot.
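
    [Editor's note] This is the classic lost-wakeup race, and the usual cure is to check the flag and wait under the same monitor that setEnabled uses to change it, re-checking in a loop. A minimal Java sketch of that shape (illustrative names, not the poster's actual code):

        public class WorkQueue {
            private final Object stateLock = new Object();
            private boolean enabled = true;

            public void setEnabled(boolean e) {
                synchronized (stateLock) {
                    enabled = e;
                    if (enabled) {
                        stateLock.notifyAll();
                    }
                }
            }

            // Called by the worker between actions; blocks while the queue is
            // disabled. Because the flag is read inside the same lock that
            // setEnabled takes, no notify can slip in between the check and
            // the wait, so the missed-notification deadlock cannot occur.
            void awaitEnabled() throws InterruptedException {
                synchronized (stateLock) {
                    while (!enabled) {
                        stateLock.wait();
                    }
                }
            }
        }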



  • WSDL first for existing service layer

    - by Jurgen H
    I am working on an existing Java project with a typical services/DAO setup, for which only a web application was available. My job is to add web services on top of the services layer, but the web services have their own functional analysis and data model. The functional analysis of course focuses on what is possible in the different service methods. As good practice demands, we used the WSDL-first strategy and generated JAXB-bound Java classes and an SEI for the web services. After having partially implemented the web services, we noticed a 70% match between the data models. This resulted in writing converters which take the web service JAXB classes and map them to the service-layer classes. Customer customer = new Customer(); customer.setName(wsCustomer.getName()); customer.setFirstName(wsCustomer.getFirstName()); .. This is a very obvious example; some other mappings were a little more complicated. Can anyone share their best practices, experiences, or solutions for this kind of situation? Are any of these frameworks useful? http://transmorph.sourceforge.net/wiki/index.php/Main_Page http://ezmorph.sourceforge.net/ Please don't start a discussion about WSDL first vs code first.
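
    [Editor's note] One common way to keep such hand-written mappings manageable is to give each pair of classes its own converter, so the mapping lives in exactly one place per type. A Java sketch of that shape, with stub types standing in for the generated JAXB class and the existing domain class (all names illustrative):

        // Stub types; in the real project these would be the generated JAXB
        // class and the existing domain class.
        class WsCustomer {
            String name, firstName;
        }

        class Customer {
            private String name, firstName;
            void setName(String n) { name = n; }
            void setFirstName(String f) { firstName = f; }
            String getName() { return name; }
            String getFirstName() { return firstName; }
        }

        // All Customer mapping logic lives here, in both directions, instead
        // of being repeated inside each web service endpoint method.
        class CustomerConverter {
            Customer toDomain(WsCustomer ws) {
                Customer customer = new Customer();
                customer.setName(ws.name);
                customer.setFirstName(ws.firstName);
                return customer;
            }

            WsCustomer toWs(Customer domain) {
                WsCustomer ws = new WsCustomer();
                ws.name = domain.getName();
                ws.firstName = domain.getFirstName();
                return ws;
            }
        }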


  • How many users are sufficient to make a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering from high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance drops drastically, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the DB. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran top and htop, and they showed that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce the load. Administrators said that there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP; I can't do deep profiling analysis or code refactoring, so I'm considering two options: my tables are MyISAM, and I heard they use table-level locking, which is very slow; is that right? Could I change them to InnoDB without worry? And what if I move MySQL to a dedicated machine with a lot of RAM?


  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository like the Subversion book suggests, and this is also how my previous companies have done it. It looks something like this: /trunk /branches /tags /extlibs /docs where the first three are pretty obvious, and extlibs is for 3rd-party assemblies that we wouldn't typically recompile ourselves. All of this works great for the daily development stuff. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except for the fact that this code structure results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs, but I can only specify one repo folder to download in the TeamCity VCS settings. The other possibility is to put the /extlibs folder into /trunk, but in order to compile branches, /extlibs would have to go into all of those as well (since I usually branch the trunk, and not individual subfolders)... and this would seem infinitely more evil, since /extlibs could actually be larger than /trunk and /branches, with all of the binaries stored there... Do you guys have any suggestions for me? Thanks!


  • Advanced All In One .NET Framework (should I go for a software factory?)

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from users and save it to a database across about 30 screens, and I would like to find a framework that will provide the maximum number of these features out of the box: .NET C#; Windows Forms; unit testing; continuous integration; logging; screens with lists, combo boxes, text boxes, and add/delete/save/cancel buttons that are easy to update when you add a property to your classes or a field to your database; auto-completion on controls to help users find their way; use of an ORM like NHibernate; easy multithreading and display of wait screens for the user; easy undo/redo; tabbed child windows; search forms; the ability to grant access to some functionalities according to user profiles; MVP/MVVM or whatever design patterns; either some code generation from the database to C# classes or generation of the database schema from C# classes; some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production; automatic control resizing; code metrics analysis; some code generator I can use against my entities that would generate a rough form I can rearrange afterwards; a code documentation generator; ... At this point I have 3 options: build from scratch on top of the CLR :(, find functionalities among several open-source frameworks and use them as an infrastructure stack, or find a "software factory". I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. What open-source tools would you use to achieve this?


  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    It may sound weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take a machine and start a new (third) reading process; it will be able to connect to the cache and listen to it, and it will hold a full copy of the cache, triplicating it (as of now it's duplicated). That's a waste from anyone's standpoint. The size of the cache is 2 GB, and without going distributed it's limiting us. That brings me to Coherence. But we also don't have a database as a persistent store; we have archival processes as our persistent store (90 days' worth of data). OK, now multiply: somewhere around 2 GB * 90 (that's the bare minimum we want to keep). During a preliminary/intermediate analysis of Coherence as a solution, a (supposedly) brilliant thought crossed my mind: why not have this as the persistent storage behind my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to a DB to replace those flat files. What say you; can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)


  • Organising UI code in .NET forms

    - by sb3700
    Hi, I'm someone who has taught myself programming, and I haven't had any formal training in .NET programming. A while back, I started C# in order to develop a GUI program to control sensors, and the project has blossomed. I was just wondering how best to organise the code, particularly the UI code, in my forms. My forms currently are a mess, or at least seem a mess to me. I have a constructor which initialises all the parameters and creates events. I have a giant State property, which updates the Enabled state of all my form controls as users progress through the application (i.e. disconnected, connected, setup, scanning), controlled by a States enum. I have 3-10 private variables accessed through properties, some of which have side effects that change the values of form elements. I have a lot of "UpdateXXX" functions to handle UI elements that depend on other UI elements, i.e. if a sensor is changed, then the baud rate drop-down list changes. They are separated into regions. I have a lot of events calling these Update functions. I have a background worker which does all the scanning and analysis. My problem is that this seems like a mess, particularly the State property, and it is getting unmaintainable. Also, my application logic code and UI code are in the same file and, to some degree, intermingled, which seems wrong and means I need to do a lot of scrolling to find what I need. How do you structure your .NET forms? Thanks


  • Google Maps API keys to be set webserver-wide (as env var? inside Apache?)

    - by ~knb
    I have a web site with many virtual hosts, each registered with several domain names (ending in .org, .de): site1.mysite.de, site2.mysite.org. Then I have different templating systems based on several programming languages (Perl and PHP) in use on the web server. The Google Maps API requires a unique API key for each vhost. I want to have something like a webserver-wide variable $goomapkey that I can read from inside my code. In PHP code, I now have a kludgy case-analysis solution like $domain = substr($_SERVER['SERVER_NAME'], -3); if (".de" == $domain){ //if ("xxxxxx" eq substr($ENV{SERVER_NAME}, 0, 5)){ // $gookey = "ABQIAAA..."; //} else { //site1.de $gookey = "ABQIAAAA1Js..."; //} } elseif ("dev" == substr($_SERVER['SERVER_NAME'], 0, 3)){ //dev.mysite.org $gookey = "ABQIAAAA1JsSb..."; } else { //www.mysite.org $gookey = "ABQIAAAA1JsS..."; //TODO: Add more keys for each virtual host, for my.machinename.de, IP-address based URL, ... } ... inside my PHP-based CMS. A non-ideal solution, because it is PHP-only, I still have to set it in several HTML templates inside the CMS, and there are too many cases. I want the Google Maps API key to be set by the Apache web server, which examines the request early in the request loop, before any PHP page template code is constructed and evaluated. Is an environment variable a good solution? Which technology should be used to set the $goomapkey variable? I'd prefer a mod_perl2 Apache request handler, but the documentation is confusing (many API changes in the past). Which Apache module could I use? Is there a built-in Apache module that does the same thing?


  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches: The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial? Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
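
    [Editor's note] To make approach 3 concrete, here is a toy Java sketch (Java, like C++, has no yield, but an iterator-style lexer gives the parser the same pull interface); the whitespace tokenizer is a deliberate oversimplification:

        import java.util.Iterator;

        // The lexer produces one token at a time, on demand.
        class Lexer implements Iterator<String> {
            private final String[] raw;
            private int pos = 0;

            Lexer(String input) {
                this.raw = input.trim().split("\\s+"); // toy tokenizer
            }

            @Override public boolean hasNext() { return pos < raw.length; }
            @Override public String next() { return raw[pos++]; }
        }

        // The parser pulls the next token only when it needs it, so at most
        // one token is in flight, unlike approach 1, which materializes all
        // of them up front.
        class Parser {
            private final Lexer lexer;
            private String lookahead;

            Parser(Lexer lexer) {
                this.lexer = lexer;
                advance();
            }

            String expect(String token) {
                if (!token.equals(lookahead)) {
                    throw new IllegalStateException(
                            "expected " + token + ", got " + lookahead);
                }
                String consumed = lookahead;
                advance();
                return consumed;
            }

            private void advance() {
                lookahead = lexer.hasNext() ? lexer.next() : null;
            }
        }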


  • What are salesforce.com and Apex like as an application development platform?

    - by mhollers
    I have recently discovered that salesforce.com is much more than an online CRM, after coming across a Morrison's case study in which they develop a works management application. I've been trying it out with a view to recreating our own Works Management system on the platform. My background is in Microsoft and .NET, and the obvious first choice would be ASP.NET. However, there's only really myself with .NET experience, and my manager with a more legacy Synergy programming background; I am self-taught and am looking at evaluating other RAD options (e.g. Ironspeed). The nature of the business is, in the main, 2-5 concurrent construction-type contracts that run for 3-5 years each, each requiring 15-50 system users. Traditionally we have used our character-based Works Management system for everything and tweaked it for each contract. The Salesforce licensing model, on the face of it, suits this sort of flexibility, but I'm worried about the development flexibility/learning curve and all the issues that surround lock-in. There doesn't seem to be much neutral, sober analysis of the platform on the web that isn't salesforce's own material/blogs. Does anyone have any experience of developing an application on salesforce, as compared to the more 'traditional' .NET route?


  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value? A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.) A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.) Something else I haven't thought of. Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)


  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an ID as a parameter and uses that ID to calculate the SVN repo path. Where I work, you have an SVN path for every issue that you resolve, and then all the issues are joined into a single SVN path. What I want to do is run static code analysis on the partial issues. So I think maybe I'll have an Ant build.xml that I use for every issue, and then parameterize the job with the issue ID. I have tried to achieve that, but the parameter doesn't get substituted into the SVN path. I have tried #issueId, %issueId%, ${issueId} and ${env.issueId} without success. Errors like the following jump out: Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid} ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid} org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51) at I am starting to think that I cannot do what I want. Do you know how I can set up the correct configuration to achieve this? Thanks for any help. Edit: The section of the job configuration where I want to put this parameter is this: <scm class="hudson.scm.SubversionSCM"> <locations> <hudson.scm.SubversionSCM_-ModuleLocation> <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote> </hudson.scm.SubversionSCM_-ModuleLocation> </locations>


  • Looking for: NoSQL (Redis/MongoDB) based event logging for Django

    - by Parand
    I'm looking for a flexible event logging platform for Django that stores both pre-defined events (username, IP address) and non-pre-defined ones (generated as needed by any piece of code). I'm currently doing some of this with log files, but it ends up requiring various analysis scripts and lands in a DB anyway, so I'm considering throwing it straight into a NoSQL store such as MongoDB or Redis. The idea is to be able to easily query, for example, which IP address the user most commonly comes from, whether the user has ever performed some action, look up the outcome of a specific event, etc. Is there something that already does this? If not, I'm thinking of this: The "event" is a dictionary attached to the request object. Middleware fills in various pieces (username, IP, SQL timing); code fills in the rest as needed. After the request is served, a post-request hook drops the event into MongoDB/Redis, normalizing various fields (e.g. incrementing the username:IP address counter) and dropping the rest in as-is. Words of wisdom / pointers to code that does some or all of this would be appreciated.


  • Unknown ListView Behavior

    - by st0le
    I'm currently making an SMS application in Android; the following is a code snippet from the inbox ListActivity. I have requested a cursor from the ContentResolver and used a custom adapter to add custom views to the list. Now, in the custom view I've got 2 TextViews (tvFullBody, tvBody)... tvFullBody contains the full SMS text, while tvBody contains a short preview (35 characters). The tvFullBody visibility is set to GONE by default. My idea is that when the user clicks on a list item, tvBody should disappear (GONE) and tvFullBody should become visible (VISIBLE). On clicking again, it should revert to its original state. //isExpanded is a BitSet of size = no of list items... keeps track of which items are expanded and which are not @Override protected void onListItemClick(ListView l, View v, int position, long id) { if(isExpanded.get(position)) { v.findViewById(R.id.tvFullBody).setVisibility(View.GONE); v.findViewById(R.id.tvBody).setVisibility(View.VISIBLE); }else { v.findViewById(R.id.tvFullBody).setVisibility(View.VISIBLE); v.findViewById(R.id.tvBody).setVisibility(View.GONE); } isExpanded.flip(position); super.onListItemClick(l, v, position, id); } The code works as it is supposed to :) except for an undesired side effect: every 10th (or so) list item also gets "toggled". E.g. if I expand the 1st item, the 11th and 21st list items are also expanded... They remain off-screen, but on scrolling you get to see the undesired "expansion". By my novice analysis, I'm guessing ListView keeps track of the list items that are currently visible and, upon scrolling, "reuses" those same views, which is causing this problem (I haven't checked the Android source code yet). I'd be grateful for any suggestion on how I should tackle this! :) I'm open to alternative methods as well... Thanks in advance! :)
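
    [Editor's note] The guess is right: ListView recycles row views as you scroll, so a row toggled by hand keeps its toggled widgets when it is reused for another position. The usual fix is to derive both TextViews' visibility from the isExpanded state inside the adapter's getView(), which runs every time a row is (re)bound. A sketch of that in Java, assuming a hypothetical row layout R.layout.sms_row around the two TextViews from the question:

        import android.view.LayoutInflater;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.BaseAdapter;
        import java.util.BitSet;
        import java.util.List;

        public class SmsAdapter extends BaseAdapter {
            private final LayoutInflater inflater;
            private final List<String> messages;
            private final BitSet isExpanded;

            public SmsAdapter(LayoutInflater inflater, List<String> messages,
                              BitSet isExpanded) {
                this.inflater = inflater;
                this.messages = messages;
                this.isExpanded = isExpanded;
            }

            @Override public int getCount() { return messages.size(); }
            @Override public Object getItem(int position) { return messages.get(position); }
            @Override public long getItemId(int position) { return position; }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                // convertView may be a recycled row from any other position,
                // so its state must be reset from isExpanded on every call.
                View row = (convertView != null)
                        ? convertView
                        : inflater.inflate(R.layout.sms_row, parent, false);
                boolean expanded = isExpanded.get(position);
                row.findViewById(R.id.tvFullBody)
                   .setVisibility(expanded ? View.VISIBLE : View.GONE);
                row.findViewById(R.id.tvBody)
                   .setVisibility(expanded ? View.GONE : View.VISIBLE);
                // ...bind the message text for this position here...
                return row;
            }
        }

    With this in place, onListItemClick only needs to flip the bit and call notifyDataSetChanged() instead of mutating the row views directly.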


  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm that transforms any piece of information or data into an appropriate memory location in a data structure I will be devising. To give you an idea: I have a JPanel object instance, and I created another Container-type object instance of some subtype (note this is in Java because I love this language); then I collected those instances into a data structure that is not specific to those instances but applicable to any type of object. Now my procedure for fetching that data again is to extract features of the object (of a category common to all objects in that data structure) and transform them into an integer memory location (as specifically as possible), or into whatever type of data suits this transformation. Then I can access that memory location directly, without further sorting or O(n)-time algorithms (which I know are preferable, but I want to do it my own way XD). The data structure can be of any type: binary tree, linked list, array, or set (and the like XD). What is important is that I don't need successive comparison and analysis of data just to locate information in big structures. To give you a technical idea: I have an array DS that contains a JLabel instance with the specific name "HelloWorld", but DS also contains other types of objects (in multitude). Now this JLabel object is located in the array at index [124324] (reaching that location with any searching algorithm is conceivably slow, not least because the data structure used is an array; please disregard the efficiency of the data structure, I just want to explain my concept XD). Now I want to map "HelloWorld" to 124324 by using a conceptual function applicable to all data types, so that I can do a direct lookup with DS[extractLocation("HelloWorld")] to get that JLabel instance. I know this may sound crazy, but I want to test my concept of a non-sorting, feature-extracting search algorithm for any data structure; my main problem is how to transform the information to be stored into the memory location where it is stored.
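
    [Editor's note] What is described here (turning a key such as "HelloWorld" directly into a storage slot, with no scan) is essentially what a hash function does, and Java's HashMap already implements it. A small illustrative Java example, where the map's internal use of hashCode() plays the role of the extractLocation(...) function above:

        import java.util.HashMap;
        import java.util.Map;
        import javax.swing.JLabel;

        public class DirectLookupDemo {
            public static void main(String[] args) {
                Map<String, JLabel> store = new HashMap<>();
                store.put("HelloWorld", new JLabel("HelloWorld"));

                // No O(n) scan: internally the key's hashCode() is folded
                // into a bucket index, i.e. the key itself determines the
                // location it is stored at.
                JLabel label = store.get("HelloWorld");
                System.out.println(label.getText());
            }
        }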

