Search Results

Search found 519 results on 21 pages for 'configurable'.

Page 14 of 21

  • Make JAXWS-based webservice implement interface and unmarshall to known POJOs

    - by John K
    Given a Java SE 6 client, I would like to provide a configurable back-end: either directly to a database, or through a web service which connects to a centralized DB. To that end, I've created some JPA- and JAXB-annotated entity classes and a DAO interface in a POJO library like the following:

      public interface MyDaoInterface {
          public MyEntity doSomething();
      }

      @javax.persistence.Entity
      @javax.xml.bind.annotation.XmlRootElement
      public class MyEntity {
          private int a;
          ...
      }

    Now I would like my auto-generated web service stubs to implement that interface and interact with my defined entity classes, rather than the generated classes provided via the JAXB unmarshaller. So the client-side pseudo-code would be something like:

      MyDaoInterface dao;
      if (usingWebservice)
          dao = new WebserviceDao();
      else
          dao = new JpaDao();
      MyEntity e = dao.doSomething();

    Is this possible with JPA, JAXB and JAX-WS? Is it even advisable? Currently we achieve this through a slow manual process of massaging code, copying generated classes, and doing other things that seem just plain wrong to me.
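
    One way this is often approached (a hedged sketch, not necessarily the asker's setup): if the JAXB-annotated entities and MyDaoInterface are shared between client and server, and the interface carries a @WebService annotation, the client can skip wsimport-generated stubs and ask JAX-WS for a dynamic proxy typed to the shared interface. The WSDL URL and QName below are placeholders.

      import java.net.URL;
      import javax.xml.namespace.QName;
      import javax.xml.ws.Service;

      public final class WebserviceDaoFactory {
          // Builds a JAX-WS dynamic proxy that implements the shared DAO interface.
          public static MyDaoInterface create() throws Exception {
              URL wsdl = new URL("http://example.com/myService?wsdl");               // placeholder
              QName serviceName = new QName("http://example.com/", "MyDaoService");  // placeholder
              Service service = Service.create(wsdl, serviceName);
              return service.getPort(MyDaoInterface.class);  // returns MyEntity instances directly
          }
      }

    With something like that in place, the pseudo-code above works unchanged: the web-service path gets its dao from the factory, the local path uses new JpaDao().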

    Read the article

  • Specific programming text editor for simple open/close editing

    - by queen3
    I'm looking for a very specific text editor:
    - Closes on ESC; no project management or tabs
    - Syntax highlighting, preferably with color themes (e.g. the ability to apply different color themes without changing the C# coloring definition), or at least the ability to load/save themes; out-of-the-box support for C/C#/XML/HTML/JavaScript and the other common MS/.NET languages
    - Configurable keys, or at least Shift-Tab to un-indent blocks
    - XML/HTML auto-completion support - optional
    I currently use the synplus plugin for Total Commander, but it has a few drawbacks (it crashes sometimes, has no auto-completion, etc.). Basically I want a fast Visual-Studio-like editor that I open, make edits in, and then close with ESC. I have tried Notepad++ and others; most of them open files in tabs and don't close on ESC - that is, they behave like IDEs. I've just downloaded Notepad++ again, and it doesn't close on ESC even if I set up key bindings to do so. Auto-completion is optional (simple tag completion would be enough); what I'm really after is closing on ESC, not getting in the way with tabs and IDE behaviour, and good coloring. Shift-Tab is a must-have for block manipulation. Update: is there an open-source editor that I could easily tweak to close on ESC? ESC (and reasonable color highlighting) seems to be the core requirement. I've just tried many editors - Programmer's Notepad, E, Crimson, etc. - and I can't set any of them to close on ESC. Is there an external tool that can close the selected program on ESC? UPDATE: I found an awesome utility for that last idea: http://www.autohotkey.com. It's easy to set up to close any window on ESC (as well as many other tricks). The toughest requirement is gone - I can use ANY text editor ;-)
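
    A minimal AutoHotkey sketch of that idea (assuming v1 syntax; notepad++.exe is only an example process name - swap in whichever editor you settle on):

      ; Close the editor on Esc, but only while it is the active window.
      #IfWinActive ahk_exe notepad++.exe
      Esc::WinClose, A
      #IfWinActive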

    Read the article

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that isn't in the cache, Squid proxies all of the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently.

    Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all of them to be fulfilled by Squid once the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it.

    We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches, e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thanks for looking over my noob question! Oliver
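
    One pointer, offered as an assumption to verify against your Squid version rather than a definitive answer: Squid has a collapsed_forwarding directive intended to do exactly this - hold concurrent misses for the same URL behind a single origin fetch. It exists in Squid 2.6/2.7 and again from 3.5 onward, so the squid.conf change would simply be:

      # squid.conf - queue concurrent cache misses for the same URL behind one origin request
      collapsed_forwarding on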

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending in a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

      curl = Curl::Easy.new
      batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
      }

    I have tried to download an 8000-page sample. Using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and do this in a shell:

      cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:
    - Curl::Multi - it is somewhat faster, but misses 50-90% of the files (it does not download them and gives no reason/code)
    - multiple threads with Curl::Easy - around the same speed as single-threaded
    Why is a reused Curl::Easy slower than successive command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download-manager code rather than download for this case in a different way. Before this, I was calling command-line wget, which I provided with a file containing the list of URLs. However, not all errors were handled, and it was not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly from Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help.) Any hints appreciated.

    Read the article

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction.

    What appears to be happening is that I have an entry at the front of the queue that occurred, say, 100K entries ago. The queue appears to hold the limited number of configured entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely removes the reference to the object being stored but leaves the linked list in place for iteration.

    Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue, I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.
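
    A hedged sketch of one alternative (an assumption about what might fit, not a drop-in replacement, since it does not handle the "must not be removed" entries): a bounded LinkedBlockingQueue keeps memory proportional to the configured limit, and poll() really unlinks the head node rather than leaving it chained for iteration.

      import java.util.concurrent.LinkedBlockingQueue;

      public final class BoundedEntryQueue<E> {
          private final LinkedBlockingQueue<E> queue;

          public BoundedEntryQueue(int limit) {
              this.queue = new LinkedBlockingQueue<E>(limit);
          }

          // Adds an entry, evicting the oldest one whenever the configured limit is reached.
          public void add(E entry) {
              while (!queue.offer(entry)) {
                  queue.poll(); // drop the oldest entry and retry
              }
          }

          public int size() {
              return queue.size();
          }
      }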

    Read the article

  • What is actually happening to this cancelled HTTP request?

    - by Brian Schroth
    When a user takes a particular action on a page, an AJAX call is made to save their data. Unfortunately, this call is synchronous as they need to wait to see if the data is valid before being allowed to continue. Obviously, this eliminates a lot of the benefit of using Asynchronous Javascript And XML, but that's a subject for another post. That's the design I'm working with. The request is made using the dojo.xhrPost function, with a 60s timeout parameter, and the error handler redirects to an error page. What I am finding in testing is that in Firefox, if I initiate the ajax request and then press ESC, the page hangs waiting for a response, and then eventually after exactly 90s (not 60s, the function's timeout), the error handler will kick in and redirect to the error page. I expected this to happen, but either immediately as soon as the request was cancelled, or after 60s due to the timeout value being 60s. What I don't understand is why is it 90s? What is actually happening under the hood when the user cancels their request in Firefox, and how does it differ from IE, where everything works fine exactly the same as if the request had not been cancelled? Is the 90s related to any user-configurable browser settings?

    Read the article

  • Can you have a Dynamic Data Field which consists of a list of fields?

    - by Telos
    This is a purely theoretical question (at least until I start trying to implement it), but here goes. I wrote a web form a long time ago which has a configurable section for gathering information. Basically, for some customers there are no fields, and for other customers there are up to 20 fields. I got it working by dynamically creating the fields at just the right time in the page lifecycle and going through a lot of headaches. Two years later, I need to make some pretty big updates to this web form, and there are some nifty new technologies. I've worked with ASP.NET Dynamic Data just a bit and, well, a half-crazed plan just occurred to me. The Ticket object has a one-to-many relationship to ExtendedField; we'll call that relationship Fields for brevity. Using that, the idea would be to create a FieldTemplate that dynamically generated the list of fields and displayed it. The big questions here would probably be:
    1) Can a single field template resolve to multiple web controls without breaking things?
    2) Can Dynamic Data handle updating/inserting multiple rows in such a fashion?
    3) There was a third question I had a few minutes ago, but coworkers interrupted me and I forgot. So now the third question is: what is the third question?
    So basically, does this sound like it could work, or am I missing a better/more obvious solution?

    Read the article

  • Can I use a specific model from within a behavior in CakePHP?

    - by Paul Willy
    I'm trying to write a behavior that will give my models access to a simple workflow engine I've devised. The workflow engine itself works as a CakePHP model, with workflow data stored in the database just as any other model data is stored. Basically what I want to do is have the behavior use the workflow model whenever an action is called on the base model. For example, if the edit() action is executed for Posts, then the Post (with the behavior attached) will trigger the workflow behavior with its own model name, action, and id as arguments (e.g. [Post, edit, 1]). Then the behavior will invoke the functionality of the Workflow model, which has a record for what to do when edit is run on Posts (e.g. send e-mail to users who are subscribed to that post) and will carry that out. My question is, what is the proper way to invoke model/controller methods from within the behavior? The model to be used from within the behavior will always be Workflow, but the behavior should be usable from basically any model (aside from Workflow itself). I know I could run SQL queries directly from the behavior, but of course this is not the Cake way :-) Or, am I going about this in the wrong way? I want to store a certain amount of logic in the database so that it is easily configurable by different users, and not have endless configuration checks within the model/controller logic itself so that workflow steps can be easily added/changed/removed in the future.
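
    On the "use the model from the behavior" part, one hedged sketch (CakePHP 1.2/1.3-era; the afterSave callback and the execute() method on Workflow are illustrative) is to pull the Workflow model in through ClassRegistry rather than issuing SQL from the behavior:

      <?php
      class WorkflowableBehavior extends ModelBehavior {

          function afterSave(&$model, $created) {
              // Loads (or reuses) the Workflow model without hard-coupling the behavior to it.
              $workflow = ClassRegistry::init('Workflow');
              $workflow->execute($model->alias, $created ? 'add' : 'edit', $model->id);
              return true;
          }
      }
      ?>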

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    1. One way to control it would be to have a module-wide configuration call:

      # foo/__init__.py
      import sys
      from _foo import bar, baz, configure_log
      configure_log(sys.stdout, WARNING)

      # tests/test_foo.py
      def test_foo():
          # Maybe a custom context to change the logfile for
          # the module and restore it at the end.
          with CaptureLog(foo) as log:
              assert foo.bar() == 5
              assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    2. Have every compiled function that does logging accept optional log parameters:

      # foo.c
      int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
      {
          LoggerPtr logger = default_logger("foo");
          if (logfile != Py_None)
              logger = file_logger(logfile, loglevel);
          ...
      }

      # tests/test_foo.py
      def test_foo():
          with TemporaryFile() as logfile:
              assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
              assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    3. Some other way?

    The second one seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first one is... probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks,

    Read the article

  • How to parse text as JavaScript?

    - by Danjah
    This question of mine (currently unanswered) drove me toward finding a better solution to what I'm attempting. My requirements:

    Chunks of code which can be arbitrarily added into a document, without an identifier:

      [div class="thing"]
        [elements... /]
      [/div]

    The objects are scanned for and found by an external script:

      var things = yd.getElementsBy(function(el){ return yd.hasClass(el, 'thing'); }, null, document);

    The objects must be individually configurable. What I have currently is identifier-based:

      [div class="thing" id="thing0"]
        [elements... /]
        [script type="text/javascript"]
          new Thing().init({ id: 'thing0' });
        [/script]
      [/div]

    So I need to ditch the identifier (id="thing0") so there are no duplicates when more than one chunk of the same code is added to a page, but I still need to be able to configure these objects individually, without an identifier. All of that said, I wondered about creating a dynamic global variable within the script block of each added chunk of code, within its script tag. As each 'thing' is found, I figure it would be legit to grab the innerHTML of the script tag and somehow convert that text into a usable JS object. Discuss. OK, don't discuss if you like, but if you get the drift then feel free to correct my wayward thinking or provide a better solution - please! d
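
    One hedged sketch of that last idea (assuming the inline script tag is given a non-executable type such as "text/json" and that a JSON parser - native JSON.parse or json2.js - is available; everything besides the question's yd helper and Thing class is an assumption):

      var things = yd.getElementsBy(function (el) { return yd.hasClass(el, 'thing'); }, null, document);
      for (var i = 0; i < things.length; i++) {
          // Each .thing chunk carries [script type="text/json"]{ "color": "red" }[/script]
          var cfgNode = things[i].getElementsByTagName('script')[0];
          var cfg = JSON.parse(cfgNode.innerHTML);  // inert config: no globals, no ids
          new Thing().init(cfg, things[i]);         // hand the element itself to init()
      }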

    Read the article

  • Practical rules for Django MiddleWare ordering?

    - by o_O Tync
    The official documentation is a bit messy: 'before' & 'after' are used for ordering middleware in the tuple, but in some places 'before' & 'after' refer to the request/response phases. Also, 'should be first/last' are mixed together, and it's not clear which one to use as 'first'. I do understand the difference... however it seems too complicated for a newbie in Django. Can you suggest a correct ordering for the built-in middleware classes (assuming we enable all of them) and - most importantly - explain WHY each one goes before/after the others? Here's the list, with the info from the docs that I managed to find:

    - UpdateCacheMiddleware: before those that modify 'Vary:' (SessionMiddleware, GZipMiddleware, LocaleMiddleware)
    - GZipMiddleware: before any middleware that may change or use the response body; after UpdateCacheMiddleware (modifies 'Vary:')
    - ConditionalGetMiddleware: before CommonMiddleware (uses its 'ETag:' header when USE_ETAGS=True)
    - SessionMiddleware: after UpdateCacheMiddleware (modifies 'Vary:'); before TransactionMiddleware (we don't need transactions here)
    - LocaleMiddleware: one of the topmost, after SessionMiddleware and CacheMiddleware; after UpdateCacheMiddleware (modifies 'Vary:'); after SessionMiddleware (uses session data)
    - CommonMiddleware: before any middleware that may change the response (it calculates ETags); after GZipMiddleware so it won't calculate an ETag on gzipped contents; close to the top (it redirects when APPEND_SLASH or PREPEND_WWW)
    - CsrfViewMiddleware
    - AuthenticationMiddleware: after SessionMiddleware (uses session storage)
    - MessageMiddleware: after SessionMiddleware (can use session-based storage)
    - XViewMiddleware
    - TransactionMiddleware: after middleware that uses the DB, e.g. SessionMiddleware (configurable to use the DB); the *CacheMiddleware classes are not affected (as an exception: they use their own DB cursor)
    - FetchFromCacheMiddleware: after those that modify 'Vary:', since it uses them to pick a value for the cache hash key; after AuthenticationMiddleware so it's possible to use CACHE_MIDDLEWARE_ANONYMOUS_ONLY
    - FlatpageFallbackMiddleware: bottom, last resort; uses the DB, which, however, is not a problem for TransactionMiddleware (yes?)
    - RedirectFallbackMiddleware: bottom, last resort; uses the DB, which, however, is not a problem for TransactionMiddleware (yes?)

    (I will add suggestions to this list to collect all of them in one place.)
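
    For reference, a hedged sketch of one ordering consistent with the notes above, written as a Django 1.1/1.2-era MIDDLEWARE_CLASSES tuple (an illustration, not an authoritative ordering - few projects enable all of these at once):

      MIDDLEWARE_CLASSES = (
          'django.middleware.cache.UpdateCacheMiddleware',
          'django.middleware.gzip.GZipMiddleware',
          'django.middleware.http.ConditionalGetMiddleware',
          'django.contrib.sessions.middleware.SessionMiddleware',
          'django.middleware.locale.LocaleMiddleware',
          'django.middleware.common.CommonMiddleware',
          'django.middleware.csrf.CsrfViewMiddleware',
          'django.contrib.auth.middleware.AuthenticationMiddleware',
          'django.contrib.messages.middleware.MessageMiddleware',
          'django.middleware.transaction.TransactionMiddleware',
          'django.middleware.cache.FetchFromCacheMiddleware',
          'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
          'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
      )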

    Read the article

  • wrapping user controls in a transaction

    - by Hans Gruber
    I'm working on a heavily dynamic and configurable CMS system. Therefore, many pages are composed of a dynamically loaded set of user controls. To enable loose coupling between containers (pages) and children (user controls), all user controls are responsible for their own persistence. Each user control is wired up to its data/service layer dependencies via IoC. They also implement an IPersistable interface, which allows the container .aspx page to issue a Save command to its children without knowledge of the number or exact nature of these user controls. Note: what follows is only pseudo-code:

      public class MyUserControl : IPersistable, IValidatable
      {
          public void Save() { throw new NotImplementedException(); }
          public bool IsValid() { throw new NotImplementedException(); }
      }

      public partial class MyPage
      {
          public void btnSave_Click(object sender, EventArgs e)
          {
              foreach (IValidatable control in Controls)
              {
                  if (!control.IsValid()) { throw new Exception("error"); }
              }
              foreach (IPersistable control in Controls)
              {
                  control.Save();
              }
          }
      }

    I'm thinking of using declarative transactions from the System.EnterpriseServices namespace to wrap btnSave_Click in a transaction in case of an exception, but I'm not sure how this might be achieved, or what pitfalls such an approach has.
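
    One commonly suggested alternative (a hedged sketch, not what the asker has in place: it assumes each control's Save() does its database work on connections that enlist in the ambient transaction) is System.Transactions.TransactionScope rather than the EnterpriseServices machinery:

      using System;
      using System.Transactions;

      public partial class MyPage
      {
          public void btnSave_Click(object sender, EventArgs e)
          {
              using (TransactionScope scope = new TransactionScope())
              {
                  foreach (IPersistable control in Controls)   // same pseudo-collection as above
                  {
                      control.Save();   // ADO.NET work inside enlists in the ambient transaction
                  }
                  scope.Complete();     // skipped if any Save() throws, so everything rolls back
              }
          }
      }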

    Read the article

  • RPC for java/python with rest support, HTML monitoring and goodies

    - by Ran
    Here's my set of requirements. I'm looking for an RPC framework such as Thrift, Avro, or protobuf (when adding services to it) which supports:
    - An easy and intuitive IDL: no serial numbers, no manual versioning, simple... Avro is a good example of this.
    - Works with Java and Python.
    - Supports both a fast binary protocol and an HTTP-based RESTful style. I'd like to be able to use it both for backend-to-backend communication (Java-Java or Python-Java) and for frontend-to-backend communication (JavaScript to Java). The REST support needs to include &param=value input as GET/POST requests (configurable per request) and output in three possible formats: JSON, JSONP, XML.
    - Compact, fast, backward compatible, easy to upgrade, etc.
    - Provides some nice monitoring interfaces, such as JMX and web-page status reports (e.g. packets in, packets out, error rate, etc.).
    - Ops friendly: no need to take the whole site down to release new versions.
    - Both sync and async communication.
    - ...other goodies are welcome.
    Is there something out there? So far I've looked at Thrift and Avro, and they are both nice in some ways, but they don't check everything on my list. Thanks

    Read the article

  • wrapping aspx user controls commands in a transaction

    - by Hans Gruber
    I'm working on a heavily dynamic and configurable CMS system. Therefore, many pages are composed of a dynamically loaded set of user controls. To enable loose coupling between containers (pages) and children (user controls), all user controls are responsible for their own persistence. Each user control is wired up to its data/service layer dependencies via IoC. They also implement an IPersistable interface, which allows the container .aspx page to issue a Save command to its children without knowledge of the number or exact nature of these user controls. Note: what follows is only pseudo-code:

      public class MyUserControl : IPersistable, IValidatable
      {
          public void Save() { throw new NotImplementedException(); }
          public bool IsValid() { throw new NotImplementedException(); }
      }

      public partial class MyPage
      {
          public void btnSave_Click(object sender, EventArgs e)
          {
              foreach (IValidatable control in Controls)
              {
                  if (!control.IsValid()) { throw new Exception("error"); }
              }
              foreach (IPersistable control in Controls)
              {
                  control.Save();
              }
          }
      }

    I'm thinking of using declarative transactions from the System.EnterpriseServices namespace to wrap btnSave_Click in a transaction in case of an exception, but I'm not sure how this might be achieved, or what pitfalls such an approach has.

    Read the article

  • I seek a PDF paginator

    - by Cameron Laird
    More precisely, I need software that will allow me to consume existing PDF instances, decorate them with page numbers, or page-number-like writing, then write them back to the filesystem. I'll happily pay for an application, or program it myself. I almost certainly require the software run under Linux (Ubuntu, more specifically). iText comes close. iText certainly can read existing PDF instances. While iText is, for this purpose, only a library, and requires me to program a tiny amount to specify where on the page the numbers should appear, I'm happy to do that. I hesitate with iText only because the latest iText license is a headache at certain government agencies (in practice, I'd probably request and pay for a special license), and because, over the last few years, I've observed cases where iText doesn't appear to keep up with the standard, that is, it has more troubles than I expect reading PDFs observed "in the wild". Similarly, every other possibility I know has at least one difficulty: ReportLab would likely require a disproportionate licensing fee for the small value it provides in this situation, and so on. This application requires no particular sophistication with Unicode, fonts, ... I recognize there are plenty of executables and libraries that do some or all of what I require. I welcome any tips on software that is reliable, generally current with PDF practice, flexible/programmable/configurable/..., and "automatable". In the absence of any new insight, I'll likely go with a specific open-source library I don't want to mention now for which I've already contracted enhancements, or perhaps revisit iText.
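
    For a sense of scale, the "tiny amount" of programming iText asks for is roughly the following hedged sketch (written against the old com.lowagie iText 2.x API; the file names and stamp position are placeholders):

      import java.io.FileOutputStream;
      import com.lowagie.text.Element;
      import com.lowagie.text.Phrase;
      import com.lowagie.text.pdf.ColumnText;
      import com.lowagie.text.pdf.PdfReader;
      import com.lowagie.text.pdf.PdfStamper;

      public class Paginate {
          public static void main(String[] args) throws Exception {
              PdfReader reader = new PdfReader("in.pdf");
              PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("out.pdf"));
              for (int i = 1; i <= reader.getNumberOfPages(); i++) {
                  // Stamp "Page N" centered near the bottom of each existing page.
                  ColumnText.showTextAligned(stamper.getOverContent(i), Element.ALIGN_CENTER,
                          new Phrase("Page " + i), 297f, 20f, 0f);
              }
              stamper.close();
              reader.close();
          }
      }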

    Read the article

  • Connecting an overloaded PyQT signal using new-style syntax

    - by Claudio
    I am designing a custom widget which is basically a QGroupBox holding a configurable number of QCheckBox buttons, where each one of them should control a particular bit in a bitmask represented by a QBitArray. In order to do that, I added the QCheckBox instances to a QButtonGroup, with each button given an integer ID:

      def populate(self, num_bits, parent = None):
          """ Adds check boxes to the GroupBox according to the bitmask size """
          self.bitArray.resize(num_bits)
          layout = QHBoxLayout()
          for i in range(num_bits):
              cb = QCheckBox()
              cb.setText(QString.number(i))
              self.buttonGroup.addButton(cb, i)
              layout.addWidget(cb)
          self.setLayout(layout)

    Then, each time a user clicks a checkbox contained in self.buttonGroup, I'd like self.bitArray to be notified so I can set/unset the corresponding bit in the array. For that I intended to connect QButtonGroup's buttonClicked(int) signal to QBitArray's toggleBit(int) method and, to be as Pythonic as possible, I wanted to use the new-style signal syntax, so I tried this:

      self.buttonGroup.buttonClicked.connect(self.bitArray.toggleBit)

    The problem is that buttonClicked is an overloaded signal, so there is also the buttonClicked(QAbstractButton*) signature. In fact, when the program is running I get this error when I click a check box:

      The debugged program raised the exception unhandled TypeError
      "QBitArray.toggleBit(int): argument 1 has unexpected type 'QCheckBox'"

    which clearly shows the toggleBit method received the buttonClicked(QAbstractButton*) signal instead of the buttonClicked(int) one. So, the question is: how can we specify, using the new-style syntax, that self.buttonGroup should emit the buttonClicked(int) signal instead of the default overload, buttonClicked(QAbstractButton*)?
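
    A hedged sketch of what the fix might look like: in PyQt4's new-style syntax, an overloaded bound signal can be indexed with the argument type to select a specific signature, so only the connect line needs to change. The self-contained demo below (a sketch, not the asker's actual widget) shows the idea end to end.

      import sys
      from PyQt4.QtCore import QBitArray
      from PyQt4.QtGui import (QApplication, QButtonGroup, QCheckBox,
                               QGroupBox, QHBoxLayout)

      class BitmaskBox(QGroupBox):
          def __init__(self, num_bits, parent=None):
              super(BitmaskBox, self).__init__(parent)
              self.bitArray = QBitArray(num_bits)
              self.buttonGroup = QButtonGroup(self)
              layout = QHBoxLayout()
              for i in range(num_bits):
                  cb = QCheckBox(str(i))
                  self.buttonGroup.addButton(cb, i)
                  layout.addWidget(cb)
              self.setLayout(layout)
              # Index the bound signal with the argument type to pick the int overload.
              self.buttonGroup.buttonClicked[int].connect(self.bitArray.toggleBit)

      if __name__ == '__main__':
          app = QApplication(sys.argv)
          box = BitmaskBox(8)
          box.show()
          sys.exit(app.exec_())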

    Read the article

  • NHibernate / Fluent - Mapping multiple objects to single lookup table

    - by Al
    Hi all. I am struggling a little to get my mapping right. What I have is a single self-joined table of lookup values of certain types. Each lookup can have a parent, which can be of a different type. For simplicity's sake, let's take the Country and State example. The lookup table would look like this:

      Lookups
        Id
        Key
        Value
        LookupType
        ParentId  -- self-joining to Id

    Base class:

      public class Lookup : BaseEntity
      {
          public Lookup() {}

          public Lookup(string key, string value)
          {
              Key = key;
              Value = value;
          }

          public virtual Lookup Parent { get; set; }

          [DomainSignature]
          [NotNullNotEmpty]
          public virtual LookupType LookupType { get; set; }

          [NotNullNotEmpty]
          public virtual string Key { get; set; }

          [NotNullNotEmpty]
          public virtual string Value { get; set; }
      }

    The lookup map:

      public class LookupMap : IAutoMappingOverride<DBLookup>
      {
          public void Override(AutoMapping<Lookup> map)
          {
              map.Table("Lookups");
              map.References(x => x.Parent, "ParentId").ForeignKey("Id");
              map.DiscriminateSubClassesOnColumn<string>("LookupType").CustomType(typeof(LookupType));
          }
      }

    The base subclass map for subclasses:

      public class BaseLookupMap<T> : SubclassMap<T> where T : DBLookup
      {
          protected BaseLookupMap() { }

          protected BaseLookupMap(LookupType lookupType)
          {
              DiscriminatorValue(lookupType);
              Table("Lookups");
          }
      }

    An example subclass map:

      public class StateMap : BaseLookupMap<State>
      {
          protected StateMap() : base(LookupType.State) { }
      }

    Now I've almost got my mappings set, but the mapping is still expecting a table-per-class setup, so it expects a 'State' table to exist with a reference to the state's Id in the Lookup table. I hope this makes sense. This doesn't seem like an uncommon approach when wanting to keep lookup-type values configurable. Thanks in advance. Al

    Read the article

  • Recommendations for jQuery tooltips

    - by jerome
    I am looking for tooltip plugins for jQuery that would allow for the following type of behavior:

      <a href="somewhere.html">
        <span>
          <img src="someimage.jpg" style="display: none;" />
          Here is the tooltip content.
        </span>
        Here is the link to somewhere.
      </a>

    The behavior I am hoping for is to hover over "Here is the link to somewhere" and have a tooltip pop up showing the content of the span containing "someimage.jpg" and "Here is the tooltip content". I would prefer that the tooltip track along with the mouse's movement over the link and that the tooltip's appearance (background color, opacity, border color, etc.) be configurable. The two most popular tooltips that I have found, "clueTip" and Jörn Zaefferer's "Tooltip", do not seem to fit the bill, unless I am missing something. Ultimately, the links and images will be dynamically generated.

    Read the article

  • Iterating through Event Log Entry Collection, IndexOutOfBoundsException

    - by fjdumont
    Hello. In a service application I am iterating through the Windows Application event log to parse events, in order to react depending on the entry message. When the event log is full (Windows usually makes sure there is enough space by deleting old entries - this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for loop, using the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'. Currently I ensure that the log is not full in order to keep the service running, by regularly deleting the last few items - parsing the event log file and deleting the last few nodes (don't beat me up, I couldn't find a better alternative...). How can I iterate through the collection without trying to access already-deleted entries? Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full - same exception); could this help? Thanks for any advice and hints. Frank

    Edit: I forgot to mention that my assumption is that the loops are accessing entries which are being deleted during iteration by Windows. Basically, that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time, just for my application?
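
    One avenue sometimes suggested for this (a hedged sketch, assuming .NET 3.5+ on Vista/Server 2008 or later): System.Diagnostics.Eventing.Reader queries the log by criteria rather than by index, so entries that Windows purges mid-read don't invalidate a position. The 60-second window in the query is illustrative.

      using System;
      using System.Diagnostics.Eventing.Reader;

      class RecentApplicationEvents
      {
          static void Main()
          {
              // Events created in the last 60 seconds (timediff is expressed in milliseconds).
              string query = "*[System[TimeCreated[timediff(@SystemTime) <= 60000]]]";
              EventLogQuery logQuery = new EventLogQuery("Application", PathType.LogName, query);
              using (EventLogReader reader = new EventLogReader(logQuery))
              {
                  for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
                  {
                      Console.WriteLine("{0}: {1}", record.TimeCreated, record.FormatDescription());
                  }
              }
          }
      }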

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist for implementing a continuous integration (i.e. build) machine? The requirements are:
    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which unit tests failed and which ones succeeded.
    - The summary must contain which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches (/repositories).
    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons, and I am trying to avoid using no_plan - it's generally a bad idea not to catch it when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: my unit tests usually take the form of:

      use Test::More;

      my @test_set = (
           [ "Test #1", $param1, $param2, ... ]
          ,[ "Test #2", $param1, $param2, ... ]
          # ,...
      );

      foreach my $test (@test_set) {
          run_test($test);
      }

      sub run_test {
          # $expected_tests += count_tests($test);
          ok(test1($test)) || diag("Test1 failed");
          # ...
      }

    The standard approach of "use Test::More tests => 23;" or "BEGIN { plan tests => 23 }" does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

      use Test::More;

      BEGIN {
          our @test_set = ();   # Same set of tests as above
          my $expected_tests = 0;
          foreach my $test (@test_set) {
              $expected_tests += count_tests($test);
          }
          plan tests => $expected_tests;
      }

      our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(

      foreach my $test (@test_set) {   # Same
          run_test($test);
      }

      sub run_test {}   # Same

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate "our @test_set" declaration - in BEGIN{} and after it.
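
    A hedged sketch of one simplification (an assumption worth verifying against Test::More 0.80, though runtime planning has long been supported): plan() only has to run before the first assertion, not at compile time, so the BEGIN block and the duplicated "our" declaration can go away entirely.

      use strict;
      use warnings;
      use Test::More;
      use List::Util qw(sum);

      my @test_set = (
          [ "Test #1", 'p1', 'p2' ],
          [ "Test #2", 'p3', 'p4' ],
      );

      # The plan is computed at run time, after @test_set exists but before any test runs.
      plan tests => sum(map { count_tests($_) } @test_set);

      run_test($_) for @test_set;

      sub count_tests { return 1 }                 # illustrative: one ok() per test
      sub run_test    { ok(1, $_[0]->[0]) }        # illustrative body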

    Read the article

  • Microsoft JScript runtime error Object doesn't support this property or method

    - by Darxval
    So I am trying to call this function in my JavaScript, but it gives me the error "Microsoft JScript runtime error: Object doesn't support this property or method", and I can't figure out why. It occurs when trying to call hmacObj.getHMAC. This is from the jsSHA website: http://jssha.sourceforge.net/ - I'm using it for the HMAC-SHA1 algorithm. Thank you!

      hmacObj = new jsSHA(signature_base_string, "HEX");
      signature = hmacObj.getHMAC("hgkghk", "HEX", "SHA-1", "HEX");

    Above this I have copied the code from sha.js. Snippet:

      function jsSHA(srcString, inputFormat)
      {
          /*
           * Configurable variables. Defaults typically work
           */
          jsSHA.charSize = 8;   // Number of Bits Per character (8 for ASCII, 16 for Unicode)
          jsSHA.b64pad = "";    // base-64 pad character. "=" for strict RFC compliance
          jsSHA.hexCase = 0;    // hex output format. 0 - lowercase; 1 - uppercase

          var sha1 = null;
          var sha224 = null;

    The function it is calling (inside of the jsSHA function) - snippet:

      this.getHMAC = function (key, inputFormat, variant, outputFormat)
      {
          var formatFunc = null;
          var keyToUse = null;
          var blockByteSize = null;
          var blockBitSize = null;
          var keyWithIPad = [];
          var keyWithOPad = [];
          var lastArrayIndex = null;
          var retVal = null;
          var keyBinLen = null;
          var hashBitSize = null;

          // Validate the output format selection
          switch (outputFormat)
          {
          case "HEX":
              formatFunc = binb2hex;
              break;
          case "B64":
              formatFunc = binb2b64;
              break;
          default:
              return "FORMAT NOT RECOGNIZED";
          }

    Read the article

  • Cannot change font size/type in plots

    - by Sameet Nabar
    I recently had to re-install my operating system (Ubuntu). The only thing I did differently is that I installed Matlab on a separate partition, not the main Ubuntu partition. After re-installing, the fonts in my plots are no longer configurable. For example, if I ask for the title font to be bold, it doesn't happen. I ran the sample code below on my computer and then on my colleague's computer, and the two results are attached. This cannot be a problem with the code; rather it is in the settings of Matlab. Could somebody please tell me what settings I need to change? Thanks in advance for your help. Regards, Sameet.

      x1 = -pi:.1:pi;
      x2 = -pi:pi/10:pi;
      y1 = sin(x1);
      y2 = tan(sin(x2)) - sin(tan(x2));
      [AX, H1, H2] = plotyy(x1, y1, x2, y2);
      xlabel('Time (hh:mm)');
      ylabel(AX(1), 'Plot1');
      ylabel(AX(2), 'Plot2');
      axes(AX(2))
      set(H1, 'linestyle', 'none', 'marker', '.');
      set(H2, 'linestyle', 'none', 'marker', '.');
      title('Plot Title', 'FontWeight', 'bold');
      set(gcf, 'Visible', 'off');
      [legh, objh] = legend([H1 H2], 'Plot1', 'Plot2', 'location', 'Best');
      set(legend, 'FontSize', 8);
      print -dpng Trial.png;

    Bad image: http://imageshack.us/photo/my-images/708/trial1u.png/
    Good image: http://imageshack.us/photo/my-images/87/trial2.png/

    Read the article

  • Use LINQ to Sort and Filter items in a List<ReturnItem> collection, based on the values within a List<object> property

    - by Daniel McPherson
    This is tricky to explain. We have a DataTable that contains a user-configurable selection of columns, which are not known at compile time. Every column in the DataTable is of type String. We need to convert this DataTable into a strongly typed collection of "ReturnItem" objects so that we can then sort and filter it using LINQ for use in our application. We have made some progress as follows:
    - We started with the basic DataTable.
    - We then process the DataTable, creating a new "ReturnItem" object for each row.
    - This "ReturnItem" object has just two properties: ID (string) and Columns (List<object>).
    - The Columns collection contains one entry for each column, representing a single DataRow. Each value is strongly typed (int, string, DateTime, etc.). For example, it would add a new "DateTime" object to the "ReturnItem" Columns list containing the value of the "Created" DataTable column.
    The result is a List<ReturnItem> that we would then like to be able to sort and filter using LINQ, based on the value in one of the properties - for example, sorting on the "Created" date. We have been using the LINQ Dynamic Query Library, which gets us so far, but it doesn't look like the way forward because we are using it over a List collection of objects. Basically, my question boils down to: how can I use LINQ to sort and filter items in a List<ReturnItem> collection, based on the values within a List<object> property which is part of the ReturnItem class?
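
    A hedged sketch of how that sort/filter might look once the column values are stored as their real CLR types (the ReturnItem shape below follows the description above; the column indexes and filter are illustrative):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class ReturnItem
      {
          public string ID { get; set; }
          public List<object> Columns { get; set; }
      }

      public static class ReturnItemQueries
      {
          // Sort by whichever column position the user picked; Comparer<object>.Default
          // falls back to each boxed value's IComparable implementation.
          public static List<ReturnItem> SortByColumn(IEnumerable<ReturnItem> items, int columnIndex)
          {
              return items.OrderBy(r => r.Columns[columnIndex], Comparer<object>.Default).ToList();
          }

          // Filter on a DateTime column, e.g. the "Created" column.
          public static IEnumerable<ReturnItem> CreatedSince(IEnumerable<ReturnItem> items,
                                                             int createdIndex, DateTime cutoff)
          {
              return items.Where(r => (DateTime)r.Columns[createdIndex] >= cutoff);
          }
      }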

    Read the article

  • iPhone: Same Rows Repeated in Each Section of Grouped UITableview

    - by Rank Beginner
    I have an app that is a list of tasks, like a to do list. The user configures the tasks and that goes to the SQLite db. The list is displayed in a tableview. The SQL table in question consists of a taskid int, groupname varchar, taskname varchar, lastcompleted datetime, nextdue datetime, weighting integer. I currently have it working by creating an array from each column in the SQL table. In the tableView:cellForRowAtIndexPath: method, I create the controls for each task by binding their values to the array for each column. I want to add configurable task groups that should display as the section titles. I got the task groups to display as the section headers. My problem is that all the task rows are repeated in each group under each header. How do I get the correct rows to show up only under the correct section? I'm really new to development period and took on a hobby of trying to teach myself how to develop iphone apps. So, pretty please, be a little more detailed than you normally would with a professional developer. :)

    Read the article
