Search Results

Search found 2950 results on 118 pages for 'critical skill'.

Page 96/118 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Setting System.Drawing.Color through .NET COM Interop

    - by Maxim
    I am trying to use the Aspose.Words library through COM Interop. There is one critical problem: I cannot set the color. It is supposed to work by assigning to DocumentBuilder.Font.Color, but when I try to do it I get OLE error 0x80131509. My problem is pretty much like this one: http://bit.ly/cuvWfc

    Update: code sample:

        from win32com.client import Dispatch
        Doc = Dispatch("Aspose.Words.Document")
        Builder = Dispatch("Aspose.Words.DocumentBuilder")
        Builder.Document = Doc
        print Builder.Font.Size
        print Builder.Font.Color

    Result:

        12.0
        Traceback (most recent call last):
          File "aaa.py", line 6, in <module>
            print Builder.Font.Color
          File "D:\Python26\lib\site-packages\win32com\client\dynamic.py", line 501, in __getattr__
            ret = self._oleobj_.Invoke(retEntry.dispid,0,invoke_type,1)
        pywintypes.com_error: (-2146233079, 'OLE error 0x80131509', None, None)

    Using something like Font.Color = 0xff0000 fails with the same error message, while this C# code works fine:

        using Aspose.Words;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Document doc = new Document();
                    DocumentBuilder builder = new DocumentBuilder(doc);
                    builder.Font.Color = System.Drawing.Color.Blue;
                    builder.Write("aaa");
                    doc.Save("c:\\1.doc");
                }
            }
        }

    So it looks like a COM Interop problem.
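
    One possible angle, offered only as an untested sketch: drive the .NET assembly directly from Python with Python.NET (pythonnet) instead of late-bound COM, so a real System.Drawing.Color value can be assigned just as in the C# sample. This assumes pythonnet is installed and Aspose.Words.dll is resolvable from sys.path; it is a workaround idea, not a confirmed fix for the COM error.

        # Sketch only: assumes Python.NET (pythonnet) and a resolvable Aspose.Words.dll.
        import clr
        clr.AddReference("Aspose.Words")
        clr.AddReference("System.Drawing")

        from Aspose.Words import Document, DocumentBuilder
        from System.Drawing import Color

        doc = Document()
        builder = DocumentBuilder(doc)
        builder.Font.Color = Color.Blue   # a real System.Drawing.Color, as in the C# sample
        builder.Write("aaa")
        doc.Save("out.doc")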

    Read the article

  • Troubleshooting a COM+ application deadlock

    - by Chris Karcher
    I'm trying to troubleshoot a COM+ application that deadlocks intermittently. The last time it locked up, I was able to take a usermode dump of the dllhost process and analyze it using WinDbg. After inspecting all the threads and locks, it all boils down to a critical section owned by this thread:

        ChildEBP RetAddr  Args to Child
        0deefd00 7c822114 77e6bb08 000004d4 00000000 ntdll!KiFastSystemCallRet
        0deefd04 77e6bb08 000004d4 00000000 0deefd48 ntdll!ZwWaitForSingleObject+0xc
        0deefd74 77e6ba72 000004d4 00002710 00000000 kernel32!WaitForSingleObjectEx+0xac
        0deefd88 75bb22b9 000004d4 00002710 00000000 kernel32!WaitForSingleObject+0x12
        0deeffb8 77e660b9 000a5cc0 00000000 00000000 comsvcs!PingThread+0xf6
        0deeffec 00000000 75bb21f1 000a5cc0 00000000 kernel32!BaseThreadStart+0x34

    The object it's waiting on is an event:

        0:016> !handle 4d4 f
        Handle 000004d4
          Type          Event
          Attributes    0
          GrantedAccess 0x1f0003:
                 Delete,ReadControl,WriteDac,WriteOwner,Synch
                 QueryState,ModifyState
          HandleCount   2
          PointerCount  4
          Name          <none>
        No object specific information available

    As far as I can tell, the event never gets signaled, causing the thread to hang and hold up several other threads in the process. Does anyone have any suggestions for next steps in figuring out what's going on? Now, seeing as the method is called PingThread, is it possible that it's trying to ping another thread in the process that's already deadlocked?

    UPDATE: This actually turned out to be a bug in the Oracle 10.2.0.1 client. Although, I'm still interested in ideas on how I could have figured this out without finding the bug in Oracle's bug database.

    Read the article

  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about one rather unexpected/undesired behavior of a package we use. Although there is an easy fix (or at least a workaround) on our end without any apparent side effect, he strongly suggested extending the relevant code by hard patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact we maintain patches against specific versions of several packages that are applied automatically on each new build. The main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch. On the other hand, I favor practicality over purity and my general rule of thumb is that "no patch" > "monkey patch" > "hard patch", at least for anything other than a (critical) bug fix. So I'm wondering if there is a consensus on when it's better to (hard) patch, monkey patch, or just try to work around a third-party package that doesn't do exactly what one would like. Does it mainly have to do with the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature), the given package (size, complexity, maturity, developer responsiveness), something else, or are there no general rules, so one should decide on a case-by-case basis?
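
    For concreteness, a minimal sketch of what the monkey-patch option looks like here; the module and function names are invented, not the actual package in question. The override lives entirely on our side and wraps the original at import time, which is what makes it both convenient and fragile.

        # Hypothetical names; a monkey patch kept in our code base, as opposed to
        # carrying a source ("hard") patch against the upstream package.
        import thirdparty

        _original_frobnicate = thirdparty.frobnicate

        def _patched_frobnicate(*args, **kwargs):
            # Work around the unexpected behaviour, then defer to the original.
            kwargs.setdefault("strict", False)
            return _original_frobnicate(*args, **kwargs)

        thirdparty.frobnicate = _patched_frobnicate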

    Read the article

  • Is a VCS appropriate for usage by a designer?

    - by iconiK
    I know that a VCS is absolutely critical for a developer to increase productivity and protect the code, no doubts about it. But what about a designer, using, say, Photoshop (though it's not specific to any tool, just to make my point clearer)? VCSs use delta compression to store different versions of files. This works very well for code, but for images, that's a problem. Raster image files are binary formats, though vector image files are text (SVG comes to my mind) and pose no problem. The problem comes with .psd files (and any other image "source" file) - those can get pretty big and, since I'm not familiar with the format, I'll consider them binary files. How would a VCS work in this situation? The repository could get pretty darned big if the VCS server isn't able to diff the files efficiently (or worse, not at all), and over time this can become a really big pain when someone needs to check out the repository (or clone it if using a DVCS). Have any of you used a VCS for this purpose? How well does it work? I'm mostly interested in Mercurial, though this is a general situation that applies to any VCS.

    Read the article

  • Majordomo/Mailman - Yahoo/Google Groups

    - by tom smith
    Hi. Not an app question, but thought that I might ask here anyway! The responses might help someone. I've got an app where we're going to be dealing with 5000-10000 people in a group/pool. Periodically, different subsets of people will break off into their own group. I'm looking for thoughts on how to manage/approach this situation. (For now, all of this is being managed on a few servers in the garage, with a dynamic IP.) I've looked into Yahoo/Google Groups, and they seem to be reasonable. The primary issue that I see with this approach is that I don't have a good way of quickly/easily allowing a subset of the group to form their own group for a given project. This kind of function is critical. The upside, though, is that I wouldn't have to really set anything up, and the hosted groups could send emails to the users all day long without running into cap/bandwidth limits. The other approach is a managed list, like Mailman/Majordomo. This approach appears to be more flexible, and it looks like it could be modified to handle the quick creation of lists, allowing users to quickly be assigned to different lists on the fly. The downside: I'd have to run my own Mailman/Majordomo instance, as well as the associated mail server. Or I could look at possibly using one of the hosted services. But this is a project on the cheap, so we're really trying to keep costs down. Thoughts/comments/pointers would be greatly appreciated. Thanks in advance -tom

    Read the article

  • WPF animation/UI features performance and benchmarking

    - by Rich
    I'm working on a relatively small proof-of-concept for some line of business stuff with some fancy WPF UI work. Without even going too crazy, I'm already seeing some really poor performance when using a lot of the features that I thought were the main reason to consider WPF for UI building in the first place. I asked a question on here about why my animation was being stalled the first time it was run, and at the end what I found was that a very simple UserControl was taking almost half a second just to build its visual tree. I was able to get a work around to the symptom, but the fact that it takes that long to initialize a simple control really bothers me. Now, I'm testing my animation with and without the DropShadowEffect, and the result is night and day. A subtle drop shadow makes my control look so much nicer, but it completely ruins the smoothness of the animation. Let me not even start with the font rendering either. The calculation of my animations when the control has a bunch of gradient brushes and a drop shadow make the text blurry for about a full second and then slowly come into focus. So, I guess my question is if there are known studies, blog posts, or articles detailing which features are a hazard in the current version of WPF for business critical applications. Are things like Effects (ie. DropShadowEffect), gradient brushes, key frame animations, etc going to have too much of a negative effect on render quality (or maybe the combinations of these things)? Is the final version of WPF 4.0 going to correct some of these issues? I've read that VS2010 beta has some of these same issues and that they are supposed to be resolved by final release. Is that because of improvements to WPF itself or because half of the application will be rebuilt with the previous technology?

    Read the article

  • Ability to draw and record a signature as part of a form - iphone

    - by mustic
    Apologies in advance for any errors - I'm new to this and am not a developer/programmer, I just have some basic Unix experience. I have searched the web and struggled to find a solution to my problem, then I stumbled onto this website, which suggested that there might be a solution to my question. For work I use a Windows Mobile device because we have to get customers to sign a form after a customer visit, the signature being very important. On the Windows device I use the Notes application and am able to record details and obtain/record (using draw) a customer signature. The form is then emailed back to HQ. The format being used is *.pwi. I have downloaded and paid for several applications for my iPhone, which is my preferred device, and can't quite find anything that does both. The critical bit here is to be able to take a signature on the phone, save the document in a format such as .txt, .doc or .pdf where I can control the file name, and then be able to email it back to HQ. Am I asking too much? I hope that makes sense. Any help would be much appreciated, many thanks in advance.

    Read the article

  • How to build n-layered web architecture with PHP?

    - by Alex
    I have a description and design of a website and I need to redesign it to allow for new requirements. The website's purpose is offering government contracts and bidding opportunities to different businesses. I'm dealing with a 3-tier PHP website comprising a user-interface tier (the client's web browser), a business logic tier (an Apache web server with the PHP engine in it and a couple of applications running within the web server as well) and a database tier (a local MySQL database). Now I need to redesign it to support a distributed n-tier architecture and specify how I would go about it. After long hours of research I came to this solution: the business logic should be separated into a presentation tier and a purely business logic tier to allow for an n-layer architecture (user interface, presentation tier, business logic and data tier). I have decided to use PHP just for the presentation (since the original existing website is in PHP) and use it within the Apache web server. In the business logic tier I want to use J2EE implementation technology instead of implementing it in PHP (i.e. using the Zend app server and Smarty templates), because J2EE can provide many more essential container services which are essential for the business logic, its robustness, maintainability and the different critical business operations which will be carried out by the government's website. So, in particular, I want to use a JBoss app server with EJB business objects in it, which would provide all the business logic in Java and would interact with the database and so forth. In order to connect PHP on the web server with Java on the app server I'm going to use the PHP/Java Bridge API (or maybe Quercus or SOAP is better?). Finally, I have my data tier with MySQL, which will communicate with the business logic via JDBC. The payment system application on the app server is going to use SOAP to talk to the credit card company. From your professional point of view, does this sound like a good way of redesigning the original website to allow for an n-tier architecture, considering the specifics of the website and the criticality of its operations (the payment system is included in it)? Or would you personally prefer to use PHP business objects for the business logic as well, instead of J2EE? If you have any wiser recommendation or some alternative, please let me know what is right or wrong in my current solution. Hope to hear your professional advice very soon (I'm new to the area of web development). Thanks in advance

    Read the article

  • Comparing developer productivity tools

    - by Jacob
    I am getting ready to test developer productivity tools for our team (CodeRush, ReSharper, JustCode, etc.). We're planning to roll the tool out at the same time as we deploy Visual Studio 2010 and TFS. I've seen several posts discussing the merits of one tool versus another. However, I haven't been able to find any discussion of a methodology for evaluating them yourself. One approach I've seen is to use each tool for a month or so and decide which one you like best. This seems completely reasonable if I were evaluating it for myself, but since I'm evaluating for others as well, I'd like to do something more rigorous. I'm hoping to collect some ideas on how to do that. As a starting point, my thinking was to come up with a list of 10-20 crucial features to compare, then prepare a matrix where I can compare the relative strengths of each product. Once the matrix is complete, we can make a well-informed decision about which product aligns with our needs. Since I'm not a user of any product now, it's been difficult to figure out which features are critical and which are less so. Thank you!
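
    To make the matrix idea concrete, here is a rough scoring sketch; the features, weights, and scores below are placeholders for illustration, not real evaluation data.

        # Illustrative weighted comparison matrix; every number here is made up.
        features = {                 # feature -> weight (importance to the team)
            "refactoring": 5,
            "code analysis": 4,
            "test runner": 3,
            "VS2010 support": 5,
        }
        scores = {                   # product -> feature -> score from 1 to 5
            "CodeRush":  {"refactoring": 4, "code analysis": 3, "test runner": 3, "VS2010 support": 4},
            "ReSharper": {"refactoring": 5, "code analysis": 5, "test runner": 4, "VS2010 support": 4},
            "JustCode":  {"refactoring": 3, "code analysis": 3, "test runner": 3, "VS2010 support": 4},
        }
        for product, per_feature in scores.items():
            total = sum(weight * per_feature[feature] for feature, weight in features.items())
            print(product, total)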

    Read the article

  • Why does the MSCVRT library generate conflicts at link time?

    - by neuviemeporte
    I am building a project in Visual C++ 2008, which is an example MFC-based app for a static C++ class library I will be using in my own project soon. While building the Debug configuration, I get the following:

        warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library

    After using the recommended option (by adding "msvcrt" to the "Ignore specific library" field in the project linker settings for the Debug configuration), the program links and runs fine. However, I'd like to find out why this conflict occurred, why I have to ignore a critical library, whether I'm to expect problems later if I add the ignore, and what happens if I don't (because the program builds anyway). At the same time, the Release configuration warns:

        warning LNK4075: ignoring '/EDITANDCONTINUE' due to '/OPT:ICF' specification
        warning LNK4098: defaultlib 'MSVCRTD' conflicts with use of other libs; use /NODEFAULTLIB:library

    I'm guessing that the "D" suffix means this is the debug version of the VC++ runtime, but I have no idea why it gets used this time. Anyway, adding "msvcrtd" to the ignore field causes lots of link errors of the form:

        error LNK2001: unresolved external symbol __imp___CrtDbgReportW

    Any insight greatly appreciated.

    Read the article

  • Locking semaphores in C problem sys/sem

    - by Vojtech R.
    Hi, I have a problem with my functions. Sometimes two processes enter the critical section at the same time. I can't find the problem after spending 10 hours debugging. What should I be looking at?

        // lock semaphore
        static int P(int sem_id)
        {
            struct sembuf sem_b;

            sem_b.sem_num = 0;
            sem_b.sem_op = -1; /* P() */
            sem_b.sem_flg = 0;
            if (semop(sem_id, &sem_b, 1) == -1) {
                print_error("semop in P", errno);
                return(0);
            }
            return(1);
        }

        // unlock semaphore
        static int V(int sem_id)
        {
            struct sembuf sem_b[1];

            sem_b.sem_num = 0;
            sem_b.sem_op = 1; /* V() */
            sem_b.sem_flg = 0;
            if (semop(sem_id, &sem_b, 1) == -1) {
                print_error("semop in V", errno);
                return(0);
            }
            return(1);
        }

    The action loop:

        int mutex;
        if ((mutex = semget(key, 1, 0666)) >= 0) {
            // semaphore exists
        }

        while (1) {
            P(mutex);
            assert(get_val(mutex) == 0);
            (*action)++;
            V(mutex);
        }

    Many thanks

    Read the article

  • Shared value in parallel python

    - by Jonathan
    Hey all- I'm using ParallelPython to develop a performance-critical script. I'd like to share one value between the 8 processes running on the system. Please excuse the trivial example but this illustrates my question.

        def findMin(listOfElements):
            for el in listOfElements:
                if el < min:
                    min = el

        import pp
        min = 0
        myList = range(100000)
        job_server = pp.Server()
        f1 = job_server.submit(findMin, myList[0:25000])
        f2 = job_server.submit(findMin, myList[25000:50000])
        f3 = job_server.submit(findMin, myList[50000:75000])
        f4 = job_server.submit(findMin, myList[75000:100000])

    The pp docs don't seem to describe a way to share data across processes. Is it possible? If so, is there a standard locking mechanism (like in the threading module) to confirm that only one update is done at a time?

        l = Lock()
        if (el < min):
            l.acquire
            if (el < min):
                min = el
            l.release

    I understand I could keep a local min and compare the 4 in the main thread once returned, but by sharing the value I can do some better pruning of my BFS binary tree and potentially save a lot of loop iterations. Thanks- Jonathan
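
    As a sketch of the locking pattern being asked about, here is the same idea with the standard library's multiprocessing module, which does provide a shared value plus a lock; this illustrates the mechanism, it is not a claim that Parallel Python exposes an equivalent API.

        # Shared minimum via multiprocessing (not Parallel Python); sketch only.
        from multiprocessing import Process, Value, Lock

        def find_min(chunk, shared_min, lock):
            local = min(chunk)
            with lock:                       # only one process updates at a time
                if local < shared_min.value:
                    shared_min.value = local

        if __name__ == "__main__":
            data = range(100000)
            shared_min = Value("i", 10**9)   # start above any expected element
            lock = Lock()
            workers = [Process(target=find_min, args=(data[i:i + 25000], shared_min, lock))
                       for i in range(0, 100000, 25000)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            print(shared_min.value)          # 0 for this input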

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve into the abyss of Microsoft documentation any deeper, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional ... "audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field), inserted into by triggers" ... setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with any experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

    Read the article

  • C# LINQ to XML missing space character.

    - by Fossaw
    I write an XML file "by hand" (i.e. not with LINQ to XML), which sometimes includes an open/close tag containing a single space character. Upon viewing the resulting file, all appears correct; example below...

        <Item>
          <ItemNumber>3</ItemNumber>
          <English> </English>
          <Translation>Ignore this one. Do not remove.</Translation>
        </Item>

    ... the reasons for doing this are various and irrelevant, it is done. Later, I use a C# program with LINQ to XML to read the file back and extract the record...

        XElement X_EnglishE = null;
        // This is CRAZY
        foreach (XElement i in Records)
        {
            X_EnglishE = i.Element("English"); // There is only one damned record!
        }
        string X_English = X_EnglishE.ToString();

    ... and test to make sure it is unchanged from the database record. I detect a change when processing Items where the field had the single space character...

        +E+ Text[3] English source has been altered:
        Was: >>> <<<
        Now: >>><<<

    ... the >>> and <<< parts I added to see what was happening (hard to see space characters). I have fiddled around with this but can't see why this is so. It is not absolutely critical, as the field is not used (yet), but I cannot trust C# or LINQ or whatever is doing this if I do not understand why it is so. So what is doing that, and why?

    Read the article

  • What's the fastest way to get CRUD over CGI on a database handle in Perl?

    - by mithaldu
    TL;DR: Want to write CGI::CRUD::Simple (a minimalist interface module for CGI::CRUD), but I want to check first if i overlooked a module that already does that. I usually work with applications that don't have the niceties of having frameworks and such already in place. However, a while ago i found myself in a situation where i was asking myself: "Self, i have a DBI database handle and a CGI query object, isn't there a module somewhere that can use this to give me some CRUD so i can move on and work on other things instead of spending hours writing an interface?" A quick survey on CPAN gave me: CGI::Crud Catalyst::Plugin::CRUD Gantry::Plugins::CRUD Jifty::View::Declare::CRUD CatalystX::CRUD Catalyst::Controller::CRUD CatalystX::CRUD::REST Catalyst::Enzyme Now, I didn't go particularly in-depth when looking at these modules, but, safe the first one, they all seem to require the presence of some sort of framework. Please tell me if i was wrong and i can just plug any of those into a barebones CGI script. CGI::CRUD seemed to do exactly what i wanted, although it did insist on being used through a rather old and C-like script that must be acquired on a different site and then prodded in various ways and manners to produce something useful. I went with that and found that it works pretty neat and that it should be rather easy to write a simple and easy-to-use module that provides a very basic [dbh, cgi IN]-[html OUT] interface to it. However, as my previous survey was rather short and i may have been hasty in dismissing modules or missed others, i find myself wondering whether that would only be duplication of work already done. As such i ponder the question in the title. PS: I tend to be too short in some of my explanations and make too many assumptions that others think about things similarly as me, resulting in leaving out critical details. If you find yourself wondering just what exactly I am thinking about when i say CRUD, please poke me in comments and I'll amend the question.

    Read the article

  • Why do people say that Ruby is slow?

    - by stephen murdoch
    I like Ruby on Rails and I use it for all my web development projects. A few years ago there was a lot of talk about Rails being a memory hog and about how it didn't scale very well but these suggestions were put to bed by Gregg Pollack here. Lately though, I've been hearing people saying that Ruby itself is slow. Why is Ruby considered slow? I do not find Ruby to be slow but then again, I'm just using it to make simple CRUD apps and company blogs. What sort of projects would I need to be doing before I find Ruby becoming slow? Or is this slowness just something that affects all programming languages? What are your options as a Ruby programmer if you want to deal with this "slowness"? Which version of Ruby would best suit an application like Stack Overflow where speed is critical and traffic is intense? The questions are subjective, and I realise that architectural setup (EC2 vs standalone servers etc) makes a big difference but I'd like to hear what people think about Ruby being slow. Finally, I can't find much news on Ruby 2.0 - I take it we're a good few years away from that then?

    Read the article

  • Lightweight spinlocks built from GCC atomic operations?

    - by Thomas
    I'd like to minimize synchronization and write lock-free code when possible in a project of mine. When absolutely necessary I'd love to substitute light-weight spinlocks built from atomic operations for pthread and win32 mutex locks. My understanding is that these are system calls underneath and could cause a context switch (which may be unnecessary for very quick critical sections where simply spinning a few times would be preferable). The atomic operations I'm referring to are well documented here: http://gcc.gnu.org/onlinedocs/gcc-4.4.1/gcc/Atomic-Builtins.html

    Here is an example to illustrate what I'm talking about. Imagine a RB-tree with multiple readers and writers possible. RBTree::exists() is read-only and thread safe, RBTree::insert() would require exclusive access by a single writer (and no readers) to be safe. Some code:

        class IntSetTest
        {
        private:
            unsigned short lock;
            RBTree<int>* myset;

        public:
            // ...

            void add_number(int n)
            {
                // Aquire once locked==false (atomic)
                while (__sync_bool_compare_and_swap(&lock, 0, 0xffff) == false);

                // Perform a thread-unsafe operation on the set
                myset->insert(n);

                // Unlock (atomic)
                __sync_bool_compare_and_swap(&lock, 0xffff, 0);
            }

            bool check_number(int n)
            {
                // Increment once the lock is below 0xffff
                u16 savedlock = lock;
                while (savedlock == 0xffff ||
                       __sync_bool_compare_and_swap(&lock, savedlock, savedlock+1) == false)
                    savedlock = lock;

                // Perform read-only operation
                bool exists = tree->exists(n);

                // Decrement
                savedlock = lock;
                while (__sync_bool_compare_and_swap(&lock, savedlock, savedlock-1) == false)
                    savedlock = lock;

                return exists;
            }
        };

    (lets assume it need not be exception-safe) Is this code indeed thread-safe? Are there any pros/cons to this idea? Any advice? Is the use of spinlocks like this a bad idea if the threads are not truly concurrent? Thanks in advance. ;)

    Read the article

  • iPhone multitouch - Some touches dispatch touchesBegan: but not touchesMoved:

    - by zkarcher
    I'm developing a multitouch application. One touch is expected to move, and I need to track its position. For all other touches, I need to track their beginnings and endings, but their movement is less critical. Sometimes, when 3 or more touches are active, my UIView does not receive touchesMoved: events for the moving touch. This problem is intermittent, and can always be reproduced after a few attempts: Touch the screen with 2 fingers. Touch the screen with another finger, and move this finger around. The moving finger always dispatches touchesBegan: and touchesEnded:, but sometimes does not dispatch any touchesMoved: events. Whenever the moving touch does not dispatch touchesMoved: events, I can force it to dispatch touchesMoved: if I move one of the other touches. This seems to "force" every touch to recheck its position, and I successfully receive a touchesMoved: event. However, this is clumsy. This bug is reproducible on both the iPhone 2G and 3GS models. My question is: How do I ensure that my moving touch dispatches touchesMoved: events? Does anyone have any experience with this issue? I've spent few fruitless days searching the web for answers. I found a post describing how to sync touch events with the VBL: http://www.71squared.com/2009/04/maingameloop-changes/ . However, this has not solved the problem. I really don't know how to proceed. Any help is appreciated!

    Read the article

  • Replicating SQL's 'Join' in Python

    - by Daniel Mathews
    I'm in the process of trying to switch from R to Python (mainly issues around general flexibility). With NumPy, matplotlib and IPython, I am able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join clause (inner, outer, full) purely in Python. R handles this with the 'merge' function. I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':

        join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
                defaults=None, usemask=True, asrecarray=False)

        Join arrays r1 and r2 on key key. The key should be either a string or a
        sequence of strings corresponding to the fields used to join the array.
        An exception is raised if the key field cannot be found in the two input
        arrays. Neither r1 nor r2 should have any duplicates along key: the
        presence of duplicates will make the output quite unreliable. Note that
        duplicates are not looked for by the algorithm.

    source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html

    Any pointers or help will be most appreciated!
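
    A rough pure-Python sketch of an inner join keyed on one column, which tolerates duplicate keys on either side (unlike join_by); the column names and rows are invented for illustration.

        # Illustrative inner join on a single key; duplicate keys are handled by
        # pairing every matching row from both sides, as SQL's INNER JOIN does.
        from collections import defaultdict

        def inner_join(left, right, key):
            index = defaultdict(list)
            for row in right:
                index[row[key]].append(row)
            joined = []
            for lrow in left:
                for rrow in index.get(lrow[key], []):
                    merged = dict(rrow)
                    merged.update(lrow)      # left side wins on column clashes
                    joined.append(merged)
            return joined

        left  = [{"id": 1, "x": "a"}, {"id": 2, "x": "b"}, {"id": 2, "x": "c"}]
        right = [{"id": 2, "y": 10}, {"id": 2, "y": 20}, {"id": 3, "y": 30}]
        print(inner_join(left, right, "id"))  # four merged rows, all with id=2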

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON and with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks
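
    One concrete input to the JSON-versus-binary decision: binary embedded in JSON normally has to be Base64-encoded, which adds roughly a third to the payload before any compression. A small sketch of measuring that overhead (the 500 KB size is just an example figure):

        # Rough illustration of Base64 overhead when embedding binary in JSON.
        import base64, json, os

        blob = os.urandom(500 * 1024)                      # stand-in for a ~500 KB blob
        encoded = base64.b64encode(blob).decode("ascii")
        payload = json.dumps({"data": encoded})

        print(len(blob), len(encoded), len(payload))
        # Base64 alone is about 4/3 of the original size, plus JSON quoting overhead.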

    Read the article

  • Unit Testing in the real world

    - by Malfist
    I manage a rather large application (50k+ lines of code) by myself, and it manages some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database, and it's managing around 1,000 rental units, about 3k tenants, and all the finances. When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test it by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing. However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions. What would be a good way to get started with unit testing for this application? Keep in mind, I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
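
    As a tiny sketch of the usual first step - pulling one piece of business logic out of the UI so it can be tested with no database and no window - here is an invented late-fee rule and a test for it; the rule and numbers are hypothetical, not taken from the application described above.

        # Hypothetical business rule extracted from a UI event handler, plus tests.
        import unittest

        def late_fee(days_overdue, monthly_rent):
            """5% of rent once more than 5 days overdue, plus $10 per further full week."""
            if days_overdue <= 5:
                return 0.0
            extra_weeks = (days_overdue - 5) // 7
            return round(0.05 * monthly_rent + 10 * extra_weeks, 2)

        class LateFeeTests(unittest.TestCase):
            def test_no_fee_within_grace_period(self):
                self.assertEqual(late_fee(5, 1000), 0.0)

            def test_fee_after_grace_period(self):
                self.assertEqual(late_fee(6, 1000), 50.0)

            def test_extra_weeks_add_flat_charge(self):
                self.assertEqual(late_fee(20, 1000), 70.0)

        if __name__ == "__main__":
            unittest.main()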

    Read the article

  • How can I improve this design?

    - by klausbyskov
    Let's assume that our system can perform actions, and that an action requires some parameters to do its work. I have defined the following base class for all actions (simplified for your reading pleasure):

        public abstract class BaseBusinessAction<TActionParameters>
            : where TActionParameters : IActionParameters
        {
            protected BaseBusinessAction(TActionParameters actionParameters)
            {
                if (actionParameters == null)
                    throw new ArgumentNullException("actionParameters");

                this.Parameters = actionParameters;

                if (!ParametersAreValid())
                    throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
            }

            protected TActionParameters Parameters { get; private set; }

            protected abstract bool ParametersAreValid();

            public void CommonMethod() { ... }
        }

    Only a concrete implementation of BaseBusinessAction knows how to validate that the parameters passed to it are valid, and therefore the ParametersAreValid is an abstract function. However, I want the base class constructor to enforce that the parameters passed are always valid, so I've added a call to ParametersAreValid to the constructor and I throw an exception when the function returns false. So far so good, right? Well, no. Code analysis is telling me to "not call overridable methods in constructors" which actually makes a lot of sense because when the base class's constructor is called the child class's constructor has not yet been called, and therefore the ParametersAreValid method may not have access to some critical member variable that the child class's constructor would set. So the question is this: How do I improve this design? Do I add a Func<bool, TActionParameters> parameter to the base class constructor? If I did:

        public class MyAction<MyParameters>
        {
            public MyAction(MyParameters actionParameters, bool something)
                : base(actionParameters, ValidateIt)
            {
                this.something = something;
            }

            private bool something;

            public static bool ValidateIt()
            {
                return something;
            }
        }

    This would work because ValidateIt is static, but I don't know... Is there a better way? Comments are very welcome.

    Read the article

  • Testing fault tolerant code

    - by Robert
    I'm currently working on a server application where we have agreed to try to maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start-up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs. This all seems to work fairly well, but we have difficulty saying so with any conviction, as we find it very difficult to test our fault-tolerant code. So far we've come up with two strategies, but neither is entirely satisfactory:

    1. Have an external process watch the server process and then try to kill it off at what the external process thinks is an appropriate point in the test.
    2. Add code to the application that will cause it to crash at certain known critical points.

    My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we're hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having code to inject faults within my application, even with optional compilation etc. I fear it would be too easy to overlook a fault injection point and have it slip into a production environment.
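
    For the second strategy, one low-risk shape it can take, sketched here with invented names: crash points that stay inert unless a test-only environment variable names them, so nothing fires in production even if a point is overlooked.

        # Sketch of an opt-in fault-injection point; all names are invented.
        import os

        def crash_point(name):
            """Die abruptly if, and only if, the test harness asked for this exact point."""
            if os.environ.get("FAULT_INJECT") == name:
                os._exit(1)   # simulate a hard crash: no cleanup, no atexit handlers

        def handle_request(request, persist, send_ack, fulfil):
            persist(request)                 # request is now durable
            crash_point("after-persist")     # test: crash before the acknowledgement
            send_ack(request)
            crash_point("after-ack")         # test: crash after the ack, before fulfilment
            fulfil(request)

        if __name__ == "__main__":
            # e.g. run under FAULT_INJECT=after-ack, restart, and check the recovery path.
            handle_request("req-1", print, print, print)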

    Read the article

  • Overwriting lines in file in C

    - by KáGé
    Hi, I'm doing a project on filesystems for a university operating systems course. My C program should simulate a simple filesystem in a human-readable file, so the file should be based on lines: a line will be a "sector". I've learned that lines must be of the same length to be overwritten, so I'll pad them with ASCII zeroes up to the end of the line and leave a certain number of lines of ASCII zeroes that can be filled later. Now I'm making a test program to see if it works like I want it to, but it doesn't. The critical part of my code:

        file = fopen("irasproba_tesztfajl.txt", "r+"); // it is previously loaded with 10 copies of the line I'll print later in reverse order

        /* this finds the 3rd line */
        int count = 0; // how much have we gone yet?
        char c;
        while (count != 2) {
            if ((c = fgetc(file)) == '\n')
                count++;
        }

        fflush(file);
        fprintf(file, "- . , M N B V C X Y Í U Á É L K J H G F D S A Ú O P O I U Z T R E W Q Ó Ü Ö 9 8 7 6 5 4 3 2 1 0\n");
        fflush(file);
        fclose(file);

    Now it does nothing, the file stays the same. What could be the problem? Thank you.
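
    The fixed-length "sector" idea itself, sketched in Python just to show the arithmetic (the question's code is C, but the layout is the same): if every line is exactly LINE_LEN bytes including the newline, sector n starts at byte n * LINE_LEN and can be overwritten in place. The file name and sizes below are made up.

        # Sketch: fixed-length text "sectors" overwritten in place.
        LINE_LEN = 64   # payload + '0' padding + '\n'; arbitrary choice

        def write_sector(path, n, text):
            padded = text[:LINE_LEN - 1].ljust(LINE_LEN - 1, "0") + "\n"
            with open(path, "r+b") as f:
                f.seek(n * LINE_LEN)              # sectors are addressable by index
                f.write(padded.encode("ascii"))

        # Create a 10-sector file of padding, then overwrite sector 2 (the 3rd line).
        with open("fs_demo.txt", "wb") as f:
            f.write(("0" * (LINE_LEN - 1) + "\n").encode("ascii") * 10)

        write_sector("fs_demo.txt", 2, "- . , M N B V C X Y")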

    Read the article

  • Find position of a node within a nodeset using xpath

    - by Phil Nash
    After playing around with position() in vain I was googling around for a solution and arrived at this older Stack Overflow question which almost describes my problem. The difference is that the node-set I want the position within is dynamic, rather than a contiguous section of the document. To illustrate, I'll modify the example from the linked question to match my requirements. Note that each <b> element is within a different <a> element. This is the critical bit.

        <root>
          <a>
            <b>zyx</b>
          </a>
          <a>
            <b>wvu</b>
          </a>
          <a>
            <b>tsr</b>
          </a>
          <a>
            <b>qpo</b>
          </a>
        </root>

    Now if I queried, using XPath, for a/b I'd get a node-set of the four <b> nodes. I want to then find the position within that node-set of the node that contains the string 'tsr'. The solution in the other post breaks down here:

        count(a/b[.='tsr']/preceding-sibling::*)+1

    returns 1 because preceding-sibling is navigating the document rather than the context node-set. Is it possible to work within the context node-set?
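
    A hedged sketch of one way around it, shown with Python's lxml purely to demonstrate the expression: counting <b> elements on the preceding axis, rather than preceding-sibling, counts every earlier <b> in document order, which coincides with the dynamic a/b node-set for a layout like this one.

        # Demonstration with lxml; the point is the XPath expression itself.
        from lxml import etree

        doc = etree.fromstring(
            "<root><a><b>zyx</b></a><a><b>wvu</b></a>"
            "<a><b>tsr</b></a><a><b>qpo</b></a>"
        )
        position = doc.xpath("count(a/b[.='tsr']/preceding::b) + 1")
        print(int(position))   # 3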

    Read the article
