Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns. The server contains a number of entities which contain both sensitive and public information. Think for example that the entities are persons; assume that social security number and name are sensitive and age is publicly viewable.

    When starting the client, the user is presented with a number of entities, not disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server; given the authentication is successful, the user is granted access to the sensitive information.

    The client is hosting a domain model and I was thinking of implementing this as some kind of "lazy loading", making the first request instantiate the entities and later refreshing them with sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:

        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }

    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:

        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }

    Some concerns:

    - What if the user logs in and then out? The sensitive data has been loaded but must be undisclosed once again.
    - One could argue that this type of functionality belongs within the domain and not some infrastructural implementation (i.e. repository implementations).
    - As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.

    Any insights or discussion is appreciated!
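    A minimal sketch of the "value object" alternative mentioned above (the Protected<T> type and its members are an illustration of mine, not from the post; UndisclosedDataException comes from the question). Wrapping each sensitive field pushes the disclosure check into one reusable type, which also speaks to the clutter concern:

        public struct Protected<T>
        {
            private readonly T value;
            private readonly bool disclosed;

            public Protected(T value)
            {
                this.value = value;
                this.disclosed = true;
            }

            public bool IsDisclosed { get { return disclosed; } }

            // Throws only at the point of use, keeping entity getters trivial:
            // properties just return a Protected<string> instead of guarding themselves.
            public T Value
            {
                get
                {
                    if (!disclosed) throw new UndisclosedDataException();
                    return value;
                }
            }
        }

    An undisclosed field is simply default(Protected<string>), so the log-in/log-out concern becomes resetting fields to that default in one place.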


  • iPhone app. Creating a custom UIView that contains UITextField and UIButton.

    - by Dmitry Burchik
    Hi all. I am new to iPhone programming, and I have an issue. I need to create a custom user control that I will add to my UIScrollView dynamically. The control has a UITextField and a UIButton. See the code below:

        #import <UIKit/UIKit.h>

        @interface FieldWithValueControl : UIView {
            UITextField *txtTagName;
            UIButton *addButton;
        }
        @property (nonatomic, readonly) UITextField *txtTagName;
        @property (nonatomic, readonly) UIButton *addButton;
        @end

        #import "FieldWithValueControl.h"

        #define ITEM_SPACING 10
        #define ITEM_HEIGHT 20
        #define SWITCHBOX_WIDTH 100
        #define SCREEN_WIDTH 320
        #define ITEM_FONT_SIZE 14
        #define TEXTBOX_WIDTH 150

        @implementation FieldWithValueControl
        @synthesize txtTagName;
        @synthesize addButton;

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
                txtTagName = [[UITextField alloc] initWithFrame:CGRectMake(0, 0, TEXTBOX_WIDTH, ITEM_HEIGHT)];
                txtTagName.borderStyle = UITextBorderStyleRoundedRect;
                addButton = [UIButton buttonWithType:UIButtonTypeContactAdd];
                [addButton setFrame:CGRectMake(ITEM_SPACING + TEXTBOX_WIDTH, 0, ITEM_HEIGHT, ITEM_HEIGHT)];
                [addButton addTarget:self action:@selector(addButtonTouched:) forControlEvents:UIControlEventTouchUpInside];
                [self addSubview:txtTagName];
                [self addSubview:addButton];
            }
            return self;
        }

        - (void)addButtonTouched:sender {
            UIButton *button = (UIButton*)sender;
            NSString *title = [button titleLabel].text;
        }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
        }

        - (void)dealloc {
            [txtTagName release];
            [addButton release];
            [super dealloc];
        }
        @end

    In my code I create an object of that class and add it to the scrollView on the form:

        FieldWithValueControl *newTagControl = (FieldWithValueControl*)[[FieldWithValueControl alloc] initWithFrame:CGRectMake(ITEM_SPACING, currentOffset + ITEM_SPACING, 0, 0)];
        [scrollView addSubview:newTagControl];

    The control looks fine, but if I tap the text field or the button, nothing happens: the keyboard doesn't appear, the button is not clickable, etc.
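    A likely culprit, offered as an assumption based on the snippet above rather than anything confirmed in the post: the control is created with a zero-sized frame, and UIKit does not deliver touches to subviews that lie outside their superview's bounds, even though they are still drawn when clipping is off. A sketch of the same call with a frame large enough to contain both subviews:

        FieldWithValueControl *newTagControl = [[FieldWithValueControl alloc]
            initWithFrame:CGRectMake(ITEM_SPACING, currentOffset + ITEM_SPACING,
                                     TEXTBOX_WIDTH + ITEM_SPACING + ITEM_HEIGHT, ITEM_HEIGHT)];
        [scrollView addSubview:newTagControl];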


  • Prove correctness of unit test

    - by Timo Willemsen
    I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests. For example, I have this class (not including the implementation, and I have simplified it):

        public class SimpleGraph {
            // Returns true on success
            public boolean addEdge(Vertex v1, Vertex v2) { ... }

            // Returns true on success
            public boolean addVertex(Vertex v1) { ... }
        }

    I have also created this unit test:

        @Test
        public void SimpleGraph_addVertex_noSelfLoopsAllowed() {
            SimpleGraph g = new SimpleGraph();
            Vertex v1 = new Vertex("Vertex 1");
            g.addVertex(v1);

            boolean expected = false;
            boolean actual = g.addEdge(v1, v1);

            Assert.assertEquals(expected, actual);
        }

    Okay, awesome, it works. There is only one crux here: I have proved that the function works for this case only. However, in my graph theory courses, all I'm doing is proving theorems mathematically (induction, contradiction, etc.). So I was wondering: is there a way I can prove my unit tests mathematically for correctness? Is there a good practice for this, so that we're testing the unit for correctness instead of testing it for one certain outcome?
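    One pragmatic middle ground between a single example and a proof is property-based testing: exercise the invariant over many generated inputs instead of one. This is a sketch only (SimpleGraph and Vertex come from the question; the randomized loop is my assumption), not a mathematical proof:

        import java.util.Random;
        import org.junit.Test;
        import static org.junit.Assert.assertFalse;

        public class SimpleGraphPropertyTest {
            @Test
            public void addEdge_rejectsSelfLoops_forManyVertices() {
                SimpleGraph g = new SimpleGraph();
                Random random = new Random(42);   // fixed seed keeps the run repeatable
                for (int i = 0; i < 1000; i++) {
                    Vertex v = new Vertex("Vertex " + random.nextInt());
                    g.addVertex(v);
                    assertFalse("self-loop must be rejected", g.addEdge(v, v));
                }
            }
        }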


  • problem with kCFSocketReadCallBack

    - by zp26
    Hello. I have a problem with my program. I created a socket with kCFSocketReadCallBack. My intention was to call AcceptCallback only when the socket receives a string. Instead, my program does not just accept the connection: it always goes into startReceive, stops there, and sometimes the program crashes. Can anybody help? Thanks.

        readSocket = CFSocketCreateWithNative(NULL, fd, kCFSocketReadCallBack, AcceptCallback, &context);

        // Called by CFSocket when someone connects to our listening socket.
        // This implementation just bounces the request up to Objective-C.
        static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type, CFDataRef address, const void *data, void *info)
        {
            ServerVistaController *obj;
            #pragma unused(address)
            // assert(address == NULL);
            assert(data != NULL);
            obj = (ServerVistaController *) info;
            assert(obj != nil);
            #pragma unused(s)
            assert(s == obj->listeningSocket);
            if (type & kCFSocketAcceptCallBack) {
                [obj acceptConnection:*(int *)data];
            }
            if (type & kCFSocketAcceptCallBack) {
                [obj startReceive:*(int *)data];
            }
        }

        - (void)startReceive:(int)fd
        {
            CFReadStreamRef readStream = NULL;
            CFIndex bytes;
            UInt8 buffer[MAXLENGTH];

            CFStreamCreatePairWithSocket(kCFAllocatorDefault, fd, &readStream, NULL);
            if (!readStream) {
                close(fd);
                [self updateLabel:@"No readStream"];
            }
            CFReadStreamOpen(readStream);
            [self updateLabel:@"OpenStream"];
            bytes = CFReadStreamRead(readStream, buffer, sizeof(buffer));
            if (bytes < 0) {
                [self updateLabel:(NSString*)buffer];
                close(fd);
            }
            CFReadStreamClose(readStream);
        }
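    A hedged note, my reading rather than anything from the original thread: a socket registered with kCFSocketReadCallBack only ever delivers read events, and for those the data parameter is NULL rather than a native socket handle, so the two accept-flag tests above check for an event this socket can never report. For a listening socket that hands off accepted connections, the usual setup would look roughly like this:

        readSocket = CFSocketCreateWithNative(NULL, fd, kCFSocketAcceptCallBack, AcceptCallback, &context);

        // Inside AcceptCallback: with an accept event, data points at the
        // native handle of the newly accepted connection.
        if (type == kCFSocketAcceptCallBack) {
            CFSocketNativeHandle handle = *(CFSocketNativeHandle *)data;
            [obj acceptConnection:handle];
        }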


  • Trying to write a priority queue in Java but getting "Exception in thread "main" java.lang.ClassCastException"

    - by Dan
    For my data structure class, I am trying to write a program that simulates a car wash, and I want to give fancy cars a higher priority than regular ones using a priority queue. The problem I am having has something to do with Java not being able to cast "Object" to "ArrayQueue" (a simple FIFO implementation). What am I doing wrong and how can I fix it?

        public class PriorityQueue<E> {
            private ArrayQueue<E>[] queues;
            private int highest = 0;
            private int manyItems = 0;

            public PriorityQueue(int h) {
                highest = h;
                queues = (ArrayQueue[]) new Object[highest + 1];   // <-- problem is here
            }

            public void add(E item, int priority) {
                queues[priority].add(item);
                manyItems++;
            }

            public boolean isEmpty() {
                return (manyItems == 0);
            }

            public E remove() {
                E answer = null;
                int counter = 0;
                do {
                    if (!queues[highest - counter].isEmpty()) {
                        answer = queues[highest - counter].remove();
                        counter = highest + 1;
                    } else
                        counter++;
                } while (highest - counter >= 0);
                return answer;
            }
        }
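    A sketch of the usual fix (assuming ArrayQueue has a no-argument constructor): an Object[] is never an ArrayQueue[] at runtime, so the cast fails with a ClassCastException. Because Java forbids creating generic arrays directly, the idiomatic workaround is to create the array with the raw type, and each slot must also be filled before add() can use it:

        @SuppressWarnings("unchecked")
        public PriorityQueue(int h) {
            highest = h;
            queues = new ArrayQueue[highest + 1];   // raw-type array creation is allowed
            for (int i = 0; i <= highest; i++) {
                queues[i] = new ArrayQueue<E>();    // otherwise add() hits a NullPointerException
            }
        }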


  • Concise description of how .h and .m files interact in Objective-C?

    - by RJ86
    I have just started learning Objective-C and am really confused about how the .h and .m files interact with each other. This simple program has three files:

    Fraction.h:

        #import <Foundation/NSObject.h>

        @interface Fraction : NSObject {
            int numerator;
            int denominator;
        }
        - (void) print;
        - (void) setNumerator: (int) n;
        - (void) setDenominator: (int) d;
        - (int) numerator;
        - (int) denominator;
        @end

    Fraction.m:

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        - (void) print {
            printf("%i/%i", numerator, denominator);
        }
        - (void) setNumerator: (int) n {
            numerator = n;
        }
        - (void) setDenominator: (int) d {
            denominator = d;
        }
        - (int) denominator {
            return denominator;
        }
        - (int) numerator {
            return numerator;
        }
        @end

    main.m:

        #import <stdio.h>
        #import "Fraction.h"

        int main(int argc, char *argv[]) {
            Fraction *frac = [[Fraction alloc] init];
            [frac setNumerator: 1];
            [frac setDenominator: 3];
            printf("The fraction is: ");
            [frac print];
            printf("\n");
            [frac release];
            return 0;
        }

    From what I understand, the program starts running in the main.m file. I understand the basic C concepts, but this whole "class" and "instance" stuff is really confusing. In the Fraction.h file, the @interface declares numerator and denominator as integers, but what else is it doing below with the (void)? And what is the purpose of re-declaring them below? I am also quite confused about what is happening with the (void) and (int) portions of Fraction.m, and how all of this is brought together in main.m. This seems like a fairly easy program for learning how the different portions work with each other; could anyone explain it in non-tech jargon?


  • How does one get started with WinForms-style applications on Win32?

    - by Billy ONeal
    EDIT: I'm extremely tired and frustrated at the moment; please ignore that bit in this question. I'll edit it in the morning to be better.

    Okay, a bit of background: I'm a C++ programmer mostly, but the only GUI stuff I've ever done was on top of .NET's WinForms platform. I'm completely new to Windows GUI programming, and despite Petzold's excellent book, I'm extremely confused. Namely, it seems that nearly every reference on getting started with Win32 is all about drawing lines and curves and things, a topic about which (at least at present) I couldn't care less. I need a checked list box, a splitter, and a textbox, something that would take less than 10 minutes to do in WinForms land.

    It has been recommended to me to use the WTL library, which provides an implementation of all three of these controls, but I keep getting hung up on simple things, such as getting the damn controls to use the right font, and getting high DPI working correctly. I've spent two freaking days on this, and I can't help but think there has to be a better reference for these kinds of things than I've been able to find. Petzold's book is good, but it hasn't been updated since the Windows 95 days, and a LOT has changed w.r.t. how applications should be correctly developed since it was published.

    I guess what I'm looking for is a modern Petzold book. Where can I find such a resource, if any?


  • Orientation issue while presenting Modal ViewController

    - by Jacky Boy
    Current scenario: right now I am showing a UIViewController using a segue with the style Modal and presentation Sheet. This modal gets its superview's bounds changed in order to have the dimensions I want, like this:

        - (void)viewWillLayoutSubviews
        {
            [super viewWillLayoutSubviews];
            self.view.superview.bounds = WHBoundsRect;
        }

    The only allowed orientations are UIInterfaceOrientationLandscapeLeft and UIInterfaceOrientationLandscapeRight. Since the modal has some text fields and the keyboard would sit over the modal itself, I am changing its center so it moves a bit toward the top.

    The problem: what I am noticing is that I am unable to work with the Y coordinate. In order for the modal to move vertically (remember it's in landscape), I need to work with X. The problem is that when the orientation is UIInterfaceOrientationLandscapeLeft I need a negative X, and when it's UIInterfaceOrientationLandscapeRight I need a positive X. So it seems that the X/Y coordinate system is "glued" to the top left corner as it is in portrait, and when a rotation occurs, it's still there.

    What I have done: I have something like this:

        UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];
        NSInteger newX = 0.0f;
        if (orientation == UIInterfaceOrientationLandscapeLeft) {
            // Logic for calculating the negative X.
        } else {
            // Logic for calculating the positive X.
        }

    It works exactly like I want, but it seems a very fragile implementation. Am I missing something? Is this the expected behaviour?
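    One hedged alternative, an assumption on my part rather than anything from the post: the window's coordinate space indeed does not rotate, but a translation concatenated onto the presented view's existing transform is applied in that view's own, already-rotated space, which would remove the orientation branching. A sketch, with keyboardOffset standing in for whatever amount the modal must move up:

        CGFloat keyboardOffset = 100.0f;   // hypothetical value
        // CGAffineTransformTranslate prepends the translation, so it acts in
        // the superview's local (rotated) coordinates, not window coordinates.
        self.view.superview.transform = CGAffineTransformTranslate(
            self.view.superview.transform, 0.0f, -keyboardOffset);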


  • UIImagePickerController crashes on rapid scrolling, slower than photos app

    - by vvanhee
    Most of the time, my image picker works perfectly (iOS 4.2.1). However, if I scroll very rapidly up and down about 4-6 times through my camera roll of about 300 photos, I get a crash. This never happens with the "Photos" app on the same iPhone 3GS. Also, I'm noticing that the stock "Photos" app scrolls much more smoothly than my image picker. Has anyone else noticed this behavior? I'd be interested if others could attempt this in their own apps and see if they crash.

    I don't think it's related to other objects hogging memory on my iPhone, because it's a simple app and this happens right after I start the app. It also doesn't seem to be related to messages sent to released objects or over-releasing of other objects in viewDidUnload, based on my crash logs and the fact that the simulator responds well to simulated memory warnings. I think it might be a bug in the internal implementation of UIImagePickerController.

    This is how I start the picker. I've done this multiple ways (including setting a retain property for the UIImagePickerController in my header and releasing on dealloc). This seems to be the best way (crashes least):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
        picker.allowsEditing = YES;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    This is the crashed thread (I get various exception types):

        Exception Type:  SIGSEGV
        Exception Codes: SEGV_ACCERR at 0xfffffffff4faafa4
        Crashed Thread:  8
        ...
        Thread 8 Crashed:
        0  CoreFoundation    0x000494ea -[__NSArrayM replaceObjectAtIndex:withObject:] + 98
        1  PhotoLibrary      0x00008e0f -[PLImageTable _segmentAtIndex:] + 527
        2  PhotoLibrary      0x00008a21 -[PLImageTable _mappedImageDataAtIndex:] + 221
        3  PhotoLibrary      0x0000893f -[PLImageTable dataForEntryAtIndex:] + 15
        4  PhotoLibrary      0x000087e7 PLThumbnailManagerImageDataAtIndex + 35
        5  PhotoLibrary      0x00008413 -[PLThumbnailManager _dataForPhoto:format:width:height:bytesPerRow:dataWidth:dataHeight:imageDataOffset:imageDataFormat:preheat:] + 299
        6  PhotoLibrary      0x000b6c13 __-[PLThumbnailManager preheatImageDataForImages:withFormat:]_block_invoke_1 + 159
        7  libSystem.B.dylib 0x000d6680 _dispatch_call_block_and_release + 20
        8  libSystem.B.dylib 0x000d6ba0 _dispatch_worker_thread2 + 128
        9  libSystem.B.dylib 0x0007b251 _pthread_wqthread + 265


  • Memory leaks after using typeinfo::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add an entry to a log saying something along the lines of "Messages::CSomeClass transmitted to 127.0.0.1"). I do this with code similar to the following:

        std::string getMessageName(void) const {
            return std::string(typeid(*this).name());
        }

    And yes, before anyone points it out, I realise that the output of type_info::name is implementation-specific. According to MSDN:

        The type_info::name member function returns a const char* to a
        null-terminated string representing the human-readable name of the
        type. The memory pointed to is cached and should never be directly
        deallocated.

    However, when I exit my program in the debugger, any "new" use of type_info::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed.

    While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how type_info may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging. I do have a back-up plan, which is to code the getMessageName method myself and not rely on type_info::name, but I'd like to know anyway if there's something I've missed.
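    For reference, a minimal sketch of the back-up plan mentioned above (class names assumed from the log example): each message class reports its own name through a virtual override, sidestepping RTTI and its cached strings entirely.

        #include <string>

        class Message {
        public:
            virtual ~Message() = default;
            // Each concrete message names itself; no typeid involved.
            virtual std::string getMessageName() const = 0;
        };

        class CSomeClass : public Message {
        public:
            std::string getMessageName() const override { return "Messages::CSomeClass"; }
        };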


  • GWT + Seam, cannot fetch scoped beans from gwt servlet in seam resource servlet.

    - by David Göransson
    Hello all. I am trying to get session- and conversation-scoped beans into a GWT servlet in the Seam resource servlet. I have a conversation-scoped bean:

        @Name("viewFormCopyAction")
        @Scope(ScopeType.CONVERSATION)
        public class ViewFormCopyAction {}

    and a session-scoped bean:

        @Name("authenticator")
        @Scope(ScopeType.SESSION)
        public class AuthenticatorAction {}

    There is a RemoteService interface:

        @RemoteServiceRelativePath("strokesService")
        public interface StrokesService extends RemoteService {
            public Position getPosition(int conversationId);
        }

    with a corresponding async interface:

        public interface StrokesServiceAsync extends RemoteService {
            public void getPosition(int conversationId, AsyncCallback callback);
        }

    and an implementation:

        @Name("com.web.actions.forms.gwt.client.StrokesService")
        @Scope(ScopeType.EVENT)
        public class StrokesServiceImpl implements StrokesService {

            @In
            Manager manager;

            @Override
            @WebRemote
            public Position getPosition(int conversationId) {
                manager.switchConversation("" + conversationId);
                ViewFormCopyAction vfca = (ViewFormCopyAction) Component.getInstance("viewFormCopyAction");
                AuthenticatorAction aa = (AuthenticatorAction) Component.getInstance("authenticator");
                return null;
            }
        }

    The GWT page is within an IFrame in a regular Seam page, and the conversationId is propagated with the src attribute of the IFrame. Both bean objects end up with only null values. Can anyone see anything wrong with the code? I know that I could use strings instead of the int, but never mind that at this point.


  • Behavior difference between UIView.subviews and [NSView subviews]

    - by zpasternack
    I have a piece of code in an iPhone app which removes all subviews from a UIView subclass. It looks like this:

        NSArray* subViews = self.subviews;
        for (UIView *aView in subViews) {
            [aView removeFromSuperview];
        }

    This works fine. In fact, I never really gave it much thought until I tried nearly the same thing in a Mac OS X app (from an NSView subclass):

        NSArray* subViews = [self subviews];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }

    That totally doesn't work. Specifically, at runtime, I get this:

        *** Collection <NSCFArray: 0x1005208a0> was mutated while being enumerated.

    I ended up doing it like so:

        NSArray* subViews = [[self subviews] copy];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }
        [subViews release];

    That's fine. What's bugging me, though, is why does it work on the iPhone? subviews is a copy property:

        @property(nonatomic, readonly, copy) NSArray *subviews;

    My first thought was, maybe @synthesize'd getters return a copy when the copy attribute is specified. The doc is clear on the semantics of copy for setters, but doesn't appear to say either way for getters (or at least, it's not apparent to me). And actually, doing a few tests of my own, this clearly does not seem to be the case. Which is good; I think returning a copy would be problematic, for a few reasons.

    So the question is: how does the above code work on the iPhone? NSView is clearly returning a pointer to the actual array of subviews, and perhaps UIView isn't. Perhaps it's simply an implementation detail of UIView, and I shouldn't get worked up about it. Can anyone offer any insight?


  • On XP, how do I get the tooltip to appear above a transclucent form?

    - by Daniel Stutzbach
    I have a form with an Opacity less than 1.0. I have a tooltip associated with a label on the form. When I hover the mouse over the label, the tooltip shows up under the form instead of over it. If I leave the Opacity at its default value of 1.0, the tooltip correctly appears over the form. However, my form is obviously no longer translucent. ;-)

    I have tried manually adjusting the position of the ToolTip with SetWindowPos() and creating a ToolTip "by hand" using CreateWindowEx(), but the problem remains. This makes me suspect it's a Win32 API problem, not a problem with the Windows Forms implementation that runs on top of Win32. Why does the tooltip appear under the form, and, more importantly, how can I get it to appear over the form where it should?

    Edit: this appears to be an XP-only problem. Vista and Windows 7 work correctly. I'd still like to find a workaround to get the tooltip to appear above the form on XP.

    Here is a minimal program to demonstrate the problem:

        using System;
        using System.Windows.Forms;

        public class Form1 : Form
        {
            private ToolTip toolTip1;
            private Label label1;

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            public Form1()
            {
                toolTip1 = new ToolTip();
                label1 = new Label();
                label1.Location = new System.Drawing.Point(105, 127);
                label1.Text = "Hover over me";
                label1.AutoSize = true;
                toolTip1.SetToolTip(label1, "This is a moderately long string, " +
                    "designed to be very long so that it will also be quite long.");
                ClientSize = new System.Drawing.Size(292, 268);
                Controls.Add(label1);
                Opacity = 0.8;
            }
        }


  • Strange Effect with Overridden Properties and Reflection

    - by naacal
    I've come across a strange behaviour in .NET/Reflection and cannot find any solution/explanation for it:

        class A {
            public virtual string TestString { get; set; }
        }

        class B : A {
            public override string TestString {
                get { return "x"; }
            }
        }

    Since properties are just pairs of functions (get_PropName(), set_PropName()), overriding only the "get" part should leave the "set" part as it is in the base class. And this is just what happens if you instantiate class B and assign a value to TestString: it uses the implementation from class A. But what happens if I look at the instantiated object of class B via reflection is this:

        PropertyInfo propInfo = b.GetType().GetProperty("TestString");
        propInfo.CanRead    // --> true
        propInfo.CanWrite   // --> false(!)

    And if I try to invoke the setter from reflection with:

        propInfo.SetValue("test", b, null);

    I'll even get an ArgumentException with the following message: "Property set method not found." Is this as expected? I don't seem to find a combination of BindingFlags for the GetProperty() method that returns me the property with a working get/set pair from reflection.
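    A hedged sketch of one workaround (my assumption, not from the post): the derived declaration only re-declares the getter, so when the setter is missing, ask the type that declared it. Note also that SetValue takes the target object first, then the value, the reverse of the call quoted above:

        PropertyInfo propInfo = b.GetType().GetProperty("TestString");
        if (!propInfo.CanWrite)
        {
            // B's override declares no setter; A's declaration does.
            propInfo = typeof(A).GetProperty("TestString");
        }
        propInfo.SetValue(b, "test", null);   // target first, then value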


  • [C#] how to do Exception Handling & Tracing

    - by shrimpy
    Hi all, I am reading some C# books and got some exercises I don't know how to do, or am not sure what the question means.

    Problem: After working for a company for some time, your skills as a knowledgeable developer are recognized, and you are given the task of "policing" the implementation of exception handling and tracing in the source code (C#) for an enterprise application that is under constant incremental development. The two goals set by the product architect are:

    1. 100% of methods in the entire application must have at least a standard exception handler, using try/catch/finally blocks; more complex methods must also have additional exception handling for specific exceptions.
    2. All control flow code can optionally write "tracing" information to assist in debugging and instrumentation of the application at run-time in situations where traditional debuggers are not available (e.g. on staging and production servers).

    (I don't quite understand these criteria. I came from the Java world. Java has two kinds of exceptions, checked and unchecked: developers must handle checked exceptions and log them, while unchecked exceptions may still be logged, but most of the time we just throw them. Here in C#, what should I do?)

    Question for the problem: List the rules you would create for the development team to follow, and the ways in which you would enforce the rules, to achieve these goals. How would you go about ensuring that all existing code complies with the rules specified by the product architect; in particular, what considerations would impact your planning for the work to ensure all existing code complies?
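    As a concrete reading of goal 1, here is a sketch of what a "standard exception handler" plus tracing might look like using System.Diagnostics (the class, method, and exception choice are hypothetical, not from the exercise):

        using System;
        using System.Diagnostics;

        public class OrderService                       // hypothetical class
        {
            public void ProcessOrder(int orderId)       // hypothetical method
            {
                Trace.TraceInformation("ProcessOrder({0}) entered", orderId);
                try
                {
                    // ... actual work ...
                }
                catch (InvalidOperationException ex)
                {
                    // "More complex methods" handle specific exceptions first.
                    Trace.TraceError("ProcessOrder({0}) failed: {1}", orderId, ex);
                    throw;
                }
                catch (Exception ex)
                {
                    // The "standard" last-chance handler: log and rethrow.
                    Trace.TraceError("ProcessOrder({0}) unexpected: {1}", orderId, ex);
                    throw;
                }
                finally
                {
                    Trace.TraceInformation("ProcessOrder({0}) exiting", orderId);
                }
            }
        }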


  • Is it possible to write a program which prints its own source code utilizing a "sequence-generating-function"?

    - by guest
    Is it possible to write a program which prints its own source code utilizing a "sequence-generating function"? What I call a sequence-generating function is simply a function which returns a value out of a specific interval (i.e. printable ASCII characters, 32-126). The point is that this generated sequence should be the program's own source code. As you see, implementing a function which returns an arbitrary sequence is really trivial, but since the returned sequence must contain the implementation of the function itself, it is a highly non-trivial task. This is how such a program (and its corresponding output) might look:

        #include <stdio.h>

        int fun(int x)
        {
            ins1;
            ins2;
            ins3;
            .
            .
            .
            return y;
        }

        int main(void)
        {
            int i;
            for (i = 0; i < size of the program; i++) {
                printf("%c", fun(i));
            }
            return 0;
        }

    I personally think it is not possible, but since I don't know very much about the underlying matter, I posted my thoughts here. I'm really looking forward to hearing some opinions!
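    For what it's worth, self-reproducing programs (quines) do exist, which suggests the idea is possible in principle even if the table-driven fun(i) shape above is awkward. Below is a classic C quine offered as a sketch, not an answer in the asker's exact form; the comments are confined to this paragraph because any extra byte in the source would break the self-reproduction. The string f contains the whole program once, and printf re-inserts it into itself (34 is the ASCII code for a double quote, 10 for a newline):

        #include <stdio.h>
        const char*f="#include <stdio.h>%cconst char*f=%c%s%c;%cint main(void){printf(f,10,34,f,34,10,10);return 0;}%c";
        int main(void){printf(f,10,34,f,34,10,10);return 0;}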


  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me in designing code, especially libraries. There seems to be some interest in it, so I thought it might make a good community wiki.

    The fail method in Monad is considered by some to be a wart: a somewhat arbitrary addition to the class that does not come from the original category theory. But of course, in the current state of things, many Monad types have logical and useful fail instances.

    The MonadPlus class is a sub-class of Monad that provides an mzero method which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad, or restrict his code to the MonadPlus class just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in this wiki page about proposals to reform the MonadPlus class.

    So I guess I have one specific question: what monad instances, if any, have a natural fail method, but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks!

    EDIT: One final thought occurred to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern-match failures, as in

        (x:xs) <- return []

    call the monad's fail. It seems like the language designers must have been strongly influenced by the prospect of some automatic failure handling built into Haskell's syntax in their inclusion of fail in Monad.
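    The desugaring mentioned in the edit is easy to see in a small example (standard Prelude behaviour; the function itself is just an illustration): a pattern-match failure in a do block calls the monad's fail, which for Maybe produces Nothing.

        firstLine :: String -> Maybe String
        firstLine s = do
          (l:_) <- Just (lines s)  -- empty input makes lines return [], the
                                   -- pattern fails, and fail yields Nothing
          return l

        -- firstLine "hello\nworld" == Just "hello"
        -- firstLine ""             == Nothing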


  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question; there seem to be other similar questions on here, but none really clear up my confusion. I'd be really grateful if someone could confirm or correct my understanding:

    Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions, etc.). To do this, you can use an application server, which implements these APIs. So you could use JBoss, GlassFish, WebSphere, WebLogic, etc., which would provide your application with these enterprise services. However, there are many other implementations of these individual services available, such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB, etc. You can download these implementations as Java libraries, include them in your application, and use the services they provide in a similar way to using the services provided by an application server.

    So if my understanding is correct, my questions are:

    - I've read in a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all?
    - Are there any features that an application server provides which you cannot get from another source?
    - Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate?
    - Is Spring a complete alternative to JEE? Does it ever require an application server, or always just a servlet container? Why would someone choose Spring over JEE?

    Any help would be much appreciated!


  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi, is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixed-point implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions.

    Obviously, I can do something like:

        double clampedA;
        double a = calculate();
        clampedA = a > MY_MAX ? MY_MAX : a;
        clampedA = clampedA < MY_MIN ? MY_MIN : clampedA;

    or:

        double a = calculate();
        double clampedA = a;
        if (clampedA > MY_MAX)
            clampedA = MY_MAX;
        else if (clampedA < MY_MIN)
            clampedA = MY_MIN;

    The fixed-point version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for as efficient a way to do it as possible (which I suspect would involve bit manipulation).

    EDIT: It has to be standard/portable C; platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).
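    One portable option worth measuring, offered as a sketch rather than anything from the original thread: the standard C99 library functions fmin and fmax, which many compilers turn into branchless min/max instructions on hardware that has them.

        #include <math.h>

        /* Clamp x into [lo, hi]; relies on fmin/fmax rather than branches. */
        static inline double clamp_double(double x, double lo, double hi)
        {
            return fmax(lo, fmin(x, hi));
        }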


  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's "Reimplementing LINQ to Objects" series. In the implementation-of-Where article, I found the following snippets, but I don't get what advantage we are getting by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation, then deferring the rest. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validations are performed eagerly in one and not in the other?

    Conclusion/solution: I got confused due to my lack of understanding of which methods are determined to be iterator-generators. I assumed it was based on the signature of a method, like IEnumerable<T>. But, based on the answers, now I get it: a method is an iterator-generator if it uses yield statements.
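    A small demo of the difference (the sample values are assumed): because the naive version is itself an iterator block, none of its body, including the argument checks, runs until the first MoveNext(); the refactored version is an ordinary method, so it validates immediately.

        IEnumerable<int> source = new[] { 1, 2, 3 };

        // Refactored version: throws ArgumentNullException right here.
        var query = source.Where((Func<int, bool>)null);

        // Naive version: the line above would succeed, and the exception
        // would only surface at the first iteration, far from the real bug.
        foreach (int x in query) { /* ... */ }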


  • How to collect and inject all beans of a given type in Spring XML configuration

    - by GrzegorzOledzki
    One of the strongest points of the Spring framework is the Dependency Injection concept. I understand one of the advices behind it is to separate general high-level mechanism from low-level details (as announced by the Dependency Inversion Principle). Technically, that boils down to having a bean implementation know as little as possible about a bean being injected as a dependency, e.g.:

        public class PrintOutBean {
            private LogicBean logicBean;

            public void action() {
                System.out.println(logicBean.humanReadableDetails());
            }
            //...
        }

        <bean class="PrintOutBean">
            <property name="loginBean" ref="ShoppingCartBean"/>
        </bean>

    But what if I wanted to have a high-level mechanism operating on multiple dependent beans?

        public class MenuManagementBean {
            private Collection<Option> options;

            public void printOut() {
                for (Option option : options) {
                    // do something for option
                }
                //...
            }
        }

    I know one solution would be to use the @Autowired annotation in the singleton bean, that is:

        @Autowired
        private Collection<Option> options;

    But doesn't it violate the separation principle? Why do I have to specify which dependents to take in the very same place I use them (i.e. the MenuManagementBean class in my example)? Is there a way to inject collections of beans in the XML configuration like this (without any annotation in the MMB class)?

        <bean class="MenuManagementBean">
            <property name="options">
                <xxx:autowire by-type="MyOptionImpl"/>
            </property>
        </bean>


  • Silverlight Navigate: how does it work? How would you implement it in F# w/o VS wizards and helpers?

    - by akaphenom
    After a night's sleep, the problem can be stated more accurately as: I have a 100% F#/Silverlight implementation and am looking to use the built-in navigation components. C# creates Page.xaml and Page.xaml.cs; um, OK, but what is the relationship at a fundamental level? How would I go about doing this in F#? The application is loaded in the default module, and I pull the XAML in and reference it from the application object. Do I need to create instances/references to the pages from within the application object? Or set up some other page-management object with the proper name/value pairs? When all the help of VS is stripped away, what are we left with?

    Original post (for those who may be reading replies): I have a 100% Silverlight 3.0 / F# 2.0 application I am wrapping my brain around. I have the base application loading correctly, and now I want to add the navigation controls to it. My page is stored as an embedded resource, but Frame.Navigate takes a URI. I know what I have is wrong, but here it is:

        let nav : Frame = mainGrid ? mainFrame
        let url = "/page1.xaml"
        let uri = new System.Uri(url, System.UriKind.Relative)
        nav.Navigate uri

    Any thoughts?


  • Is there a way to load an existing connection string for Linq to SQL from an app.config file?

    - by Brian Surowiec
    I'm running into a really annoying problem with my LINQ to SQL project. When I add everything under the web project, everything goes as expected, and I can tell it to use my existing connection string stored in the web.config file; the LINQ code pulls directly from the ConfigurationManager.

    This all turns ugly once I move the code into its own project. I've created an app.config file and put the connection string in there as it was in the web.config, but when I try to add another table, the IDE keeps forcing me to either hardcode the connection string or it creates a Settings file and puts it in there, which then adds a new entry into the app.config file with a new name.

    Is there a way to keep my LINQ code in its own project yet still refer back to my config file, without the IDE continuously hardcoding the connection string or creating the Settings file? I'm converting part of my DAL over to use LINQ to SQL, so I'd like to use the existing connection string that our old code is using, as well as keep the value in a common location, in one spot instead of in a number of spots.

    Manually changing the mode to WebSettings instead of AppSettings works until I try to add a new table; then it goes back to hardcoding the value or recreating the Settings file. I also tried switching the project type to a web project and then renaming my app.config to web.config, and then everything works as I'd like it to. I'm just not sure if there are any downfalls to keeping this as a web project, since it really isn't one. The project only contains the LINQ to SQL code and an implementation of my repository classes.

    My project layout looks like this:

        Website
          - connectionString.config
          - web.config (refers to connectionString.config)
        Middle Tier
          - Business Logic
          - Repository Interfaces
          - etc.
        DAL
          - Linq to SQL code
          - Existing SPROC code
          - connectionString.config (linked from the web project)
          - app.config (refers to connectionString.config)


  • Java: Tracking a user login session - Session EJBs vs HTTPSession

    - by bguiz
    If I want to keep track of conversational state with each client using my web application, which is the better alternative to use: a Session Bean or an HTTP Session?

    Using HTTP Session:

        // request is a variable of the class javax.servlet.http.HttpServletRequest
        // UserState is a POJO
        HttpSession session = request.getSession(true);
        UserState state = (UserState) session.getAttribute("UserState");
        if (state == null) {
            // create default value ..
        }
        String uid = state.getUID();
        // now do things with the user id

    Using a Session EJB, in the implementation of ServletContextListener registered as a Web Application Listener in WEB-INF/web.xml:

        // UserState is NOT a POJO this time; it is
        // the interface of the UserStateBean stateful session EJB
        @EJB
        private UserState userStateBean;

        public void contextInitialized(ServletContextEvent sce) {
            ServletContext servletContext = sce.getServletContext();
            servletContext.setAttribute("UserState", userStateBean);
            ...
        }

    In a JSP:

        public void jspInit() {
            UserState state = (UserState) getServletContext().getAttribute("UserState");
            ...
        }

    Elsewhere in the body of the same JSP:

        String uid = state.getUID();
        // now do things with the user id

    It seems to me that they are almost the same, with the main difference being that the UserState instance is transported in the HttpSession in the former case and in the ServletContext in the latter. Which of the two methods is more robust, and why?


  • What is the simplest way to add multithreading to existing C# code?

    - by Kaeso
    I have already implemented a functional application that parses 26 pages of HTML all at once to produce an XML file with data contained on the web pages. I would need to implement a thread so that this method can work in the background without making my app seem unresponsive.

    Secondly, I have another function, decoupled from the first one, which compares two XML files to produce a third one and then transforms this third XML file to produce an HTML page using XSLT. This would have to be on a thread where I can click Cancel to stop the thread without crashing the app.

    What is the easiest, best way to do this using WPF forms in VS 2010? I have chosen to use the BackgroundWorker. BackgroundWorker implementation:

        public partial class MainWindow : Window
        {
            private BackgroundWorker bw = new BackgroundWorker();

            public MainWindow()
            {
                InitializeComponent();
                bw.WorkerReportsProgress = true;
                bw.DoWork += new DoWorkEventHandler(bw_DoWork);
                bw.RunWorkerCompleted += new RunWorkerCompletedEventHandler(backgroundWorker1_RunWorkerCompleted);
            }

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                this.LoadFiles();
            }

            private void btnCompare_Click(object sender, EventArgs e)
            {
                if (bw.IsBusy != true)
                {
                    progressBar2.IsIndeterminate = true;
                    // Start the asynchronous operation.
                    bw.RunWorkerAsync();
                }
            }

            private void bw_DoWork(object sender, DoWorkEventArgs e)
            {
                StatsProcessor proc = new StatsProcessor();
                if (lstStatsBox1.SelectedItem != null)
                    if (lstStatsBox2.SelectedItem != null)
                        proc.CompareStats(lstStatsBox1.SelectedItem.ToString(),
                                          lstStatsBox2.SelectedItem.ToString());
            }

            private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
            {
                progressBar2.IsIndeterminate = false;
                progressBar2.Value = 100;
            }
        }

    I have started with the BackgroundWorker solution, but it seems that the bw_DoWork method is never called when btnCompare is clicked. I must be doing something wrong... I am new to threads.
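    A hedged diagnosis and sketch, assumed from the code above rather than confirmed: WPF controls may only be touched from the UI thread, so reading lstStatsBox1.SelectedItem inside bw_DoWork throws an InvalidOperationException, and BackgroundWorker swallows exceptions from DoWork into e.Error, which makes it look as though DoWork never ran. Reading the selections on the UI thread and checking e.Error would confirm this:

        private void btnCompare_Click(object sender, RoutedEventArgs e)
        {
            if (!bw.IsBusy)
            {
                progressBar2.IsIndeterminate = true;
                // Capture UI state on the UI thread and hand it to the worker.
                var selections = Tuple.Create(
                    lstStatsBox1.SelectedItem == null ? null : lstStatsBox1.SelectedItem.ToString(),
                    lstStatsBox2.SelectedItem == null ? null : lstStatsBox2.SelectedItem.ToString());
                bw.RunWorkerAsync(selections);
            }
        }

        private void bw_DoWork(object sender, DoWorkEventArgs e)
        {
            var selections = (Tuple<string, string>)e.Argument;
            if (selections.Item1 != null && selections.Item2 != null)
                new StatsProcessor().CompareStats(selections.Item1, selections.Item2);
        }

        private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            if (e.Error != null)
                MessageBox.Show(e.Error.ToString());   // surface any swallowed exception
            progressBar2.IsIndeterminate = false;
            progressBar2.Value = 100;
        }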

