Search Results

Search found 19458 results on 779 pages for 'interface implementation'.


  • Chrome-like status bar in Qt

    - by hasen j
    I'm not big on creating GUIs, and generally my philosophy is: I don't create them, or I make them as simple as possible (and convince myself that it's better for usability :). For my current project, I'm using Qt from Python (PyQt), and I want to start adding some GUI elements without cluttering the interface. My idea is to create these elements as floating widgets that only appear when necessary, much like the status bar (and find bar) in Chrome. Is there any standard API that enables creating this kind of interface?
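
    For reference, a minimal sketch of one way to approximate this in PyQt4: a small child widget overlaid on the main window, shown and hidden on demand. The class, styling and geometry below are illustrative assumptions, not a standard Qt API.

        from PyQt4 import QtGui

        class FloatingStatusBar(QtGui.QLabel):
            """A Chrome-style status bubble: a child widget pinned to the
            bottom-left corner of its parent, hidden until needed."""

            def __init__(self, parent):
                super(FloatingStatusBar, self).__init__(parent)
                self.setStyleSheet("background: #ffffe1; border: 1px solid gray; padding: 2px;")
                self.hide()

            def show_message(self, text):
                self.setText(text)
                self.adjustSize()
                # pin to the parent's bottom-left corner, like Chrome's status bubble
                self.move(0, self.parent().height() - self.height())
                self.show()

    Usage inside a QMainWindow subclass would be along the lines of self.status = FloatingStatusBar(self), then self.status.show_message("Loading...") and self.status.hide() as needed.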

    Read the article

  • ATL and types from scrrun.dll

    - by MaxFX
    Hello. I have an interface in an ATL project that must contain a method with a parameter of type Scripting::IDictionary**, but I can't express that in the MIDL file describing my interface, because the Scripting library is not among the default imports. I do have scrrun.tlb and have tried to use it from MIDL, but it doesn't work. Code is here: midl-code
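
    For reference, a hedged sketch of the usual way to pull a type library into an IDL file: an importlib in the library block. Whether this resolves Scripting::IDictionary for the interface in this particular project is an assumption; the names below are placeholders.

        library MyAtlProjectLib
        {
            importlib("stdole2.tlb");
            importlib("scrrun.tlb");    // Scripting.Dictionary, Scripting.FileSystemObject, ...

            // interface and coclass definitions that take an IDictionary* can then
            // reference the type, e.g. HRESULT SetSettings([in] IDictionary* settings);
        };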

    Read the article

  • Multiple WCF windows services on the same box - endpoint configuration

    - by David Belanger
    Hi, I have 2 windows services installed on a machine with different service names, they install and start fine. What's happening is that they're both listening to the same endpoints and thus competing for messages. I've tried to change the baseAddress to be different for both services without success. Here's my service host config:

        <configuration>
          <appSettings>
            <add key="ServiceName" value="Service - Service Host 1"/>
          </appSettings>
          <system.serviceModel>
            <bindings>
              <wsHttpBinding>
                <binding name="NoSecurityBinding">
                  <security mode="None">
                    <message establishSecurityContext="false"/>
                    <transport clientCredentialType="None"/>
                  </security>
                </binding>
              </wsHttpBinding>
              <basicHttpBinding>
                <binding name="NoSecurityBinding">
                  <security mode="None">
                    <transport clientCredentialType="None"/>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <services>
              <service name="Lib.Interface.Service" behaviorConfiguration="Lib.Interface.ServiceBehavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8000/Service"/>
                  </baseAddresses>
                </host>
                <endpoint address="" binding="basicHttpBinding" bindingConfiguration="NoSecurityBinding" contract="Lib.Interface.IService"/>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="Lib.Interface.ServiceBehavior">
                  <serviceMetadata httpGetEnabled="True" policyVersion="Policy12"/>
                  <serviceDebug includeExceptionDetailInFaults="False"/>
                </behavior>
              </serviceBehaviors>
            </behaviors>

    Any idea how I could set up the services (other than unique service names) so they're not conflicting with one another? Thanks.

    Read the article

  • Is there any boost-independent version of boost/tr1 shared_ptr?

    - by Artyom
    I'm looking for an independent implementation of boost/tr1 shared_ptr, weak_ptr and enable_shared_from_this. I need:
    - A Boost-independent, very small implementation of these features.
    - Support only for modern compilers like GCC 4.x, MSVC 2008 and Intel, not for the likes of MSVC 6 or GCC 3.3.
    - A non-copyleft, LGPL-compatible license like Boost/MIT/3-clause BSD, so I can include it in my library.
    Note: it is quite hard to extract shared_ptr from Boost; at the least, BCP gives about 324 files...
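
    For a sense of scale, a minimal single-threaded sketch of a reference-counted pointer, only to illustrate how small a standalone core can be; it is not a drop-in replacement (no weak_ptr, no enable_shared_from_this, no custom deleters, not thread-safe).

        template <typename T>
        class shared_ptr_lite {
        public:
            explicit shared_ptr_lite(T* p = 0) : ptr_(p), count_(new long(1)) {}

            shared_ptr_lite(const shared_ptr_lite& other)
                : ptr_(other.ptr_), count_(other.count_) { ++*count_; }

            // copy-and-swap keeps assignment simple and exception-safe
            shared_ptr_lite& operator=(shared_ptr_lite other) { swap(other); return *this; }

            ~shared_ptr_lite() {
                if (--*count_ == 0) { delete ptr_; delete count_; }
            }

            T& operator*() const  { return *ptr_; }
            T* operator->() const { return ptr_; }
            T* get() const        { return ptr_; }

            void swap(shared_ptr_lite& other) {
                T* p = ptr_;      ptr_ = other.ptr_;     other.ptr_ = p;
                long* c = count_; count_ = other.count_; other.count_ = c;
            }

        private:
            T*    ptr_;
            long* count_;
        };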

    Read the article

  • In Objective-C, is there a way to reference a typedef enum from another class?

    - by Adrian Harris Crowne
    It is my understanding that typedef enums are globally scoped, but if I create an enum outside of the @interface in RandomViewController.h, I can't figure out how to access it from OtherViewController.m. Is there a way to do this? So...

    RandomViewController.h:

        #import <UIKit/UIKit.h>

        typedef enum {
            EnumOne,
            EnumTwo
        } EnumType;

        @interface RandomViewController : UIViewController {
        }

    and then OtherViewController.m:

        - (void)checkArray {
            BOOL inArray = [randomViewController checkArray:(EnumType)EnumOne];
        }
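
    For reference, a hedged sketch: the typedef is visible wherever its header is imported, so importing RandomViewController.h from the other class (or moving the enum into its own shared header) should be enough. randomViewController is assumed to be an ivar or property set elsewhere, as in the question.

        // OtherViewController.m
        #import "OtherViewController.h"
        #import "RandomViewController.h"   // brings EnumType, EnumOne, EnumTwo into scope

        @implementation OtherViewController

        - (void)checkArray {
            BOOL inArray = [randomViewController checkArray:EnumOne];   // no cast needed
            NSLog(@"inArray: %d", inArray);
        }

        @end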

    Read the article

  • Mocking a WCF ServiceContract

    - by Michael
    I want to mock a ServiceContract. The problem is that Moq (and Castle DynamicProxy) copies the attributes from the interface onto the dynamic proxy, which WCF doesn't like. WCF says: the ServiceContractAttribute should only be defined on either the interface or the implementation, not both.

    Read the article

  • Possible to split one JAX-WS service across multiple source files?

    - by Rob S.
    Hi everyone, Is it possible to split a web service into multiple classes and still provide a single path to the web service? I know the configuration below isn't possible because of the duplicate url-pattern values; it just illustrates where we're wanting to go :)

        <endpoint name="OneBigService" implementation="SmallImpl1" url-pattern="/OneBigService"/>
        <endpoint name="OneBigService" implementation="SmallImpl2" url-pattern="/OneBigService"/>

    Basically, how do I avoid having one monolithic @WebService class? Thanks! Rob
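
    For reference, a hedged sketch of one common workaround: keep a single JAX-WS endpoint class and delegate to smaller plain classes. SmallImpl1/SmallImpl2 and the operations shown are illustrative names, not an existing API.

        import javax.jws.WebMethod;
        import javax.jws.WebService;

        @WebService(serviceName = "OneBigService")
        public class OneBigService {

            private final SmallImpl1 part1 = new SmallImpl1();
            private final SmallImpl2 part2 = new SmallImpl2();

            @WebMethod
            public String operationFromPart1(String input) {
                return part1.handle(input);   // real logic lives in SmallImpl1
            }

            @WebMethod
            public String operationFromPart2(String input) {
                return part2.handle(input);   // real logic lives in SmallImpl2
            }
        }

        // the delegates are ordinary classes, free to live in separate source files
        class SmallImpl1 { String handle(String s) { return "part1: " + s; } }
        class SmallImpl2 { String handle(String s) { return "part2: " + s; } }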

    Read the article

  • Dependency injection with WCF

    - by Diego Dias
    I want to inject an implementation of my interface into the WCF service, but I want to initialize my dependency-injection container on the client side of the WCF service, so that I can have a different implementation for each client of my service. Help me, please.

    Read the article

  • Scale down WebView content in an iPhone app

    - by TechFusion
    Hello, I am looking to display multiple pieces of web content in one main view. I have created several WebViews and set their heights and widths so they all fit in one view in landscape mode. How do I scale down the web view content so the data shows at a smaller size (matching each WebView's height and width)? I have built the application using Interface Builder. The view is wired to one tab of a tab bar controller. Thanks,
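
    For reference, a hedged sketch using UIWebView's scalesPageToFit, which shrinks page content to the view's bounds; the frame, URL and containerView are illustrative, and the explicit release matches the pre-ARC era of the question.

        UIWebView *thumbView = [[UIWebView alloc] initWithFrame:CGRectMake(0, 0, 160, 120)];
        thumbView.scalesPageToFit = YES;   // scale the page down to the view's bounds
        [thumbView loadRequest:[NSURLRequest requestWithURL:
            [NSURL URLWithString:@"http://example.com"]]];
        [containerView addSubview:thumbView];
        [thumbView release];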

    Read the article

  • Java interfaces

    - by Codenotguru
    Write an interface with one method, two classes that implement the interface, and a main method with an array holding an instance of both classes. Using this array variable, call the method in a for-each loop. This is a Java interview question; can anybody help?
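
    For reference, a minimal sketch of what the question describes; all names are made up for illustration.

        interface Greeter {
            String greet();
        }

        class EnglishGreeter implements Greeter {
            public String greet() { return "Hello"; }
        }

        class FrenchGreeter implements Greeter {
            public String greet() { return "Bonjour"; }
        }

        public class GreeterDemo {
            public static void main(String[] args) {
                // one instance of each implementing class
                Greeter[] greeters = { new EnglishGreeter(), new FrenchGreeter() };
                for (Greeter g : greeters) {          // for-each over the array
                    System.out.println(g.greet());    // dispatches to each implementation
                }
            }
        }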

    Read the article

  • Python (Django). Store telnet connection

    - by Shamanu4
    Hello. I am programming a web interface which communicates with Cisco switches via telnet. I want to build a system that keeps one telnet connection per switch, which every script (web interface, cron jobs, etc.) can use. This is needed to create a single query queue per device and to prevent the heavy CPU load on the Cisco caused by several concurrent telnet connections. How can I do this?
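
    For reference, a hedged sketch of the single-owner idea within one long-running Python 2 process: a worker per switch owns the telnet session and serializes commands through a queue. Sharing it across separate processes (web workers plus cron) would additionally need an IPC front end such as a small daemon; the host and prompt handling below are placeholders.

        import telnetlib
        import threading
        import Queue  # Python 2 standard library

        class SwitchWorker(threading.Thread):
            """Owns exactly one telnet session and serializes all commands to it."""

            def __init__(self, host):
                threading.Thread.__init__(self)
                self.daemon = True
                self.requests = Queue.Queue()
                self.conn = telnetlib.Telnet(host)

            def run(self):
                while True:
                    command, reply_queue = self.requests.get()
                    self.conn.write(command + "\n")
                    reply_queue.put(self.conn.read_until("#"))  # crude prompt match

            def execute(self, command):
                reply_queue = Queue.Queue()
                self.requests.put((command, reply_queue))
                return reply_queue.get()

    A caller would do something like worker = SwitchWorker("10.0.0.1"); worker.start(); output = worker.execute("show version").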

    Read the article

  • Core-Data Can't pass managedObjectContext from app delegate to view controller

    - by yahuie
    I'm making a Core Data application that is view-based. I can create the managedObjectContext and 'use' it in the app delegate, but cannot pass it to the MainViewController. Probably something simple, but I can't find the problem after looking for quite a while. The managedObjectModel is nil in the MainViewController. The log and error are here:

        2010-06-02 11:01:10.504 TestCoreData[404:207] Could not make MOC in MainViewController implementation.
        2010-06-02 11:01:10.505 TestCoreData[404:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '+entityForName: could not locate an NSManagedObjectModel for entity name 'Shelf''
        2010-06-02 11:01:10.506 TestCoreData[404:207] Stack: ( 30864475, 2452296969, 28852395, 12038, 3217218, 10258, 2700679, 2738614, 2726708, 2709119, 2736225, 38960473, 30649216, 30645320, 2702869, 2740143, 9704, 9558 )
        (gdb)

    Code for the app delegate here:

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            MainViewController *aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil];
            self.mainViewController = aController;
            [aController release];

            NSLog(@"before did finish launching");
            if (self.managedObjectContext == nil) {
                NSLog(@"Could not make MOC in TestCoreDataAppDelegate implementation.");
            }

            // Just to see if I can access it here.
            NSFetchRequest* request = [[NSFetchRequest alloc] init];
            NSEntityDescription* entity = [NSEntityDescription entityForName:@"Shelf" inManagedObjectContext:self.managedObjectContext];
            [request setEntity:entity];
            if (!entity) {
                NSLog(@"No Entity in TestCoreDataAppDelegate didfinishlaunching");
            }
            NSLog(@"passed did finish launching");

            NSManagedObjectContext *context = [self managedObjectContext];
            self.mainViewController.view.frame = [UIScreen mainScreen].applicationFrame;
            self.mainViewController.managedObjectContext = context;
            [context release];

            [window addSubview:[mainViewController view]];
            [window makeKeyAndVisible];
            return YES;
        }

    Code in MainViewController here:

        @implementation MainViewController

        @synthesize managedObjectContext;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if ((self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])) {
                // Custom initialization
            }
            return self;
        }

        - (void)viewDidLoad {
            [super viewDidLoad];
            if (self.managedObjectContext == nil) {
                NSLog(@"Could not make MOC in MainViewController implementation.");
            }
            NSFetchRequest* request = [[NSFetchRequest alloc] init];
            NSEntityDescription* entity = [NSEntityDescription entityForName:@"Shelf" inManagedObjectContext:self.managedObjectContext];
            [request setEntity:entity];
        }

    Thanks.

    Read the article

  • Bing image search like endless scroll in Flex

    - by Tee
    Hi, I want to know if there is any existing implementation of 'endless scroll' in Flex? I searched online but could only find implementations in JavaScript. What would be the pros/cons of having an 'endless scroll' component in Flex, and which would be more efficient, JS or Flex? Thanks

    Read the article

  • Constructor versus setter injection

    - by Chris
    Hi, I'm currently designing an API where I wish to allow configuration via a variety of methods. One method is via an XML configuration schema, and another is through an API that I want to play nicely with Spring. My XML schema parsing code was previously hidden, so the only concern was that it worked; now I wish to build a public API and I'm quite concerned about best practice. It seems that many favor JavaBean-style POJOs with default zero-argument constructors and setter injection. The problem I am trying to tackle is that some setter implementations depend on other setters having been called before them. I could write defensive setters that tolerate being called in many orders, but that will not solve the problem of a user forgetting to call the appropriate setter and leaving the bean in an incomplete state. The only solution I can think of is to forget about the objects being 'beans' and enforce the required parameters via constructor injection. An example of this is defaulting the id of a component based on the id of its parent component.

    My interface:

        public interface IMyIdentityInterface {

            public String getId();

            /* A null value should create a unique meaningful default */
            public void setId(String id);

            public IMyIdentityInterface getParent();

            public void setParent(IMyIdentityInterface parent);
        }

    Base implementation of the interface:

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {

            private String _id;
            private IMyIdentityInterface _parent;

            public MyIdentityBaseClass() {}

            @Override
            public String getId() {
                return _id;
            }

            /**
             * If the id is null, then use the id of the parent component
             * appended with a lower-cased simple name of the current impl
             * class along with a counter suffix to enforce uniqueness
             */
            @Override
            public void setId(String id) {
                if (id == null) {
                    IMyIdentityInterface parent = getParent();
                    if (parent == null) {
                        // this may be the top level component or it may be that
                        // the user called setId() before setParent(..)
                    } else {
                        _id = Helpers.makeIdFromParent(parent, getClass());
                    }
                } else {
                    _id = id;
                }
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }

            @Override
            public void setParent(IMyIdentityInterface parent) {
                _parent = parent;
            }
        }

    Every component in the framework will have a parent except for the top-level component. Using setter injection, the setters will behave differently depending on the order in which they are called. In this case, would you agree that a constructor taking a reference to the parent is better, and that the parent setter method should be dropped from the interface entirely? Is it considered bad practice if I wish to be able to configure these components using an IoC container? Chris
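
    For comparison, a hedged sketch of the constructor-injection variant being weighed here, assuming the parent setter is dropped from IMyIdentityInterface as proposed; Helpers.makeIdFromParent is the question's own helper.

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {

            private String _id;
            private final IMyIdentityInterface _parent;

            protected MyIdentityBaseClass(IMyIdentityInterface parent) {
                _parent = parent;   // null only for the top-level component
            }

            @Override
            public String getId() {
                return _id;
            }

            @Override
            public void setId(String id) {
                // the default no longer depends on setter ordering
                if (id == null && _parent != null) {
                    _id = Helpers.makeIdFromParent(_parent, getClass());
                } else {
                    _id = id;
                }
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }
        }

    Note that containers such as Spring can populate constructor arguments (constructor-arg in XML, or annotated constructors), so requiring the parent this way does not by itself rule out IoC configuration.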

    Read the article

  • How do I recover from an unchecked exception?

    - by erickson
    Unchecked exceptions are alright if you want to handle every failure the same way, for example by logging it and skipping to the next request, displaying a message to the user and handling the next event, etc. If this is my use case, all I have to do is catch some general exception type at a high level in my system, and handle everything the same way. But I want to recover from specific problems, and I'm not sure the best way to approach it with unchecked exceptions. Here is a concrete example. Suppose I have a web application, built using Struts2 and Hibernate. If an exception bubbles up to my "action", I log it, and display a pretty apology to the user. But one of the functions of my web application is creating new user accounts, which require a unique user name. If a user picks a name that already exists, Hibernate throws an org.hibernate.exception.ConstraintViolationException (an unchecked exception) down in the guts of my system. I'd really like to recover from this particular problem by asking the user to choose another user name, rather than giving them the same "we logged your problem but for now you're hosed" message. Here are a few points to consider:

    - There are a lot of people creating accounts simultaneously. I don't want to lock the whole user table between a "SELECT" to see if the name exists and an "INSERT" if it doesn't. In the case of relational databases, there might be some tricks to work around this, but what I'm really interested in is the general case where pre-checking for an exception won't work because of a fundamental race condition. The same thing could apply to looking for a file on the file system, etc.
    - Given my CTO's propensity for drive-by management induced by reading technology columns in "Inc.", I need a layer of indirection around the persistence mechanism so that I can throw out Hibernate and use Kodo, or whatever, without changing anything except the lowest layer of persistence code. As a matter of fact, there are several such layers of abstraction in my system. How can I prevent them from leaking in spite of unchecked exceptions?
    - One of the declaimed weaknesses of checked exceptions is having to "handle" them in every call on the stack, either by declaring that a calling method throws them, or by catching them and handling them. Handling them often means wrapping them in another checked exception of a type appropriate to the level of abstraction. So, for example, in checked-exception land, a file-system-based implementation of my UserRegistry might catch IOException, while a database implementation would catch SQLException, but both would throw a UserNotFoundException that hides the underlying implementation.

    How do I take advantage of unchecked exceptions, sparing myself of the burden of this wrapping at each layer, without leaking implementation details?
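
    For reference, a hedged sketch of one common approach: translate the unchecked persistence exception into a still-unchecked but domain-specific exception at the lowest layer, so upper layers can catch something meaningful without Hibernate leaking through. Class and method names are illustrative, loosely following the question's UserRegistry example.

        // the abstraction layer the question describes (this method is an assumption)
        interface UserRegistry {
            void createUser(String userName);
        }

        public class HibernateUserRegistry implements UserRegistry {

            public void createUser(String userName) {
                try {
                    insertUserRow(userName);   // Hibernate session.save(...) etc. omitted
                } catch (org.hibernate.exception.ConstraintViolationException e) {
                    // the unique-name constraint fired: surface a recoverable, domain-level error
                    throw new DuplicateUserNameException(userName, e);
                }
            }

            private void insertUserRow(String userName) {
                // persistence-specific code lives here, behind the abstraction
            }
        }

        // Unchecked, so intermediate layers need not declare it, but specific enough
        // for the web layer to catch and re-prompt the user for another name.
        class DuplicateUserNameException extends RuntimeException {
            DuplicateUserNameException(String userName, Throwable cause) {
                super("User name already taken: " + userName, cause);
            }
        }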

    Read the article

  • Lollipop notation in Rational Software Architect

    - by mfrank
    Hi, I am using IBM Rational® Software Architect™ for WebSphere® Software, version 7.5.2. In a component diagram I would really like to use the lollipop notation rather than the stereotyped interface notation for a provided interface. Any tips on whether this is possible in RSA? BR /M

    Read the article

  • Ninject with Object Initializers and LINQ

    - by Alexander Kahoun
    I'm new to Ninject, so what I'm trying may not even be possible, but I wanted to ask. I free-handed the code below so there may be typos. Let's say I have an interface:

        public interface IPerson {
            string FirstName { get; set; }
            string LastName { get; set; }
            string GetFullName();
        }

    And a concrete implementation:

        public class Person : IPerson {
            public string FirstName { get; set; }
            public string LastName { get; set; }

            public string GetFullName() {
                return String.Concat(FirstName, " ", LastName);
            }
        }

    What I'm used to doing is something like this when I'm retrieving data from arrays or XML:

        public IEnumerable<IPerson> GetPeople(string xml) {
            XElement persons = XElement.Parse(xml);

            IEnumerable<IPerson> people = (
                from person in persons.Descendants("person")
                select new Person {
                    FirstName = person.Attribute("FName").Value,
                    LastName = person.Attribute("LName").Value
                }).ToList();

            return people;
        }

    I don't want to tightly couple the concrete class to the interface in this manner. I haven't been able to find any information on using Ninject with LINQ to Objects or with object initializers. I may be looking in the wrong places, but I've been searching for a day now with no luck at all. I was contemplating putting the kernel into a singleton instance and seeing if that would work, but I'm not sure it will, plus I've heard that passing your kernel around is a bad thing. I'm trying to implement this in a class library. If this is not possible, does anyone have any examples or suggestions as to what the best practice is in this case? Thanks in advance for the help.

    EDIT: Based on some of the answers I feel I should clarify. Yes, the example above appears short-lived, but it was simply an example of one piece that I was trying to do. For a bigger picture, say that instead of XML I am gathering all my data through a 3rd-party web service and I'm creating an interface for it; the data could be a defined object in the WSDL, or it could sometimes be an XML string. IPerson could be used for both the Person object and a User object. I will be doing this inside a separate class library, because it needs to be portable and will be used in other projects, and handing the results to an MVC3 web application, where the objects will also be used in JavaScript. I appreciate all the input so far.
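
    For reference, a hedged sketch of one way to keep the LINQ projection off the concrete type: let the kernel bind IPerson and inject a Func<IPerson> factory into the parsing class. The factory delegate, the binding lines and the class name are assumptions, not the only way Ninject can be wired.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public class XmlPeopleParser
        {
            private readonly Func<IPerson> _personFactory;

            public XmlPeopleParser(Func<IPerson> personFactory)
            {
                _personFactory = personFactory;   // injected; no reference to Person here
            }

            public IEnumerable<IPerson> GetPeople(string xml)
            {
                XElement persons = XElement.Parse(xml);

                return (from person in persons.Descendants("person")
                        let p = _personFactory()          // the kernel decides the concrete type
                        select Populate(p,
                                        person.Attribute("FName").Value,
                                        person.Attribute("LName").Value)).ToList();
            }

            private static IPerson Populate(IPerson p, string first, string last)
            {
                p.FirstName = first;
                p.LastName = last;
                return p;
            }
        }

        // Binding sketch for the composition root, e.g.:
        //   kernel.Bind<IPerson>().To<Person>();
        //   kernel.Bind<Func<IPerson>>().ToMethod(ctx => () => ctx.Kernel.Get<IPerson>());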

    Read the article

  • Does anyone know what happens if you do not implement IEquatable when using generic collections?

    - by ChloeRadshaw
    I asked a question here: http://stackoverflow.com/questions/2476793/when-to-use-iequatable-and-why about using IEquatable. From MSDN: "The IEquatable(T) interface is used by generic collection objects such as Dictionary(TKey, TValue), List(T), and LinkedList(T) when testing for equality in such methods as Contains, IndexOf, LastIndexOf, and Remove." If you don't implement that interface, what exactly happens? An exception, the default Object.Equals, or reference equality?
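
    For reference, a hedged sketch illustrating the behavior: with no IEquatable<T> (and no Equals override), the generic collections fall back to EqualityComparer<T>.Default, which ends up using Object.Equals, i.e. reference equality for a class like this. No exception is thrown. The Point class is made up for illustration.

        using System;
        using System.Collections.Generic;

        class Point
        {
            public int X, Y;
            public Point(int x, int y) { X = x; Y = y; }
        }

        class Program
        {
            static void Main()
            {
                var list = new List<Point> { new Point(1, 2) };

                // a different instance with the same values: compared by reference
                Console.WriteLine(list.Contains(new Point(1, 2)));  // False

                // the exact same instance: found
                Console.WriteLine(list.Contains(list[0]));          // True
            }
        }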

    Read the article

  • Is there a Base64Stream for .NET? where?

    - by Cheeso
    If I want to produce Base64-encoded output, how would I do that in .NET? I know that since .NET 2.0 there is the ICryptoTransform interface, and the ToBase64Transform() and FromBase64Transform() implementations of that interface. But those classes live in the System.Security.Cryptography namespace and require the use of TransformBlock, TransformFinalBlock, and so on. Is there an easier way to Base64-encode a stream of data in .NET?
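
    For reference, a hedged sketch of the stream-oriented route: CryptoStream accepts any ICryptoTransform, so ToBase64Transform can wrap an ordinary stream and encode data as it is written, without calling TransformBlock directly. The strings and the choice of MemoryStream are illustrative.

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        class Base64StreamDemo
        {
            static void Main()
            {
                using (var output = new MemoryStream())
                {
                    using (var base64 = new CryptoStream(output, new ToBase64Transform(),
                                                         CryptoStreamMode.Write))
                    {
                        byte[] data = Encoding.UTF8.GetBytes("Hello, world");
                        base64.Write(data, 0, data.Length);
                    }   // disposing flushes the final (padded) block

                    Console.WriteLine(Encoding.ASCII.GetString(output.ToArray()));
                }
            }
        }

    For small in-memory buffers, Convert.ToBase64String(data) is simpler still.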

    Read the article

  • What are the licence restrictions for the RxTx Library

    - by Azder
    I want to make an application that uses RxTx version 2.2pre2 to work with serial ports. What are the licence restrictions, given that it is licenced as "LGPL v 2.1 + Linking Over Controlled Interface", if I don't use Sun's javax.comm.* interface but import RxTx's own gnu.io.* in my Java files?

    Read the article

  • Interoperability between two AES implementations

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try and understand the basics of it. I'm not trying to build the algorithms from scratch, but I'm trying to make two different AES-256 implementations talk to each other. I've got a database that was populated with this Javascript implementation, stored in Base64. Now I'm trying to get an Objective-C method to decrypt its content, but I'm a little lost as to where the differences in the implementations are. I'm able to encrypt/decrypt in Javascript and I'm able to encrypt/decrypt in Cocoa, but I cannot make a string encrypted in Javascript decrypt in Cocoa or vice versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation, or all of these, which quite frankly doesn't speak to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this:

        @implementation NSString (Crypto)

        - (NSString *)encryptAES256:(NSString *)key {
            NSData *input = [self dataUsingEncoding:NSUTF8StringEncoding];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE];
            return [Base64 encode:output];
        }

        - (NSString *)decryptAES256:(NSString *)key {
            NSData *input = [Base64 decode:self];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE];
            return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease];
        }

        + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [input length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void* buffer = malloc(bufferSize);

            size_t numBytesCrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt,
                                                  kCCAlgorithmAES128,
                                                  kCCOptionECBMode | kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  nil, // initialization vector (optional)
                                                  [input bytes], dataLength, // input
                                                  buffer, bufferSize,        // output
                                                  &numBytesCrypted);
            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted];
            }

            free(buffer); // free the buffer;
            return nil;
        }

        @end

    Of course, the input is Base64-decoded beforehand. I see that each encryption with the same key and same content in Javascript gives a different encrypted string, which is not the case with the Objective-C implementation, which always gives the same encrypted string. I've read the answers to this post and it makes me believe I'm right about something along the lines of vector initialization, but I'd need your help to pinpoint what's going on exactly. Thank you!

    Read the article

  • ExecutorService that interrupts tasks after a timeout

    - by scompt.com
    I'm looking for an ExecutorService implementation that can be provided with a timeout. Tasks that are submitted to the ExecutorService are interrupted if they take longer than the timeout to run. Implementing such a beast isn't such a difficult task, but I'm wondering if anybody knows of an existing implementation.
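
    For reference, a hedged sketch of the usual do-it-yourself wrapper: submit the task, wait on the Future with a timeout, and cancel (interrupt) it if it overruns. The task body and timeout values are illustrative.

        import java.util.concurrent.*;

        public class TimeoutExecutorDemo {
            public static void main(String[] args) throws InterruptedException {
                ExecutorService executor = Executors.newSingleThreadExecutor();

                Future<String> future = executor.submit(new Callable<String>() {
                    public String call() throws Exception {
                        Thread.sleep(5000);          // pretend to be a slow task
                        return "done";
                    }
                });

                try {
                    System.out.println(future.get(2, TimeUnit.SECONDS));
                } catch (TimeoutException e) {
                    future.cancel(true);             // interrupts the running task
                    System.out.println("task timed out and was interrupted");
                } catch (ExecutionException e) {
                    System.out.println("task failed: " + e.getCause());
                } finally {
                    executor.shutdown();
                }
            }
        }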

    Read the article

  • Can IMAPI2 burn files larger than 4 GB?

    - by Sergey Skoblikov
    IMAPI2's IFileSystem interface uses COM IStream interfaces to represent file data. There is an AddTree method that adds the contents of a specified directory to the IFileSystem, so AddTree must create IStreams in the process. I wonder which implementation of IStream it uses. If it uses the standard OLE implementation, then we have a nasty problem, because OLE streams don't support files bigger than 4 GB. Can anyone shed some light on this issue?

    Read the article
