Search Results

Search found 8397 results on 336 pages for 'implementation'.

Page 259/336 | < Previous Page | 255 256 257 258 259 260 261 262 263 264 265 266  | Next Page >

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project where a single producer thread writes events into a buffer, and an additional single consumer thread takes events from the buffer. My goal is to optimize this for a single machine to achieve maximum throughput. Currently I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread, so each pointer is only updated by a single thread).

        #define BUF_SIZE 32768

        struct buf_t {
            volatile int writepos;
            volatile void * buffer[BUF_SIZE];
            volatile int readpos;
        };

        void produce (buf_t *b, void * e) {
            int next = (b->writepos+1) % BUF_SIZE;
            while (b->readpos == next); // queue is full. wait
            b->buffer[b->writepos] = e;
            b->writepos = next;
        }

        void * consume (buf_t *b) {
            while (b->readpos == b->writepos); // nothing to consume. wait
            int next = (b->readpos+1) % BUF_SIZE;
            void * res = b->buffer[b->readpos];
            b->readpos = next;
            return res;
        }

        buf_t *alloc () {
            buf_t *b = (buf_t *)malloc(sizeof(buf_t));
            b->writepos = 0;
            b->readpos = 0;
            return b;
        }

    However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speedup. Additionally, I've moved writepos before the buffer and readpos after it, to ensure that both variables are on different cache lines, which also gave some speedup. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding etc.?
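
    One common direction for this kind of single-producer/single-consumer queue, sketched below in C++11 under names of my own (this is not the poster's code, just the usual shape of the fix): pad each index onto its own cache line and have each thread keep a cached copy of the other thread's index, so the hot path only touches the remote cache line when the buffer looks full or empty.

        #include <atomic>
        #include <cstddef>

        #define BUF_SIZE 32768  // power of two, so wrap-around is a cheap mask

        struct spsc_ring {
            // each index on its own 64-byte cache line to avoid false sharing
            alignas(64) std::atomic<std::size_t> writepos{0};
            alignas(64) std::atomic<std::size_t> readpos{0};
            // each thread's cached view of the *other* index, again on own lines
            alignas(64) std::size_t cached_read{0};   // used by the producer only
            alignas(64) std::size_t cached_write{0};  // used by the consumer only
            void *buffer[BUF_SIZE];
        };

        void produce(spsc_ring &b, void *e) {
            std::size_t w = b.writepos.load(std::memory_order_relaxed);
            std::size_t next = (w + 1) & (BUF_SIZE - 1);
            while (next == b.cached_read)  // re-read readpos only when it looks full
                b.cached_read = b.readpos.load(std::memory_order_acquire);
            b.buffer[w] = e;
            b.writepos.store(next, std::memory_order_release);
        }

        void *consume(spsc_ring &b) {
            std::size_t r = b.readpos.load(std::memory_order_relaxed);
            while (r == b.cached_write)  // re-read writepos only when it looks empty
                b.cached_write = b.writepos.load(std::memory_order_acquire);
            void *res = b.buffer[r];
            b.readpos.store((r + 1) & (BUF_SIZE - 1), std::memory_order_release);
            return res;
        }

    The acquire/release pairs also give the cross-thread ordering guarantees that volatile alone does not promise in C/C++.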

    Read the article

  • forward invocation, by hand vs magically?

    - by John Smith
    I have the following two classes:

        //file FruitTree.h
        @interface FruitTree : NSObject {
            Fruit * f;
            Leaf * l;
        }
        @end

        //file FruitTree.m
        @implementation FruitTree
        //here I get the number of seeds from the object f
        @end

        //file Fruit
        @interface Fruit : NSObject {
            int seeds;
        }
        -(int) countfruitseeds;
        @end

    My question is about how I request the number of seeds from f. I have two choices. Either: since I know f, I can explicitly call it, i.e. I implement the method

        -(int) countfruitseeds {
            return [f countfruitseeds];
        }

    Or: I can just use forwardInvocation:

        - (NSMethodSignature *)methodSignatureForSelector:(SEL)selector
        {
            // does the delegate respond to this selector?
            if ([f respondsToSelector:selector])
                return [f methodSignatureForSelector:selector];
            else if ([l respondsToSelector:selector])
                return [l methodSignatureForSelector:selector];
            else
                return [super methodSignatureForSelector:selector];
        }

        - (void)forwardInvocation:(NSInvocation *)invocation
        {
            [invocation invokeWithTarget:f];
        }

    (Note this is only a toy example to ask my question. My real classes have lots of methods, which is why I am asking.) Which is the better/faster method?

    Read the article

  • Rails: generating URLs for actions in JSON response

    - by Chris Butler
    In a view I am generating an HTML canvas of figures based on model data in an app. In the view I am preloading JSON model data in the page like this (to avoid an initial request back):

        <script type="text/javascript" charset="utf-8">
          <% ActiveRecord::Base.include_root_in_json = false -%>
          var objects = <%= @objects.to_json(:include => :other_objects) %>;
          ...

    Based on mouse (or touch) interaction I want to redirect to other parts of my app that are model-specific (such as view, edit, delete, etc.). Rather than hard-code the URLs in my JavaScript I want to generate them from Rails (which means it always adapts to the latest routes). It seems like I have one of three options:

    1. Add an empty attr to the model that the controller fills in with the appropriate URL (we don't want to use routes in the model) before the JSON is generated
    2. Generate custom JSON where I add the different URLs manually
    3. Generate the URL as a template from Rails and replace the IDs in JavaScript as appropriate

    I am starting to lean towards #1 for ease of implementation and maintainability. Are there any other options that I am missing? Is #1 not the best? Thanks! Chris

    Read the article

  • Creating futures using Apple's GCD

    - by jer
    I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C-level API, libdispatch). A brief overview of my system:

    - Communication happens between actors using messages
    - Multicast communication only (one actor to many actors)
    - Senders and receivers are decoupled from one another using a blackboard where messages are pushed
    - Messages are sent on the default queue asynchronously using dispatch_group_async() once a message gets pushed onto the blackboard

    I'm trying to implement futures in the language right now, so I've created a new type which holds some information: a group of its own, and the value being 'returned'. However, I have a problem since dispatch_block_t is of type void (^)(void), so it doesn't return anything. So my idea in my future_new() function of setting up another group which can be used to execute a block returning a result, which I can store in the "value" member of my future_t structure, isn't going to work. The rest of the futures implementation is very clear, except it all depends on being able to get the value into the future back from the actor acting on the message. When using the library, it would greatly reduce its usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system. It just isn't practical. I'm wondering if anyone can think of a way around this?
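
    The usual way out, shown here as a C++ sketch rather than libdispatch code (std::promise standing in for the group-plus-value pair; all names are mine): a void block can still deliver a value by writing it through shared state captured alongside the block, and the future half is what the library hands back to the caller.

        #include <future>
        #include <functional>
        #include <memory>

        // 'dispatch' stands in for whatever enqueues a void() block asynchronously.
        std::future<int> submit(std::function<int()> work,
                                std::function<void(std::function<void()>)> dispatch) {
            auto p = std::make_shared<std::promise<int>>();
            std::future<int> result = p->get_future();
            // the enqueued block is still void(): the value escapes through p
            dispatch([p, work]() { p->set_value(work()); });
            return result;
        }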

    Read the article

  • How frequently IP packets are fragmented at the source host?

    - by Methos
    I know that if the IP payload exceeds the MTU then routers usually fragment the IP packet. Finally, all the fragmented packets are reassembled at the destination using the IP-ID field, the IP fragment offsets and the fragmentation flags. The max length of an IP payload is 64K, thus it's very plausible for L4 to hand over a payload close to 64K. If the L2 protocol is Ethernet, which often is the case, then the MTU will be about 1600 bytes. Hence the IP packet will be fragmented at the source host itself. However, a quick search about the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragment-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently IP packets get fragmented at the source host itself. Does it occur sometimes/rarely/never? Does anyone know if there are exceptions to the rule of fragmentation in the Linux kernel (i.e. are there situations where L4 protocols are not fragment-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?

    Read the article

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request. My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            //Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            //Write the serialized message to the stream.
            //The BinaryWriter is a little redundant in this
            //simplified example, but here because
            //the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
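
    For what it's worth, the defensive pattern looks like this, sketched in C++ (the poster's code is C#, and writeAll below is an assumed helper, not a real API): take a lock around the whole frame write so a PING can never interleave with a partially written response.

        #include <mutex>
        #include <vector>
        #include <cstdint>
        #include <cstddef>

        struct FramedSender {
            std::mutex sendMutex;

            // assumed helper that loops until all n bytes hit the socket
            void writeAll(const std::uint8_t *data, std::size_t n);

            void send(const std::vector<std::uint8_t> &frame) {
                // one complete message at a time; a concurrent heartbeat
                // send() blocks here instead of interleaving its bytes
                std::lock_guard<std::mutex> lock(sendMutex);
                writeAll(frame.data(), frame.size());
            }
        };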

    Read the article

  • The new operator in C# isn't overriding base class member

    - by Dominic Zukiewicz
    I am confused as to why the new operator isn't working as I expected it to. Note: all classes below are defined in the same namespace, and in the same file. This class allows you to prefix any content written to the console with some provided text:

        public class ConsoleWriter
        {
            private string prefix;

            public ConsoleWriter(string prefix)
            {
                this.prefix = prefix;
            }

            public void Write(string text)
            {
                Console.WriteLine(String.Concat(prefix, text));
            }
        }

    Here is a base class:

        public class BaseClass
        {
            protected static ConsoleWriter consoleWriter = new ConsoleWriter("");

            public static void Write(string text)
            {
                consoleWriter.Write(text);
            }
        }

    Here is a derived class:

        public class NewClass : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");
        }

    Now here's the code to execute this:

        class Program
        {
            static void Main(string[] args)
            {
                BaseClass.Write("Hello World!");
                NewClass.Write("Hello World!");
                Console.Read();
            }
        }

    So I would expect the output to be

        Hello World!
        > Hello World!

    But the output is

        Hello World!
        Hello World!

    I do not understand why this is happening. Here is my thought process as to what is happening:

    1. The CLR calls the BaseClass.Write() method
    2. The CLR initialises the BaseClass.consoleWriter member
    3. The method is called and executed with the BaseClass.consoleWriter variable

    Then

    4. The CLR calls NewClass.Write()
    5. The CLR initialises the NewClass.consoleWriter object
    6. The CLR sees that the implementation lies in BaseClass, but the method is inherited through
    7. The CLR executes the method locally (in NewClass) using the NewClass.consoleWriter variable

    I thought this is how the inheritance structure works? Please can someone help me understand why this is not working?

    Read the article

  • What should I do if I have a factory method which requires different parameters for different implementations?

    - by Sam Holder
    I have an interface, IMessage, and a class which has several methods for creating different types of message, like so:

        class MessageService
        {
            IMessage TypeAMessage(param 1, param 2)
            IMessage TypeBMessage(param 1, param 2, param 3, param 4)
            IMessage TypeCMessage(param 1, param 2, param 3)
            IMessage TypeDMessage(param 1)
        }

    I don't want this class to do all the work for creating these messages, so it simply delegates to a MessageCreatorFactory which produces an IMessageCreator depending on the type given (an enumeration based on the type of the message: TypeA, TypeB, TypeC etc.):

        interface IMessageCreator
        {
            IMessage Create(MessageParams params);
        }

    So I have 4 implementations of IMessageCreator: TypeAMessageCreator, TypeBMessageCreator, TypeCMessageCreator, TypeDMessageCreator. I'm OK with this except for the fact that, because each type requires different parameters, I have had to create a MessageParams object which contains 4 properties for the 4 different params, but only some of them are used in each IMessageCreator. Is there an alternative to this? One other thought I had was to have a param array as the parameter in the Create method, but this seems even worse as you don't have any idea what the params are. Or to create several overloads of Create in the interface and have some of them throw an exception if they are not suitable for that particular implementation (i.e. you called a method which needs more params, so you should have called one of the other overloads). Does this seem ok? Is there a better solution?

    Read the article

  • Concise C# code for gathering several properties with a non-null value into a collection?

    - by stakx
    A fairly basic problem for a change. Given a class such as this:

        public class X
        {
            public T A;
            public T B;
            public T C;
            ...
            // (other fields, properties, and methods are not of interest here)
        }

    I am looking for a concise way to code a method that will return all A, B, C, ... that are not null in an enumerable collection. (Assume that declaring these fields as an array is not an option.)

        public IEnumerable<T> GetAllNonNullAs(this X x)
        {
            // ?
        }

    The obvious implementation of this method would be:

        public IEnumerable<T> GetAllNonNullAs(this X x)
        {
            var resultSet = new List<T>();
            if (x.A != null) resultSet.Add(x.A);
            if (x.B != null) resultSet.Add(x.B);
            if (x.C != null) resultSet.Add(x.C);
            ...
            return resultSet;
        }

    What's bothering me here in particular is that the code looks verbose and repetitive, and that I don't know the initial List capacity in advance. It's my hope that there is a more clever way, probably something involving the ?? operator? Any ideas?
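
    A C++ analogue of the shape the poster is after (their code is C#; this is just an illustration with invented names): list the members once in a braced initializer and filter, so adding a member costs one token rather than one if-statement.

        #include <initializer_list>
        #include <vector>

        struct X { int *a; int *b; int *c; };

        std::vector<int*> non_null_members(const X &x) {
            std::vector<int*> result;
            for (int *p : {x.a, x.b, x.c})  // one entry per member, no repetition
                if (p) result.push_back(p);
            return result;
        }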

    Read the article

  • Getting around IBAction's limited scope

    - by Septih
    Hello, I have an NSCollectionView, and the view is an NSBox with a label and an NSButton. I want a double click, or a click of the NSButton, to tell the controller to perform an action with the represented object of the NSCollectionViewItem. The item view has been subclassed; the code is as follows:

        #import <Cocoa/Cocoa.h>
        #import "WizardItem.h"

        @interface WizardItemView : NSBox {
            id delegate;
            IBOutlet NSCollectionViewItem * viewItem;
            WizardItem * wizardItem;
        }
        @property(readwrite,retain) WizardItem * wizardItem;
        @property(readwrite,retain) id delegate;
        -(IBAction)start:(id)sender;
        @end

        #import "WizardItemView.h"

        @implementation WizardItemView
        @synthesize wizardItem, delegate;

        -(void)awakeFromNib {
            [self bind:@"wizardItem" toObject:viewItem withKeyPath:@"representedObject" options:nil];
        }

        -(void)mouseDown:(NSEvent *)event {
            [super mouseDown:event];
            if([event clickCount] > 1) {
                [delegate performAction:[wizardItem action]];
            }
        }

        -(IBAction)start:(id)sender {
            [delegate performAction:[wizardItem action]];
        }
        @end

    The problem I've run into is that as an IBAction, the only things in the scope of -start are the things that have been bound in IB, so delegate and viewItem. This means that I cannot get at the represented object to send it to the delegate. Is there a way around this limited scope, or a better way of getting hold of the represented object? Thanks.

    Read the article

  • iPhone Adding Controls (UIButton, UILabel etc.) on a Playing Video!

    - by Taimur Hamza
    I have been assigned an 'easy' task of adding a button over a playing video. I'm terming it easy as I have got the sample code downloaded from Apple's sample code: http://rapidshare.com/files/393248642/MoviePlayer_iPhone.zip. Anybody who wants to reply to my query and intends to help should download this project and run it; otherwise it wouldn't be easy to understand my problem. Thanks! And now the weird problem I am facing: in the sample project the developer has added the view (UILabel and UIButton) in the app delegate, and I want it in other xib files, not the app delegate. Fine: I added a view 'myButtonABC' instead of 'My Overlay View', and added this code in my xib file's implementation file:

        - (void)viewDidLoad {
            TaimurAppDelegate *appDelegate = (TaimurAppDelegate *)[[UIApplication sharedApplication] delegate];
            [appDelegate initAndPlayMovie:[self localMovieURL]];
            NSArray *windows = [[UIApplication sharedApplication] windows];
            if ([windows count] > 1) {
                UIWindow *moviePlayerWindow = [[UIApplication sharedApplication] keyWindow];
                [moviePlayerWindow addSubview:self.myABC];
            }
        }

    Now my question is: do I need to declare a UIWindow object in the header file here, as I am not working in the app delegate class? As I said previously, I have to add this button over a video in another screen and not on the main screen. The third question, which in fact is the most important of all (in my view, if I am able to do this one, my problem would be solved, as I have spent a considerable amount of time on this task so far): how can I connect myABCButton (which is a view added to my xib file) to the File's Owner? Thanks for your patience. Replies appreciated! Taimur

    Read the article

  • iPhone: Using dispatch_after to mimick NSTimer

    - by Joseph Tura
    Don't know a whole lot about blocks. How would you go about mimicking a repeating NSTimer with dispatch_after? My problem is that I want to "pause" a timer when the app moves to the background, but subclassing NSTimer does not seem to work. I tried something which seems to work. I cannot judge its performance implications or whether it could be greatly optimized. Any input is welcome.

        #import "TimerWithPause.h"

        @implementation TimerWithPause

        @synthesize timeInterval;
        @synthesize userInfo;
        @synthesize invalid;
        @synthesize invocation;

        + (TimerWithPause *)scheduledTimerWithTimeInterval:(NSTimeInterval)aTimeInterval
                                                    target:(id)aTarget
                                                  selector:(SEL)aSelector
                                                  userInfo:(id)aUserInfo
                                                   repeats:(BOOL)aTimerRepeats
        {
            TimerWithPause *timer = [[[TimerWithPause alloc] init] autorelease];
            timer.timeInterval = aTimeInterval;

            NSMethodSignature *signature = [[aTarget class] instanceMethodSignatureForSelector:aSelector];
            NSInvocation *aInvocation = [NSInvocation invocationWithMethodSignature:signature];
            [aInvocation setSelector:aSelector];
            [aInvocation setTarget:aTarget];
            [aInvocation setArgument:&timer atIndex:2];
            timer.invocation = aInvocation;

            timer.userInfo = aUserInfo;
            if (!aTimerRepeats) {
                timer.invalid = YES;
            }
            [timer fireAfterDelay];
            return timer;
        }

        - (void)fireAfterDelay
        {
            dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, self.timeInterval * NSEC_PER_SEC);
            dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            dispatch_after(delay, queue, ^{
                [invocation performSelectorOnMainThread:@selector(invoke) withObject:nil waitUntilDone:NO];
                if (!invalid) {
                    [self fireAfterDelay];
                }
            });
        }

        - (void)invalidate
        {
            invalid = YES;
            [invocation release];
            invocation = nil;
            [userInfo release];
            userInfo = nil;
        }

        - (void)dealloc
        {
            [self invalidate];
            [super dealloc];
        }
        @end

    Read the article

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

    1. Create a separate repository for the installer and related files
    2. Create a subrepository for the installer and related files
    3. Use a (yet to be identified) build dependency manager

    I am concerned about using a subrepository with Mercurial based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try any out. I thought I'd use TortoiseHg as a basis, and it does not have the TortoiseHg binaries in the repository, although it does have some binaries such as kdiff3.exe. Instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?
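
    For reference, wiring up option 2 only takes a one-line mapping file in the parent repository; the path and URL below are made up for illustration:

        # .hgsub at the root of the parent repository:
        # <working-directory path> = <source of the subrepo>
        Install = https://hg.example.com/projects/installer

    After .hgsub is added and committed, Mercurial pins the subrepo revision per commit in .hgsubstate, which is the part of the (still maturing) implementation that causes most of the surprises people write about.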

    Read the article

  • Plotting an Arc in Discrete Steps

    - by phobos51594
    Good afternoon,

    Background

    My question relates to the plotting of an arbitrary arc in space using discrete steps. It is unique, however, in that I am not drawing to a canvas in the typical sense. The firmware I am designing is for a gcode interpreter for a CNC mill that will translate commands into stepper motor movements. Now, I have already found a similar question on this very site, but the methodology suggested (Bresenham's algorithm) appears to be incompatible with moving an object in space, as it relies on calculating only one octant of a circle which is then mirrored about the remaining axes of symmetry. Furthermore, the prescribed method of calculating an arc between two arbitrary angles relies on trigonometry (I am implementing on a microcontroller and would like to avoid costly trig functions, if possible) and on simply not taking the steps that are out of range. Finally, the algorithm is only designed to work in one rotational direction (e.g. counterclockwise).

    Question

    So, on to the actual question: does anyone know of a general-purpose algorithm that can be used to "draw" an arbitrary arc in discrete steps while still respecting angular direction (CW / CCW)? The final implementation will be done in C, but the language for the purpose of the question is irrelevant. Thank you in advance.

    References

    - S.O. post on drawing a simple circle using Bresenham's algorithm: "Drawing" an arc in discrete x-y steps
    - Wiki page describing Bresenham's algorithm for a circle: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm
    - Gcode instructions to be implemented (see G2 and G3): http://linuxcnc.org/docs/html/gcode.html
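
    One direction-aware approach that keeps trig out of the stepping loop (a sketch under invented names, not an algorithm from the linked pages): compute sin/cos once for a fixed angular step, then advance the point with a 2x2 rotation whose sign encodes CW vs CCW.

        #include <cmath>
        #include <cstdio>

        // Walk an arc of radius r about (cx, cy) from angle a0 to a1 (radians).
        // dir = +1 for CCW, -1 for CW; step is the max angular step per move (> 0).
        void plot_arc(double cx, double cy, double r,
                      double a0, double a1, int dir, double step)
        {
            const double TWO_PI = 6.283185307179586;
            double sweep = a1 - a0;
            while (dir > 0 && sweep <= 0) sweep += TWO_PI;  // force a CCW sweep
            while (dir < 0 && sweep >= 0) sweep -= TWO_PI;  // force a CW sweep

            int n = (int)std::ceil(std::fabs(sweep) / step);
            double da = sweep / n;
            double c = std::cos(da), s = std::sin(da);          // trig happens once
            double x = r * std::cos(a0), y = r * std::sin(a0);  // point rel. to centre

            for (int i = 0; i < n; ++i) {
                double nx = x * c - y * s;  // incremental 2x2 rotation, no trig
                y = x * s + y * c;
                x = nx;
                std::printf("move to (%f, %f)\n", cx + x, cy + y);  // feed the steppers
            }
        }

    Inside the loop each step is four multiplies and two adds; accumulated floating-point drift over a long arc can be corrected by re-normalizing (x, y) back onto the radius every few hundred steps.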

    Read the article

  • Safe and polymorphic toEnum

    - by jetxee
    I'd like to write a safe version of toEnum:

        safeToEnum :: (Enum t, Bounded t) => Int -> Maybe t

    A naive implementation:

        safeToEnum :: (Enum t, Bounded t) => Int -> Maybe t
        safeToEnum i =
          if (i >= fromEnum (minBound :: t)) && (i <= fromEnum (maxBound :: t))
            then Just . toEnum $ i
            else Nothing

        main = do
          print $ (safeToEnum 1 :: Maybe Bool)
          print $ (safeToEnum 2 :: Maybe Bool)

    And it doesn't work:

        safeToEnum.hs:3:21:
            Could not deduce (Bounded t1) from the context ()
              arising from a use of `minBound' at safeToEnum.hs:3:21-28
            Possible fix:
              add (Bounded t1) to the context of an expression type signature
            In the first argument of `fromEnum', namely `(minBound :: t)'
            In the second argument of `(>=)', namely `fromEnum (minBound :: t)'
            In the first argument of `(&&)', namely `(i >= fromEnum (minBound :: t))'

        safeToEnum.hs:3:56:
            Could not deduce (Bounded t1) from the context ()
              arising from a use of `maxBound' at safeToEnum.hs:3:56-63
            Possible fix:
              add (Bounded t1) to the context of an expression type signature
            In the first argument of `fromEnum', namely `(maxBound :: t)'
            In the second argument of `(<=)', namely `fromEnum (maxBound :: t)'
            In the second argument of `(&&)', namely `(i <= fromEnum (maxBound :: t))'

    As far as I understand the message, the compiler does not recognize that minBound and maxBound should produce exactly the same type as in the result type of safeToEnum, in spite of the explicit type declaration (:: t). Any idea how to fix it?

    Read the article

  • Is this a bug in plist or Xcode?

    - by Pedro
    G'day all,

    If you create a date item in the plist editor of Xcode or Apple's standalone plist editor you get something of the form

        <date>2010-05-29T10:30:00Z</date>

    which is a nice, well-formed ISO date at UTC (indicated by the "Z"). Because I'm in timezone UTC+10, when that's read into my app and then displayed I get 8:30 PM out, still good. However, if that is a time in my timezone it should be

        <date>2010-05-29T10:30:00+10</date>

    (replacing "Z" with my timezone offset). All of my attempts at reading such dates into my iPhone app have had the plist rejected as if it were malformed, and editing a plist with such a date in Apple's editors changed the "+10" to "Z" without adjusting the time. Do others think I'm correct in thinking this is a bug in either plist or Xcode? My feeling is that the implementation of ISO date & time in plist is incomplete. Cheers, Pedro :)

    Read the article

  • java 6 web services share domain specific classes between server and client

    - by user173446
    Hi all,

    Context: consider the Engine class defined below, used as a parameter of some web service method. As we have both server and client in Java, we may have some benefits (???) in sharing the Engine class between server and client (i.e. we may put it in a common jar file to be added to both client and server classpaths). Some benefits would be:

    - we keep specific operations like 'brushEngine' in one place
    - the build is faster, as in our case we do not need to generate Java code for the client classes but can use them from the server build
    - if we later change the server implementation of 'brushEngine', this is reflected automatically in the client

    Questions:

    1. How do I share the Engine class below using Java 6 tools (i.e. wsimport, wsgen etc.)?
    2. Are there other tools for Java that can achieve this sharing?
    3. Is sharing a case that Java 6 web services support is missing?
    4. Can this case be reduced to other web service usage patterns?

    Thanks.

    Code:

        public class Engine {
            private String engineData;

            public String getData() {
                return engineData;
            }

            public void setData(String value) {
                this.engineData = value;
            }

            public void brushEngine() {
                engineData = "BrushedEngine" + engineData;
            }
        }

    Read the article

  • WCF REST adding data using POST or PUT 400 Bad Request

    - by user55474
    Hi, how do I add data using the WCF REST architecture? I don't want to use the ChannelFactory to call my method. I want something similar to the WebRequest/WebResponse pattern used for GET, or to the ajax WebServiceProxy restInvoke. Or do I always have to use the WebChannelFactory implementation? I am getting a 400 Bad Request using the following:

        Dim url As String = "http://localhost:4475/Service.svc/Entity/Add"
        Dim req As WebRequest = WebRequest.Create(url)
        req.Method = "POST"
        req.ContentType = "application/xml; charset=utf-8"
        req.Timeout = 30000
        req.Headers.Add("SOAPAction", url)

        Dim xEle As XElement
        xEle = <Entity xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
                   <Name>Entity1</Name>
               </Entity>

        Dim sXML As String = xEle.Value
        req.ContentLength = sXML.Length

        Dim sw As New System.IO.StreamWriter(req.GetRequestStream())
        sw.Write(sXML)
        sw.Close()

        Dim res As HttpWebResponse = req.GetResponse()

    The service contract is as follows:

        <OperationContract()> _
        <WebInvoke(Method:="PUT", UriTemplate:="Entity/Add")> _
        Function AddEntity(ByVal e1 As Entity)

    The data contract is as follows:

        <Serializable()> _
        <DataContract()> _
        Public Class Entity
            Private m_Name As String

            <DataMember()> _
            Public Property Name() As String
                Get
                    Return m_Name
                End Get
                Set(ByVal value As String)
                    m_Name = value
                End Set
            End Property
        End Class

    Thanks

    Read the article

  • Cross-Platform Camera API

    - by Karim
    Hi, I'm now building a video-transforming filter that has to transform video frames in real time. One of the key requirements of the filter is high performance, to minimize the number of dropped frames during the transform. Another requirement that is of lower priority, but also nice to have, is to make it cross-platform (both PCs and mobile devices). The application is built in C++. Now my question is: is there any API that is more portable and has similar or better performance characteristics than DirectShow? DirectShow's portability is limited to Windows-based devices (PCs and Windows Mobile & CE platforms). Also I've noticed that, for example, using HTC's custom camera API gives far better performance than what DirectShow offers. If you want to check this, try to build a filter in DirectShow that will multiply each color by 2 and render that in real time from the camera on the screen. Then do the same with HTC's API. There is almost a 4-5x performance boost with the vendor's specific API. So it'd be very nice if the library used the device-specific implementation of the driver, as performance is critical when doing these transforms on a mobile device (which runs at about 500 MHz).

    Read the article

  • DotNetOpenAuth / WebSecurity Basic Info Exchange

    - by Jammer
    I've gotten a good number of OAuth logins working on my site now. My implementation is based on the WebSecurity classes, with amendments to the code to suit my needs (I pulled the WebSecurity source into mine). However I'm now facing a new set of problems. In my application I have opted to make the user email address the login identifier of choice. It's naturally unique and suits this use case. However, the OAuth "standards" strike again: some providers will return your email address as "username" (Google), some will return the display name (Facebook). As it stands I see two options given my particular scenario:

    Option 1: Pull even more framework source code into my solution until I can chase down where the OpenIdRelyingParty class is actually interacted with (via the DotNetOpenAuth.AspNet facade) and make additional information requests from the OpenID providers.

    Option 2: When a user first logs in using an OpenID provider, display a kind of "complete registration" form that requests the missing info based on the provider selected.

    Option 2 is the most immediate and probably the quickest to implement, but also includes some code smells through having to do something different based on the provider selected. Option 1 will take longer but will ultimately make things more future-proof. I will need to perform richer interactions down the line, so this also has an edge in that regard. The more I get into the code, the more it seems that the WebSecurity class itself is actually very limiting, as it hides lots of useful DotNetOpenAuth functionality in the name of making integration easier. Andrew (the author of DNOA) has said that the Attribute Exchange stuff happens in the OpenIdRelyingParty class, but I cannot see from the DotNetOpenAuth.AspNet source code where this class is used, so I'm unsure of what source would need to be pulled into my code in order to enable the functionality I need. Has anyone completed something similar?

    Read the article

  • Visual C++ overrides/mock objects for unit testing?

    - by Mark
    When I'm running unit tests, I want to be able to "stub out" or create a mock object, but I'm running into DLL hell. For example: there are two DLL libraries built, A.dll and B.dll. Classes in A.dll have calls to classes in B.dll, so when A.dll was built, the link line used B.lib for the definitions. My test driver (Foo.exe) is testing classes in A.dll, so it links against A.lib. However, I want to "stub out" some of the calls A.dll makes to B.dll with simple versions (return a basic value, no DB lookup, etc.). I can't build an Override.dll that just overrides the needed methods (not entire classes) and replace B.dll, because Foo.exe will A) complain that B.dll is missing if I just remove it and put Override.dll in its place, or B) if I rename Override.dll to B.dll, complain that there are unresolved symbols because Override.dll is not a complete implementation of B.dll. Is there a way to do this? Is there a way to statically link Foo.exe with A.lib, B.lib and Override.lib such that it will work without having to completely rebuild A.lib and B.lib to remove the __declspec(dllexport)? Is there another option?

    Read the article

  • 'C++ object destroyed' in QComboBox descendant editor in delegate

    - by Max
    Hi, all! I have a modified combobox that holds colors, using QtColorCombo (http://qt.nokia.com/products/appdev/add-on-products/catalog/4/Widgets/qtcolorcombobox) as a howto for the 'more...' button implementation details. It works fine in C++, and in PyQt on Linux, but I get 'underlying C++ object was destroyed' when I use this control in PyQt on Windows. It seems the error happens here:

        ... # in constructor:
        self.activated.connect(self._emitActivatedColor)
        ...
        def _emitActivatedColor(self, index):
            if self._colorDialogEnabled and index == self.colorCount():
                print '!!!!!!!!! QtGui.QColorDialog.getColor()'
                c = QtGui.QColorDialog.getColor()  # <----- :( delegate fires 'closeEditor'
                print '!!!!!!!!! ' + c.name()
                if c.isValid():
                    self._numUserColors += 1
                    # at the next line currentColor() tries to access the C++ layer and fails
                    self.addColor(c, self.currentColor().name())
                    self.setCurrentIndex(index)
        ...

    Maybe console output will help. I've overridden event() in the editor and got:

        ...
        MouseButtonRelease
        FocusOut
        Leave
        Paint
        Enter
        Leave
        FocusIn
        !!!!!!!!! QtGui.QColorDialog.getColor()
        WindowBlocked
        Paint
        WindowDeactivate
        !!!!!!!!! 'CloseEditor' fires!
        Hide
        HideToParent
        FocusOut
        DeferredDelete
        !!!!!!!!! #6e6eff
        ...

    Can someone explain why there is such different behaviour in the different environments, and maybe give a workaround to fix this? Here is a minimal example: http://docs.google.com/Doc?docid=0Aa0otNVdbWrrZDdxYnF3NV80Y20yam1nZHM&hl=en

    Read the article

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&... for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output) {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).
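
    For readers hitting the same wall: the two rules at work are name hiding and argument-dependent lookup. A minimal sketch (boost_like is an invented stand-in for the Boost.Test internals) of why the global-namespace overload becomes invisible:

        #include <iostream>

        namespace theseus { namespace core {
            struct PixelRGB { unsigned char r, g, b; };

            // ADL finds this: theseus::core is an associated namespace of PixelRGB,
            // so it is searched wherever a PixelRGB argument appears.
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")";
            }
        }}

        namespace boost_like {
            struct Dummy {};
            std::ostream& operator<<(std::ostream& ss, Dummy) { return ss; }

            template <typename T>
            void log(const T& value) {
                // Ordinary lookup stops at boost_like::operator<< above and never
                // reaches the global namespace, hiding any ::operator<< overload.
                // Only ADL (via T's associated namespaces) can add candidates here.
                std::cout << value << "\n";
            }
        }

        int main() {
            boost_like::log(theseus::core::PixelRGB{5, 5, 5});  // prints PixelRGB(5,5,5)
        }

    With the operator defined only at global scope, this program fails to compile, which matches what the poster saw inside Boost.Test.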

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about the laziness or performance of it, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (the print of the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which probably is limited in its applicability, and limiting the other parameters by 'take' should be just as lazy in normal juxt. Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure.

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))

    Read the article

  • Using boost asio for pub/sub style tcp in a game loop

    - by unohoo
    I have been reading through the boost asio documentation for a couple of hours now, and while I think the documentation is really great, I am still left a bit confused about how to implement the system that I need. I have to stream info from a game engine to a list of computers over TCP. One snag is that, unlike traditional pub/sub, the computer that does the distribution of info is actually the computer that has to connect to the subscribers as well (instead of the subscribers registering with the publisher). This is done via a config file: a list of IPs/ports along with the data that each requires. The subscribers listen, but do not know the IP of the publisher. (As a side note, I'm quite new to network programming, so maybe I'm missing something... but it's strange that I do not find much information regarding this style of "inverted" client-server model.) I am looking for suggestions for the implementation of such a system using boost asio. Of course I have to integrate the networking into an already existing engine, so with regards to that: what would be a good way to handle messages being sent to multiple computers every frame? Use async_write, call io_service.run and then reset every frame? Would having io_service.run in its own thread be better? Or should I just use threads and blocking writes?
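
    For the per-frame question, one low-ceremony pattern (a sketch with invented names, using the io_service-era asio API): issue async_write to each connected subscriber, keep the buffer alive via the completion handler, and drain completions from the game loop with poll() instead of dedicating a thread to run().

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/shared_ptr.hpp>
        #include <string>
        #include <vector>

        using boost::asio::ip::tcp;

        struct Publisher {
            boost::asio::io_service io;
            // sockets connected at startup from the config file's IP/port list
            std::vector<boost::shared_ptr<tcp::socket> > subscribers;

            void broadcast(const boost::shared_ptr<std::string>& msg) {
                for (std::size_t i = 0; i < subscribers.size(); ++i) {
                    boost::asio::async_write(
                        *subscribers[i], boost::asio::buffer(*msg),
                        // the bound shared_ptr keeps msg alive until completion
                        boost::bind(&Publisher::on_write, this, msg,
                                    boost::asio::placeholders::error));
                }
            }

            void on_write(boost::shared_ptr<std::string> /*msg*/,
                          const boost::system::error_code& ec) {
                if (ec) { /* drop or reconnect this subscriber */ }
            }

            void pump() { io.poll(); }  // call once per frame; never blocks
        };

    poll() runs whatever handlers are ready and returns immediately, which avoids the run()/reset() dance per frame; a dedicated run() thread also works, but then the completion handlers need their own synchronization with the engine.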

    Read the article
