Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • JSF - Creating an overlay for popup panels.

    - by Ben
    Hi, I've created an overlay that pops up whenever someone wants to upload a file to the system. (A screenshot of the GUI with the overlay up was attached to the original post.) I have two problems with this:

    1. I attached an a4j:support object that, onclick, makes the overlay disappear. The problem is that when I click the upload button on the upload component, the support object catches the click event and closes the overlay together with the upload component before I have a chance to finish the operation.

    2. I chose two different style classes, one for the overlay and one for the upload panel, but the styling of the overlay takes over the upload component and it becomes transparent as well.

    The implementation looks something like this:

        <h:panelGroup layout="block" styleClass="overlayClass">
            <rich:fileUpload styleClass="uploadStyleClass" ... />
            <a4j:support event="onclick" action="#{mrBean.switchOverlayState}" reRender="..."/>
        </h:panelGroup>

    The CSS:

        .overlayClass {
            opacity: 0.5;
            position: fixed;
            left: 0; right: 0; top: 0; bottom: 0;
            background: #000;
        }
        .uploadStyleClass {
            opacity: 1.0;
            ...
        }

    Thanks for the help!

    Read the article

  • std::string insert method has ambiguous overloads?

    - by sdg
    Environment: VS2005 C++ using STLPort 5.1.4. Compiling the following code snippet:

        std::string copied = "asdf";
        char ch = 's';
        copied.insert(0, 1, ch);

    I receive an error:

        Error 1 error C2668: 'stlpx_std::basic_string<_CharT,_Traits,_Alloc>::insert' : ambiguous call to overloaded function

    It appears that the problem is the insert method call on the string object. The two defined overloads are:

        void insert(iterator p, size_t n, char c);
        string& insert(size_t pos1, size_t n, char c);

    But given that STLPort uses a simple char* as its iterator, the literal zero in my insert call is ambiguous. I can easily overcome the problem by hinting, such as copied.insert(size_t(0), 1, ch). My question is: is this overloading and possible ambiguity intentional in the specification, or more likely an unintended side effect of the specific STLPort implementation? (Note that the Microsoft-supplied STL does not have this problem, as it uses a class for the iterator instead of a naked pointer.)

    Read the article

  • iPhone Adding Controls (UIButton, UILabel etc.) on a Playing Video!

    - by Taimur Hamza
    I have been assigned an 'easy' task of adding a button over a playing video. I'm terming it easy as I have the sample code downloaded from Apple's sample code: http://rapidshare.com/files/393248642/MoviePlayer_iPhone.zip Anybody who wants to reply to my query and intends to help should download this project and run it, otherwise it wouldn't be easy to understand my problem. Thanks! And now the weird problem I am facing: in the sample project the developer has added the view (UILabel and UIButton) in the app delegate, and I want it in other xib files, not the app delegate. Fine, I added a view 'myButtonABC' instead of 'My Overlay View' and added this code in my xib file's implementation file:

        - (void)viewDidLoad {
            TaimurAppDelegate *appDelegate = (TaimurAppDelegate *)[[UIApplication sharedApplication] delegate];
            [appDelegate initAndPlayMovie:[self localMovieURL]];
            NSArray *windows = [[UIApplication sharedApplication] windows];
            if ([windows count] > 1) {
                UIWindow *moviePlayerWindow = [[UIApplication sharedApplication] keyWindow];
                [moviePlayerWindow addSubview:self.myABC];
            }
        }

    Now my questions: do I need to declare a UIWindow object in the header file here, since I am not working in the app delegate class? As I said previously, I have to add this button over a video in another screen and not on the main screen. And the third question, which in fact is the most important of all (if I can do this one, my problem would be solved, as I have spent a considerable amount of time on this task so far): how can I connect myABCButton (which is a view added to my xib file) to the File's Owner? Thanks for your patience. Replies appreciated! Taimur

    Read the article

  • 'C++ object destroyed' in QComboBox descendant editor in delegate

    - by Max
    Hi, all! I have a modified combobox to hold colors, using QtColorCombo (http://qt.nokia.com/products/appdev/add-on-products/catalog/4/Widgets/qtcolorcombobox) as a howto for the 'more...' button implementation details. It works fine in C++ and in PyQt on Linux, but I get 'underlying C++ object was destroyed' when I use this control in PyQt on Windows. It seems like the error happens here:

        ...
        # in constructor:
        self.activated.connect(self._emitActivatedColor)
        ...
        def _emitActivatedColor(self, index):
            if self._colorDialogEnabled and index == self.colorCount():
                print '!!!!!!!!! QtGui.QColorDialog.getColor()'
                c = QtGui.QColorDialog.getColor()  # <----- :( delegate fires 'closeEditor'
                print '!!!!!!!!! ' + c.name()
                if c.isValid():
                    self._numUserColors += 1
                    # at the next line currentColor() tries to access the C++ layer and fails
                    self.addColor(c, self.currentColor().name())
                    self.setCurrentIndex(index)
        ...

    Maybe console output will help. I've overridden event() in the editor and got:

        ...
        MouseButtonRelease
        FocusOut
        Leave
        Paint
        Enter
        Leave
        FocusIn
        !!!!!!!!! QtGui.QColorDialog.getColor()
        WindowBlocked
        Paint
        WindowDeactivate
        !!!!!!!!! 'CloseEditor' fires!
        Hide
        HideToParent
        FocusOut
        DeferredDelete
        !!!!!!!!! #6e6eff
        ...

    Can someone explain why there is such different behaviour in the different environments, and maybe give a workaround to fix this? Here is a minimal example: http://docs.google.com/Doc?docid=0Aa0otNVdbWrrZDdxYnF3NV80Y20yam1nZHM&hl=en

    Read the article

  • iPhone NSMutableArray loses objects at end of method

    - by Brodie4598
    Hello - in my app, an NSMutableArray is populated with an object in viewDidLoad (eventually there will be many objects but I'm just doing one til I get it working right). I also start a timer that starts a method that needs to access the NSMutableArray every few seconds. The NSMutableArray works fine in viewDidLoad, but as soon as that method is finished, it loses the object.

    myApp.h:

        @interface MyApp : UIViewController {
            NSMutableArray *myMutableArray;
            NSTimer *timer;
        }
        @property (nonatomic, retain) NSMutableArray *myMutableArray;
        @property (nonatomic, retain) NSTimer *timer;
        @end

    myApp.m:

        #import "MyApp.h"

        @implementation MyApp
        @synthesize myMutableArray;

        - (void)viewDidLoad {
            cycleTimer = [NSTimer scheduledTimerWithTimeInterval:4.0 target:self selector:@selector(newCycle) userInfo:nil repeats:YES];
            MyObject *myCustomUIViewObject = [[MyObject alloc] init];
            [myMutableArray addObject:myCustomUIViewObject];
            [myCustomUIViewObject release];
            NSLog(@"%i", [myMutableArray count]); ///// outputs "1"
        }

        - (void)newCycle {
            NSLog(@"%i", [myMutableArray count]); ///// outputs "0" ?? why is this??
        }

    Read the article

  • Getting around IBAction's limited scope

    - by Septih
    Hello, I have an NSCollectionView where the view is an NSBox with a label and an NSButton. I want a double click or a click of the NSButton to tell the controller to perform an action with the represented object of the NSCollectionViewItem. The item view has been subclassed; the code is as follows:

        #import <Cocoa/Cocoa.h>
        #import "WizardItem.h"

        @interface WizardItemView : NSBox {
            id delegate;
            IBOutlet NSCollectionViewItem *viewItem;
            WizardItem *wizardItem;
        }
        @property(readwrite,retain) WizardItem *wizardItem;
        @property(readwrite,retain) id delegate;
        -(IBAction)start:(id)sender;
        @end

        #import "WizardItemView.h"

        @implementation WizardItemView
        @synthesize wizardItem, delegate;

        -(void)awakeFromNib {
            [self bind:@"wizardItem" toObject:viewItem withKeyPath:@"representedObject" options:nil];
        }

        -(void)mouseDown:(NSEvent *)event {
            [super mouseDown:event];
            if([event clickCount] > 1) {
                [delegate performAction:[wizardItem action]];
            }
        }

        -(IBAction)start:(id)sender {
            [delegate performAction:[wizardItem action]];
        }
        @end

    The problem I've run into is that as an IBAction, the only things in the scope of -start are the things that have been bound in IB, so delegate and viewItem. This means that I cannot get at the represented object to send it to the delegate. Is there a way around this limited scope, or a better way of getting hold of the represented object? Thanks.

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload > MTU then routers usually fragment the IP packet. Finally, all the fragments are reassembled at the destination using the IP-ID, IP fragment offset and fragmentation flag fields. The max length of an IP payload is 64K, thus it's very plausible for L4 to hand over a payload of 64K. If the L2 protocol is Ethernet, which often is the case, then the MTU will be about 1600 bytes, hence the IP packet will be fragmented at the source host itself. However, a quick search about the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragment friendly, i.e. they try to save IP the fragmentation work by handing over buffers of a size that is close to the MTU. Considering these two facts, I am wondering how frequently an IP packet gets fragmented at the source host itself. Does it occur sometimes/rarely/never? Does anyone know if there are exceptions to the rule of fragmentation in the Linux kernel (i.e. are there situations where L4 protocols are not fragment friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?

    Read the article

  • What should I do if I have a factory method which requires different parameters for different implementations?

    - by Sam Holder
    I have an interface, IMessage, and a class which has several methods for creating different types of message, like so:

        class MessageService {
            IMessage TypeAMessage(param 1, param 2)
            IMessage TypeBMessage(param 1, param 2, param 3, param 4)
            IMessage TypeCMessage(param 1, param 2, param 3)
            IMessage TypeDMessage(param 1)
        }

    I don't want this class to do all the work of creating these messages, so it simply delegates to a MessageCreatorFactory which produces an IMessageCreator depending on the type given (an enumeration based on the type of the message: TypeA, TypeB, TypeC, etc.):

        interface IMessageCreator {
            IMessage Create(MessageParams params);
        }

    So I have 4 implementations of IMessageCreator: TypeAMessageCreator, TypeBMessageCreator, TypeCMessageCreator, TypeDMessageCreator. I am ok with this except for the fact that, because each type requires different parameters, I have had to create a MessageParams object which contains 4 properties for the 4 different params, but only some of them are used in each IMessageCreator. Is there an alternative to this? One other thought I had was to have a param array as the parameter in the Create method, but this seems even worse as you don't have any idea what the params are. Or to create several overloads of Create in the interface and have some of them throw an exception if they are not suitable for that particular implementation (i.e. you called a method which needs more params, so you should have called one of the other overloads). Does this seem ok? Is there a better solution?
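
    One commonly suggested alternative (a sketch with made-up names, not code from the question) is to give each message type its own strongly typed parameter object and make the creator interface generic over it, so no creator ever sees parameters it does not use:

        // Sketch only: IMessage stands in for the interface from the question;
        // TypeAMessage, TypeAParams and their fields are hypothetical examples.
        public interface IMessage { }

        public class TypeAMessage : IMessage
        {
            public TypeAMessage(string recipient, string subject) { /* ... */ }
        }

        // One parameter class per message type, so there are no unused properties.
        public interface IMessageParams { }

        public class TypeAParams : IMessageParams
        {
            public string Recipient { get; set; }
            public string Subject { get; set; }
        }

        public interface IMessageCreator<TParams> where TParams : IMessageParams
        {
            IMessage Create(TParams parameters);
        }

        public class TypeAMessageCreator : IMessageCreator<TypeAParams>
        {
            public IMessage Create(TypeAParams parameters)
            {
                // Builds the message from exactly the fields this type needs.
                return new TypeAMessage(parameters.Recipient, parameters.Subject);
            }
        }

    The trade-off is that a single enum-keyed factory can no longer hand out all creators through one non-generic interface; callers (or a DI container) resolve the creator matching the parameter type they hold.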

    Read the article

  • WCF REST adding data using POST or PUT 400 Bad Request

    - by user55474
    Hi, how do I add data using the WCF REST architecture? I don't want to use the channel factory to call my method; I want something similar to the WebRequest and WebResponse used for GET, or something similar to the AJAX WebServiceProxy restInvoke. Or do I always have to use the WebChannelFactory implementation? I am getting a 400 Bad Request by using the following:

        Dim url As String = "http://localhost:4475/Service.svc/Entity/Add"
        Dim req As WebRequest = WebRequest.Create(url)
        req.Method = "POST"
        req.ContentType = "application/xml; charset=utf-8"
        req.Timeout = 30000
        req.Headers.Add("SOAPAction", url)
        Dim xEle As XElement
        xEle = <Entity xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
                   <Name>Entity1</Name>
               </Entity>
        Dim sXML As String = xEle.Value
        req.ContentLength = sXML.Length
        Dim sw As New System.IO.StreamWriter(req.GetRequestStream())
        sw.Write(sXML)
        sw.Close()
        Dim res As HttpWebResponse = req.GetResponse()

    The service contract is as follows:

        <OperationContract()> _
        <WebInvoke(Method:="PUT", UriTemplate:="Entity/Add")> _
        Function AddEntity(ByVal e1 As Entity)

    The data contract is as follows:

        <Serializable()> _
        <DataContract()> _
        Public Class Entity
            Private m_Name As String

            <DataMember()> _
            Public Property Name() As String
                Get
                    Return m_Name
                End Get
                Set(ByVal value As String)
                    m_Name = value
                End Set
            End Property
        End Class

    Thanks

    Read the article

  • Concise C# code for gathering several properties with a non-null value into a collection?

    - by stakx
    A fairly basic problem for a change. Given a class such as this:

        public class X
        {
            public T A;
            public T B;
            public T C;
            ...
            // (other fields, properties, and methods are not of interest here)
        }

    I am looking for a concise way to code a method that will return all A, B, C, ... that are not null in an enumerable collection. (Assume that declaring these fields as an array is not an option.)

        public IEnumerable<T> GetAllNonNullAs(this X x)
        {
            // ?
        }

    The obvious implementation of this method would be:

        public IEnumerable<T> GetAllNonNullAs(this X x)
        {
            var resultSet = new List<T>();
            if (x.A != null) resultSet.Add(x.A);
            if (x.B != null) resultSet.Add(x.B);
            if (x.C != null) resultSet.Add(x.C);
            ...
            return resultSet;
        }

    What's bothering me here in particular is that the code looks verbose and repetitive, and that I don't know the initial List capacity in advance. It's my hope that there is a more clever way, probably something involving the ?? operator? Any ideas?
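
    One concise variant is a minimal sketch (not from the question) that drops the fields into a temporary array and filters it with LINQ; here X<T> is a hypothetical generic stand-in for the class above, and T is assumed to be a reference type:

        using System.Collections.Generic;
        using System.Linq;

        public class X<T> where T : class
        {
            public T A;
            public T B;
            public T C;
        }

        public static class XExtensions
        {
            // Collects every non-null field without one 'if' per field;
            // the array is tiny and short-lived, so the unknown List capacity goes away.
            public static IEnumerable<T> GetAllNonNullAs<T>(this X<T> x) where T : class
            {
                return new[] { x.A, x.B, x.C }.Where(v => v != null);
            }
        }

    The ?? operator does not really help here, since it substitutes a replacement value rather than skipping one.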

    Read the article

  • Is this a bug in plist or Xcode?

    - by Pedro
    G'day all. If you create a date item in the plist editor of Xcode or Apple's standalone plist editor, you get something of the form

        <date>2010-05-29T10:30:00Z</date>

    which is a nice, well-formed ISO date at UTC (indicated by the "Z"). Because I'm in timezone UTC+10, when that's read into my app and then displayed I get 8:30 PM out - still good. However, if that is a time in my timezone it should be

        <date>2010-05-29T10:30:00+10</date>

    (replacing "Z" with my timezone offset). All of my attempts at reading such dates into my iPhone app have had the plist rejected as if it were malformed, and editing a plist with such a date in Apple's editors changed the "+10" to "Z" without adjusting the time. Do others think I'm correct in thinking this is a bug in either plist or Xcode? My feeling is that the implementation of ISO date & time in plist is incomplete. Cheers, Pedro :)

    Read the article

  • The new operator in C# isn't overriding base class member

    - by Dominic Zukiewicz
    I am confused as to why the new operator isn't working as I expected it to. Note: all classes below are defined in the same namespace and in the same file.

    This class allows you to prefix any content written to the console with some provided text:

        public class ConsoleWriter
        {
            private string prefix;

            public ConsoleWriter(string prefix)
            {
                this.prefix = prefix;
            }

            public void Write(string text)
            {
                Console.WriteLine(String.Concat(prefix, text));
            }
        }

    Here is a base class:

        public class BaseClass
        {
            protected static ConsoleWriter consoleWriter = new ConsoleWriter("");

            public static void Write(string text)
            {
                consoleWriter.Write(text);
            }
        }

    Here is an implemented class:

        public class NewClass : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");
        }

    Now here's the code to execute this:

        class Program
        {
            static void Main(string[] args)
            {
                BaseClass.Write("Hello World!");
                NewClass.Write("Hello World!");
                Console.Read();
            }
        }

    So I would expect the output to be:

        Hello World!
        > Hello World!

    But the output is:

        Hello World
        Hello World

    I do not understand why this is happening. Here is my thought process as to what is happening:

    1. The CLR calls the BaseClass.Write() method.
    2. The CLR initialises the BaseClass.consoleWriter member.
    3. The method is called and executed with the BaseClass.consoleWriter variable.

    Then:

    4. The CLR calls NewClass.Write().
    5. The CLR initialises the NewClass.consoleWriter object.
    6. The CLR sees that the implementation lies in BaseClass, but the method is inherited through.
    7. The CLR executes the method locally (in NewClass) using the NewClass.consoleWriter variable.

    I thought this is how the inheritance structure works? Please can someone help me understand why this is not working?
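
    For comparison, here is a minimal sketch (not part of the question) of the usual workaround. Static members are bound at compile time, so the inherited Write always reads BaseClass.consoleWriter; re-declaring Write in the derived class makes it pick up the shadowing field. The class name is made up; it reuses ConsoleWriter and BaseClass exactly as defined above:

        public class NewClassWithOwnWrite : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");

            // Hides BaseClass.Write; this copy binds to NewClassWithOwnWrite.consoleWriter.
            public new static void Write(string text)
            {
                consoleWriter.Write(text);
            }
        }

        // NewClassWithOwnWrite.Write("Hello World!") now prints "> Hello World!".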

    Read the article

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream.
            // The BinaryWriter is a little redundant in this
            // simplified example, but here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
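
    A minimal sketch (assuming the same Send method and stream field as above) of the usual belt-and-braces approach: serialize the whole message first, then guard the single write with a lock shared by every sender, so a PING cannot interleave with a half-written response:

        // Sketch only: sendLock would be one object shared by all threads that send
        // on this connection (the response path and the heartbeat path).
        private readonly object sendLock = new object();

        public void Send(Message message)
        {
            byte[] data;
            using (var tempStream = new MemoryStream())
            {
                message.WriteToStream(tempStream);
                data = tempStream.ToArray();
            }

            lock (sendLock)
            {
                // Exactly one complete message hits the NetworkStream at a time.
                stream.Write(data, 0, data.Length);
                stream.Flush();
            }
        }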

    Read the article

  • Values are not returning from MySQL database to my Java class

    - by sam
    Hi, this is my query:

        DELIMITER $$

        DROP PROCEDURE IF EXISTS discoverdb.getuser_info$$
        # MySQL returned an empty result set (i.e. zero rows).

        CREATE PROCEDURE discoverdb.getuser_info
        (
            IN name VARCHAR(100),
            IN pass VARCHAR(100)
        )
        BEGIN
            SELECT * FROM ad_user WHERE sLogin = name AND sPassHash = password(pass);
        END $$
        # MySQL returned an empty result set (i.e. zero rows).

        DELIMITER ;

    This is my calling method:

        public Authentication getAuthentication(String username, String password) {
            //TODO write your implementation code here:
            Authentication ack = new Authentication();
            try {
                String simpleProc = "{ call getuser_infosam(?,?)}";
                java.sql.CallableStatement cs = con.prepareCall(simpleProc);
                cs.setString(1, username);
                cs.setString(2, password);
                java.sql.ResultSet rs = cs.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getString("sLogin"));
                    System.out.println(rs.getString("sPassHash"));
                    System.out.println(rs.getString("sForename"));
                    System.out.println(rs.getString("sName"));
                    System.out.println(rs.getString("company"));
                    System.out.println(rs.getString("sEmail"));
                    rs.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
                System.out.print(e);
            }
            return ack;
        }

    Read the article

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

    1. Create a separate repository for the installer and related files
    2. Create a subrepository for the installer and related files
    3. Use a (yet to be identified) build dependency manager

    I am concerned about using a subrepository with Mercurial based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try any of them out. I thought I'd use TortoiseHg as a basis, and it does not have the TortoiseHg binaries in the repository, although it does have some binaries such as kdiff3.exe. Instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?

    Read the article

  • Safe and polymorphic toEnum

    - by jetxee
    I'd like to write a safe version of toEnum:

        safeToEnum :: (Enum t, Bounded t) => Int -> Maybe t

    A naive implementation:

        safeToEnum :: (Enum t, Bounded t) => Int -> Maybe t
        safeToEnum i = if (i >= fromEnum (minBound :: t)) && (i <= fromEnum (maxBound :: t))
                       then Just . toEnum $ i
                       else Nothing

        main = do
          print $ (safeToEnum 1 :: Maybe Bool)
          print $ (safeToEnum 2 :: Maybe Bool)

    And it doesn't work:

        safeToEnum.hs:3:21:
            Could not deduce (Bounded t1) from the context ()
              arising from a use of `minBound' at safeToEnum.hs:3:21-28
            Possible fix:
              add (Bounded t1) to the context of an expression type signature
            In the first argument of `fromEnum', namely `(minBound :: t)'
            In the second argument of `(>=)', namely `fromEnum (minBound :: t)'
            In the first argument of `(&&)', namely `(i >= fromEnum (minBound :: t))'

        safeToEnum.hs:3:56:
            Could not deduce (Bounded t1) from the context ()
              arising from a use of `maxBound' at safeToEnum.hs:3:56-63
            Possible fix:
              add (Bounded t1) to the context of an expression type signature
            In the first argument of `fromEnum', namely `(maxBound :: t)'
            In the second argument of `(<=)', namely `fromEnum (maxBound :: t)'
            In the second argument of `(&&)', namely `(i <= fromEnum (maxBound :: t))'

    As far as I understand the message, the compiler does not recognize that minBound and maxBound should produce exactly the same type as in the result type of safeToEnum, in spite of the explicit type declaration (:: t). Any idea how to fix it?

    Read the article

  • Rails: generating URLs for actions in JSON response

    - by Chris Butler
    In a view I am generating an HTML canvas of figures based on model data in an app. In the view I am preloading JSON model data in the page like this (to avoid an initial request back):

        <script type="text/javascript" charset="utf-8">
          <% ActiveRecord::Base.include_root_in_json = false -%>
          var objects = <%= @objects.to_json(:include => :other_objects) %>;
          ...

    Based on mouse (or touch) interaction I want to redirect to other parts of my app that are model specific (such as view, edit, delete, etc.). Rather than hard-code the URLs in my JavaScript, I want to generate them from Rails (which means it always adapts to the latest routes). It seems like I have one of three options:

    1. Add an empty attr to the model that the controller fills in with the appropriate URL (we don't want to use routes in the model) before the JSON is generated
    2. Generate custom JSON where I add the different URLs manually
    3. Generate the URL as a template from Rails and replace the IDs in JavaScript as appropriate

    I am starting to lean towards #1 for ease of implementation and maintainability. Are there any other options that I am missing? Is #1 not the best? Thanks! Chris

    Read the article

  • Easiest way to plot values as symbols in scatter plot?

    - by AllenH
    In an answer to an earlier question of mine regarding fixing the colorspace for scatter images of 4D data, Tom10 suggested plotting values as symbols in order to double-check my data. An excellent idea. I've run some similar demos in the past, but I can't for the life of me find the demo I remember being quite simple. So, what's the easiest way to plot numerical values as the symbol in a scatter plot instead of 'o', for example? Tom10 suggested plt.text(x, y, value), and that is the implementation used in a number of examples. I however wonder if there's an easy way to evaluate "value" from my array of numbers. Can one simply say str(valuearray)? Do you need a loop to evaluate the values for plotting, as suggested in the matplotlib demo section for 3D text scatter plots? Their example produces a 3D scatter whose markers are text strings (the demo image isn't reproduced here). However, they're doing something fairly complex in evaluating the locations as well as changing text direction based on data. So, is there a cute way to plot x, y, C data (where C is a value often taken as the color in the plot data, but instead I wish to make it the symbol)? Again, I think we have a fair answer to this; I just wonder if there's an easier way?

    Read the article

  • Creating a matrix with probabilities

    - by John Chan
    Hi all, I want to generate an NxN matrix to test some code that I have, where each row contains floats as the elements and has to add up to 1 (i.e. a row with a set of probabilities). Where it gets tricky is that I want to make sure that, randomly, some of the elements are 0 (in fact most of the elements should be 0 except for some random ones that hold the probabilities). I need the probabilities to be 1/m, where m is the number of elements that are not 0 within a single row. I tried to think of ways to output this, but essentially I would need it stored in a C++ array. So even if I output to a file I would still have the issue of not having it in an array as I need it. At the end of it all I need that array because I want to generate a Market Matrix file. I found an implementation in C++ to take an array and convert it to the Market Matrix file, so this is what I am basing my findings on. My input for the rest of the code takes in this Market Matrix file, so I need that to be the primary form of output. The language does not matter, I just want to generate the file at the end (I found a way with mmwrite and mmread in Python as well). Please help, I am stuck and not really sure how to implement this.
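
    Since the poster says the language does not matter, here is a minimal sketch (in C#, Market Matrix export left out, all names made up) of the row construction itself: pick a small random number m of columns per row and give each of them probability 1/m, so every row sums to 1 and everything else stays 0:

        using System;

        class SparseStochasticMatrix
        {
            // Builds an NxN matrix where each row has m randomly placed non-zero
            // entries of value 1/m (m kept small so most elements stay 0).
            static double[,] Build(int n, Random rng)
            {
                var matrix = new double[n, n];
                for (int row = 0; row < n; row++)
                {
                    int m = rng.Next(1, Math.Max(2, n / 10 + 1));   // few non-zeros per row

                    // pick m distinct columns via a partial Fisher-Yates shuffle
                    int[] cols = new int[n];
                    for (int i = 0; i < n; i++) cols[i] = i;
                    for (int i = 0; i < m; i++)
                    {
                        int j = rng.Next(i, n);
                        (cols[i], cols[j]) = (cols[j], cols[i]);
                        matrix[row, cols[i]] = 1.0 / m;
                    }
                }
                return matrix;
            }

            static void Main()
            {
                var rng = new Random(42);
                double[,] matrix = Build(5, rng);
                for (int r = 0; r < 5; r++)
                {
                    for (int c = 0; c < 5; c++) Console.Write($"{matrix[r, c]:0.###} ");
                    Console.WriteLine();   // each printed row sums to 1
                }
            }
        }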

    Read the article

  • Java 6 web services: share domain-specific classes between server and client

    - by user173446
    Hi all, context: consider the Engine class defined below being a parameter of some web service method. As we have both server and client in Java, we may have some benefits (???) in sharing the Engine class between server and client (i.e. we may put it in a common jar file to be added to both the client and server classpath). Some benefits would be:

    1. we keep specific operations like 'brushEngine' in the same place
    2. the build is faster, as in our case we do not need to generate Java code for the client classes but can use them from the server build
    3. if we later change the server implementation of 'brushEngine', this is reflected automatically in the client

    Questions:

    1. How can I share the Engine class detailed below using Java 6 tools (i.e. wsimport, wsgen, etc.)?
    2. Are there other tools for Java that can achieve this sharing?
    3. Is sharing a case that Java 6 web services support is missing?
    4. Can this case be reduced to other web service usage patterns?

    Thanks.

    Code:

        public class Engine {
            private String engineData;

            public String getData() {
                return engineData;
            }

            public void setData(String value) {
                this.engineData = value;
            }

            public void brushEngine() {
                engineData = "BrushedEngine" + engineData;
            }
        }

    Read the article

  • Cross-Platform Camera API

    - by Karim
    Hi, I'm now building a video transforming filter that has to transform video frames in real time. One of the key requirements of the filter is high performance, to minimize the number of dropped frames during the transform. Another requirement that is of lower priority, but also nice to have, is to make it cross-platform (both PCs and mobile devices). The application is built in C++. Now my question is: is there any API that is more portable and has similar or better performance characteristics than DirectShow? DirectShow's portability is limited to Windows-based devices (PCs and Windows Mobile/CE platforms). Also, I've noticed that, for example, using HTC's custom camera API gives far better performance than what DirectShow offers. If you want to check this, try to build a filter in DirectShow that multiplies each color by 2 and renders that in real time from the camera on the screen, then do the same with HTC's API; there is almost a 4-5x performance boost with the vendor-specific API. So it would be very nice if the library used the device-specific implementation of the driver, as performance is critical when doing these transforms on a mobile device (which runs at about ~500 MHz).

    Read the article

  • C# Fun with Generics - Mutual Dependencies

    - by Kenneth Cochran
    As an experiment I'm trying to write a generic MVP framework. I started with:

        public interface IPresenter<TView> where TView : IView<IPresenter<...
        {
            TView View { get; set; }
        }

        public interface IView<TPresenter> where TPresenter : IPresenter<IView<...
        {
            TPresenter Presenter { get; set; }
        }

    Obviously this can't work, because the types of TView and TPresenter can't be resolved; you'd be writing Type<Type<... forever. So my next attempt looked like this:

        public interface IView<T> where T : IPresenter { ... }

        public interface IView : IView<IPresenter> { }

        public interface IPresenter<TView> where TView : IView { ... }

        public interface IPresenter : IPresenter<IView> { ... }

    This actually compiles, and you can even inherit from these interfaces like so:

        public class MyView : IView, IView<MyPresenter> { ... }

        public class MyPresenter : IPresenter, IPresenter<MyView> { ... }

    The problem is that in the class definition you have to define any members declared in the generic type twice. Not ideal, but it still compiles. The problems start creeping up when you actually try to access the members of a Presenter from a View or vice versa: you get an "ambiguous reference" when you try to compile. Is there any way to avoid this double implementation of a member when you inherit from both interfaces? Is it even possible to resolve two mutually dependent generic types at compile time?
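
    One commonly used way to tie the two types together (a sketch, not from the question) is to give both interfaces both type parameters and constrain them against each other, so each side resolves at compile time without the nested Type<Type<... problem:

        // Self-referencing generic constraints: each concrete pair names itself once.
        public interface IView<TView, TPresenter>
            where TView : IView<TView, TPresenter>
            where TPresenter : IPresenter<TView, TPresenter>
        {
            TPresenter Presenter { get; set; }
        }

        public interface IPresenter<TView, TPresenter>
            where TView : IView<TView, TPresenter>
            where TPresenter : IPresenter<TView, TPresenter>
        {
            TView View { get; set; }
        }

        public class MyView : IView<MyView, MyPresenter>
        {
            public MyPresenter Presenter { get; set; }
        }

        public class MyPresenter : IPresenter<MyView, MyPresenter>
        {
            public MyView View { get; set; }
        }

    Each member is declared exactly once per class, and View.Presenter / Presenter.View are unambiguous, at the cost of the slightly noisy two-parameter signatures.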

    Read the article

  • iPhone: Using dispatch_after to mimic NSTimer

    - by Joseph Tura
    Don't know a whole lot about blocks. How would you go about mimicking a repeating NSTimer with dispatch_after? My problem is that I want to "pause" a timer when the app moves to the background, but subclassing NSTimer does not seem to work. I tried something which seems to work. I cannot judge its performance implications or whether it could be greatly optimized. Any input is welcome.

        #import "TimerWithPause.h"

        @implementation TimerWithPause

        @synthesize timeInterval;
        @synthesize userInfo;
        @synthesize invalid;
        @synthesize invocation;

        + (TimerWithPause *)scheduledTimerWithTimeInterval:(NSTimeInterval)aTimeInterval target:(id)aTarget selector:(SEL)aSelector userInfo:(id)aUserInfo repeats:(BOOL)aTimerRepeats
        {
            TimerWithPause *timer = [[[TimerWithPause alloc] init] autorelease];
            timer.timeInterval = aTimeInterval;

            NSMethodSignature *signature = [[aTarget class] instanceMethodSignatureForSelector:aSelector];
            NSInvocation *aInvocation = [NSInvocation invocationWithMethodSignature:signature];
            [aInvocation setSelector:aSelector];
            [aInvocation setTarget:aTarget];
            [aInvocation setArgument:&timer atIndex:2];
            timer.invocation = aInvocation;
            timer.userInfo = aUserInfo;

            if (!aTimerRepeats) {
                timer.invalid = YES;
            }

            [timer fireAfterDelay];
            return timer;
        }

        - (void)fireAfterDelay
        {
            dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, self.timeInterval * NSEC_PER_SEC);
            dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            dispatch_after(delay, queue, ^{
                [invocation performSelectorOnMainThread:@selector(invoke) withObject:nil waitUntilDone:NO];
                if (!invalid) {
                    [self fireAfterDelay];
                }
            });
        }

        - (void)invalidate
        {
            invalid = YES;
            [invocation release];
            invocation = nil;
            [userInfo release];
            userInfo = nil;
        }

        - (void)dealloc
        {
            [self invalidate];
            [super dealloc];
        }

        @end

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach:

    1. there are literally hundreds of classes - the code file is half a meg in size
    2. duplicate classes (e.g. Type, Type1 - which both represent the same type)
    3. there are classes declared as inheriting from a base class, but that base class is not generated/defined

    I understand that there are limitations to the types of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold:

    1. Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success with generating data contracts from OAGIS or HR-XML schema?
    2. Given the above issues, are there better approaches to this task, avoiding generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML?

    Thank you very much! -Sasha Borodin DFWHC.org

    Read the article

  • Strange inheritance behaviour in Objective-C

    - by Smikey
    Hi all, I've created a class called SelectableObject like so:

        #define kNumberKey @"Object"
        #define kNameKey @"Name"
        #define kThumbStringKey @"Thumb"
        #define kMainStringKey @"Main"

        #import <Foundation/Foundation.h>

        @interface SelectableObject : NSObject <NSCoding> {
            int number;
            NSString *name;
            NSString *thumbString;
            NSString *mainString;
        }

        @property (nonatomic, assign) int number;
        @property (nonatomic, retain) NSString *name;
        @property (nonatomic, retain) NSString *thumbString;
        @property (nonatomic, retain) NSString *mainString;

        @end

    So far so good, and the implementation section conforms to the NSCoding protocol as expected. HOWEVER, when I add a new class which inherits from this class, i.e.

        #import <Foundation/Foundation.h>
        #import "SelectableObject.h"

        @interface Pet : SelectableObject <NSCoding> {
        }

        @end

    I suddenly get the following compiler error in the SelectableObject class!

        SelectableObject.h:16: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'interface'

    This makes no sense to me. Why is the interface declaration for the SelectableObject class suddenly broken? I also import it in a couple of other classes I've written... Any help would be very much appreciated. Thanks! Michael

    Read the article
