Search Results

Search found 8397 results on 336 pages for 'implementation'.

  • Linq2Sql: query - subquery optimisation

    - by Budda
    I have the following query:

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             where sector.Type == typeValue
             select new InfrStadium(sector.TeamId)
            ).ToList();

    and the InfrStadium class constructor:

        private InfrStadium(int teamId)
        {
            IList<Sector> teamSectors =
                (from sector in DbContext.sectors
                 where sector.TeamId == teamId
                 select sector).ToList();
            // ... work with data
        }

    The current implementation performs 1+n queries, where n is the number of records fetched by the first query. I want to optimize that.

    I would love to do this using the 'group' operator, in a way like this:

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             group sector by sector.TeamId into team_sectors
             select new InfrStadium(team_sectors.Key, team_sectors)
            ).ToList();

    with the appropriate constructor:

        private InfrStadium(int iTeamId, IEnumerable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            // ... work with data
        }

    But the attempt to launch the query causes the following error:

        Expression of type 'System.Int32' cannot be used for constructor parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]'

    Question 1: Could you please explain what is wrong here? I don't understand why 'team_sectors' is treated as 'System.Int32'.

    I've tried to change the query a little (replacing IEnumerable with IQueryable):

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             group sector by sector.TeamId into team_sectors
             select new InfrStadium(team_sectors.Key, team_sectors.AsQueryable())
            ).ToList();

    with the appropriate constructor:

        private InfrStadium(int iTeamId, IQueryable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            // ... work with data
        }

    In this case I received another, similar error:

        Expression of type 'System.Int32' cannot be used for parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' of method 'System.Linq.IQueryable`1[InfrStadiumSector] AsQueryable[InfrStadiumSector]'

    Question 2: Actually, the same question: I can't understand at all what is going on here...

    P.S. I have another idea for optimizing the query (described here: Linq2Sql: query optimisation), but I would love to find a solution that issues a single request to the DB.
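
    An editorial aside, hedged: one way to collapse this into a single round-trip is to run one query and group client-side with ToLookup. This is a sketch only, reusing the asker's names (DbContext.sectors, InfrStadium); it assumes the constructor that accepts the sector sequence:

        // Sketch, not the asker's code: one SQL query, grouping done in memory.
        // LINQ to SQL cannot translate a constructor call that receives a grouping,
        // which is consistent with the errors quoted above.
        var sectorsByTeam = DbContext.sectors
            .Where(s => s.Type == typeValue)        // single round-trip to the DB
            .ToList()                               // materialize the rows here
            .ToLookup(s => s.TeamId);               // group on the client

        IList<InfrStadium> stadiums = sectorsByTeam
            .Select(g => new InfrStadium(g.Key, g)) // IGrouping<,> is an IEnumerable<>
            .ToList();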

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The protocol is pretty simple: some basic JSON-formatted messages which are transmitted with some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed.

    The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room.

    In the past I've been using mostly C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects, and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble.

    Being a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher level" languages for things like daemons?

    Thanks in advance!

    Update 1: For me, using C++ usually is more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

  • What are major differences between C# and Java?

    - by enba
    I just want to clarify one thing: this is not a question about which one is better; that part I leave to someone else to discuss. I don't care about it. I was asked this question in my job interview and I thought it might be useful to learn a bit more.

    These are the ones I could come up with:

    - Java is "platform independent". Well, nowadays you could say there is the Mono project, so C# could be considered too, but I believe that is a bit of an exaggeration. Why? Well, when a new release of Java is done, it is simultaneously available on all the platforms it supports; on the other hand, how many features of C# 3.0 are still missing in the Mono implementation? Or is it really CLR vs. JRE that we should compare here?
    - Java doesn't support events and delegates. As far as I know.
    - In Java all methods are virtual.
    - Development tools: I believe there isn't such a tool yet as Visual Studio. Especially if you've worked with the team editions, you'll know what I mean.

    Please add others you think are relevant.

    Update: It just popped into my mind: Java doesn't have something like custom attributes on classes, methods, etc. Or does it?
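
    An editorial illustration of the events/delegates point: a minimal hedged C# sketch (type names invented for the example) of what Java traditionally models with listener interfaces instead:

        using System;

        public class Download
        {
            // A first-class event built on a delegate type; subscription and
            // invocation are language features rather than an interface pattern.
            public event EventHandler Completed;

            public void Finish()
            {
                EventHandler handler = Completed;   // copy to avoid a race
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }
        }

        // Usage sketch:
        //   var d = new Download();
        //   d.Completed += (sender, args) => Console.WriteLine("done");
        //   d.Finish();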

  • XSD: Different sub-elements depending on attribute/element value

    - by AndiDog
    Another XSD question: how can I achieve that the following XML elements are both valid?

        <some-element>
          <type>1</type>
          <a>...</a>
        </some-element>

        <some-element>
          <type>2</type>
          <b>...</b>
        </some-element>

    The sub-elements (either <a> or <b>) should depend on the content of <type> (which could also be an attribute). It would be so simple in RelaxNG - but RelaxNG doesn't support key integrity :( Is there a way to implement this in XSD?

    Note: XML Schema version 1.1 supports <xs:alternative>, which might be a solution, but afaik no reference implementation (e.g. libxml2) supports this yet. So I'm searching for workarounds. The only way I've come up with is:

        <type>1</type>
        <some-element type="1">
          <!-- simple <xs:choice> between <a> and <b> goes here -->
          <a>...</a>
        </some-element>
        <!-- and now create a keyref between <type> and @type -->

  • Model class for NSDictionary information with Lazy Loading

    - by samfu_1
    My application utilizes approx. 50+ .plists that are used as NSDictionaries. Several of my view controllers need access to the properties of the dictionaries, so instead of writing duplicate code to retrieve the .plist, convert the values to a dictionary, etc., each time I need the info, I thought a model class to hold the data and supply information would be appropriate. The application isn't very large, but it does handle a good deal of data.

    I'm not as skilled in writing model classes that conform to the MVC paradigm, and I'm looking for some strategies for this implementation that also support lazy loading. This model class should serve to supply data to any view controller that needs it and perform operations on the data (such as adding entries to dictionaries) when requested by the controller.

    Functions currently planned:

    - returning the count of any dictionary
    - adding one or more dictionaries together

    Currently, I have this method for supporting the count lookup for any dictionary. Would this be an example of lazy loading?

        - (NSInteger)countForDictionary:(NSString *)nameOfDictionary
        {
            NSBundle *bundle = [NSBundle mainBundle];
            NSString *plistPath = [bundle pathForResource:nameOfDictionary ofType:@"plist"];

            // load plist into dictionary
            NSMutableDictionary *dictionary =
                [[NSMutableDictionary alloc] initWithContentsOfFile:plistPath];

            NSInteger count = [dictionary count];
            [dictionary release];
            return count;
        }

  • UIImagePickerController crashes on rapid scrolling, slower than photos app

    - by vvanhee
    Most of the time, my image picker works perfectly (iOS 4.2.1). However, if I scroll very rapidly up and down about 4-6 times through my camera roll of about 300 photos, I get a crash. This never happens with the stock Photos app on the same iPhone 3GS. Also, I'm noticing that the stock Photos app scrolls much more smoothly than my image picker.

    Has anyone else noticed this behavior? I'd be interested if others could attempt this in their own apps and see if they crash. I don't think it's related to other objects hogging memory on my iPhone, because it's a simple app and this happens right after I start the app. It also doesn't seem to be related to messages sent to released objects or over-releasing of objects in viewDidUnload, based on my crash logs and the fact that the simulator responds well to simulated memory warnings. I think it might be a bug in the internal implementation of UIImagePickerController...

    This is how I start the picker. I've done this multiple ways (including setting a retain property for the UIImagePickerController in my header and releasing on dealloc). This seems to be the best way (crashes least):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
        picker.allowsEditing = YES;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    This is the crashed thread (I get various exception types):

        Exception Type:  SIGSEGV
        Exception Codes: SEGV_ACCERR at 0xfffffffff4faafa4
        Crashed Thread:  8
        ...
        Thread 8 Crashed:
        0   CoreFoundation      0x000494ea -[__NSArrayM replaceObjectAtIndex:withObject:] + 98
        1   PhotoLibrary        0x00008e0f -[PLImageTable _segmentAtIndex:] + 527
        2   PhotoLibrary        0x00008a21 -[PLImageTable _mappedImageDataAtIndex:] + 221
        3   PhotoLibrary        0x0000893f -[PLImageTable dataForEntryAtIndex:] + 15
        4   PhotoLibrary        0x000087e7 PLThumbnailManagerImageDataAtIndex + 35
        5   PhotoLibrary        0x00008413 -[PLThumbnailManager _dataForPhoto:format:width:height:bytesPerRow:dataWidth:dataHeight:imageDataOffset:imageDataFormat:preheat:] + 299
        6   PhotoLibrary        0x000b6c13 __-[PLThumbnailManager preheatImageDataForImages:withFormat:]_block_invoke_1 + 159
        7   libSystem.B.dylib   0x000d6680 _dispatch_call_block_and_release + 20
        8   libSystem.B.dylib   0x000d6ba0 _dispatch_worker_thread2 + 128
        9   libSystem.B.dylib   0x0007b251 _pthread_wqthread + 265

  • Doctrine - get the offset of an object in a collection (implementing an infinite scroll)

    - by dan
    I am using Doctrine and trying to implement an infinite scroll on a collection of notes displayed in the user's browser. The application is very dynamic, so when the user submits a new note, the note is added to the top of the collection straight away, besides being sent to (and stored on) the server. This is why I can't use a traditional pagination method, where you just send the page number to the server and the server figures out the offset and the number of results from that.

    To give you an example of what I mean, imagine there are 20 notes displayed, then the user adds 2 more notes, so there are 22 notes displayed. If I simply request "page 2", the first 2 items of that page will be the last two items of the page currently displayed to the user. This is why I am after a more sophisticated method, which is the one I am about to explain.

    Please consider the following code, which is part of the server code serving an AJAX request for more notes:

        // $lastNoteDisplayedId is coming from the AJAX request
        $lastNoteDisplayed = $repository->findBy($lastNoteDisplayedId);
        $allNotes = $repository->findBy($filter, array('createdAt' => 'desc'));
        $offset = getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId);

        // retrieve the page to send back so that it can be appended to the listing
        $notesPerPage = 30;
        $notes = $repository->findBy(
            array(),
            array('createdAt' => 'desc'),
            $notesPerPage,
            $offset
        );

        $response = json_encode($notes);
        return $response;

    Basically I would need to write the method getLastNoteDisplayedOffset, which, given the whole set of notes and one particular note, gives me its offset, so that I can use it for the pagination in the previous Doctrine statement. I know a possible implementation would be:

        function getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId)
        {
            $i = 0;
            foreach ($allNotes as $note) {
                if ($note->getId() === $lastNoteDisplayedId) {
                    break;
                }
                $i++;
            }
            return $i;
        }

    I would prefer not to loop through all notes, because performance is an important factor. I was wondering if Doctrine has a method for this itself, or if you can suggest a different approach.

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The Java application has several compiled shared libs that are accessed via JNI.

    The Via C3 processor is supposed to be i686-compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the lack of some specific and optional instructions of the i686 set in the C3 processor. These instructions, missing in the C3's implementation of the i686 set, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in this case was to go with an i386-compiled version of the Ubuntu distribution.

    The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (such as the kernel one) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that).

    My question is: is there any tool or way to find out what specific architecture a binary file (executable or library) is aimed at, provided that "file" does not give that much information?

  • When is a method eligible to be inlined by the CLR?

    - by Ani
    I've observed a lot of "stack-introspective" code in applications, which often implicitly relies on its containing methods not being inlined for correctness. Such methods commonly involve calls to:

    - MethodBase.GetCurrentMethod
    - Assembly.GetCallingAssembly
    - Assembly.GetExecutingAssembly

    Now, I find the information surrounding these methods very confusing. I've heard that the run-time will not inline a method that calls GetCurrentMethod, but I can't find any documentation to that effect. I've seen posts on StackOverflow on several occasions, such as this one, indicating the CLR does not inline cross-assembly calls, but the GetCallingAssembly documentation strongly indicates otherwise. There's also the much-maligned [MethodImpl(MethodImplOptions.NoInlining)], but I am unsure whether the CLR considers this a "request" or a "command."

    Note that I am asking about inlining eligibility from the standpoint of contract, not about when current implementations of the JITter decline to consider methods because of implementation difficulties, or about when the JITter finally ends up choosing to inline an eligible method after assessing the trade-offs. I have read this and this, but they seem to be more focused on the last two points (there are passing mentions of MethodImplOptions.NoInlining and "exotic IL instructions", but these seem to be presented as heuristics rather than as obligations).

    When is the CLR allowed to inline?
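
    For reference, a minimal sketch of the attribute in question; whether the JIT treats it as contract or heuristic is exactly what is being asked above, so this only shows the usage:

        using System.Reflection;
        using System.Runtime.CompilerServices;

        static class StackIntrospection
        {
            // If NoInlining is honored as a command, GetCurrentMethod here reliably
            // reports this method rather than whatever caller it might have been
            // inlined into.
            [MethodImpl(MethodImplOptions.NoInlining)]
            public static string CurrentMethodName()
            {
                return MethodBase.GetCurrentMethod().Name;
            }
        }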

  • Behavior difference between UIView.subviews and [NSView subviews]

    - by zpasternack
    I have a piece of code in an iPhone app which removes all subviews from a UIView subclass. It looks like this:

        NSArray *subViews = self.subviews;
        for (UIView *aView in subViews) {
            [aView removeFromSuperview];
        }

    This works fine. In fact, I never really gave it much thought until I tried nearly the same thing in a Mac OS X app (from an NSView subclass):

        NSArray *subViews = [self subviews];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }

    That totally doesn't work. Specifically, at runtime, I get this:

        *** Collection <NSCFArray: 0x1005208a0> was mutated while being enumerated.

    I ended up doing it like so:

        NSArray *subViews = [[self subviews] copy];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }
        [subViews release];

    That's fine. What's bugging me, though, is: why does it work on the iPhone? subviews is a copy property:

        @property(nonatomic, readonly, copy) NSArray *subviews;

    My first thought was, maybe @synthesize'd getters return a copy when the copy attribute is specified. The doc is clear on the semantics of copy for setters, but doesn't appear to say either way for getters (or at least, it's not apparent to me). And actually, doing a few tests of my own, this clearly does not seem to be the case. Which is good; I think returning a copy would be problematic, for a few reasons.

    So the question is: how does the above code work on the iPhone? NSView is clearly returning a pointer to the actual array of subviews, and perhaps UIView isn't. Perhaps it's simply an implementation detail of UIView, and I shouldn't get worked up about it. Can anyone offer any insight?

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java Application Client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well.

    As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.)

    It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense: in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable.

    So fine, this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager - the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc. - wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree.

    Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?

  • jQuery / Apache - IE problem

    - by Soldierflup
    I have a button tag on my page with a value:

        <button class='btn' value='value'>show value</button>

    and this jQuery code:

        $('.btn').click(function() {
            var w = 'value = ' + $(this).val() + ' / text = ' + $(this).html();
            alert(w);
        });

    In FF there is no problem; the result is correct (it displays: value = value / text = show value). The problem comes with IE8, which displays different results on my testing server and the production server. The testing server is my local machine with a standard XAMPP installation. The production server is a Linux server with Apache, PHP and MySQL. The result from the testing server is OK (it displays the same as FF); the result from the production server is not good (it displays: value = show value / text = show value).

    Does anyone have an idea whether it is Apache that causes the error? I know there are some issues with the use of val(), because IE considers it an attribute and not a value. The problem is that changing the jQuery from val() to attr('value') is quite a lot of work (this implementation is already on a lot of pages), and I think it could be much easier to change something on the web server.

  • Optimizing this "Boundarize" method for Numerics in Ruby

    - by mstksg
    I'm extending Numerics with a method I call "boundarize", for lack of a better name; I'm sure there are actually real names for this. Its basic purpose is to reset a given point to be within a boundary - that is, "wrapping" a point around the boundary. If the area is between 0 and 100 and the point goes to -1, then -1.boundarize(0,100) = 99 (going one too far to the negative "wraps" the point around to one from the max), and 102.boundarize(0,100) = 2.

    It's a very simple function to implement: when the number is below the minimum, simply add (max-min) until it's in the boundary; when the number is above the maximum, simply subtract (max-min) until it's in the boundary. One thing I also need to account for is that there are cases where I don't want to include the minimum in the range, and cases where I don't want to include the maximum in the range. This is specified as an argument.

    However, I fear that my current implementation is horribly, terribly, grossly inefficient. And because this has to be re-run every time something moves on the screen, it is one of the bottlenecks of my application. Anyone have any ideas?

        module Boundarizer
          def boundarize min = 0, max = 1, allow_min = true, allow_max = false
            raise "Improper boundaries #{min}/#{max}" if min >= max
            new_num = self

            if allow_min
              while new_num < min
                new_num += (max - min)
              end
            else
              while new_num <= min
                new_num += (max - min)
              end
            end

            if allow_max
              while new_num > max
                new_num -= (max - min)
              end
            else
              while new_num >= max
                new_num -= (max - min)
              end
            end

            return new_num
          end
        end

        class Numeric
          include Boundarizer
        end
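
    An editorial note, hedged: the while-loops can be replaced by a constant-time modulo computation. A sketch of the idea (shown in C# to match the other sketches in this digest; the open/closed boundary flags from the question are left out for brevity):

        // Wraps x into the half-open interval [min, max) in O(1), matching the
        // question's examples: Boundarize(-1, 0, 100) == 99 and
        // Boundarize(102, 0, 100) == 2.
        static double Boundarize(double x, double min, double max)
        {
            double range = max - min;
            double r = (x - min) % range;   // C#'s % can return a negative remainder
            if (r < 0) r += range;          // normalize into [0, range)
            return min + r;
        }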

  • Is it possible to write a program which prints its own source code utilizing a "sequence-generating-function"?

    - by guest
    Is it possible to write a program which prints its own source code utilizing a "sequence-generating function"? What I call a sequence-generating function is simply a function which returns a value out of a specific interval (i.e. the printable ASCII characters, 32-126). The point now is that this generated sequence should be the program's own source code.

    As you see, implementing a function which returns an arbitrary sequence is really trivial, but since the returned sequence must contain the implementation of the function itself, it is a highly non-trivial task. This is how such a program (and its corresponding output) could look:

        #include <stdio.h>

        int fun(int x)
        {
            ins1;
            ins2;
            ins3;
            .
            .
            .
            return y;
        }

        int main(void)
        {
            int i;
            for (i = 0; i < size of the program; i++) {
                printf("%c", fun(i));
            }
            return 0;
        }

    I personally think it is not possible, but since I don't know very much about the underlying matter, I posted my thoughts here. I'm really looking forward to hearing some opinions!

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns.

    The server contains a number of entities which contain both sensitive and public information. Think, for example, that the entities are persons; assume that social security number and name are sensitive and age is publicly viewable.

    When starting the client, the user is presented with a number of entities, without disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server; given that the authentication is successful, the user is granted access to the sensitive information.

    The client hosts a domain model, and I was thinking of implementing this as some kind of "lazy loading", making the first request instantiate the entities and later refreshing them with sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:

        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }

    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:

        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }

    Some concerns:

    - What if the user logs in and then out? The sensitive data has been loaded, but must be undisclosed once again.
    - One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. repository implementations).
    - As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.

    Any insights or discussion are appreciated!
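
    A hedged editorial sketch: the "value object" variant could be factored into a small reusable holder so the log-in/log-out concern above is handled in one place. All names here are hypothetical, not from the question:

        // Sketch only: a holder whose value can be concealed again on logout.
        public class Disclosable<T>
        {
            private T value;
            private bool disclosed;

            public void Disclose(T newValue)   // called after authentication
            {
                value = newValue;
                disclosed = true;
            }

            public void Conceal()              // called on logout
            {
                value = default(T);
                disclosed = false;
            }

            public T GetOrDefault(T undisclosedMarker)
            {
                return disclosed ? value : undisclosedMarker;
            }
        }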

  • Can we have a component-scoped bean in a JSF2 composite component?

    - by Pradyumna
    Hi, I was wondering how I could create "component-scoped" beans, or so to say, "local variables inside a composite component" that are private to the instance of the composite component and live as long as that instance lives. Below are more details, explained with an example.

    Suppose there is a "calculator" component - something that allows users to type in a mathematical expression, and evaluates its value. Optionally, it also plots the associated function. I can make a composite component that has:

    - a text box for accepting the math expression
    - two buttons called "Evaluate" and "Plot"
    - another nested component that plots the function

    It is evidently a self-contained piece of functionality, so somebody who wants to use it may just say:

        <math:expressionEvaluator />

    But obviously the implementation would need a Java object - something that evaluates the expression, something that computes the plot points, etc. - and I imagine it can be a bean, scoped just for this instance of this component, not a view-scoped or request-scoped bean that is shared across all instances of the component.

    How do I create such a bean? Is that even possible with composite components?

  • About Interview structure for test automation lab developers

    - by Ikaso
    Hi, I am interviewing new applicants for a team that is doing test automation on our company product(s). The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation is done on both the client side (user mode and kernel mode) and the server side (IIS, Windows services, backend). We are doing mainly integration tests and black-box tests.

    I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview done before mine with 2 programming questions. My programming question is rather simple (for example: reversing a singly-linked list). My coworkers think that my questions will not find good developers, since my questions are rather simple and well known, but so far most of the applicants fail them.

    My questions are:

    1. Should I change the structure of my interview for this kind of job?
    2. What questions do you ask to figure out whether the applicant is test-oriented? (Maybe I should provide a buggy implementation of a problem, let them find the bugs, and then ask them what tests they would have done.)

    Regards,

  • Problem with kCFSocketReadCallBack

    - by zp26
    Hello. I have a problem with my program. I created a socket with kCFSocketReadCallBack. My intention was to have AcceptCallback called only when a string arrives on the socket. Instead, my program does not just accept the connection: it always goes into startReceive, stops there, and sometimes the program crashes. Can anybody help? Thanks.

        readSocket = CFSocketCreateWithNative(
            NULL,
            fd,
            kCFSocketReadCallBack,
            AcceptCallback,
            &context
        );

        // Called by CFSocket when someone connects to our listening socket.
        // This implementation just bounces the request up to Objective-C.
        static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type,
                                   CFDataRef address, const void *data, void *info)
        {
            ServerVistaController *obj;

            #pragma unused(address)
            // assert(address == NULL);
            assert(data != NULL);

            obj = (ServerVistaController *) info;
            assert(obj != nil);

            #pragma unused(s)
            assert(s == obj->listeningSocket);

            if (type & kCFSocketAcceptCallBack) {
                [obj acceptConnection:*(int *)data];
            }
            if (type & kCFSocketAcceptCallBack) {
                [obj startReceive:*(int *)data];
            }
        }

        - (void)startReceive:(int)fd
        {
            CFReadStreamRef readStream = NULL;
            CFIndex bytes;
            UInt8 buffer[MAXLENGTH];

            CFStreamCreatePairWithSocket(kCFAllocatorDefault, fd, &readStream, NULL);
            if (!readStream) {
                close(fd);
                [self updateLabel:@"No readStream"];
            }

            CFReadStreamOpen(readStream);
            [self updateLabel:@"OpenStream"];

            bytes = CFReadStreamRead(readStream, buffer, sizeof(buffer));
            if (bytes < 0) {
                [self updateLabel:(NSString *)buffer];
                close(fd);
            }
            CFReadStreamClose(readStream);
        }

  • Strange Effect with Overridden Properties and Reflection

    - by naacal
    I've come across a strange behaviour in .NET/Reflection and cannot find any solution/explanation for it:

        class A
        {
            public virtual string TestString { get; set; }
        }

        class B : A
        {
            public override string TestString
            {
                get { return "x"; }
            }
        }

    Since properties are just pairs of functions (get_PropName(), set_PropName()), overriding only the "get" part should leave the "set" part as it is in the base class. And this is just what happens if you instantiate class B and assign a value to TestString: it uses the implementation of class A.

    But what happens if I look at the instantiated object of class B via reflection is this:

        PropertyInfo propInfo = b.GetType().GetProperty("TestString");
        propInfo.CanRead;   // ---> true
        propInfo.CanWrite;  // ---> false(!)

    And if I try to invoke the setter from reflection with:

        propInfo.SetValue(b, "test", null);

    I'll even get an ArgumentException with the following message:

        Property set method not found.

    Is this as expected? I can't seem to find a combination of BindingFlags for the GetProperty() method that returns me the property with a working get/set pair via reflection.
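
    A hedged aside on a workaround: since the set accessor still exists on the base type and is inherited, asking the declaring type for the property yields a usable setter. A sketch reusing the question's type names:

        using System.Reflection;

        class Demo
        {
            static void Main()
            {
                var b = new B();

                // B redeclares only the getter, so B's PropertyInfo has no set
                // method (CanWrite == false). A's PropertyInfo still carries the
                // setter, which works against the B instance.
                PropertyInfo baseProp = typeof(A).GetProperty("TestString");
                baseProp.SetValue(b, "test", null);   // invokes A's setter
            }
        }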

  • iPhone app. Creating a custom UIView that contains UITextField and UIButton.

    - by Dmitry Burchik
    Hi all. I am new to iPhone programming and I have an issue. I need to create a custom user control that I will add to my UIScrollView dynamically. The control has a UITextField and a UIButton. See the code below:

        #import <UIKit/UIKit.h>

        @interface FieldWithValueControl : UIView {
            UITextField *txtTagName;
            UIButton *addButton;
        }

        @property (nonatomic, readonly) UITextField *txtTagName;
        @property (nonatomic, readonly) UIButton *addButton;

        @end

        #import "FieldWithValueControl.h"

        #define ITEM_SPACING 10
        #define ITEM_HEIGHT 20
        #define SWITCHBOX_WIDTH 100
        #define SCREEN_WIDTH 320
        #define ITEM_FONT_SIZE 14
        #define TEXTBOX_WIDTH 150

        @implementation FieldWithValueControl

        @synthesize txtTagName;
        @synthesize addButton;

        - (id)initWithFrame:(CGRect)frame
        {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
                txtTagName = [[UITextField alloc]
                    initWithFrame:CGRectMake(0, 0, TEXTBOX_WIDTH, ITEM_HEIGHT)];
                txtTagName.borderStyle = UITextBorderStyleRoundedRect;

                addButton = [UIButton buttonWithType:UIButtonTypeContactAdd];
                [addButton setFrame:CGRectMake(ITEM_SPACING + TEXTBOX_WIDTH, 0,
                                               ITEM_HEIGHT, ITEM_HEIGHT)];
                [addButton addTarget:self
                              action:@selector(addButtonTouched:)
                    forControlEvents:UIControlEventTouchUpInside];

                [self addSubview:txtTagName];
                [self addSubview:addButton];
            }
            return self;
        }

        - (void)addButtonTouched:sender
        {
            UIButton *button = (UIButton *)sender;
            NSString *title = [button titleLabel].text;
        }

        - (void)drawRect:(CGRect)rect
        {
            // Drawing code
        }

        - (void)dealloc
        {
            [txtTagName release];
            [addButton release];
            [super dealloc];
        }

        @end

    In my code I create an object of that class and add it to the scroll view on the form:

        FieldWithValueControl *newTagControl = [[FieldWithValueControl alloc]
            initWithFrame:CGRectMake(ITEM_SPACING, currentOffset + ITEM_SPACING, 0, 0)];
        [scrollView addSubview:newTagControl];

    The control looks fine, but if I tap the text box or the button, nothing happens. The keyboard doesn't appear, the button is not clickable, etc.

  • Silverlight Navigate: how does it work? How would you implement it in F# w/o VS wizards and helpers?

    - by akaphenom
    After a night's sleep, the problem can be stated more accurately as: I have a 100% F#/Silverlight implementation and am looking to use the built-in navigation components. C# creates page.xaml and page.xaml.cs - um, ok; but what is the relationship at a fundamental level? How would I go about doing this in F#? The application is loaded in the default module, and I pull the XAML in and reference it from the application object. Do I need to create instances/references to the pages from within the application object? Or set up some other page-management object with the proper name/value pairs? When all the help of VS is stripped away - what are we left with?

    Original post (for those who may be reading replies): I have a 100% Silverlight 3.0 / F# 2.0 application I am wrapping my brain around. I have the base application loading correctly - and now I want to add the navigation controls to it. My page is stored as an embedded resource - but Frame.Navigate takes a URI. I know what I have is wrong, but here it is:

        let nav : Frame = mainGrid ? mainFrame
        let url = "/page1.xaml"
        let uri = new System.Uri(url, System.UriKind.Relative)
        nav.Navigate uri

    Any thoughts?

  • How can I mock or test my deferred execution functionality?

    - by cottsak
    I have what could be seen as a bizarre hybrid of IQueryable<T> and IList<T> collections of domain objects passed up my application stack. I'm trying to maintain as much of the 'late querying' or 'lazy loading' as possible. I do this in two ways:

    - by using a LinqToSql data layer and passing IQueryable<T>s through my repositories and on to my app layer;
    - then, after my app layer, passing IList<T>s, but where certain elements in the object/aggregate graph are 'chained' with delegates so as to defer their loading. Sometimes the delegate contents even rely on IQueryable<T> sources, and the DataContext is injected.

    This works for me so far. What is blindingly difficult is proving that this design actually works. I.e., if I defeat the 'lazy' part somewhere and my execution happens early, then the whole thing is a waste of time.

    I'd like to be able to TDD this somehow. I don't know a lot about delegates or thread safety as it applies to delegates acting on the same source. I'd like to be able to mock the DataContext and somehow trace both methods of deferring the loading (the IQueryable<T>'s SQL and the delegates), so that I can have tests that prove that both functions are working at different levels/layers of the app/stack. As it's crucial that the deferring works for the design to be of any value, I'd like to see tests fail when I break the design at a given level (separate from the live implementation). Is this possible?
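
    An editorial sketch of one way to make a test fail when laziness is defeated: substitute a counting source for the mocked DataContext and assert on when enumeration actually happens. All names here are invented for illustration, not part of the design described above:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class LazinessProbe
        {
            private int executions;

            // An iterator's body does not run until enumeration begins, so the
            // counter records exactly when the "query" is really executed.
            private IEnumerable<int> Source()
            {
                executions++;
                yield return 1;
                yield return 2;
            }

            public void Run()
            {
                var query = Source().AsQueryable().Where(n => n > 0); // composed only
                if (executions != 0)
                    throw new Exception("Query executed too early - laziness defeated.");

                var results = query.ToList();                         // forces execution
                if (executions != 1)
                    throw new Exception("Expected exactly one deferred execution.");

                Console.WriteLine("Deferred execution verified: {0} items.", results.Count);
            }

            static void Main() { new LazinessProbe().Run(); }
        }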

  • Want to mimic iPhone address book search (in terms of interface & interaction)

    - by mr-sk
    Hi. When you do a search in the address book, the flow is:

    1. Select the search bar
    2. The rightmost transparent a-z index is removed
    3. A transparent black window is placed over the current UITableView
    4. When you begin typing, a new UITableView is loaded with no data
    5. The UITableView is populated with data as you type
    6. If you select an item, you are brought to it
    7. If you cancel the search, you return to the main UITableView

    My first real question is: how do I load a new UITableView when a user begins searching? Is it as easy as popping a new view on the stack? It would then be a separate .m/.h file with its own implementation?

    The second question is: how do you remove the rightmost index? Or does that just go away when you render the new (blank) UITableView?

    I've gotten search working, but mine does it in the same UITableView, which, when you start, contains like 2K results, is grouped (A results under the A heading, etc.) and has a rightmost index. I'd be happy leaving the results in the table if I could:

    1. Remove the rightmost a-z index
    2. Drop the table groupings
    3. Tell the view I only have N search results so it will build the scroll bar correctly

    Thanks for your input; sk

  • How does one get started with WinForms-style applications on Win32?

    - by Billy ONeal
    EDIT: I'm extremely tired and frustrated at the moment - please ignore that bit in this question - I'll edit it in the morning to be better.

    Okay - a bit of background: I'm a C++ programmer mostly, but the only GUI stuff I've ever done was on top of .NET's WinForms platform. I'm completely new to Windows GUI programming, and despite Petzold's excellent book, I'm extremely confused. Namely, it seems that most every reference on getting started with Win32 is all about drawing lines and curves and things - a topic about which (at least at present time) I couldn't care less. I need a checked list box, a splitter, and a textbox - something that would take less than 10 minutes to do in WinForms land.

    It has been recommended to me to use the WTL library, which provides an implementation of all three of these controls - but I keep getting hung up on simple things, such as getting the damn controls to use the right font, and getting high DPI working correctly. I've spent two freaking days on this, and I can't help but think there has to be a better reference for these kinds of things than I've been able to find. Petzold's book is good, but it hasn't been updated since Windows 95 days, and there's been a LOT changed w.r.t. how applications should be correctly developed since it was published.

    I guess what I'm looking for is a modern Petzold book. Where can I find such a resource, if any?

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there. For my bachelor's thesis, I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem.

    My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio to do a socket client/server implementation for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as it said on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I tried to compile one of the tutorial examples, client.cpp, from Asio, found (here.. edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Does anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.
