Search Results

Search found 82 results on 4 pages for 'ellery newcomer'.


  • Setting default values for inherited property without using accessor in Objective-C?

    - by Ben Stock
    I always see people debating whether or not to use a property's setter in the -init method. I don't know enough about the Objective-C language yet to have an opinion one way or the other. With that said, lately I've been sticking to ivars exclusively. It seems cleaner in a way. I don't know. I digress. Anyway, here's my problem. Say we have a class called Dude with an interface that looks like this:

        @interface Dude : NSObject {
        @private
            NSUInteger _numberOfGirlfriends;
        }

        @property (nonatomic, assign) NSUInteger numberOfGirlfriends;

        @end

    And an implementation that looks like this:

        @implementation Dude

        - (instancetype)init
        {
            self = [super init];
            if (self) {
                _numberOfGirlfriends = 0;
            }
            return self;
        }

        @end

    Now let's say I want to extend Dude. My subclass will be called Playa. And since a playa should have mad girlfriends, when Playa gets initialized, I don't want him to start with 0; I want him to have 10. Here's Playa.m:

        @implementation Playa

        - (instancetype)init
        {
            self = [super init];
            if (self) {
                // Attempting to set the ivar directly will result in the compiler
                // saying, "Instance variable `_numberOfGirlfriends` is private."
                // _numberOfGirlfriends = 10; <- Can't do this.

                // Thus, the only way to set it is with the mutator:
                self.numberOfGirlfriends = 10;
                // Or: [self setNumberOfGirlfriends:10];
            }
            return self;
        }

        @end

    So what's an Objective-C newcomer to do? Well, I mean, there's only one thing I can do, and that's set the property. Unless there's something I'm missing. Any ideas, suggestions, tips, or tricks?

    Sidenote: the other thing that bugs me about setting the ivar directly (and what a lot of ivar proponents count as a plus) is that there are no KVO notifications. A lot of times, I want the KVO magic to happen. 50% of my setters end in [self setNeedsDisplay:YES], so without the notification, my UI doesn't update unless I remember to manually call -setNeedsDisplay. That's a bad example, but the point stands. I rely on KVO all over the place, so without notifications, things can act wonky. Anyway, any info is much appreciated. Thanks!

  • Moving from Windows to Ubuntu.

    - by djzmo
    Hello there, I used to program in Windows with Microsoft Visual C++, and I need to make some of my portable programs (written in portable C++) cross-platform, or at least release a working version of my program for both Linux and Windows. I am a total newcomer to Linux application development (and rarely use the OS itself). So, today, I installed Ubuntu 10.04 LTS (through Wubi) and equipped Code::Blocks with the g++ compiler as my main weapon. Then I compiled my very first Hello World Linux program, and I was confused by the output program. I can run my program through the "Build and Run" menu option in Code::Blocks, but when I tried to launch the compiled application externally through a file browser (in /media/MyNTFSPartition/MyProject/bin/Release; yes, I saved it on my NTFS partition), the program didn't show up. Why? I ran out of ideas. I need to change my Windows and Microsoft Visual Studio mindset to a Linux and Code::Blocks mindset. So I came up with these questions:

    1. How can I execute my compiled Linux programs externally (outside the IDE)? In Windows, I simply run the generated executable (.exe) file.

    2. How can I distribute my Linux application? In Windows, I simply distribute the executable files with the corresponding DLL files (if any).

    3. What is the equivalent of LIBs (static libraries) and DLLs (dynamic libraries) in Linux, and how do I use them? In Windows/Visual Studio, I simply add the required libraries to the Additional Dependencies in the Project Settings, and my program automatically links with the required static libraries/DLLs.

    4. Is it possible to use the "binary form" of a C++ library (if provided) so that I wouldn't need to recompile the entire library source code? In Windows, yes; sometimes precompiled *.lib files are provided.

    5. If I want to create a wxWidgets application on Linux, which package should I pick for Ubuntu: wxGTK or wxX11? Can I run a wxGTK program under X11? In Windows, I use wxMSW, of course.

    6. If the answer to question 4 is yes, do precompiled wxX11/wxGTK libraries exist out there? I haven't tried a deep Google search. In Windows, there is a project called "wxPack" (http://wxpack.sourceforge.net/) that saves a lot of my time.

    Sorry for asking so many questions, but I am really confused about these Linux development fundamentals. Any kind of help would be appreciated =) Thanks.

  • How to deal with the Position property in a C# stream

    - by CapsicumDreams
    The (entire) documentation for the Position property on a Stream says:

        When overridden in a derived class, gets or sets the position within the current stream. The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both.

    That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it in fact does stand for. What is 'the position' for? Why would we want to alter or read it? If we change it, what happens?

    As a practical example, I have a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the Position property to zero to start my reading. Once this is done:

    1. Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (That is, if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?)

    2. If so, do I need a semaphore/lock around this Position property (subclassing, perhaps?), since my two threads both access it?

    3. If I don't handle this property, does the writer just overflow the buffer?

    Perhaps I don't understand the Stream itself. I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (i.e. from position 0x84 on) back to the start of my buffer?

    I've seriously tried to research all of this for quite some time, but I'm new to .NET. Perhaps Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual for your car and finding out: "The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted." Technically true, but seriously, what we want to know is that if you mash it to the floor, the car goes faster.
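    For what it's worth, one common way to reconcile a single shared cursor with a reader thread and a writer thread is to keep separate read and write positions of your own and assign them to Position under a lock. A minimal sketch, assuming one MemoryStream shared by one writer and one reader (the SharedBuffer name and its members are illustrative, not a real API):

        using System.IO;

        class SharedBuffer
        {
            private readonly MemoryStream _stream = new MemoryStream();
            private readonly object _gate = new object();
            private long _readPos;   // where the reader left off
            private long _writePos;  // where the writer left off

            public void Write(byte[] data)
            {
                lock (_gate)
                {
                    _stream.Position = _writePos;  // Position is one shared cursor...
                    _stream.Write(data, 0, data.Length);
                    _writePos = _stream.Position;  // ...so each role saves its own copy
                }
            }

            public int Read(byte[] buffer)
            {
                lock (_gate)
                {
                    _stream.Position = _readPos;
                    int read = _stream.Read(buffer, 0, buffer.Length);
                    _readPos = _stream.Position;
                    return read;
                }
            }
        }

    In this shape, resetting Position for a read cannot clobber the writer, because the writer restores its own cursor before every write; and since Position belongs to the stream rather than to a thread, a lock (rather than a subclass) is what keeps the two roles from tripping over each other.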

  • Hibernate - EhCache - Which region to cache associations/sets/collections in?

    - by lifeisnotfair
    Hi all, I am a newcomer to Hibernate. It would be great if someone could comment on the following query that I have. Say I have a parent class, and each parent has multiple children, so the mapping file of the parent class would be something like this:

    parent.hbm.xml:

        <hibernate-mapping>
            <class name="org.demo.parent" table="parent" lazy="true">
                <cache usage="read-write" region="org.demo.parent"/>
                <id name="id" column="id" type="integer" length="10">
                    <generator class="native"/>
                </id>
                <property name="name" column="name" type="string" length="50"/>
                <set name="children" lazy="true">
                    <cache usage="read-write" region="org.demo.parent.children"/>
                    <key column="parent_id"/>
                    <one-to-many class="org.demo.children"/>
                </set>
            </class>
        </hibernate-mapping>

    children.hbm.xml:

        <hibernate-mapping>
            <class name="org.demo.children" table="children" lazy="true">
                <cache usage="read-write" region="org.demo.children"/>
                <id name="id" column="id" type="integer" length="10">
                    <generator class="native"/>
                </id>
                <property name="name" column="name" type="string" length="50"/>
                <many-to-one name="parent_id" column="parent_id" type="integer" length="10" not-null="true"/>
            </class>
        </hibernate-mapping>

    So for the set children, should we specify the region org.demo.parent.children, where the association would be cached, or should we use the cache region org.demo.children, where the children themselves get cached? I am using EhCache as the second-level cache provider. I tried to search for the answer to this question but couldn't find anything in this direction. It makes more sense to use org.demo.children, but I don't know in which scenarios one should use a separate cache region for associations/sets/collections, as in the above case. Kindly provide your inputs, and let me know if I am not clear in my question. Thanks all.

  • WPF TabControl - how to preserve control state within tab items (MVVM pattern)

    - by Tim Coulter
    I am a newcomer to WPF, attempting to build a project that follows the recommendations of Josh Smith's excellent article describing the Model-View-ViewModel design pattern. Using Josh's sample code as a base, I have created a simple application that contains a number of "workspaces", each represented by a tab in a TabControl. In my application, a workspace is a document editor that allows a hierarchical document to be manipulated via a TreeView control.

    Although I have succeeded in opening multiple workspaces and viewing their document content in the bound TreeView control, I find that the TreeView "forgets" its state when switching between tabs. For example, if the TreeView in Tab1 is partially expanded, it will be shown as fully collapsed after switching to Tab2 and returning to Tab1. This behaviour appears to apply to all aspects of control state for all controls.

    After some experimentation, I have realized that I can preserve state within a TabItem by explicitly binding each control-state property to a dedicated property on the underlying ViewModel. However, this seems like a lot of additional work when I simply want all my controls to remember their state when switching between workspaces. I assume I am missing something simple, but I am not sure where to look for the answer. Any guidance would be much appreciated. Thanks, Tim

    Update: As requested, I will attempt to post some code that demonstrates this problem. However, since the data that underlies the TreeView is complex, I will post a simplified example that exhibits the same symptoms. Here is the XAML from the main window:

        <TabControl IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Docs}">
            <TabControl.ItemTemplate>
                <DataTemplate>
                    <ContentPresenter Content="{Binding Path=Name}" />
                </DataTemplate>
            </TabControl.ItemTemplate>
            <TabControl.ContentTemplate>
                <DataTemplate>
                    <view:DocumentView />
                </DataTemplate>
            </TabControl.ContentTemplate>
        </TabControl>

    The above XAML correctly binds to an ObservableCollection of DocumentViewModel, whereby each member is presented via a DocumentView. For the simplicity of this example, I have removed the TreeView (mentioned above) from the DocumentView and replaced it with a TabControl containing 3 fixed tabs:

        <TabControl>
            <TabItem Header="A" />
            <TabItem Header="B" />
            <TabItem Header="C" />
        </TabControl>

    In this scenario, there is no binding between the DocumentView and the DocumentViewModel. When the code is run, the inner TabControl is unable to remember its selection when the outer TabControl is switched. However, if I explicitly bind the inner TabControl's SelectedIndex property...

        <TabControl SelectedIndex="{Binding Path=SelectedDocumentIndex}">
            <TabItem Header="A" />
            <TabItem Header="B" />
            <TabItem Header="C" />
        </TabControl>

    ...to a corresponding dummy property on the DocumentViewModel...

        public int SelectedDocumentIndex { get; set; }

    ...the inner tab is able to remember its selection. I understand that I can effectively solve my problem by applying this technique to every visual property of every control, but I am hoping there is a more elegant solution.
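    For reference, the usual fleshed-out form of such a dummy property also raises change notifications, so the view can react if the value is ever set from code as well. A minimal sketch using the standard INotifyPropertyChanged pattern (only the SelectedDocumentIndex name comes from the question; the rest is stock boilerplate):

        using System.ComponentModel;

        public class DocumentViewModel : INotifyPropertyChanged
        {
            private int _selectedDocumentIndex;

            // Backs the inner TabControl's SelectedIndex binding, so the
            // selection survives the outer TabControl recreating its content.
            public int SelectedDocumentIndex
            {
                get { return _selectedDocumentIndex; }
                set
                {
                    if (_selectedDocumentIndex == value) return;
                    _selectedDocumentIndex = value;
                    OnPropertyChanged("SelectedDocumentIndex");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected virtual void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }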

  • Is multithreading the right way to go for my case?

    - by Julien Lebosquain
    Hello, I'm currently designing a multi-client/server application. I'm using plain good old sockets because WCF or similar technology is not what I need. Let me explain: it isn't the classical case of a client simply calling a service; all clients can 'interact' with each other by sending a packet to the server, which will then do some action and possibly re-dispatch an answer message to one or more clients. Although doable with WCF, the application would get pretty complex with hundreds of different messages.

    For each connected client, I'm of course using asynchronous methods to send and receive bytes. I've got the messages fully working; everything's fine. Except that for each line of code I write, my head just burns because of multithreading issues. Since there could be around 200 clients connected at the same time, I chose to go the fully multithreaded way: each received message on a socket is immediately processed on the thread-pool thread it was received on, not on a single consumer thread.

    Since each client can interact with other clients, and indirectly with shared objects on the server, I must protect almost every object that is mutable. I first went with a ReaderWriterLockSlim for each resource that must be protected, but quickly noticed that there are more writes overall than reads in the server application, and switched to the well-known Monitor to simplify the code.

    So far, so good. Each resource is protected, and I have helper classes that I must use to get a lock and its protected resource, so I can't use an object without getting a lock. Moreover, each client has its own lock that is entered as soon as a packet is received from its socket. This is done to prevent other clients from making changes to the state of this client while it has messages being processed, which is something that will happen frequently.

    Now, I don't just need to protect resources from concurrent accesses. I must also keep every client in sync with the server for some collections I have. One tricky part that I'm currently struggling with is the following: I have a collection of clients, and each client has its own unique ID. When a client connects, it must receive the IDs of every connected client, and each one of them must be notified of the newcomer's ID. When a client disconnects, every other client must know it, so that its ID is no longer valid for them. Every client must always have, at any given time, the same clients collection as the server, so that I can assume that everybody knows everybody. This way, if I'm sending a message to client #1 saying "Client #2 has done something", I know that it will always be correctly interpreted: client #1 will never wonder "but who is Client #2 anyway?".

    My first attempt at handling the connection of a new client (let's call it X) was this pseudo-code (remember that newClient is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("newClient with id X has connected");
                }
            }
            clients.Add(newClient);
            newClient.Send("the list of other clients");
        }

    Now imagine that at the same time, another client has sent a packet that translates into a message that must be broadcast to every connected client. The pseudo-code will be something like this (remember that the current client, let's call it Y, is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("something");
                }
            }
        }

    An obvious deadlock occurs here: on one thread, X is locked, the clients lock has been entered, the loop through the clients has started, and at some point it must get Y's lock... which is already acquired on the second thread, itself waiting for the clients collection lock to be released!

    This is not the only case like this in the server application. There are other collections which must be kept in sync with the clients, some properties on a client can be changed by another one, and so on. I tried other types of locks, lock-free mechanisms, and a bunch of other things. Either there were obvious deadlocks when I used too many locks for safety, or obvious race conditions otherwise. When I finally found a good middle point between the two, it usually came with very subtle race conditions, deadlocks, and other multithreading issues... my head hurts very quickly, since for any single line of code I write, I have to review almost the whole application to ensure everything will behave correctly with any number of threads.

    So here's my final question: how would you resolve this specific case, and the general case? And more importantly, am I going the wrong way here? I have few problems with the .NET framework, C#, simple concurrency, or algorithms in general. Still, I'm lost here. I know I could use only one thread to process the incoming requests and everything would be fine. However, that won't scale well at all with more clients... but I'm thinking more and more of going this simple way. What do you think? Thanks in advance to you, Stack Overflow people who have taken the time to read this huge question. I really had to explain the whole context if I want to get some help.
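    One commonly suggested remedy for exactly this ABBA lock-ordering problem is to never call Send while any shared lock is held: the collection lock is taken only long enough to copy the list, and the actual sends go through a per-client outbound queue drained by that client's own sender loop. A minimal sketch, not the actual server code (Client, Outbound, and Broadcast are illustrative names):

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        class Client
        {
            // Drained by this client's dedicated sender loop (not shown),
            // which is the only code that writes to the socket.
            public readonly ConcurrentQueue<string> Outbound =
                new ConcurrentQueue<string>();
        }

        static class Broadcaster
        {
            public static void Broadcast(List<Client> clients,
                                         object clientsLock,
                                         string message)
            {
                Client[] snapshot;
                lock (clientsLock)  // held only long enough to copy the list
                {
                    snapshot = clients.ToArray();
                }

                foreach (var client in snapshot)
                {
                    // No lock is held here, so a thread that already owns a
                    // client's lock can never block this loop, and vice versa.
                    client.Outbound.Enqueue(message);
                }
            }
        }

    The trade-off is that a freshly disconnected client may still receive a queued message, so the ID-synchronization protocol has to tolerate a short window of staleness.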

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year's conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large), and some of them are the result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

    First, let's travel back in time four years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction, I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high-profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails.

    Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers: if Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

    In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third-party extensions which were dependent on the WebForms model. If we migrated the core to MVC, it would mean that all of the third-party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing in our ecosystem. Clearly some developers had drunk Microsoft's Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, "If it's not MVC, it's crap". Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model; it just requires diligence and discipline.

    So over the past few years, some horror stories have begun to bubble to the surface from software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
    These large-scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as "the future". These ill-fated rewrites offered no benefit to end users or customers, and in fact resulted in less stable, less scalable, and more complicated systems, basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

    The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or the other. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel "dirty". The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result, many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive (i.e. PHP or Ruby).

    At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team was promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third-party ecosystem for both generic web controls and more specialized WebParts, both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high-level plan from Microsoft to ensure consistency and continuity across the different product lines.

    The introduction of ASP.NET MVC also made some of the third-party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
    And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX than was possible in MVC. And since many developers were comfortable and satisfied with these third-party solutions, demand remained strong and the third-party web control market continued to prosper despite the availability of MVC.

    While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it's a native application running on a smartphone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based, client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

    In response to these new industry trends, Microsoft did what it always does: it immediately poured resources into developing a solution which would ensure it remains relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC web services. And since it was developed out of band, with a dependency only on ASP.NET 4.0, it can be used immediately in a variety of production scenarios.

    So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the "future of ASP.NET". In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on about how to brand the various aspects of ASP.NET going forward; perhaps the moniker of "ASP.NET Web Stack", coined a couple of years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on CodePlex a few months back, will eventually stick. Web API is being positioned as the unification of ASP.NET, the glue that is able to pull this fragmented mess back together again. The "One ASP.NET" strategy will promote the use of all frameworks (WebForms, MVC, and Web API), even within the same web project.
    Basically the message is: utilize the appropriate aspects of each framework to solve your business problems. Instead of steering developers toward a fork in the road, the plan is to educate them that "hybrid" applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. This means it is also relevant to application platforms like DotNetNuke and SharePoint, which means it starts to create a unified development strategy across all ASP.NET product lines once again.

    And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn't, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft spends the majority of its R&D resources; that distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page-compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

    So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a "hybrid" model which would provide compatibility for existing modules while at the same time providing a bridge for developers who want to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework, which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, lightweight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and the Activity Feed.

    But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the "One ASP.NET" strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
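    To make the "convention-based, native HTTP" description concrete, here is a minimal sketch of a Web API controller as it shipped in the MVC4 timeframe (the ProductsController name and its data are illustrative). Public methods are matched to HTTP verbs by naming convention, and return values are serialized to JSON by default under the standard api/{controller}/{id} route:

        using System.Collections.Generic;
        using System.Web.Http;

        public class ProductsController : ApiController
        {
            // Matched by convention: GET api/products
            public IEnumerable<string> Get()
            {
                return new[] { "apple", "orange" };
            }

            // Matched by convention: GET api/products/5
            public string Get(int id)
            {
                return "product " + id;
            }
        }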
