Search Results

Search found 8397 results on 336 pages for 'implementation'.

Page 246/336

  • Erlang, SSH and authorized_keys

    - by Roberto Aloi
    Playing with the ssh and public_key applications in Erlang, I've discovered a nice feature. I was trying to connect to my running Erlang SSH daemon using an RSA key, but the authentication was failing and I was prompted for a password. After some debugging and tracing (and a couple of coffees), I realized that, for some weird reason, an invalid key for my user was there. The authorized_keys file contained two keys: the wrong one appeared earlier in the file, while the correct one was appended at the end. Now, when the Erlang SSH application compared the provided key with the ones contained in authorized_keys, it only found the first entry (completely ignoring the second one - the correct one). It then switched to a different authentication mechanism (first it tried DSA instead of RSA, and then it prompted for a password). The question is: Is this behavior intended, or should the SSH server check all entries for the same user in the *authorized_keys* file? Is this generic SSH behaviour, or is it specific to the Erlang implementation?

  • How to map different UI views in a RESTful web application?

    - by MicE
    Hello, I'm designing a web application which will support both standard UIs (accessed via browsers) and a RESTful API (an XML/JSON-based web service). User agents will be able to differentiate between these by using different values in the Accept HTTP header. The RESTful API will use the following URI structure (example for an "article" resource): GET /article/ - gets a list of articles POST /article/ - adds a new article PUT /article/{id} - updates an existing article based on {id} DELETE /article/{id} - deletes an existing article based on {id} The UI part of the application will, however, need to support multiple views, for example: a standard resource view, a view for submitting a new resource, a view for editing an existing resource, a view for deleting an existing resource (i.e. displaying a delete confirmation). Note that the latter three views are still accessed via GET, even though they are processed via overloaded POST. Possible solution: Introduce additional parameters (keywords) into URIs which would identify individual views - i.e. on top of the above, the application would support the following URIs (but only for Content-Type: text/html): GET /article/add - displays a form for adding a new article (fetched via GET, processed via POST) GET /article/123 - displays article 123 in "view" mode (fetched via GET) GET /article/123/edit - displays article 123 in "edit" mode (fetched via GET, processed via PUT overloaded as POST) GET /article/123/delete - displays a "delete" confirmation for article 123 (fetched via GET, processed via DELETE overloaded as POST) A better implementation of the above might be to put the add/edit/delete keywords into a GET parameter - since they do not change the resource we're working with, it might be better to keep the base URI the same for all of them. My question is: How would you map the above URI structure to the UIs served to a regular user, considering that there can be several views for each resource? Do you agree with the possible solution detailed above, or would you recommend a different approach based on your experience? NB: we've already implemented an application which consists of a standalone RESTful API and a standalone web application. I'm currently looking into options for future projects where these two would be merged together (i.e. in order to reduce overhead). Thank you, M.
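
    A minimal, framework-free sketch of how such a dual mapping could be dispatched (Java used purely for illustration; the handler names, routes and Accept-header check are assumptions rather than part of the design above):

        public class ArticleRouter {
            // Hypothetical dispatcher: "api:" results serve XML/JSON, "ui:" results serve HTML views.
            public static String resolve(String method, String path, String accept) {
                boolean wantsHtml = accept != null && accept.contains("text/html");
                if (!wantsHtml) {                                                 // RESTful API branch
                    if (method.equals("GET") && path.equals("/article/"))          return "api:list";
                    if (method.equals("GET") && path.startsWith("/article/"))      return "api:get";
                    if (method.equals("POST") && path.equals("/article/"))         return "api:create";
                    if (method.equals("PUT") && path.startsWith("/article/"))      return "api:update";
                    if (method.equals("DELETE") && path.startsWith("/article/"))   return "api:delete";
                } else {                                                          // browser UI branch, all fetched via GET
                    if (path.endsWith("/add"))    return "ui:addForm";            // processed via POST
                    if (path.endsWith("/edit"))   return "ui:editForm";           // processed via POST overloaded as PUT
                    if (path.endsWith("/delete")) return "ui:deleteConfirm";      // processed via POST overloaded as DELETE
                    return "ui:view";
                }
                return "notFound";
            }

            public static void main(String[] args) {
                System.out.println(resolve("GET", "/article/123/edit", "text/html"));   // ui:editForm
                System.out.println(resolve("GET", "/article/123", "application/json")); // api:get
            }
        }

    The point of the sketch is that the API and the UI share the same base URIs; only the Accept header plus the extra add/edit/delete keyword (or, equivalently, a GET parameter) selects the HTML view.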

  • Unit testing with Mocks when the SUT is leveraging the Task Parallel Library

    - by StevenH
    I am trying to unit test / verify that a method is being called on a dependency by the system under test. The dependency is IFoo. The dependent class is IBar. IBar is implemented as Bar. Bar will call Start() on each IFoo in a new (System.Threading.Tasks) Task when Start() is called on the Bar instance. Unit test (Moq): [Test] public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist() { //ARRANGE //Create a foo, and setup expectation var mockFoo0 = new Mock<IFoo>(); mockFoo0.Setup(foo => foo.Start()); var mockFoo1 = new Mock<IFoo>(); mockFoo1.Setup(foo => foo.Start()); //Add mock objects to a collection var foos = new List<IFoo> { mockFoo0.Object, mockFoo1.Object }; IBar sutBar = new Bar(foos); //ACT sutBar.Start(); //Should call mockFoo.Start() //ASSERT mockFoo0.VerifyAll(); mockFoo1.VerifyAll(); } Implementation of IBar as Bar: class Bar : IBar { private IEnumerable<IFoo> Foos { get; set; } public Bar(IEnumerable<IFoo> foos) { Foos = foos; } public void Start() { foreach(var foo in Foos) { Task.Factory.StartNew( () => { foo.Start(); }); } } } It appears that the issue is due to the fact that the call to "foo.Start()" is taking place on another thread (/task), and the mock (the Moq framework) can't detect it. But I could be wrong. Thanks
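
    The race can also be illustrated outside of .NET. Below is a rough Java stand-in for the same shape (a hand-rolled recording double instead of Moq; all names are hypothetical): the verification only becomes reliable once the test can wait for the spawned tasks to finish. The Moq equivalent of awaitWork() would be exposing or injecting some way to await the started tasks (or the task scheduler) before calling VerifyAll().

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicInteger;

        public class AsyncVerifyDemo {
            interface Foo { void start(); }

            static class RecordingFoo implements Foo {               // hand-rolled test double
                final AtomicInteger calls = new AtomicInteger();
                public void start() { calls.incrementAndGet(); }
            }

            static class Bar {                                       // mirrors the Bar above: fire-and-forget tasks
                private final Iterable<Foo> foos;
                private final ExecutorService pool = Executors.newCachedThreadPool();
                Bar(Iterable<Foo> foos) { this.foos = foos; }
                void start() { for (Foo f : foos) pool.submit(f::start); }
                void awaitWork() throws InterruptedException {       // the seam the test needs
                    pool.shutdown();
                    pool.awaitTermination(5, TimeUnit.SECONDS);
                }
            }

            public static void main(String[] args) throws InterruptedException {
                RecordingFoo foo = new RecordingFoo();
                Bar bar = new Bar(List.of(foo));
                bar.start();
                // Asserting right here would race with the background task.
                bar.awaitWork();                                     // synchronize first...
                System.out.println(foo.calls.get() == 1);            // ...then verify: prints true
            }
        }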

  • How to unit test this simple ASP.NET MVC controller

    - by Frank Schwieterman
    Let's say I have a simple controller for ASP.NET MVC I want to test. I want to test that a controller action (Foo, in this case) simply returns a link to another action (Bar, in this case). How would you test this? (either the first or second link) My implementation has the same link twice. One passes the URL through ViewData[]. This seems more testable to me, as I can check the ViewData collection returned from Foo(). Even this way, though, I don't know how to validate the URL itself without taking a dependency on routing. The controller: public class TestController : Controller { public ActionResult Foo() { ViewData["Link2"] = Url.Action("Bar"); return View("Foo"); } public ActionResult Bar() { return View("Bar"); } } The "Foo" view: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage" MasterPageFile="~/Views/Shared/Site.Master"%> <asp:Content ContentPlaceHolderID="MainContent" runat="server"> <%= Html.ActionLink("link 1", "Bar") %> <a href="<%= ViewData["Link2"]%>">link 2</a> </asp:Content>

  • Quick help refactoring Ruby Class

    - by mplacona
    I've written this class that returns feed updates, but I'm thinking it can be further improved. It's not glitchy or anything, but as a new Ruby developer, I reckon it's always good to improve :-) class FeedManager attr_accessor :feed_object, :update, :new_entries require 'feedtosis' def initialize(feed_url) @feed_object = Feedtosis::Client.new(feed_url) fetch end def fetch @feed_object.fetch end def update @updates = fetch end def updated? @updates.new_entries.count > 0 ? true : false end def new_entries @updates.new_entries end end As you can see, it's quite simple, but the things I'm seeing that aren't quite right are: whenever I call fetch via the terminal, it prints a list with the updates, when it's really supposed to return an object. So as an example, if in the terminal I do something like: client = Feedtosis::Client.new('http://stackoverflow.com/feeds') result = client.fetch I then get: <Curl::Easy http://stackoverflow.com/feeds> Which is exactly what I'd expect. However, when doing the same thing by initializing my class with: FeedManager.new("http://stackoverflow.com/feeds") I get the object back as an array with all the items on the feed. I'm sure I'm doing something wrong, so any help refactoring this class would be greatly appreciated. Also, I'd like to see comments about my implementation; any sort of suggestion to make it better would be welcome. Thanks in advance

  • Why is python decode replacing more than the invalid bytes from an encoded string?

    - by dangra
    Trying to decode an invalidly encoded UTF-8 HTML page gives different results in Python, Firefox and Chrome. The invalidly encoded fragment from the test page looks like 'PREFIX\xe3\xabSUFFIX' >>> fragment = 'PREFIX\xe3\xabSUFFIX' >>> fragment.decode('utf-8', 'strict') ... UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-8: invalid data What follows is a summary of the replacement policies used to handle decoding errors by Python, Firefox and Chrome. Note how the three differ, and especially how Python's builtin handler removes the valid S (in addition to the invalid sequence of bytes). By Python: The builtin replace error handler replaces the invalid \xe3\xab plus the S from SUFFIX with U+FFFD >>> fragment.decode('utf-8', 'replace') u'PREFIX\ufffdUFFIX' >>> print _ PREFIX?UFFIX A pure-Python equivalent of the builtin replace error handler looks like: >>> python_replace = lambda exc: (u'\ufffd', exc.end) As expected, trying this gives the same result as the builtin: >>> codecs.register_error('python_replace', python_replace) >>> fragment.decode('utf-8', 'python_replace') u'PREFIX\ufffdUFFIX' >>> print _ PREFIX?UFFIX By Firefox: Firefox replaces each invalid byte with U+FFFD >>> firefox_replace = lambda exc: (u'\ufffd', exc.start+1) >>> codecs.register_error('firefox_replace', firefox_replace) >>> fragment.decode('utf-8', 'firefox_replace') u'PREFIX\ufffd\ufffdSUFFIX' >>> print _ PREFIX??SUFFIX By Chrome: Chrome replaces each invalid sequence of bytes with U+FFFD >>> chrome_replace = lambda exc: (u'\ufffd', exc.end-1) >>> codecs.register_error('chrome_replace', chrome_replace) >>> fragment.decode('utf-8', 'chrome_replace') u'PREFIX\ufffdSUFFIX' >>> print _ PREFIX?SUFFIX The main question is why the builtin replace error handler for str.decode removes the S from SUFFIX. Also, is there an officially recommended Unicode way of handling decoding replacements?

  • MS Access Interop - How to set print filename?

    - by Ryan
    Hi all, I'm using Delphi 2009 and the MS Access Interop COM API. I'm trying to figure out two things, but one is more important than the other right now. I need to know how to set the file name when sending the print job to the spooler. Right now it's defaulting to the Access DB's name, which can be something different than the file's name. I need to just ensure that when this is printed it enters the print spool using the same filename as the actual file itself - not the DB's name. My printer spool is actually a virtual print driver that converts documents to an image. That's my main issue. The second issue is how to specify which printer to use. This is less important at the moment because I'm just using the default printer for now. It would be nice if I could specify the printer to use, though. Does anyone know either of these two issues? Thank you in advance. I'll go ahead and paste my code: unit Converter.Handlers.Office.Access; interface uses sysutils, variants, Converter.Printer, Office_TLB, Access_TLB, UDC_TLB; procedure ToTiff(p_Printer: PrinterDriver; p_InputFile, p_OutputFile: String); implementation procedure ToTiff(p_Printer: PrinterDriver; p_InputFile, p_OutputFile: String); var AccessApp : AccessApplication; begin AccessApp := CoAccessApplication.Create; AccessApp.Visible := False; try AccessApp.OpenCurrentDatabase(p_InputFile, True, ''); AccessApp.RunCommand(acCmdQuickPrint); AccessApp.CloseCurrentDatabase; finally AccessApp.Quit(acQuitSaveNone); end; end; end.

  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement Dependency Injection to make my app tester friendly. I have a rather basic question. The data layer uses a SqlConnection object to connect to a SQL Server database, so SqlConnection is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() our dependencies, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection. The business layer calls the DAL, so the business layer must pass in the SqlConnection. The presentation layer calls the business layer, hence it too must pass the SqlConnection down to the business layer. This is great for class isolation and testability. But didn't we just couple the UI and business layers to a specific implementation of the data layer, one which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support data sources other than SQL Server (such as XML files, comma-delimited files, etc.)? Furthermore, what if I add another object on which my data layer depends (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?
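
    One common way out is to have every layer depend on an abstraction of the data source rather than on SqlConnection itself, and to do all of the concrete wiring in a single composition root (by hand or via a container). A minimal sketch of that idea follows, written in Java with hypothetical names since the question is about the pattern rather than the platform:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        interface ConnectionFactory {                       // the only thing the data layer depends on
            Connection open() throws SQLException;
        }

        class SqlConnectionFactory implements ConnectionFactory {
            private final String url;
            SqlConnectionFactory(String url) { this.url = url; }
            public Connection open() throws SQLException { return DriverManager.getConnection(url); }
        }

        class CustomerDao {                                 // data layer: no new SqlConnection() anywhere
            private final ConnectionFactory connections;
            CustomerDao(ConnectionFactory connections) { this.connections = connections; }
            // ... queries obtain connections via connections.open() ...
        }

        class CustomerService {                             // business layer: knows nothing about SQL
            private final CustomerDao dao;
            CustomerService(CustomerDao dao) { this.dao = dao; }
        }

        class CompositionRoot {                             // the one place that knows the concrete data source
            static CustomerService build() {
                ConnectionFactory factory = new SqlConnectionFactory("jdbc:h2:mem:demo"); // example URL
                return new CustomerService(new CustomerDao(factory));
            }
        }

    Swapping SQL Server for XML files, or adding a second database, then only touches the factory implementations and the composition root; the business and presentation layers never see the change.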

  • Why does my ActivePerl program report 'Sorry. Ran out of threads'?

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script #!/usr/bin/perl # adapted from prime-pthread, courtesy of Tom Christiansen use strict; use warnings; use threads; use Thread::Queue; sub check_prime { my ($upstream,$cur_prime) = @_; my $child; my $downstream = Thread::Queue->new; while (my $num = $upstream->dequeue) { next unless ($num % $cur_prime); if ($child) { $downstream->enqueue($num); } else { $child = threads->create(\&check_prime, $downstream, $num); if ($child) { print "This is thread ",$child->tid,". Found prime: $num\n"; } else { warn "Sorry. Ran out of threads.\n"; last; } } } if ($child) { $downstream->enqueue(undef); $child->join; } } my $stream = Thread::Queue->new(3..shift,undef); check_prime($stream,2); When run on my machine (under ActiveState & Win32), the code was capable of spawning only 118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why I was limited to the number of threads I could create, I replaced the use threads; line with use threads (stack_size => 1);. The resultant code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?

  • How to make a Global Array?

    - by Wayfarer
    So, I read this post, and it's pretty much exactly what I was looking for. However... it doesn't work. I guess I'm not going to go with the singleton object, but rather make the array in either a Global.h file, or insert it into the _Prefix file. Both times I do that, though, I get the error: Expected specifier-qualifier-list before 'static' and it doesn't work. So... I'm not sure how to get it to work; I can remove extern and it compiles, but I feel like I need that to make it a constant. The end goal is to have this mutable array be accessible from any object or any file in my project. Help would be appreciated! This is the code for my Globals.h file: #import <Foundation/Foundation.h> @interface Globals : NSObject { static extern NSMutableArray * myGlobalArray; } @end I don't think I need anything in the implementation file. If I put that in the prefix file instead, the error is the same.

  • Good patterns for loose coupling in Java?

    - by Eye of Hell
    Hello. I'm new to Java, and while reading the documentation so far I can't find any good way of programming with loose coupling between objects. For the majority of languages I know (C++, C#, Python, JavaScript) I can manage objects as having 'signals' (notifications that something happened/something is needed) and 'slots' (methods that can be connected to a signal and process the notification/do some work). In all the mentioned languages I can write something like this: Object1 = new Object1Class(); Object2 = new Object2Class(); Connect( Object1.ItemAdded, Object2.OnItemAdded ); Now if Object1 calls/emits ItemAdded, the OnItemAdded method of Object2 will be called. Such a loose coupling technique is often referred to as 'delegates', 'signal-slot' or 'inversion of control'. Compared to the interface pattern, the techniques mentioned don't need to group signals into interfaces. Any object's method can be connected to any delegate as long as the signatures match (C++/Qt even extends this by allowing a partial signature match). So I don't need to write additional interface code for each method / group of methods, provide default implementations for unused interface methods, etc. And I can't see anything like this in Java :(. Maybe I'm looking the wrong way?
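
    As a side note, Java 8 method references make a small signal/slot approximation fairly compact. The sketch below (class names invented for illustration) connects any method with a matching signature to a signal, without declaring a listener interface per group of methods:

        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.function.Consumer;

        class Signal<T> {                                        // a tiny signal: a list of slots
            private final List<Consumer<T>> slots = new CopyOnWriteArrayList<>();
            void connect(Consumer<T> slot) { slots.add(slot); }
            void emit(T payload) { slots.forEach(s -> s.accept(payload)); }
        }

        class Inventory {                                        // "Object1": exposes a signal, knows nothing about listeners
            final Signal<String> itemAdded = new Signal<>();
            void add(String item) { /* ...store the item... */ itemAdded.emit(item); }
        }

        class AuditLog {                                         // "Object2": any method with a matching signature can be a slot
            void onItemAdded(String item) { System.out.println("added: " + item); }
        }

        public class SignalSlotDemo {
            public static void main(String[] args) {
                Inventory inventory = new Inventory();
                AuditLog log = new AuditLog();
                inventory.itemAdded.connect(log::onItemAdded);   // Connect( Object1.ItemAdded, Object2.OnItemAdded )
                inventory.add("apple");                          // prints "added: apple"
            }
        }

    Before Java 8 the same idea is usually expressed with the observer pattern or single-method listener interfaces, which is essentially what java.util.function.Consumer is.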

  • Design pattern to use instead of multiple inheritance

    - by mizipzor
    Coming from a C++ background, I'm used to multiple inheritance. I like the feeling of a shotgun squarely aimed at my foot. Nowadays, I work more in C# and Java, where you can only inherit from one base class but implement any number of interfaces (did I get the terminology right?). For example, let's consider two classes that implement a common interface but different (yet required) base classes: public class TypeA : CustomButtonUserControl, IMagician { public void DoMagic() { // ... } } public class TypeB : CustomTextUserControl, IMagician { public void DoMagic() { // ... } } Both classes are UserControls so I can't substitute the base class. Both need to implement the DoMagic function. My problem now is that both implementations of the function are identical. And I hate copy-and-paste code. The (possible) solutions: I naturally want TypeA and TypeB to share a common base class, where I can write that identical function definition just once. However, due to the limit of just one base class, I can't find a place along the hierarchy where it fits. One could also try to implement a sort of composite pattern, putting the DoMagic function in a separate helper class, but the function needs (and modifies) quite a lot of internal variables/fields, and sending them all as (reference) parameters would just look bad. My gut tells me that the adapter pattern could have a place here, some class to convert between the two when necessary. But it also feels hacky. I tagged this with language-agnostic since it applies to all languages that use this one-baseclass-many-interfaces approach. Also, please point out if I seem to have misunderstood any of the patterns I named. In C++ I would just make a class with the private fields and that function implementation, and put it in the inheritance list. What's the proper approach in C#/Java and the like?
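
    One way to phrase the composition option so that the shared fields do not have to be passed around as parameters: move both the DoMagic body and the state it manipulates into a helper object, and give each control one. A sketch in Java (hypothetical names, with the framework base classes reduced to stubs):

        class MagicEngine {                              // owns the fields the shared code works on
            private int charge;
            void doMagic() { charge++; /* ...the single shared implementation... */ }
        }

        interface Magician { void doMagic(); }

        class CustomButtonUserControl { /* framework base class A */ }
        class CustomTextUserControl   { /* framework base class B */ }

        class TypeA extends CustomButtonUserControl implements Magician {
            private final MagicEngine magic = new MagicEngine();
            public void doMagic() { magic.doMagic(); }   // one-line delegation, no copy-and-paste body
        }

        class TypeB extends CustomTextUserControl implements Magician {
            private final MagicEngine magic = new MagicEngine();
            public void doMagic() { magic.doMagic(); }
        }

        public class CompositionDemo {
            public static void main(String[] args) {
                Magician a = new TypeA();
                a.doMagic();                             // shared behaviour, single definition
            }
        }

    The delegation line is the only duplicated code, and if the helper later needs to call back into its owning control, it can be handed an interface to it at construction time.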

  • How does one convert a Java result set to a ColdFusion query in Railo?

    - by Shawn Grigson
    The following works fine in CFMX 7 and CF8, and I'd assume CF9 as well: <!--- 'conn' is a JDBC connection ---> <cfset stat = conn.createStatement() /> <cfset rs = stat.executeQuery(trim(arguments.sql)) /> <!--- convert this Java resultset to a CF query recordset ---> <cfset queryTable = CreateObject("java", "coldfusion.sql.QueryTable")> <cfset queryTable.init(rs) > <cfset query = queryTable.FirstTable() /> This creates a statement using a JDBC driver, executes a query against it, putting it into a java resultset, and then coldfusion.sql.QueryTable is instantiated, passed the Java resulset object, and then queryTable.FirstTable() is called, which returns an actual coldfusion resultset (for cfloop and the like). The problem comes with a difference in Railo's implementation. Running this code in Railo returns the following error: No matching Constructor for coldfusion.sql.QueryTable(org.sqlite.RS) found. I've dumped the Railo java object, and don't see init() among the methods. Am I missing something simple? I'd love to get this working in Railo as well. Please note: I am doing a DSN-less connection to a SQLite db. I understand how to set up a CF datasource. My only hiccup at this point is doing the translation from a Java result set to a Railo query.

  • NSMutableURLRequest won't turn on 3G

    - by Kyle
    I won't bother posting any code because my code works fine when I launch Safari or something else that turns on the 3G connection first. When I put the phone in Airplane mode, then turn it back off, I get a connection error saying No internet connection from NSMutableURLRequest. I've personally mailed Apple about how the antenna gets turned on and off, and they said anything that uses their base CFSocketStream object will turn on the antenna, and I quote: "such as NSURLRequest". BSD sockets do not, just so everyone knows. However, I assumed NSMutableURLRequest would fall into the category of 'does', but it seems like it doesn't. It never succeeds for me when I cycle Airplane mode or have the phone idle for a long time. It will ALWAYS succeed when I open any other network app first to turn the antenna on. Shall I be forced to do a dummy NSURLRequest call to turn this on, or has anyone been able to get this class working just fine? Remember this is just an implementation of NSMutableURLRequest to upload a file asynchronously, and no other network operations are performed anywhere; no ads, no version checks, no nothing.

  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    In certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good - CAtlException is not even derived from std::exception, and since we use our own exception hierarchy we would now have to catch CAtlException separately here and there, which is lots of extra, error-prone code. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler. Not so easy. Some of the ATL code does not come as source - it comes compiled as a library, either static or dynamic. We use the static one - atls.lib. And... it is compiled in such a way that it has ATL::AtlThrowImpl() inside, and some code calling it. I used a static analysis tool - it clearly shows that there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my own code. Now the linker says it sees two definitions of ATL::AtlThrowImpl(), which I suppose confirms that there's another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that it is never called?

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical. But I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them is able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So, what you can say is that all computer systems are just equally powerful as finite state machines and not more. And that brings me to the question: How useful is the term Turing complete at all? And back to neural nets: for any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question whether neural nets are Turing complete make sense at all? The question whether they are as powerful as finite state machines was answered already much earlier (1954 by Minsky, the answer of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

  • Pong: How does the paddle know where the ball will hit?

    - by Roflcoptr
    After implementing Pacman and Snake I'm implementing the next very, very classic game: Pong. The implementation is really simple, but I just have one little problem remaining. When one of the paddles (I'm not sure if it is called a paddle) is controlled by the computer, I have trouble positioning it at the correct position. The ball has a current position, a speed (which for now is constant) and a direction angle. So I could calculate the position where it will hit the side of the computer-controlled paddle, and position the paddle right there. However, in the real game there is a probability that the computer's paddle will miss the ball. How can I implement this probability? If I only use a probability of, let's say, 0.5 that the computer's paddle will hit the ball, the problem is solved, but I think it isn't that simple. From the original game I think the probability depends on the distance between the current paddle position and the position where the ball will hit the border. Does anybody have any hints on how exactly this is calculated?
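
    One simple model (an assumption on my part, not the original arcade logic) is to let the miss chance grow with the distance between the paddle's current position and the predicted impact point, scaled by a difficulty factor; a sketch in Java:

        import java.util.Random;

        public class PaddleAi {
            private static final Random RNG = new Random();

            // distance: |paddle centre - predicted impact position|, fieldHeight: playfield height,
            // difficulty: 0.0 (weak) .. 1.0 (strong)
            static boolean computerHits(double distance, double fieldHeight, double difficulty) {
                double missChance = Math.min(1.0, (distance / fieldHeight) * (1.0 - difficulty));
                return RNG.nextDouble() >= missChance;   // nearby ball => small missChance => usually a hit
            }

            public static void main(String[] args) {
                System.out.println(computerHits(10, 480, 0.7));   // near ball, strong AI: almost always true
                System.out.println(computerHits(400, 480, 0.2));  // far ball, weak AI: often false
            }
        }

    On a hit, the paddle simply moves to the predicted impact position; on a miss, it can be offset by a bit more than half the paddle height so that it arrives just too late.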

  • Is using SharePoint as an intranet/extranet portal a good idea?

    - by Rob
    I work for a fortune 500 company in IT and we have developed many systems/applications to do a variety of things. We are in need of some commonality of these applications and a better portal/dashboard/landing page for these applications. So, our customers and employees would log into this portal and see all the "things" that they can do which then link to their own application. This could maybe just iframe in each application inside of this portal to keep brand and navigation consistency. We are trying to decide whether to use SharePoint 2007 or 2010 for this or develop a portal/dashboard of sorts in house. We would like this portal to look and feel very branded to our needs and really not even feel like its using SharePoint (if needed). An example is to provide our own Menu control that drives the navigation if needed. Does anyone have any pros/cons for using SharePoint in such a way? Any advice on implementation (e.g. use 2010, much easier to customize design than 2007, etc)?

  • Threads, Sockets, and Designing Low-Latency, High Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers. Specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously, with very little latency. So how do they do it? As near as I can tell, they all use events. Cherokee and Lighttpd use libevent. Nginx uses its own event library performing much the same function as libevent, that is, picking a platform-optimal strategy for polling events (like kqueue on *bsd, epoll on linux, /dev/poll on Solaris, etc). They all also seem to employ a multiprocess or multithreaded strategy once the connection is made - using worker threads to handle the more CPU-intensive tasks while another thread continues to listen and handle connections (via events). This is the extent of my understanding and my ability to grok the thousand-line sources of these applications. What I really want are finer details about how this all works. In the examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or are these worker threads handling their own input and output? I imagine a fixed number of worker threads are spawned, and connections are lined up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely? Ultimately, I'd like to take enough away to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for stuff like low-latency queues) in C, as that's the language my network services are written in.
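
    Not in C as requested, but the accept-thread to bounded-queue to worker-pool handoff described above can be sketched very compactly in Java (blocking sockets here purely for brevity; a real server would drive non-blocking sockets from an event loop on the listening side):

        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class TinyServer {
            public static void main(String[] args) throws Exception {
                // The bounded handoff queue between the accepting/event thread and the workers.
                ExecutorService workers = new ThreadPoolExecutor(
                        4, 4, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1024));

                try (ServerSocket server = new ServerSocket(8080)) {
                    while (true) {
                        Socket client = server.accept();          // the single listening thread
                        workers.submit(() -> handle(client));     // CPU-heavy work moves off this thread
                    }
                }
            }

            static void handle(Socket client) {
                try (client) {                                    // each worker does its own I/O on its connection
                    client.getOutputStream().write(
                            "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok".getBytes());
                } catch (Exception ignored) { }
            }
        }

    The ArrayBlockingQueue plays the role of the input queue between the event thread and the workers; the lock-free ring buffers and compare-and-swap queues mentioned above are alternative implementations of that same handoff, used when contention on the queue itself becomes the bottleneck.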

  • Obj-C: ++variable is increasing by two instead of one

    - by Eli Garfinkel
    I am writing a program that asks users yes/no questions to help them decide how to vote in an election. I have a variable representing the question number called questionnumber. Each time I go through the switch block, I add 1 to the questionnumber variable so that the next question will be displayed. This works fine for the first two questions, but then it skips the third question and moves on to the fourth. When I have more questions in the list, it skips every other question. Somewhere, for some reason, the questionnumber variable is increasing when I don't want it to. Please look at the code below and tell me what I'm doing wrong. Thank you! Eli #import "MainView.h" @implementation MainView @synthesize Question; @synthesize mispar; int conservative = 0; int liberal = 0; int questionnumber = 1; - (IBAction)agreebutton:(id)sender { ++liberal; } - (IBAction)disagreebutton:(id)sender { ++conservative; } - (IBAction)nextbutton:(id)sender { ++questionnumber; switch (questionnumber) { case 2: Question.text = @"Congress should pass a law that would ban Americans from earning more than one hundred million dollars in any given year."; break; case 3: Question.text = @"It is not fair to admit people to a university or employ them on the basis of merit alone. Factors such as race, gender, class, and sexual orientation must also be considered."; break; case 4: Question.text = @"There are two Americas - one for the rich and one for the poor."; break; case 5: Question.text = @"Top quality health care should be free for all."; break; default: break; } } @end

  • Use Objective-C without NSObject?

    - by Alex I
    I am testing some simple Objective-C code on Windows (cygwin, gcc). This code already works in Xcode on Mac. I would like to convert my objects to not subclass NSObject (or anything else, lol). Is this possible, and how? What I have so far: // MyObject.h @interface MyObject - (void)myMethod:(int) param; @end and // MyObject.m #include "MyObject.h" @interface MyObject() { // this line is a syntax error, why? int _field; } @end @implementation MyObject - (id)init { // what goes in here? return self; } - (void)myMethod:(int) param { _field = param; } @end What happens when I try compiling it: gcc -o test MyObject.m -lobjc MyObject.m:4:1: error: expected identifier or ‘(’ before ‘{’ token MyObject.m: In function ‘-[MyObject myMethod:]’: MyObject.m:17:3: error: ‘_field’ undeclared (first use in this function) EDIT My compiler is cygwin's gcc, also has cygwin gcc-objc package: gcc --version gcc (GCC) 4.7.3 I have tried looking for this online and in a couple of Objective-C tutorials, but every example of a class I have found inherits from NSObject. Is it really impossible to write Objective-C without Cocoa or some kind of Cocoa replacement that provides NSObject? (Yes, I know about GNUstep. I would really rather avoid that if possible...)

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focussing on writing behaviour specifications, I guess, rather than being purely TDD. Bah, semantics; essentially, I want an automated safety net to work above. A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies go. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and each of those collaborating dependencies has in turn signed the same quality contract, we're golden. Seems reasonable enough. Back to the thought, though. Is there any value in semi-integration tests, where a test fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it outweighing the costs?
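
    To make the question concrete, here is the shape such a semi-integration test can take, sketched in Java with a hand-rolled fake instead of Rhino Mocks (all class names invented): two concrete units are wired together and exercised as one, and only the outer boundary is faked.

        import java.util.ArrayList;
        import java.util.List;

        interface Mailer { void send(String message); }                     // the faked boundary

        class Pricing {                                                     // real collaborator #1
            int total(int unitPrice, int quantity) { return unitPrice * quantity; }
        }

        class Checkout {                                                    // real collaborator #2, wired to #1
            private final Pricing pricing;
            private final Mailer mailer;
            Checkout(Pricing pricing, Mailer mailer) { this.pricing = pricing; this.mailer = mailer; }
            void place(int unitPrice, int quantity) {
                mailer.send("total=" + pricing.total(unitPrice, quantity));
            }
        }

        public class CheckoutSemiIntegrationTest {
            public static void main(String[] args) {
                List<String> sent = new ArrayList<>();
                Mailer fakeMailer = sent::add;                              // only the edge is faked

                new Checkout(new Pricing(), fakeMailer).place(5, 3);        // concrete units exercised together

                if (!sent.equals(List.of("total=15")))
                    throw new AssertionError("unexpected mails: " + sent);
                System.out.println("ok");
            }
        }

    Such tests tend to catch wiring and contract mismatches that per-interface tests miss, at the cost of being slower to pinpoint which unit broke; many teams keep a thin layer of them alongside the isolated specifications.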

  • How to avoid using this in a constructor

    - by Paralife
    I have this situation: interface MessageListener { void onMessageReceipt(Message message); } class MessageReceiver { MessageListener listener; public MessageReceiver(MessageListener listener, other arguments...) { this.listener = listener; } loop() { Message message = nextMessage(); listener.onMessageReceipt(message); } } and I want to avoid the following pattern (using 'this' in the Client constructor): class Client implements MessageListener { MessageReceiver receiver; MessageSender sender; public Client(...) { receiver = new MessageReceiver(this, other arguments...); sender = new Sender(...); } . . . @Override public void onMessageReceipt(Message message) { if(message.isGood()) sender.send("Congratulations"); else sender.send("Boooooooo"); } } The reason I need the above functionality is that I want to call the sender inside the onMessageReceipt() function, for example to send a reply. But I don't want to pass the sender into the listener, so the only way I can think of is containing the sender in a class that implements the listener, hence the resulting Client implementation above. Is there a way to achieve this without the use of 'this' in the constructor? It feels bizarre and I don't like it, since I am passing myself to an object (MessageReceiver) before I am fully constructed. On the other hand, the MessageReceiver is not passed in from outside, it is constructed inside, but does this 'purify' the bizarre pattern? I am looking for an alternative, or for an assurance of some kind that this is safe, or for situations in which it might backfire on me.
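
    One common workaround, sketched below against the interfaces from the question (the private constructor and the static factory are my additions): keep the constructor free of 'this', and do the wiring in a factory method so that MessageReceiver only ever sees a fully constructed Client.

        class Client implements MessageListener {
            private MessageReceiver receiver;               // assigned after construction completes
            private final MessageSender sender;

            private Client(MessageSender sender) {          // nothing in here can leak a half-built object
                this.sender = sender;
            }

            static Client create(MessageSender sender /*, other arguments... */) {
                Client client = new Client(sender);
                client.receiver = new MessageReceiver(client /*, other arguments... */);
                return client;                              // 'this' is only handed out once fully built
            }

            @Override
            public void onMessageReceipt(Message message) {
                sender.send(message.isGood() ? "Congratulations" : "Boooooooo");
            }
        }

    The original version is usually safe in practice as long as MessageReceiver only stores the reference and does not call back into the listener (or start a thread) before the constructor returns; the factory simply makes that guarantee explicit instead of implicit.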

  • What is the name of this geometrical function?

    - by Spike
    In a two-dimensional integer space, you have two points, A and B. This function returns an enumeration of the points in the rectangular region bounded by A and B. A = {1,1} B = {2,3} Fn(A,B) = {{1,1},{1,2},{1,3},{2,1},{2,2},{2,3}} I can implement it in a few lines of LINQ. private void UnknownFunction(Point to, Point from, List<Point> list) { var vectorX = Enumerable.Range(Math.Min(to.X, from.X), Math.Abs(to.X - from.X) + 1); var vectorY = Enumerable.Range(Math.Min(to.Y, from.Y), Math.Abs(to.Y - from.Y) + 1); foreach (var x in vectorX) foreach (var y in vectorY) list.Add(new Point(x, y)); } I'm fairly sure that this is a standard mathematical operation, but I can't think what it is. Feel free to tell me that it's one line of code in your language of choice. Or to give me a cunning implementation with lambdas or some such. But mostly I just want to know what it's called. It's driving me nuts. It feels a little like a convolution, but it's been too long since I was at school for me to be sure.
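
    For what it's worth, the operation is essentially the Cartesian product of the two coordinate ranges, i.e. the integer lattice points of the axis-aligned bounding box of A and B. And since the question invites a one-liner in another language, here is a stream-based Java version (the Point record and method name are mine):

        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.IntStream;

        public class BoxPoints {
            record Point(int x, int y) { }

            static List<Point> between(Point a, Point b) {
                return IntStream.rangeClosed(Math.min(a.x(), b.x()), Math.max(a.x(), b.x())).boxed()
                        .flatMap(x -> IntStream.rangeClosed(Math.min(a.y(), b.y()), Math.max(a.y(), b.y()))
                                .mapToObj(y -> new Point(x, y)))
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                System.out.println(between(new Point(1, 1), new Point(2, 3)));
                // [Point[x=1, y=1], Point[x=1, y=2], Point[x=1, y=3], Point[x=2, y=1], Point[x=2, y=2], Point[x=2, y=3]]
            }
        }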

  • Detecting UITableView scrolling

    - by Xeph
    Hi, I've subclassed UITableView (as KRTableView) and implemented the four touch-based methods (touchesBegan, touchesEnded, touchesMoved, and touchesCancelled) so that I can detect when a touch-based event is being handled on a UITableView. Essentially what I need to detect is when the UITableView is scrolling up or down. However, subclassing UITableView and implementing the above methods only detects when scrolling or finger movement is occurring within a UITableViewCell, not on the entire UITableView. As soon as my finger moves onto the next cell, the touch events don't do anything. This is how I'm subclassing UITableView: #import "KRTableView.h" @implementation KRTableView - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; NSLog(@"touches began..."); } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; NSLog(@"touchesMoved occured"); } - (void)touchesCancelled:(NSSet*)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; NSLog(@"touchesCancelled occured"); } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; NSLog(@"A tap was detected on KRTableView"); } @end How can I detect when the UITableView is scrolling up or down?
