Search Results

Search found 14402 results on 577 pages for 'interface builder'.


  • C# Extend array type to overload operators

    - by Episodex
    I'd like to create my own class extending array of ints. Is that possible? What I need is array of ints that can be added by "+" operator to another array (each element added to each), and compared by "==", so it could (hopefully) be used as a key in dictionary. The thing is I don't want to implement whole IList interface to my new class, but only add those two operators to existing array class. I'm trying to do something like this: class MyArray : Array<int> But it's not working that way obviously ;). Sorry if I'm unclear but I'm searching solution for hours now... UPDATE: I tried something like this: class Zmienne : IEquatable<Zmienne> { public int[] x; public Zmienne(int ilosc) { x = new int[ilosc]; } public override bool Equals(object obj) { if (obj == null || GetType() != obj.GetType()) { return false; } return base.Equals((Zmienne)obj); } public bool Equals(Zmienne drugie) { if (x.Length != drugie.x.Length) return false; else { for (int i = 0; i < x.Length; i++) { if (x[i] != drugie.x[i]) return false; } } return true; } public override int GetHashCode() { int hash = x[0].GetHashCode(); for (int i = 1; i < x.Length; i++) hash = hash ^ x[i].GetHashCode(); return hash; } } Then use it like this: Zmienne tab1 = new Zmienne(2); Zmienne tab2 = new Zmienne(2); tab1.x[0] = 1; tab1.x[1] = 1; tab2.x[0] = 1; tab2.x[1] = 1; if (tab1 == tab2) Console.WriteLine("Works!"); And no effect. I'm not good with interfaces and overriding methods unfortunately :(. As for reason I'm trying to do it. I have some equations like: x1 + x2 = 0.45 x1 + x4 = 0.2 x2 + x4 = 0.11 There are a lot more of them, and I need to for example add first equation to second and search all others to find out if there is any that matches the combination of x'es resulting in that adding. Maybe I'm going in totally wrong direction?
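
    A minimal sketch of the wrapper approach being attempted, with illustrative names (IntVector is not from the post): besides Equals and GetHashCode, the == and + operators themselves have to be overloaded, because == on a reference type defaults to reference comparison — which is why the tab1 == tab2 check above never prints anything.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class IntVector : IEquatable<IntVector>
        {
            private readonly int[] values;

            public IntVector(params int[] values) { this.values = values; }

            // Element-wise addition, as described in the question.
            public static IntVector operator +(IntVector a, IntVector b)
            {
                if (a.values.Length != b.values.Length)
                    throw new ArgumentException("Vectors must have the same length.");
                return new IntVector(a.values.Zip(b.values, (x, y) => x + y).ToArray());
            }

            public static bool operator ==(IntVector a, IntVector b)
            {
                return ReferenceEquals(a, b) || (!ReferenceEquals(a, null) && a.Equals(b));
            }

            public static bool operator !=(IntVector a, IntVector b) { return !(a == b); }

            public bool Equals(IntVector other)
            {
                return !ReferenceEquals(other, null) && values.SequenceEqual(other.values);
            }

            public override bool Equals(object obj) { return Equals(obj as IntVector); }

            // XOR of the elements, like the GetHashCode in the post.
            public override int GetHashCode()
            {
                int hash = 0;
                foreach (int v in values) hash ^= v.GetHashCode();
                return hash;
            }
        }

        // Usage: equal contents now compare equal and work as dictionary keys.
        //   var a = new IntVector(1, 1);
        //   var b = new IntVector(1, 1);
        //   Console.WriteLine(a == b);                                   // True
        //   var seen = new Dictionary<IntVector, string> { { a + b, "2,2" } };

    One caveat with this design: if a key's wrapped array is mutated after it has been inserted into a dictionary, its hash code changes and lookups will miss, so treating the contents as immutable is safer.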

    Read the article

  • Move options between multiple dropdown lists

    - by Martha
    We currently have a form with the standard multi-select functionality of "here are the available options, here are the selected options, here are some buttons to move stuff back and forth." However, the client now wants the ability to not just select certain items, but to also categorize them. For example, given a list of books, they want to not just select the ones they own, but also the ones they've read, the ones they would like to read, and the ones they've heard about. (All examples fictional.) Thankfully, a selected item can only be in one category at a time. I can find many examples of moving items between listboxes, but not a single one for moving items between multiple listboxes. To add to the complication, the form needs to have two sets of list+categories, e.g. a list of movies that need to be categorized in addition to the aforementioned books. An additional problem is that sorting between lists is all well and good in the javascript-enabled world, but I can't really think of a good fallback interface for, say, mobile browsers. Maybe a pseudo-listbox with radio buttons next to each item? The master list of items will in general be very long - over 100 items, certainly, possibly many more. Any given category will most likely contain one or two selected items, but the possibility exists for a category to have dozens of selected items, or zero selected items. As far as OS and stuff, the site is in classic asp (quit snickering!), the server-side code is VBScript, and so far we've avoided the various Javascript libraries by the simple expedient of almost never using client-side scripting. This one form for this one client is currently the big exception. Give 'em an inch and they want a mile... Oh, and I have to add: I suck at Javascript, or really at any C-descendant language. Curly braces give me hives. I'd really, really like something I can just copy & paste into my page, maybe tweak some variable names, and never look at it again. A girl can dream, can't she? :)

    Read the article

  • Is this good OOP design?

    - by remi bourgarel
    Hi all, I'd like to know what you think about this part of our program is realized : We have in our database a list of campsite. Partners call us to get all the campsites near a GPS location or all the campsites which provide a bar (we call it a service). So how I realized it ? Here is our database : Campsite - ID - NAME - GPS_latitude - GPS_longitude CampsiteServices -Campsite_ID -Services_ID So my code (c# but it's not relevant, let say it's an OO language) looks like this public class SqlCodeCampsiteFilter{ public string SqlCode; public Dictionary<string, object> Parameters; } interface ISQLCampsiteFilter{ SqlCodeEngineCore CreateSQLCode(); } public class GpsLocationFilter : ISQLCampsiteFilter{ public float? GpsLatitude; public float? GpsLongitude; public SqlCodeEngineCore CreateSQLCode() { --return an sql code to filter on the gps location like dbo.getDistance(@gpsLat,@gpsLong,campsite.GPS_latitude,campsite.GPS_longitude) with the parameters } } public class ServiceFilter : : ISQLCampsiteFilter{ public int[] RequiredServicesID; public SqlCodeEngineCore CreateSQLCode() { --return an sql code to filter on the services "where ID IN (select CampsiteServices.Service_ID FROm CampsiteServices WHERE Service_ID in ...) } } So in my webservice code : List<ISQLFilterEngineCore> filters = new List<ISQLFilterEngineCore>(); if(gps_latitude.hasvalue && gps_longitude.hasvalue){ filters.Add (new GpsLocationFilter (gps_latitude.value,gps_longitude.value)); } if(required_services_id != null){ filters.Add (new ServiceFilter (required_services_id )); } string sql = "SELECT ID,NAME FROM campsite where 1=1" foreach(ISQLFilterEngineCore aFilter in filters){ SqlCodeCampsiteFilter code = aFilter.CreateSQLCode(); sql += code.SqlCode; mySqlCommand.AddParameters(code.Parameters);//add all the parameters to the sql command } return mySqlCommand.GetResults(); 1/ I don't use ORM for the simple reason that the system exists since 10 years and the only dev who is here since the beginning is starting to learn about difference between public and private. 2/ I don't like SP because : we can do override, and t-sql is not so funny to use :) So what do you think ? Is it clear ? Do you have any pattern that I should have a look to ? If something is not clear please ask
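
    For comparison, a hedged sketch of the same composition idea (names are illustrative except the table, column and dbo.getDistance references taken from the post). Returning the SQL fragment together with its parameters, and adding the values through command parameters rather than string concatenation, keeps the design open to new filters and avoids SQL injection:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class SqlFragment
        {
            public string Sql = "";
            public Dictionary<string, object> Parameters = new Dictionary<string, object>();
        }

        public interface ICampsiteFilter
        {
            SqlFragment CreateSqlCode();
        }

        public class GpsLocationFilter : ICampsiteFilter
        {
            private readonly float latitude, longitude;

            public GpsLocationFilter(float latitude, float longitude)
            {
                this.latitude = latitude;
                this.longitude = longitude;
            }

            public SqlFragment CreateSqlCode()
            {
                var fragment = new SqlFragment();
                fragment.Sql = " AND dbo.getDistance(@gpsLat, @gpsLong, campsite.GPS_latitude, campsite.GPS_longitude) < @maxKm";
                fragment.Parameters["@gpsLat"] = latitude;
                fragment.Parameters["@gpsLong"] = longitude;
                fragment.Parameters["@maxKm"] = 50;     // illustrative search radius
                return fragment;
            }
        }

        public static class CampsiteQueryBuilder
        {
            // Composes the WHERE clause from whichever filters the caller built.
            public static SqlCommand Build(IEnumerable<ICampsiteFilter> filters)
            {
                var command = new SqlCommand();
                var sql = "SELECT ID, NAME FROM campsite WHERE 1 = 1";
                foreach (var filter in filters)
                {
                    var fragment = filter.CreateSqlCode();
                    sql += fragment.Sql;
                    foreach (var p in fragment.Parameters)
                        command.Parameters.AddWithValue(p.Key, p.Value);
                }
                command.CommandText = sql;
                return command;
            }
        }

    Tying the SQL text and its parameters into one object means a filter can never contribute a clause without also contributing the values it needs.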

    Read the article

  • Expression Too Complex In Access 2007

    - by Jazzepi
    When I try to run this query in Access through the ODBC interface into a MySQL database I get an "Expression too complex in query expression" error. The essential thing I'm trying to do is translate abbreviated names of languages into their full body English counterparts. I was curious if there was some way to "trick" access into thinking the expression is smaller with sub queries, or if someone else had a better idea of how to solve this problem. I thought about making a temporary table and doing a join on it, but that's not supported in Access SQL. Just as an FYI, the query worked fine until I added the big long IFF chain. I tested the query on a smaller IFF chain for three languages, and that wasn't an issue, so the problem definitely stems from the huge IFF chain (It's 26 deep). Also, I might be able to drop some of the options (like combining the different forms of Chinese or Portuguese) As a test, I was able to get the SQL query to work after paring it down to 14 IFF() statements, but that's a far cry from the 26 languages I'd like to represent. SELECT TOP 5 Count( * ) AS [Number of visits by language], IIf(login.lang="ar","Arabic",IIf(login.lang="bg","Bulgarian",IIf(login.lang="zh_CN","Chinese (Simplified Han)",IIf(login.lang="zh_TW","Chinese (Traditional Han)",IIf(login.lang="cs","Czech",IIf(login.lang="da","Danish",IIf(login.lang="de","German",IIf(login.lang="en_US","United States English",IIf(login.lang="en_GB","British English",IIf(login.lang="es","Spanish",IIf(login.lang="fr","French",IIf(login.lang="el","Greek",IIf(login.lang="it","Italian",IIf(login.lang="ko","Korean",IIf(login.lang="hu","Hungarian",IIf(login.lang="nl","Dutch",IIf(login.lang="pl","Polish",IIf(login.lang="pt_PT","European Portuguese",IIf(login.lang="pt_BR","Brazilian Portuguese",IIf(login.lang="ru","Russian",IIf(login.lang="sk","Slovak",IIf(login.lang="sl","Slovenian","IIf(login.lang="fi","Finnish",IIf(login.lang="sv","Swedish",IIf(login.lang="tr","Turkish","Unknown")))))))))))))))))))))))))) AS [Language] FROM login, reservations, reservation_users, schedules WHERE (reservations.start_date Between DATEDIFF('s','1970-01-01 00:00:00',[Starting Date in the Following Format YYYY/MM/DD]) And DATEDIFF('s','1970-01-01 00:00:00',[Ending Date in the Following Format YYYY/MM/DD])) And reservations.is_blackout=0 And reservation_users.memberid=login.memberid And reservation_users.resid=reservations.resid And reservation_users.invited=0 And reservations.scheduleid=schedules.scheduleid And scheduletitle=[Schedule Title] GROUP BY login.lang ORDER BY Count( * ) DESC; @ Michael Todd I completely agree. The list of languages should have been a table in the database and the login.lang should have been a FK into that table. Unfortunately this isn't how the database was written, and it's not really mine to modify. The languages are placed into the login.lang field by the PHP running on top of the database.

    Read the article

  • Unmanaged Code calling leads to heavy memory leak!!

    - by konnychen
    Maybe I need change the title as "Unmanaged Code calling leads to heavy memory leak!" The leak is around 30M/hour I think maybe I need complete my code here because the memory leak maybe not from a static string whereas my real code derive this string from external device (see new code attached). so I handle also unmanaged code. Could it be possible the leak comes from unmanaged code? But I freed the resouce by Marshal.FreeCoTaskMem(pos); oThread2 = new Thread(new ThreadStart(Cyclic_Call)); oThread2.Start(); delegate void SetText_lab_Statubar(string text); private void m_SetText_lab_Statubar(string text) { if (this.lab_Statubar.InvokeRequired) { SetText_lab_Statubar d = new SetText_lab_Statubar(m_SetText_lab_Statubar); this.Invoke(d, new object[] { text }); } else { this.lab_Statubar.Text = text; } } private void Cyclic_Call() { do { //... ... ReadMatrixCode(Station6, 0, str_Code); this.m_SetText_lab_Statubar(str_Code[4]); Thread.Sleep(100); } while (!b_AbortThraed); } private void ReadMatrixCode(Station st, int ItemNr, string[] str_Code) { IntPtr pItemStates = IntPtr.Zero; IntPtr pErrors = IntPtr.Zero; int NumItems = itemServerHandles.Length; m_SyncIO.Read(DataSrc, NumItems, itemServerHandles, out pItemStates, out pErrors); // This calls external dll which has some of "out IntPtr" errors = new int[NumItems]; Marshal.Copy(pErrors, errors, 0, NumItems); IntPtr pos = pItemStates; // Now get the read values and check errors for (int dwCount = 0; dwCount < NumItems; dwCount++) { result[dwCount] = (ITEMSTATE)Marshal.PtrToStructure(pos, typeof(ITEMSTATE)); pos = (IntPtr)(pos.ToInt32() + Marshal.SizeOf(typeof(ITEMSTATE))); } // Free allocated COM-ressouces Marshal.FreeCoTaskMem(pItemStates); Marshal.FreeCoTaskMem(pErrors); pItemStates = IntPtr.Zero; pErrors = IntPtr.Zero; } m_syncIO is a class and finally it will call COM component which is defined below [Guid("39C12B52-011E-11D0-9675-1020AFD8ADB3")] [InterfaceType(1)] [ComConversionLoss] public interface ISyncIO { void Read(DATASOURCE dwSource, int dwCount, int[] phServer, out IntPtr ppItemValues, out IntPtr ppErrors); void Write(int dwCount, int[] phServer, object[] pItemValues, out IntPtr ppErrors); }
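
    One thing worth ruling out in code like ReadMatrixCode above: if Read, Marshal.Copy or PtrToStructure throws, the two CoTaskMem buffers are never freed, and pos.ToInt32() can overflow when the process runs as 64-bit. A hedged sketch of the same method with the cleanup moved into a finally block (ISyncIO, DATASOURCE, ITEMSTATE and m_SyncIO are the declarations already shown in the post):

        // Inside the same class as the original; needs System and System.Runtime.InteropServices.
        private void ReadMatrixCodeSafe(DATASOURCE dataSource, int[] itemServerHandles, ITEMSTATE[] result)
        {
            int numItems = itemServerHandles.Length;
            IntPtr pItemStates = IntPtr.Zero;
            IntPtr pErrors = IntPtr.Zero;
            try
            {
                m_SyncIO.Read(dataSource, numItems, itemServerHandles, out pItemStates, out pErrors);

                var errors = new int[numItems];
                Marshal.Copy(pErrors, errors, 0, numItems);

                IntPtr pos = pItemStates;
                for (int i = 0; i < numItems; i++)
                {
                    result[i] = (ITEMSTATE)Marshal.PtrToStructure(pos, typeof(ITEMSTATE));
                    // ToInt64 instead of ToInt32: safe if the process ever runs as 64-bit.
                    pos = new IntPtr(pos.ToInt64() + Marshal.SizeOf(typeof(ITEMSTATE)));
                }
            }
            finally
            {
                // Release the COM-allocated buffers on every path, including exceptions.
                if (pItemStates != IntPtr.Zero) Marshal.FreeCoTaskMem(pItemStates);
                if (pErrors != IntPtr.Zero) Marshal.FreeCoTaskMem(pErrors);
            }
        }

    If the leak persists after that, the next suspects are allocations owned by the ITEMSTATE fields themselves (for example VARIANT or BSTR members filled in by the COM server): freeing the outer buffer alone does not reclaim those, and they need to be released separately.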

    Read the article

  • Realtime Twitter Replies?

    - by ejunker
    I have created Twitter bots for many geographic locations. I want to allow users to @-reply to the Twitter bot with commands and then have the bot respond with the results. I would like to have the bot reply to the user as quickly as possible (realtime). Apparently, Twitter used to have an XMPP/Jabber interface that would provide this type of realtime feed of replies but it was shut down. As I see it my options are to use one of the following: REST API This would involve polling every X minutes for each bot. The problem with this is that it is not realtime and each Twitter account would have to be polled. Search API The search API does allow specifying a "-to" parameter in the search and replies to all bots could be aggregated in a search such as "-to bot1 OR -to bot2...". Though if you have hundreds of bots then the search string would get very long and probably exceed the maximum length of a GET request. Streaming API The streaming API looks very promising as it provides realtime results. The API allows you to specify a follow and track parameters. follow is not useful as the bot does not know who will be sending it commands. track allows you to specify keywords to track. This could possibly work by creating a daemon process that connects to the Streaming API and tracks all references to the bot's names. Once again since there are lots of bots to track the length and complexity of the query may be an issue. Another idea would be to track a special hashtag such as #botcommand and then a user could send a command using this syntax @bot1 weather #botcommand. Then by using the Streaming API to track all references to #botcommand would give you a realtime stream of all the commands. Further parsing could then be done to determine which bot to send the command to. Third-party service Are there any third-party companies that have access to the Twitter firehouse and offer realtime data? I haven't investigated these, but here are a few that I have found: Gnip Tweet.IM excla.im TwitterSpy - seems to use polling, not realtime I'm leaning towards using the Streaming API. Is there a better way to get near realtime @-replies for many (hundreds) of Twitter accounts?

    Read the article

  • What am I not getting about this abstract class implementation?

    - by Schnapple
    PREFACE: I'm relatively inexperienced in C++ so this very well could be a Day 1 n00b question. I'm working on something whose long term goal is to be portable across multiple operating systems. I have the following files: Utilities.h #include <string> class Utilities { public: Utilities() { }; virtual ~Utilities() { }; virtual std::string ParseString(std::string const& RawString) = 0; }; UtilitiesWin.h (for the Windows class/implementation) #include <string> #include "Utilities.h" class UtilitiesWin : public Utilities { public: UtilitiesWin() { }; virtual ~UtilitiesWin() { }; virtual std::string ParseString(std::string const& RawString); }; UtilitiesWin.cpp #include <string> #include "UtilitiesWin.h" std::string UtilitiesWin::ParseString(std::string const& RawString) { // Magic happens here! // I'll put in a line of code to make it seem valid return ""; } So then elsewhere in my code I have this #include <string> #include "Utilities.h" void SomeProgram::SomeMethod() { Utilities *u = new Utilities(); StringData = u->ParseString(StringData); // StringData defined elsewhere } The compiler (Visual Studio 2008) is dying on the instance declaration c:\somepath\somecode.cpp(3) : error C2259: 'Utilities' : cannot instantiate abstract class due to following members: 'std::string Utilities::ParseString(const std::string &)' : is abstract c:\somepath\utilities.h(9) : see declaration of 'Utilities::ParseString' So in this case what I'm wanting to do is use the abstract class (Utilities) like an interface and have it know to go to the implemented version (UtilitiesWin). Obviously I'm doing something wrong but I'm not sure what. It occurs to me as I'm writing this that there's probably a crucial connection between the UtilitiesWin implementation of the Utilities abstract class that I've missed, but I'm not sure where. I mean, the following works #include <string> #include "UtilitiesWin.h" void SomeProgram::SomeMethod() { Utilities *u = new UtilitiesWin(); StringData = u->ParseString(StringData); // StringData defined elsewhere } but it means I'd have to conditionally go through the different versions later (i.e., UtilitiesMac(), UtilitiesLinux(), etc.) What have I missed here?

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility Tutorial. It was all going nicely, until... The tutorial uses a custom sub-class of NSManagedObject, called 'Run'. Run.h looks like this: #import <Foundation/Foundation.h> #import <CoreData/CoreData.h> @interface Run : NSManagedObject { NSInteger processID; } @property (retain) NSDate *date; @property (retain) NSDate *primitiveDate; @property NSInteger processID; @end Now, in Run.m we have an accessor method for the processID variable: - (void)setProcessID:(int)newProcessID { [self willChangeValueForKey:@"processID"]; processID = newProcessID; [self didChangeValueForKey:@"processID"]; } In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSprocessInfo, in preparation for setting the processID of the run object. NSManagedObjectContext *moc = managedObjectContext(); NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"]; Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc]; NSProcessInfo *processInfo = [NSProcessInfo processInfo]; Next, we try to call the accessor method defined in Run.m to set the value of processID: [run setProcessID:[processInfo processIdentifier]]; And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received! ===MORE INFORMATION=== Following up on Jeremy's suggestions: The processID attribute in the model is set up like this: NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc]init]; [idAttribute setName:@"processID"]; [idAttribute setAttributeType:NSInteger32AttributeType]; [idAttribute setOptional:NO]; [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]]; which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this: - (int)processID { [self willAccessValueForKey:@"processID"]; NSInteger pid = processID; [self didAccessValueForKey:@"processID"]; return pid; } and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!

    Read the article

  • Idiomatic property-changed notification in Scala?

    - by Jeremy Bell
    I'm trying to find a cleaner alternative (that is idiomatic to Scala) to the kind of thing you see with data-binding in WPF/silverlight data-binding - that is, implementing INotifyPropertyChanged. First, some background: In .Net WPF or silverlight applications, you have the concept of two-way data-binding (that is, binding the value of some element of the UI to a .net property of the DataContext in such a way that changes to the UI element affect the property, and vise versa. One way to enable this is to implement the INotifyPropertyChanged interface in your DataContext. Unfortunately, this introduces a lot of boilerplate code for any property you add to the "ModelView" type. Here is how it might look in Scala: trait IDrawable extends INotifyPropertyChanged { protected var drawOrder : Int = 0 def DrawOrder : Int = drawOrder def DrawOrder_=(value : Int) { if(drawOrder != value) { drawOrder = value OnPropertyChanged("DrawOrder") } } protected var visible : Boolean = true def Visible : Boolean = visible def Visible_=(value: Boolean) = { if(visible != value) { visible = value OnPropertyChanged("Visible") } } def Mutate() : Unit = { if(Visible) { DrawOrder += 1 // Should trigger the PropertyChanged "Event" of INotifyPropertyChanged trait } } } For the sake of space, let's assume the INotifyPropertyChanged type is a trait that manages a list of callbacks of type (AnyRef, String) = Unit, and that OnPropertyChanged is a method that invokes all those callbacks, passing "this" as the AnyRef, and the passed-in String). This would just be an event in C#. You can immediately see the problem: that's a ton of boilerplate code for just two properties. I've always wanted to write something like this instead: trait IDrawable { val Visible = new ObservableProperty[Boolean]('Visible, true) val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0) def Mutate() : Unit = { if(Visible) { DrawOrder += 1 // Should trigger the PropertyChanged "Event" of ObservableProperty class } } } I know that I can easily write it like this, if ObservableProperty[T] has Value/Value_= methods (this is the method I'm using now): trait IDrawable { // on a side note, is there some way to get a Symbol representing the Visible field // on the following line, instead of hard-coding it in the ObservableProperty // constructor? val Visible = new ObservableProperty[Boolean]('Visible, true) val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0) def Mutate() : Unit = { if(Visible.Value) { DrawOrder.Value += 1 } } } // given this implementation of ObservableProperty[T] in my library // note: IEvent, Event, and EventArgs are classes in my library for // handling lists of callbacks - they work similarly to events in C# class PropertyChangedEventArgs(val PropertyName: Symbol) extends EventArgs("") class ObservableProperty[T](val PropertyName: Symbol, private var value: T) { protected val propertyChanged = new Event[PropertyChangedEventArgs] def PropertyChanged: IEvent[PropertyChangedEventArgs] = propertyChanged def Value = value; def Value_=(value: T) { if(this.value != value) { this.value = value propertyChanged(this, new PropertyChangedEventArgs(PropertyName)) } } } But is there any way to implement the first version using implicits or some other feature/idiom of Scala to make ObservableProperty instances function as if they were regular "properties" in scala, without needing to call the Value methods? 
The only other thing I can think of is something like this, which is more verbose than either of the above two versions, but is still less verbose than the original: trait IDrawable { private val visible = new ObservableProperty[Boolean]('Visible, false) def Visible = visible.Value def Visible_=(value: Boolean): Unit = { visible.Value = value } private val drawOrder = new ObservableProperty[Int]('DrawOrder, 0) def DrawOrder = drawOrder.Value def DrawOrder_=(value: Int): Unit = { drawOrder.Value = value } def Mutate() : Unit = { if(Visible) { DrawOrder += 1 } } }

    Read the article

  • Countdown timer using NSTimer in "0:00" format

    - by Joey Pennacchio
    I have been researching for days on how to do this and nobody has an answer. I am creating an app with 5 timers on the same view. I need to create a timer that counts down from "15:00" (minutes and seconds), and, another that counts down from "2:58" (minutes and seconds). The 15 minute timer should not repeat, but it should stop all other timers when it reaches "00:00." The "2:58" timer should repeat until the "15:00" or "Game Clock" reaches 0. Right now, I have scrapped almost all of my code and I'm working on the "2:58" repeating timer, or "rocketTimer." Does anyone know how to do this? Here is my code: #import <UIKit/UIKit.h> @interface FirstViewController : UIViewController { //Rocket Timer int totalSeconds; bool timerActive; NSTimer *rocketTimer; IBOutlet UILabel *rocketCount; int newTotalSeconds; int totalRocketSeconds; int minutes; int seconds; } - (IBAction)Start; @end and my .m #import "FirstViewController.h" @implementation FirstViewController - (NSString *)timeFormatted:(int)newTotalSeconds { int seconds = totalSeconds % 60; int minutes = (totalSeconds / 60) % 60; return [NSString stringWithFormat:@"%i:%02d"], minutes, seconds; } -(IBAction)Start { newTotalSeconds = 178; //for 2:58 newTotalSeconds = newTotalSeconds-1; rocketCount.text = [self timeFormatted:newTotalSeconds]; if(timerActive == NO){ timerActive = YES; newTotalSeconds = 178; [rocketTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(timerLoop) userInfo:nil repeats:YES]; } else{ timerActive = NO; [rocketTimer invalidate]; rocketTimer = nil; } } -(void)timerLoop:(id)sender { totalSeconds = totalSeconds-1; rocketCount.text = [self timeFormatted:totalSeconds]; } - (void)dealloc { [super dealloc]; [rocketTimer release]; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view from its nib. timerActive = NO; } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } @end

    Read the article

  • How to initiate chatting between two clients and two clients only, using applets and servlets?

    - by mithun1538
    Hello everyone, I first need to apologize for my earlier questions. (You can check my profile for them)They seemed to ask more questions than give answers. Hence, I am laying down the actual question that started all them absurd questions. I am trying to design a chat applet. Till now, I have coded the applet, servlet and communication between the applet and the servlet. The code in the servlet side is such that I was able to establish chatting between clients using the applets, but the code was more like a broadcast all feature, i.e. all clients would be chatting with each other. That was my first objective when I started designing the chat applet. The second step is chatting between only two specific users, much like any other chat application we have. So this was my idea for it: I create an instance of the servlet that has the 'broadcast-all' code. I then pass the address of this instance to the respective clients. 2 client applets use the address to then chat. Technically the code is 'broadcast-all', but since only 2 clients are connected to it, it gives the chatting between two clients feature. Thus, groups of 2 clients have different instances of the same servlet, and each instance handles chatting between two clients at a max. However, as predicted, the idea didn't materialize! I tried to create an instance of the servlet but the only solution for that was using sessions on the servlet side, and I don't know how to use this session for later communications. I then tried to modify my broadcast-all code. In that code, I was using classes that implemented Observer and Observable interfaces. So the next idea that I got was: Create a new object of the Observable class(say class_1). This object be common to 2 clients. 2 clients that wish to chat will use same object of the class_1. 2 other clients will use a different object of class_1. But the problem here lies with the class that implements the Observer interface(say class_2). Since this has observers monitoring the same type of class, namely class_1, how do I establish an observer monitoring one object of class_1 and another observer monitoring another object of the same class class_1 (Because notifyObservers() would notify all the observers and I can't assign a particular observer to a particular object)? I first decided to ask individual problems, like how to create instances of servlets, using objects of observable and observer and so on in stackoverflow... but I got confused even more. Can anyone give me an idea how to establish chatting between two clients only?(I am using Http and not sockets or RMI). Regards, Mithun. P.S. Thanks to all who replied to my previous (absurd) queries. I should have stated the purpose earlier so that you guys could help me better.

    Read the article

  • EXC_BAD_ACCESS when I change moviePlayer contentURL

    - by Bruno
    Hello, In few words, my application is doing that : 1) My main view (MovieListController) has some video thumbnails and when I tap on one, it displays the moviePlayer (MoviePlayerViewController) : MovieListController.h : @interface MoviePlayerViewController : UIViewController <UITableViewDelegate>{ UIView *viewForMovie; MPMoviePlayerController *player; } @property (nonatomic, retain) IBOutlet UIView *viewForMovie; @property (nonatomic, retain) MPMoviePlayerController *player; - (NSURL *)movieURL; @end MovieListController.m : MoviePlayerViewController *controllerTV = [[MoviePlayerViewController alloc] initWithNibName:@"MoviePlayerViewController" bundle:nil]; controllerTV.delegate = self; controllerTV.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal; [self presentModalViewController: controllerTV animated: YES]; [controllerTV release]; 2) In my moviePlayer, I initialize the video I want to play MoviePlayerViewController.m : @implementation MoviePlayerViewController @synthesize player; @synthesize viewForMovie; - (void)viewDidLoad { NSLog(@"start"); [super viewDidLoad]; self.player = [[MPMoviePlayerController alloc] init]; self.player.view.frame = self.viewForMovie.bounds; self.player.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight; [self.viewForMovie addSubview:player.view]; self.player.contentURL = [self movieURL]; } - (void)dealloc { NSLog(@"dealloc TV"); [player release]; [viewForMovie release]; [super dealloc]; } -(NSURL *)movieURL { NSBundle *bundle = [NSBundle mainBundle]; NSString *moviePath = [bundle pathForResource:@"FR_Tribord_Surf camp_100204" ofType:@"mp4"]; if (moviePath) { return [NSURL fileURLWithPath:moviePath]; } else { return nil; } } - It's working good, my movie is display My problem : When I go back to my main view : - (void) returnToMap: (MoviePlayerViewController *) controller { [self dismissModalViewControllerAnimated: YES]; } And I tap in a thumbnail to display again the moviePlayer (MoviePlayerViewController), I get a *Program received signal: “EXC_BAD_ACCESS”.* In my debugger I saw that it's stopping on the thread "main" : // // main.m // MoviePlayer // // Created by Eric Freeman on 3/27/10. // Copyright Apple Inc 2010. All rights reserved. // #import <UIKit/UIKit.h> int main(int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int retVal = UIApplicationMain(argc, argv, nil, nil); //EXC_BAD_ACCESS [pool release]; return retVal; } If I comment self.player.contentURL = [self movieURL]; it's working, but when I let it, iI have this problem. I read that it's due to null pointer or memory problem but I don't understand why it's working the first time and not the second time. I release my object in dealloc method. Thanks for your help ! Bruno.

    Read the article

  • ASP.NET MVC 2 AJAX dilemma: Lose Models concept or create unmanageable JavaScript

    - by Slightly Frustrated
    Hi, Ok, let's assume we are working with ASP.NET MVC 2 (latest and greatest preview) and we want to create AJAX user interface with jQuery. So what are our real options here? Option 1 - Pass Json from the Controller to the view, and then the view submits Json back to the controller. This means (in the order given): User opens some View (let's say - /Invoices/January) which has to visualize a list of data (e.g. <IEnumerable<X.Y.Z.Models.Invoice>>) Controller retrieves the Model from the repository (assuming we are using repository pattern). Controller creates a new instance of a class which we will serialize to Json. The reasaon we do this, is because the model may not be serializable (circular reference ftl) Controller populates the soon-to-be-serialized class with data Controller serializes the class to Json and passes it the view. User does some change and submits the 'form' The View submits back Json to the controller The Controller now must 'manually' validate the input, because the Json passed does not bind to a Model See, if our View is communicating to the controller via Json, we lose the Model validation, which IMHO is incredible disadvantage. In this case, forget about data annotations and stuff. Option 2 - Ok, the alternative of the first approach is to pass the Models to the Views, which is the default behavior in the template when you start a new project. We pass a strong typed model to the view The view renders the appropriate html and javascript, sticking to the model property names. This is important! The user submits the form. If we stick to the model names, when we .serialize() the form and submit it to the controller it will map to a model. There is no Json mapping. The submitted form directly binds to a strongly typed model, hence, we can use the model validation. E.g. we keep the business logic where it should be. Problem with this approach is, if we refactor some of the Models (change property names, types, etc), the javascript we wrote would become invalid. We will have to manually refactor the scripting and hope we don't miss something. There is no way you can test it either. Ok, the question is - how to write an AJAX front end, which keeps the business logic validation in the model (e.g. controller passes and receives a Model type), but in the same time doesn't screw up the javascript and html when we refactor the model?
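
    For what it is worth, option 2 does not force a choice between jQuery and model validation. A hedged sketch (type and action names are illustrative): as long as the form inputs are named after the model's properties, posting $('form').serialize() lets the default model binder rebuild the strongly typed model, so data annotations still run on the server and the action only has to report ModelState back:

        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.Web.Mvc;

        public class Invoice
        {
            [Required]
            public string Number { get; set; }

            [Range(0, 1000000)]
            public decimal Amount { get; set; }
        }

        public class InvoicesController : Controller
        {
            // The view posts $('#invoiceForm').serialize() to this action;
            // because the input names match the property names, the default
            // binder rebuilds Invoice and evaluates its data annotations.
            [HttpPost]
            public ActionResult Save(Invoice model)
            {
                if (!ModelState.IsValid)
                    return Json(new { ok = false, errors = ModelState.Keys.ToArray() });

                // persist the invoice here ...
                return Json(new { ok = true });
            }
        }

    The refactoring worry can also be softened: MVC 2's strongly typed helpers (for example Html.TextBoxFor(m => m.Number)) generate the input names from the model, so a renamed property shows up as a broken expression in the view rather than a silently mismatched field name in hand-written script.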

    Read the article

  • iPhone: Problems releasing UIViewController in a multithreaded environment

    - by bart-simpson
    Hi! I have a UIViewController and in that controller, i am fetching an image from a URL source. The image is fetched in a separate thread after which the user-interface is updated on the main thread. This controller is displayed as a page in a UIScrollView parent which is implemented to release controllers that are not in view anymore. When the thread finishes fetching content before the UIViewController is released, everything works fine - but when the user scrolls to another page before the thread finishes, the controller is released and the only handle to the controller is owned by the thread making releaseCount of the controller equals to 1. Now, as soon as the thread drains NSAutoreleasePool, the controller gets releases because the releaseCount becomes 0. At this point, my application crashes and i get the following error message: bool _WebTryThreadLock(bool), 0x4d99c60: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now... The backtrace reveals that the application crashed on the call to [super dealloc] and it makes total sense because the dealloc function must have been triggered by the thread when the pool was drained. My question is, how i can overcome this error and release the controller without leaking memory? One solution that i tried was to call [self retain] before the pool is drained so that retainCount doesn't fall to zero and then using the following code to release controller in the main thread: [self performSelectorOnMainThread:@selector(autorelease) withObject:nil waitUntilDone:NO]; Unfortunately, this did not work out. Below is the function that is executed on a thread: - (void)thread_fetchContent { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSURL *imgURL = [NSURL URLWithString:@"http://www.domain.com/image.png"]; // UIImage *imgHotspot is declared as private - The image is retained // here and released as soon as it is assigned to UIImageView imgHotspot = [[[UIImage alloc] initWithData: [NSData dataWithContentsOfURL: imgURL]] retain]; if ([self retainCount] == 1) { [self retain]; // increment retain count ~ workaround [pool drain]; // drain pool // this doesn't work - i get the same error [self performSelectorOnMainThread:@selector(autorelease) withObject:nil waitUntilDone:NO]; } else { // show fetched image on the main thread - this works fine! [self performSelectorOnMainThread:@selector(showImage) withObject:nil waitUntilDone:NO]; [pool drain]; } } Please help! Thank you in advance.

    Read the article

  • Which Amazon S3 .NET library is most useful and efficient?

    - by Geo
    There are two main open source .net Amazon S3 libraries. Three Sharp LitS3 I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view. Below some sample calls using LitS3: On demo controller: private S3Service s3 = new S3Service() { AccessKeyID = "Thekey", SecretAccessKey = "testing" }; public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.NET MVC!"; return View("Index",s3.GetAllBuckets()); } On demo view: <% foreach (var item in Model) { %> <p> <%= Html.Encode(item.Name) %> </p> <% } %> EDIT 1: Since I have to keep moving and there is no clear indication of what library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change library if I need to in the future. Below is a section of the S3Repository that I have created and will let me change libraries in case I need to: using LitS3; namespace S3Helper.Models { public class S3Repository : IS3Repository { private S3Service _repository; #region IS3Repository Members public IQueryable<Bucket> FindAllBuckets() { return _repository.GetAllBuckets().AsQueryable(); } public IQueryable<ListEntry> FindAllObjects(string BucketName) { return _repository.ListAllObjects(BucketName).AsQueryable(); } #endregion If you have any information about this question please let me know in a comment, and I will get back and edit the question. EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design, I know this is probably a waist of time, but I really want a good long run solution. Below you will see two samples of the same action with the two libraries, maybe this will motivate some of you to let me know your thoughts. WITH THREE SHARP LIBRARY: public IQueryable<T> FindAllBuckets<T>() { List<string> list = new List<string>(); using (BucketListRequest request = new BucketListRequest(null)) using (BucketListResponse response = service.BucketList(request)) { XmlDocument bucketXml = response.StreamResponseToXmlDocument(); XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']"); foreach (XmlNode bucket in buckets) { list.Add(bucket.InnerXml); } } return list.Cast<T>().AsQueryable(); } WITH LITS3 LIBRARY: public IQueryable<T> FindAllBuckets<T>() { return _repository.GetAllBuckets() .Cast<T>() .AsQueryable(); }
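
    Since neither library is an obvious winner, one refinement of the repository approach above (a sketch; only S3Service.GetAllBuckets and Bucket.Name are taken from the code shown, the rest is illustrative) is to expose a library-neutral type from the interface instead of LitS3's Bucket, so a Three Sharp-backed implementation can be swapped in without touching any callers:

        using System.Collections.Generic;

        // Library-neutral result type: callers never see LitS3 or Three Sharp classes.
        public class S3BucketInfo
        {
            public string Name { get; set; }
        }

        public interface IS3Repository
        {
            IEnumerable<S3BucketInfo> FindAllBuckets();
        }

        // LitS3-backed implementation; a ThreeSharpRepository would implement
        // the same interface against that library's client type.
        public class LitS3Repository : IS3Repository
        {
            private readonly LitS3.S3Service service;

            public LitS3Repository(LitS3.S3Service service)
            {
                this.service = service;
            }

            public IEnumerable<S3BucketInfo> FindAllBuckets()
            {
                foreach (var bucket in service.GetAllBuckets())
                    yield return new S3BucketInfo { Name = bucket.Name };
            }
        }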

    Read the article

  • IQueryable and lazy loading

    - by Nelson
    I'm having a hard time determining the best way to handle this... With Entity Framework (and L2S), LINQ queries return IQueryable. I have read various opinions on whether the DAL/BLL should return IQueryable, IEnumerable or IList. Assuming we go with IList, then the query is run immediately and that control is not passed on to the next layer. This makes it easier to unit test, etc. You lose the ability to refine the query at higher levels, but you could simply create another method that allows you to refine the query and still return IList. And there are many more pros/cons. So far so good. Now comes Entity Framework and lazy loading. I am using POCO objects with proxies in .NET 4/VS 2010. In the presentation layer I do: foreach (Order order in bll.GetOrders()) { foreach (OrderLine orderLine in order.OrderLines) { // Do something } } In this case, GetOrders() returns IList so it executes immediately before returning to the PL. But in the next foreach, you have lazy loading which executes multiple SQL queries as it gets all the OrderLines. So basically, the PL is running SQL queries "on demand" in the wrong layer. Is there any sensible way to avoid this? I could turn lazy loading off, but then what's the point of having this "feature" that everyone was complaining EF1 didn't have? And I'll admit it is very useful in many scenarios. So I see several options: Somehow remove all associations in the entities and add methods to return them. This goes against the default EF behavior/code generation and makes it harder to do some composite (multiple entity) LINQ queries. It seems like a step backwards. I vote no. If we have lazy loading anyway which makes it hard to unit test, then go all the way and return IQueryable. You'll have more control farther up the layers. I still don't think this is a good option because IQueryable ties you to L2S, L2E, or your own full implementation of IQueryable. Lazy loading may run queries "on demand", but doesn't tie you to any specific interface. I vote no. Turn off lazy loading. You'll have to handle your associations manually. This could be with eager loading's .Include(). I vote yes in some specific cases. Keep IList and lazy loading. I vote yes in many cases, only due to the troubles with the others. Any other options or suggestions? I haven't found an option that really convinces me.
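
    Option 3 mostly comes down to being explicit at the query boundary. A hedged sketch (the context and service names are illustrative; Order and OrderLines are from the example above): the BLL keeps returning a fully materialised IList, but eager-loads the children the caller is known to need, so the presentation loop never fires per-row SQL:

        using System.Collections.Generic;
        using System.Linq;

        public class OrderService
        {
            private readonly MyEntities context;   // illustrative EF context type

            public OrderService(MyEntities context)
            {
                this.context = context;
            }

            // One SQL query that brings the order lines along, then materialises
            // the results before they leave the layer.
            public IList<Order> GetOrdersWithLines()
            {
                return context.Orders
                              .Include("OrderLines")
                              .ToList();
            }
        }

    With lazy loading switched off on the context (ContextOptions.LazyLoadingEnabled = false in EF 4), a forgotten Include surfaces immediately as an unpopulated navigation property during testing, instead of a silent flood of on-demand queries in production.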

    Read the article

  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound built on a system flexible enough to compile to the following: native ppc/x86/x86_64/arm binaries or a language which compiles to them javascript actionscript bytecode or a language which compiles to it (actionscript 3, haxe) optionally java I imagine, for example, creating an API where I can open windows and make OpenGL-like calls and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3d graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues: Building the game library in haXe Pros: Targets exist for php, javascript, actionscript bytecode, c++ High level, object oriented language Cons: No support for finally{} blocks or destructors, making resource cleanup difficult C++ target does not allow room for producing highly optimized libraries -- the foreign function interface requires all primitive types be boxed in a wrapper object, as if writing bindings for a scripting language; these feel unideal for real-time graphics and audio, especially exporting low-level functions. Doesn't seem quite yet mature Using the C preprocessor to create a translator, writing programs entirely with macros Pros: CPP is widespread and simple to use Cons: This is an arduous task and probably the wrong tool for the job CPP implementations differ widely in support for features (e.g. xcode cpp has no variadic macros despite claiming C99 compliance) There is little-to-no room for optimization in this route Using llvm's support for multiple backends to target c/c++ to web languages Pros: Can code in c/c++ LLVM is a very mature highly optimizing compiler performing e.g. global inlining Targets exist for actionscript (alchemy) and javascript (emscripten) Cons: Actionscript target is closed source, unmaintained, and buggy. Javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature An LLVM target must convert from low-level bytecode, so high-level constructs are lost and bloated unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize. "jump" instructions cause problems for languages with no "goto" statements. Using libclang to write a translator from C/C++ to web languages Pros: A beautiful parsing library providing easy access to the code structure Can code in C/C++ Has sponsored developer effort from Apple Cons: Incomplete; current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified. Translating code prior to compilation may forgo optimizations assumed in c/c++ such as inlining. Creating new code generators for clang to translate into web languages Pros: Can code in C/C++ as libclang Cons: There is no API; code structure is unstable A much larger job than using libclang; the innards of clang are complex Building the game library in Common Lisp Pros: Flexible, ancient, well-developed language Extensive introspection should ease writing translators Translators exist for at least javascript Cons: Unfamiliar language No standardized library functions, widely varying implementations Which of these avenues should I pursue? Do you know of any others, or any systems that might be useful? Does a general project like this exist somewhere already? 
Thank you for any input.

    Read the article

  • Mocking a concrete class: templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to testing a concrete object with this sort of structure. class Database { public: Database(Server server) : server_(server) {} int Query(const char* expression) { server_.Connect(); return server_.ExecuteQuery(); } private: Server server_; }; i.e. it has no virtual functions, let alone a well-defined interface. I want to a fake database which calls mock services for testing. Even worse, I want the same code to be either built against the real version or the fake so that the same testing code can both: Test the real Database implementation - for integration tests Test the fake implementation, which calls mock services To solve this, I'm using a templated fake, like this: #ifndef INTEGRATION_TESTS class FakeDatabase { public: FakeDatabase() : realDb_(mockServer_) {} int Query(const char* expression) { MOCK_EXPECT_CALL(mockServer_, Query, 3); return realDb_.Query(); } private: // in non-INTEGRATION_TESTS builds, Server is a mock Server with // extra testing methods that allows mocking Server mockServer_; Database realDb_; }; #endif template <class T> class TestDatabaseContainer { public: int Query(const char* expression) { int result = database_.Query(expression); std::cout << "LOG: " << result << endl; return result; } private: T database_; }; Edit: Note the fake Database must call the real Database (but with a mock Server). Now to switch between them I'm planning the following test framework: class DatabaseTests { public: #ifdef INTEGRATION_TESTS typedef TestDatabaseContainer<Database> TestDatabase ; #else typedef TestDatabaseContainer<FakeDatabase> TestDatabase ; #endif TestDatabase& GetDb() { return _testDatabase; } private: TestDatabase _testDatabase; }; class QueryTestCase : public DatabaseTests { public: void TestStep1() { ASSERT(GetDb().Query(static_cast<const char *>("")) == 3); return; } }; I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: Whether there's a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.

    Read the article

  • How can I convert this to a factory/abstract factory?

    - by Amitd
    I'm using MigraDoc to create a pdf document. I have business entities similar to the those used in MigraDoc. public class Page{ public List<PageContent> Content { get; set; } } public abstract class PageContent { public int Width { get; set; } public int Height { get; set; } public Margin Margin { get; set; } } public class Paragraph : PageContent{ public string Text { get; set; } } public class Table : PageContent{ public int Rows { get; set; } public int Columns { get; set; } //.... more } In my business logic, there are rendering classes for each type public interface IPdfRenderer<T> { T Render(MigraDoc.DocumentObjectModel.Section s); } class ParagraphRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Paragraph> { BusinessEntities.PDF.Paragraph paragraph; public ParagraphRenderer(BusinessEntities.PDF.Paragraph p) { paragraph = p; } public MigraDoc.DocumentObjectModel.Paragraph Render(MigraDoc.DocumentObjectModel.Section s) { var paragraph = s.AddParagraph(); // add text from paragraph etc return paragraph; } } public class TableRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Tables.Table> { BusinessEntities.PDF.Table table; public TableRenderer(BusinessEntities.PDF.Table t) { table =t; } public MigraDoc.DocumentObjectModel.Tables.Table Render(Section obj) { var table = obj.AddTable(); //fill table based on table } } I want to create a PDF page as : var document = new Document(); var section = document.AddSection();// section is a page in pdf var page = GetPage(1); // get a page from business classes foreach (var content in page.Content) { //var renderer = createRenderer(content); // // get Renderer based on Business type ?? // renderer.Render(section) } For createRenderer() i can use switch case/dictionary and return type. How can i get/create the renderer generically based on type ? How can I use factory or abstract factory here? Or which design pattern better suits this problem?
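
    One common shape for the factory being asked about, sketched with the types from the post (BusinessEntities.PDF.* and MigraDoc's Section): drop the generic return type so every renderer shares one non-generic interface, then map each content type to a small creation delegate. Adding a new PageContent subtype then means one new renderer class and one dictionary entry, with no switch statement to maintain.

        using System;
        using System.Collections.Generic;
        using BusinessEntities.PDF;                            // Paragraph, Table, PageContent
        using Section = MigraDoc.DocumentObjectModel.Section;

        // Non-generic so the page loop can treat all renderers uniformly.
        // ParagraphRenderer and TableRenderer would implement this instead of
        // (or in addition to) the generic IPdfRenderer<T> shown above.
        public interface IPdfRenderer
        {
            void Render(Section section);
        }

        public static class RendererFactory
        {
            private static readonly Dictionary<Type, Func<PageContent, IPdfRenderer>> creators =
                new Dictionary<Type, Func<PageContent, IPdfRenderer>>
                {
                    { typeof(Paragraph), c => new ParagraphRenderer((Paragraph)c) },
                    { typeof(Table),     c => new TableRenderer((Table)c) }
                };

            public static IPdfRenderer Create(PageContent content)
            {
                Func<PageContent, IPdfRenderer> creator;
                if (!creators.TryGetValue(content.GetType(), out creator))
                    throw new NotSupportedException("No renderer registered for " + content.GetType().Name);
                return creator(content);
            }
        }

        // The page loop from the post then becomes:
        //   foreach (var content in page.Content)
        //       RendererFactory.Create(content).Render(section);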

    Read the article

  • Generated service mock: everything but RhinoMocks fails?

    - by hko
    I have the "quest" to search for the next Mocking Framework for my company, and basically it's down to NSubstitute (simplest syntax, but no strict mocks), FakeItEasy(best reviews, Roy Osherove bonus, and slightly better lib support than NSubstitute), Moq (best "other libs support", biggest featureset, downside: mock.Object). We definitely want to move on from RhinoMocks, e.g. because of the unusefull interactiontest error messages (it should tell me what the parameter was instead, when a verification fails). So I was pretty surprised the other day (that was yesterday) when I found out RhinoMocks could do a thing where every other mock framework fails at: Mocking an autogenerated SomethingService (a typical VS autogenerated service with a default construtor in a partial class). Please don't argue about the design.. I intend to write lightweight integration tests (and some unit tests), and I can't mess around with the service, the product is installed on too many customers system. See this code: // here the NSubstitute and FakeItEasy equivalents throw an exception.. see below TicketStoreService fakeTicketStoreService = MockRepository.GenerateMock<TicketStoreService>(); fakeTicketStoreService.Expect(service => service.DoSomething(Arg.Is(new Guid())).Return(new Guid()); fakeTicketStoreService.DoSomething(Arg.Is(new Guid())); fakeTicketStoreService.VerifyAllExpectations(); Note that DoSomething is a non-virtual methodcall in an autogenerated class. So it shouldn't work, according to common knowledge. But it does. Problem is that it's the only (non commercial) framework that can do this: Rhino.Mocks works, and verification works too FakeItEasy says it doesn't find a default constructor (probably just wrong exception message): No default constructor was found on the type SomeNamespace.TicketStoreService Moq gives something sane and understandable: Invalid setup on a non-virtual (overridable in VB) member: service=> service.DoSomething Nsubstitute gives a message System.NotSupportedException: Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface. I'm really wondering what's going on here with the frameworks, except Moq. The "fancy new" frameworks seem to have an initial perf hit too, probably preparing some Type cache and serializing stuff, whilst RhinoMocks somehow manages to create a very "slim" mock without recursion. I have to admit I didn't like RhinoMocks very well, but here it shines.. unfortunately. So, is there a way to get that to work with newer (non-commercial!) mocking frameworks, or somehow get a sane error message out of Rhino.Mocks? And why can Rhino.Mocks achieve this, when clearly every Mocking framework states it can only work with virtual methods when given a concrete class? Let's not derail the discussion by talking about alternative approaches like Extract&Override or runtime-proxy Mocking frameworks like JustMock/TypeMock/Moles or the new Fakes framework, I know these, but that would be less ideal solutions, for reasons beyond this topic. Any help appreciated..

    Read the article

  • What's wrong with Bundler working with RubyGems to push a Git repo to Heroku?

    - by stanigator
    I've made sure that all the files are in the root of the repository, as recommended in this discussion. However, as I follow the instructions in this section of the book, I can't get through it without running into problems. What do you think is happening on my system to cause the error? Even after reading the log below, I have no idea what the problem means. Thanks in advance for your help!

        stanley@ubuntu:~/rails_sample/first_app$ git push heroku master
        Warning: Permanently added the RSA host key for IP address '50.19.85.156' to the list of known hosts.
        Counting objects: 96, done.
        Compressing objects: 100% (79/79), done.
        Writing objects: 100% (96/96), 28.81 KiB, done.
        Total 96 (delta 22), reused 0 (delta 0)
        -----> Heroku receiving push
        -----> Ruby/Rails app detected
        -----> Installing dependencies using Bundler version 1.2.0.pre
               Running: bundle install --without development:test --path vendor/bundle --binstubs bin/ --deployment
               Fetching gem metadata from https://rubygems.org/.......
               Installing rake (0.9.2.2)
               Installing i18n (0.6.0)
               Installing multi_json (1.3.5)
               Installing activesupport (3.2.3)
               Installing builder (3.0.0)
               Installing activemodel (3.2.3)
               Installing erubis (2.7.0)
               Installing journey (1.0.3)
               Installing rack (1.4.1)
               Installing rack-cache (1.2)
               Installing rack-test (0.6.1)
               Installing hike (1.2.1)
               Installing tilt (1.3.3)
               Installing sprockets (2.1.3)
               Installing actionpack (3.2.3)
               Installing mime-types (1.18)
               Installing polyglot (0.3.3)
               Installing treetop (1.4.10)
               Installing mail (2.4.4)
               Installing actionmailer (3.2.3)
               Installing arel (3.0.2)
               Installing tzinfo (0.3.33)
               Installing activerecord (3.2.3)
               Installing activeresource (3.2.3)
               Installing coffee-script-source (1.3.3)
               Installing execjs (1.3.2)
               Installing coffee-script (2.2.0)
               Installing rack-ssl (1.3.2)
               Installing json (1.7.3) with native extensions
               Installing rdoc (3.12)
               Installing thor (0.14.6)
               Installing railties (3.2.3)
               Installing coffee-rails (3.2.2)
               Installing jquery-rails (2.0.2)
               Using bundler (1.2.0.pre)
               Installing rails (3.2.3)
               Installing sass (3.1.18)
               Installing sass-rails (3.2.5)
               Installing sqlite3 (1.3.6) with native extensions
               Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
               /usr/local/bin/ruby extconf.rb
               checking for sqlite3.h... no
               sqlite3.h is missing. Try 'port install sqlite3 +universal' or 'yum install sqlite-devel'
               and check your shared library search path (the location where your sqlite3 shared library is located).
               *** extconf.rb failed ***
               Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
               Check the mkmf.log file for more details. You may need configuration options.
               Provided configuration options:
                 --with-opt-dir
                 --without-opt-dir
                 --with-opt-include
                 --without-opt-include=${opt-dir}/include
                 --with-opt-lib
                 --without-opt-lib=${opt-dir}/lib
                 --with-make-prog
                 --without-make-prog
                 --srcdir=.
                 --curdir
                 --ruby=/usr/local/bin/ruby
                 --with-sqlite3-dir
                 --without-sqlite3-dir
                 --with-sqlite3-include
                 --without-sqlite3-include=${sqlite3-dir}/include
                 --with-sqlite3-lib
                 --without-sqlite3-lib=${sqlite3-dir}/lib
                 --enable-local
                 --disable-local
               Gem files will remain installed in /tmp/build_3tplrxvj7qa81/vendor/bundle/ruby/1.9.1/gems/sqlite3-1.3.6 for inspection.
               Results logged to /tmp/build_3tplrxvj7qa81/vendor/bundle/ruby/1.9.1/gems/sqlite3-1.3.6/ext/sqlite3/gem_make.out
               An error occurred while installing sqlite3 (1.3.6), and Bundler cannot continue.
               Make sure that `gem install sqlite3 -v '1.3.6'` succeeds before bundling.
         !
         !     Failed to install gems via Bundler.
         !
         !     Heroku push rejected, failed to compile Ruby/rails app
        To [email protected]:growing-mountain-2788.git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to '[email protected]:growing-mountain-2788.git'

    ------Gemfile------------------------
    As requested, here's the auto-generated Gemfile:

        source 'https://rubygems.org'

        gem 'rails', '3.2.3'
        # Bundle edge Rails instead:
        # gem 'rails', :git => 'git://github.com/rails/rails.git'

        gem 'sqlite3'
        gem 'json'

        # Gems used only for assets and not required
        # in production environments by default.
        group :assets do
          gem 'sass-rails',   '~> 3.2.3'
          gem 'coffee-rails', '~> 3.2.1'
          # See https://github.com/sstephenson/execjs#readme for more supported runtimes
          # gem 'therubyracer', :platform => :ruby
          gem 'uglifier', '>= 1.0.3'
        end

        gem 'jquery-rails'

        # To use ActiveModel has_secure_password
        # gem 'bcrypt-ruby', '~> 3.0.0'

        # To use Jbuilder templates for JSON
        # gem 'jbuilder'

        # Use unicorn as the app server
        # gem 'unicorn'

        # Deploy with Capistrano
        # gem 'capistrano'

        # To use debugger
        # gem 'ruby-debug'
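
    Update: from what I can tell, the failing step is the native build of the sqlite3 gem. Heroku's build environment has no SQLite development headers, and Heroku doesn't support SQLite databases anyway, so the gem shouldn't be installed there at all. A minimal sketch of the usual workaround (assuming PostgreSQL in production, which is what Heroku provides) would be to confine sqlite3 to development and test:

        group :development, :test do
          gem 'sqlite3', '1.3.6'   # SQLite only for local development and test
        end

        group :production do
          gem 'pg'                 # Heroku runs against PostgreSQL
        end

    After a change along those lines you would typically run bundle install --without production locally, commit the updated Gemfile and Gemfile.lock, and push to Heroku again.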

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an Ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service, speed is obviously of the utmost importance, as responsiveness of the web page is important to the user experience.
    The current implementation simply loads all concepts into memory in a sorted set and performs a simple log(n) lookup on each user keystroke. The tailSet is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations?
    Edits:
    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.
    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.
    @Jason Day: My original implementation of this was to use a trie, but the memory bloat with that was actually worse than the sorted set due to the large number of object references it needs. I'll read up on ternary search trees to see if they could be of use.
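
    Edit: to make the discussion concrete, here is a minimal, hypothetical sketch (class and method names are mine) of the same log(n) prefix lookup done over a flat sorted array instead of a sorted set; since it stores only the String references and no tree or node objects, it tends to need noticeably less heap. A ternary search tree or a disk-backed index would be the next things to try if even this doesn't fit.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        // Sketch: prefix completion over a pre-sorted flat array.
        public class PrefixIndex {

            private final String[] sorted;   // concept names, sorted once at startup

            public PrefixIndex(String[] concepts) {
                sorted = concepts.clone();
                Arrays.sort(sorted);
            }

            // Returns up to 'limit' entries that start with 'prefix'.
            public List<String> complete(String prefix, int limit) {
                List<String> matches = new ArrayList<String>();
                int i = Arrays.binarySearch(sorted, prefix);
                if (i < 0) {
                    i = -i - 1;              // insertion point = first candidate >= prefix
                }
                while (i < sorted.length && matches.size() < limit
                        && sorted[i].startsWith(prefix)) {
                    matches.add(sorted[i]);
                    i++;
                }
                return matches;
            }

            public static void main(String[] args) {
                PrefixIndex index = new PrefixIndex(new String[] {
                        "jeff atwood", "microsoft", "mips", "stackoverflow.com" });
                System.out.println(index.complete("mi", 10));   // [microsoft, mips]
            }
        }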

    Read the article

  • Recommended textbook for machine-level programming?

    - by Norman Ramsey
    I'm looking at textbooks for an undergraduate course in machine-level programming. If the perfect book existed, this is what it would look like:
      • Uses examples written in C or assembly language, or both.
      • Covers machine-level operations such as two's-complement integer arithmetic, bitwise operations, and floating-point arithmetic.
      • Explains how caches work and how they affect performance.
      • Explains machine instructions or assembly instructions. Bonus if the example assembly language includes x86; triple bonus if it includes x86-64 (aka AMD64).
      • Explains how C values and data structures are represented using hardware registers and memory.
      • Explains how C control structures are translated into assembly language using conditional and unconditional branch instructions.
      • Explains something about procedure calling conventions and how procedure calls are implemented at the machine level.
    Books I might be interested in would probably have the words "machine organization" or "computer architecture" in the title. Here are some books I'm considering but am not quite happy with:
      • Computer Systems: A Programmer's Perspective by Randy Bryant and Dave O'Hallaron. This is quite a nice book, but it's a book for a broad, shallow course in systems programming, and it contains a great deal of material my students don't need. Also, it is just out in a second edition, which will make it expensive.
      • Computer Organization and Design: The Hardware/Software Interface by Dave Patterson and John Hennessy. This is also a very nice book, but it contains way more information about how the hardware works than my students need. Also, the exercises look boring. Finally, it has a show-stopping bug: it is based very heavily on MIPS hardware and the use of a MIPS simulator. My students need to learn how to use DDD, and I can't see getting this to work on a simulator. Not to mention that I can't see them cross-compiling their code for the simulator, and so on and so forth. Another flaw is that the book mentions the x86 architecture only to sneer at it. I am entirely sympathetic to this point of view, but news flash! You guys lost!
      • Write Great Code Vol I: Understanding the Machine by Randall Hyde. I haven't evaluated this book as thoroughly as the other two. It has a lot of what I need, but the translation from high-level language to assembler is deferred to Volume Two, which has mixed reviews. My students will be annoyed if I make them buy a two-volume series, even if the price of those two volumes is smaller than the price of other books.
    I would really welcome other suggestions of books that would help students in a class where they are to learn how C-language data structures and code are translated to machine-level data structures and code, and where they learn how to think about performance, with an emphasis on the cache.

    Read the article

  • jQuery.extend() not giving deep copy of object formed by constructor

    - by two7s_clash
    I'm trying to use this to clone a complicated object. The object in question has a property that is an array of other objects, and each of these has properties of different types, mostly primitives, but a couple of further objects and arrays. For example, an ellipsed version of what I am trying to clone:

        var asset = new Assets();

        function Assets() {
            this.values = [];
            this.sectionObj = Section;
            this.names = getNames;
            this.titles = getTitles;
            this.properties = getProperties;
            ...
            this.add = addAsset;

            function AssetObj(assetValues) {
                this.name = "";
                this.title = "";
                this.interface = "";
                ...
                this.protected = false;
                this.standaloneProtected = true;
                ...
                this.chaptersFree = [];
                this.chaptersUnavailable = [];
                ...
                this.mediaOptions = {
                    videoWidth: "",
                    videoHeight: "",
                    downloadMedia: true,
                    downloadMediaExt: "zip"
                    ...
                }
                this.chaptersAvailable = [];

                if (typeof assetValues == "undefined") {
                    return;
                }
                for (var name in assetValues) {
                    if (typeof assetValues[name] == "undefined") {
                        this[name] = "";
                    } else {
                        this[name] = assetValues[name];
                    }
                }
            ...
            function Asset() {
                return new AssetObj();
            }
            ...
            function getProperties() {
                var propertiesArray = new Array();
                for (var property in this.values[0]) {
                    propertiesArray.push(property);
                }
                return propertiesArray;
            }
            ...
            function addAsset(assetValues) {
                var newValues;
                newValues = new AssetObj(assetValues);
                this.values.push(newValues);
            }
        }

    When I do

        var copiedAssets = $.extend(true, {}, assets);

    copiedAssets.values == [], while assets.values == [Object { name="section_intro", more...}, Object { name="select_textbook", more...}, Object { name="quiz", more...}, 11 more...]. When I do

        var copiedAssets = $.extend( {}, assets);

    all copiedAssets.values[X].properties are just pointers to the values in assets. What I want is a true deep copy all the way down. What am I missing? Do I need to write a custom extend function? If so, any recommended patterns?
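
    Update: as far as I can tell, jQuery.extend's deep mode only recurses into plain objects and arrays, so instances produced by a constructor such as AssetObj end up copied by reference rather than cloned. If a hand-rolled clone is acceptable, a recursive copy along these lines might be closer to a true deep copy. This is purely a sketch: it ignores Date, RegExp, DOM nodes and cyclic references, copies functions by reference, and returns plain objects, so prototype/constructor identity is lost.

        // Sketch of a recursive deep clone for plain data plus constructor-built objects.
        function deepClone(source) {
            if (source === null || typeof source !== "object") {
                return source;                       // primitives and functions as-is
            }
            if (source instanceof Array) {
                var arr = [];
                for (var i = 0; i < source.length; i++) {
                    arr[i] = deepClone(source[i]);
                }
                return arr;
            }
            var target = {};
            for (var key in source) {
                if (source.hasOwnProperty(key)) {
                    target[key] = deepClone(source[key]);
                }
            }
            return target;
        }

        // Hypothetical usage against the object above:
        // var copiedAssets = deepClone(asset);

    If prototype identity matters, a clone method on AssetObj itself would probably be the more robust route.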

    Read the article

  • Strange behavior of move with strings

    - by Umair Ahmed
    I am testing some enhanced string-related functions in which I am trying to use Move to copy strings around quickly and efficiently without delving into pointers. While testing a function that builds a delimited string from a TStringList, I encountered a strange issue: for the first Move, the index into the (still empty) result string appears to address bytes, but once a string has been copied in, the index seems to address characters instead. Here is a small, stripped-down code sample:

        unit UI;

        interface

        uses
          System.SysUtils, System.Types, System.UITypes, System.Rtti, System.Classes,
          System.Variants, FMX.Types, FMX.Controls, FMX.Forms, FMX.Dialogs, FMX.Layouts,
          FMX.Memo;

        type
          TForm1 = class(TForm)
            Results: TMemo;
            procedure FormCreate(Sender: TObject);
          end;

        var
          Form1: TForm1;

        implementation

        {$R *.fmx}

        function StringListToDelimitedString ( const AStringList: TStringList;
          const ADelimiter: String ): String;
        var
          Str           : String;
          Temp1         : NativeInt;
          Temp2         : NativeInt;
          DelimiterSize : Byte;
        begin
          Result := ' ';
          Temp1 := 0;
          DelimiterSize := Length ( ADelimiter ) * 2;
          for Str in AStringList do
            Temp1 := Temp1 + Length ( Str );
          SetLength ( Result, Temp1 );
          Temp1 := 1;
          for Str in AStringList do
          begin
            Temp2 := Length ( Str ) * 2;
            // Here Index references bytes in Result
            Move ( Str [1], Result [Temp1], Temp2 );
            // From here the index seems to address characters instead of bytes in Result
            Temp1 := Temp1 + Temp2;
            Move ( ADelimiter [1], Result [Temp1], DelimiterSize );
            Temp1 := Temp1 + DelimiterSize;
          end;
        end;

        procedure TForm1.FormCreate(Sender: TObject);
        var
          StrList : TStringList;
          Str     : String;
        begin
          // Test 1 : StringListToDelimitedString
          StrList := TStringList.Create;
          Str := '';
          StrList.Add ( 'Hello1' );
          StrList.Add ( 'Hello2' );
          StrList.Add ( 'Hello3' );
          StrList.Add ( 'Hello4' );
          Str := StringListToDelimitedString ( StrList, ';' );
          Results.Lines.Add ( Str );
          StrList.Free;
        end;

        end.

    Please suggest a solution and, if possible, some explanation. Alternatives are welcome too.
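
    Update: the behaviour looks consistent with mixing byte counts and character indices rather than with Move changing its meaning. In recent Delphi versions String is UTF-16, so Move counts bytes while S[i] always indexes characters (2 bytes each), and the SetLength above also reserves no room for the delimiters. A sketch of the same routine keeping the two units separate (variable names are mine; it assumes a non-empty delimiter and skips empty source strings):

        function StringListToDelimitedString ( const AStringList: TStringList;
          const ADelimiter: String ): String;
        var
          Str      : String;
          DestChar : NativeInt;   // character index into Result
          TotalLen : NativeInt;   // total length in characters, delimiters included
        begin
          TotalLen := 0;
          for Str in AStringList do
            TotalLen := TotalLen + Length ( Str ) + Length ( ADelimiter );
          SetLength ( Result, TotalLen );
          DestChar := 1;
          for Str in AStringList do
          begin
            if Str <> '' then
            begin
              // Move counts bytes, so scale by SizeOf(Char); Result[DestChar] indexes characters
              Move ( Str [1], Result [DestChar], Length ( Str ) * SizeOf ( Char ) );
              DestChar := DestChar + Length ( Str );
            end;
            Move ( ADelimiter [1], Result [DestChar], Length ( ADelimiter ) * SizeOf ( Char ) );
            DestChar := DestChar + Length ( ADelimiter );
          end;
        end;

    Something like TStringList.DelimitedText or plain concatenation would give much the same result; the Move version is only worth the trouble if the concatenation actually shows up as a hot spot.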

    Read the article

< Previous Page | 533 534 535 536 537 538 539 540 541 542 543 544  | Next Page >