Search Results

Search found 3987 results on 160 pages for 'captain obvious'.


  • DataGridView with row-specific DataGridViewComboBoxColumn contents

    - by XXXXX
    So I have something like the following data structure (constructors omitted):

        class Child
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        class Parent
        {
            public string Name { get; set; }
            public List<Child> Children { get; private set; }  // never null; list never empty
            public Child FavoriteChild { get; set; }            // never null; always a reference to a Child in Children
        }

        List<Parent> Parents;

    What I want to do is show a DataGridView where each row is a Parent from the Parents list. Each row should have two columns: a text box showing the parent's name and a DataGridViewComboBoxColumn containing that parent's children, from which the user can select the parent's favorite child. I suppose I could do the whole thing manually, but I'd like to do all this with more-or-less standard data binding. It's easy enough to bind the DataGridView to the list of parents, and easy enough to bind the selected child to the FavoriteChild property. The part that's giving me difficulty is that the combo-box column appears to want a single data source for the combo-box contents on every row, whereas I'd like each instance of the combo box to bind to that row's parent's own list of children. I'm fairly new to C#/Windows Forms, so I may well be missing something obvious, or it could be that "you can't get there from here." It's not too tough to make a separate list of all the children and filter it by parent; I'm looking into that possibility right now. Is this feasible, or is there a better way?
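
    One direction that fits the standard-binding goal (a sketch only, with assumed names): leave the DataGridViewComboBoxColumn without a shared data source and instead give each row's combo cell its own DataSource once the grid has bound to the parents. The helper below assumes the combo column sits at a known index and that its DataPropertyName maps to a string property matching ValueMember (e.g. a hypothetical FavoriteChildName); binding straight to FavoriteChild itself would need the value mapping adjusted.

        using System.Windows.Forms;

        static class GridHelper
        {
            // Call this from the grid's DataBindingComplete event handler.
            public static void BindChildLists(DataGridView grid, int comboColumnIndex)
            {
                foreach (DataGridViewRow row in grid.Rows)
                {
                    var parent = row.DataBoundItem as Parent;
                    if (parent == null) continue;

                    var cell = (DataGridViewComboBoxCell)row.Cells[comboColumnIndex];
                    cell.DataSource = parent.Children;   // this row's own children only
                    cell.DisplayMember = "Name";
                    cell.ValueMember = "Name";
                }
            }
        }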

    Read the article

  • iphone nsarray problem?

    - by Brodie4598
    Okay, maybe I just need another set of eyes on this, but I have the following lines of code in one of my view controllers. It takes some data from a file and populates an array using "\n" as a separator. I then use that array to make an NSDictionary, which is used to populate a table view. It's very simple, but it isn't working. Here's the code:

        NSString *dataString = [NSString stringWithContentsOfFile:checklistPath encoding:NSUTF8StringEncoding error:NULL];
        if ([dataString hasPrefix:@"\n"]) {
            dataString = [dataString substringFromIndex:1];
        }
        NSArray *tempArray = [dataString componentsSeparatedByString:@"\n"];
        NSLog(@"datastring:%@",dataString);
        NSLog(@"temp array:",tempArray);
        NSLog(@"%i",[tempArray count]);
        NSDictionary *temporaryDictionary = [NSDictionary dictionaryWithObject:tempArray forKey:@"User Generated Checklist"];
        self.names = temporaryDictionary;
        NSLog(@"names:%@",names);

    In the log, dataString is correct, so it's correctly pulling the data from the file. However, for tempArray I get:

        2010-05-17 19:15:55.825 MyApp[7309:207] temp array:

    For the tempArray count I get:

        2010-05-17 19:15:55.826 myApp[7309:207] 5

    which is the correct number of strings in the array. So I'm stumped. I have the EXACT same few lines of code in a different view controller and it works perfectly. What's crazier is that the last NSLog, which shows the final NSDictionary (names), displays this, which looks correct:

        2010-05-17 19:15:55.827 FS Companion[7309:207] names:{
            "User Generated Checklist" = (
                "System|||ACTION",
                "System|||ACTION",
                "System|||ACTION",
                "System|||ACTION",
                "System|||ACTION"
            );

    Am I missing something really obvious?

    Read the article

  • Bug with DataBinding in WPF Host in Winforms?

    - by Tigraine
    Hi guys, I've spent far too much time with this and can't find the mistake. Maybe I'm missing something very obvious, or I may have just found a bug in the WPF element host for WinForms. I am binding a ListView to an ObservableCollection that lives on my ProductListViewModel. I'm trying to implement searching for the ListView, the general idea being to just swap the ObservableCollection for a new, filtered list. Anyway, the ListView binding code looks like this:

        <ListView ItemsSource="{Binding Path=Products}" SelectedItem="{Binding Path=SelectedItem}" SelectionMode="Single">
            <ListView.ItemContainerStyle>
                <Style TargetType="{x:Type ListViewItem}">
                    <Setter Property="IsSelected" Value="{Binding IsSelected, Mode=TwoWay}"></Setter>
                </Style>
            </ListView.ItemContainerStyle>
            <ListView.ItemTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Name}"></TextBlock>
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>

    And the ViewModel code is as vanilla as it can get:

        private ObservableCollection<ProductViewModel> products;

        public ObservableCollection<ProductViewModel> Products
        {
            get { return products; }
            private set
            {
                if (products != value)
                {
                    products = value;
                    OnPropertyChanged("Products");
                }
            }
        }

    Now the problem: once I debug into my OnPropertyChanged method, I can see that there are no subscribers to the PropertyChanged event (it's null), so nothing happens on the UI. I already tried Mode=TwoWay and other binding modes; it seems I can't get the ListView to subscribe to the ItemsSource. Can anyone help me with this? I'm just about to give up on the ElementHost and do it in WinForms. Greetings, Daniel
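
    One common cause worth ruling out before blaming the ElementHost: the hosted WPF view has to be handed the same view-model instance whose Products property is later replaced, otherwise no binding ever subscribes to PropertyChanged. A minimal sketch, assuming a WPF UserControl named ProductListView and an accessible way to swap the collection (the question's setter is private, so either the filtering moves into the view model or the setter is opened up):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Windows.Forms;
        using System.Windows.Forms.Integration;

        public class ProductsHostForm : Form
        {
            private readonly ProductListViewModel viewModel = new ProductListViewModel();

            public ProductsHostForm()
            {
                var view = new ProductListView();   // the WPF UserControl containing the ListView
                view.DataContext = viewModel;       // bindings resolve against this exact instance

                var host = new ElementHost { Dock = DockStyle.Fill, Child = view };
                Controls.Add(host);
            }

            public void ApplyFilter(IEnumerable<ProductViewModel> filtered)
            {
                // Replacing the collection through the property raises PropertyChanged("Products"),
                // which now has the WPF binding as a subscriber.
                viewModel.Products = new ObservableCollection<ProductViewModel>(filtered);
            }
        }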

    Read the article

  • Java Constructor Style (Check parameters aren't null)

    - by Peter
    What are the best practices if you have a class which accepts some parameters but none of them are allowed to be null? The following is obvious, but the exception is a little unspecific:

        public class SomeClass {
            public SomeClass(Object one, Object two) {
                if (one == null || two == null) {
                    throw new IllegalArgumentException("Parameters can't be null");
                }
                //...
            }
        }

    Here the exceptions let you know which parameter is null, but the constructor is now pretty ugly:

        public class SomeClass {
            public SomeClass(Object one, Object two) {
                if (one == null) {
                    throw new IllegalArgumentException("one can't be null");
                }
                if (two == null) {
                    throw new IllegalArgumentException("two can't be null");
                }
                //...
            }
        }

    Here the constructor is neater, but now the constructor code isn't really in the constructor:

        public class SomeClass {
            public SomeClass(Object one, Object two) {
                setOne(one);
                setTwo(two);
            }

            public void setOne(Object one) {
                if (one == null) {
                    throw new IllegalArgumentException("one can't be null");
                }
                //...
            }

            public void setTwo(Object two) {
                if (two == null) {
                    throw new IllegalArgumentException("two can't be null");
                }
                //...
            }
        }

    Which of these styles is best? Or is there an alternative which is more widely accepted? Cheers, Pete

    Read the article

  • Drawing an image in Java, slow as hell on a netbook.

    - by Norswap
    In follow-up to my previous questions (especially this one: http://stackoverflow.com/questions/2684123/java-volatileimage-slower-than-bufferedimage), I have noticed that simply drawing an Image (it doesn't matter whether it's buffered or volatile, since the computer has no accelerated memory*, and tests show it doesn't change anything) tends to take very long.

        (*) System.out.println(GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getAvailableAcceleratedMemory());  // --> 0

    How long? For a 500x400 image, about 0.04 seconds. This is only drawing the image on the back buffer (obtained via a buffer strategy). Now considering that World of Warcraft runs on that netbook (though it is quite laggy) and that online Java games seem to have no problem whatsoever, this is quite thought-provoking. I'm fairly certain I didn't miss something obvious; I've searched the web extensively, but nothing will do. So do any of you Java whizzes have an idea of what obscure problem might be causing this (or maybe it is normal, though I doubt it)? PS: As I'm writing this I realized it might be caused by my Linux installation (Arch Linux), though I have the correct Intel driver. But my computer normally has "Integrated Intel Graphics Media Accelerator 950", which would mean it should have accelerated video memory somehow. Any ideas about this side of things?

    Read the article

  • Optimizing a bin-placement algorithm

    - by user258651
    Alright, I've got two collections, and I need to place elements from collection1 into the bins (elements) of collection2, based on whether their value falls within a given bin's range. For a concrete example, assume I have a sorted collection of objects (bins) which have an int range ([1...4], [5..10], etc.). I need to determine the range an int falls in and place it in the appropriate bin.

        foreach(element n in collection1)
        {
            foreach(bin m in collection2)
            {
                if (m.inRange(n))
                {
                    m.add(n);
                    break;
                }
            }
        }

    So the obvious NxM-complexity algorithm is there, but I really would like to see Nxlog(M). To do this I'd like to use BinarySearch in place of the inner foreach loop. To use BinarySearch, I need to implement an IComparer class to do the searching for me. The problem I'm running into is that this approach would require an IComparer.Compare function that compares two different types of objects (an element to its bin), and that doesn't seem possible or correct. So I'm asking, how should I write this algorithm?
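
    One way to get the Nxlog(M) shape without an IComparer that mixes types (a sketch; Bin, LowerBound, InRange and Add are assumed stand-ins for the question's bin type): binary-search the element's value against a pre-extracted list of bin lower bounds, then verify that the candidate bin actually contains it.

        using System.Collections.Generic;

        static class BinPlacer
        {
            public static void Place(IEnumerable<int> elements, List<Bin> bins)
            {
                // Extract the sorted lower bounds once: O(M).
                List<int> lowerBounds = bins.ConvertAll(b => b.LowerBound);

                foreach (int n in elements)                   // N iterations
                {
                    int i = lowerBounds.BinarySearch(n);      // O(log M)
                    if (i < 0)
                        i = ~i - 1;                           // last bin whose lower bound <= n
                    if (i >= 0 && bins[i].InRange(n))
                        bins[i].Add(n);                       // values in gaps between bins fall through
                }
            }
        }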

    Read the article

  • iPhone shooter game bullet physics!

    - by user298261
    Hello, I'm making a new shooter game here in the vein of "Galaga" (my favorite shooter game growing up). Here's the code I have for the bullet physics:

        -(IBAction)shootBullet:(id)sender {
            imgBullet.hidden = NO;
            timer = [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(fireBullet) userInfo:Nil repeats:YES];
        }

        -(void)fireBullet {
            imgBullet.center = CGPointMake(imgBullet.center.x + bulletVelocity.x, imgBullet.center.y + bulletVelocity.y);
            if (imgBullet.center.y <= 0) {
                imgBullet.hidden = YES;
                imgBullet.center = self.view.center;
                [timer invalidate];
            }
        }

    Anyway, the obvious issue is that once the bullet leaves the screen, its center is reset, so I'm reusing the same bullet for each press of the "fire" button. Ideally, I would like the user to be able to spam the "fire" button without causing the program to crash. How would I tinker with this existing code so that a bullet object spawns on each button press and then despawns after it exits the screen or collides with an enemy? Thank you for any assistance you can offer!

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for gzip or deflate (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?
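
    One variation that is often suggested for the broken-CSS/JS symptom (a sketch, not a guaranteed fix): restrict the filter to page requests so static files and cached responses are never stored or served with a mismatched Content-Encoding header, and always vary the cache by Accept-Encoding.

        using System;
        using System.IO.Compression;
        using System.Web;

        // Global.asax code-behind sketch.
        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                string path = Request.Path.ToLowerInvariant();
                if (!path.EndsWith(".aspx")) return;                    // leave .css/.js/.axd alone
                if (Request["HTTP_X_MICROSOFTAJAX"] != null) return;    // skip AJAX calls

                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings == null) return;
                encodings = encodings.ToLowerInvariant();

                if (encodings.Contains("gzip") || encodings == "*")
                {
                    Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else if (encodings.Contains("deflate"))
                {
                    Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "deflate");
                }

                // Make any cache store one variant per encoding.
                Response.Cache.VaryByHeaders["Accept-Encoding"] = true;
            }
        }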

    Read the article

  • Can't get KnownType to work with WCF

    - by Kelly Cline
    I have an interface and a class defined in separate assemblies, like this:

        namespace DataInterfaces
        {
            public interface IPerson
            {
                string Name { get; set; }
            }
        }

        namespace DataObjects
        {
            [DataContract]
            [KnownType( typeof( IPerson ) )]
            public class Person : IPerson
            {
                [DataMember]
                public string Name { get; set; }
            }
        }

    This is my service interface:

        public interface ICalculator
        {
            [OperationContract]
            IPerson GetPerson( );
        }

    When I update my service reference for my client, I get this in the Reference.cs:

        public object GetPerson()
        {
            return base.Channel.GetPerson();

    I was hoping that KnownType would give me IPerson instead of "object" here. I have also tried [KnownType( typeof( Person ) )] with the same result. I have control of both client and server, so I have my DataObjects (where Person is defined) and DataInterfaces (where IPerson is defined) assemblies in both places. Is there something obvious I am missing? I thought KnownType was the answer to being able to use interfaces with WCF.

    ----- FURTHER INFORMATION -----

    I removed the KnownType from the Person class and added [ServiceKnownType( typeof( Person ) )] to my service interface, as suggested by Richard. The client-side proxy still looks the same,

        public object GetPerson()
        {
            return base.Channel.GetPerson();

    but now it doesn't blow up. The client just has an "object", though, so it has to cast it to IPerson before it is useful.

        var person = client.GetPerson( );
        Console.WriteLine( ( ( IPerson ) person ).Name );
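
    Since both sides already share the DataInterfaces/DataObjects assemblies, one alternative worth sketching is to skip the generated proxy entirely and build the channel from the shared contract, so the operation keeps its IPerson return type. This assumes ICalculator carries [ServiceContract] (with [ServiceKnownType( typeof( Person ) )] still in place so the concrete Person can be serialized); the binding and address below are placeholders.

        using System;
        using System.ServiceModel;

        class CalculatorClientSketch
        {
            static void Main()
            {
                var factory = new ChannelFactory<ICalculator>(
                    new BasicHttpBinding(),
                    new EndpointAddress("http://localhost:8080/calculator"));   // assumed address

                ICalculator client = factory.CreateChannel();

                IPerson person = client.GetPerson();    // no cast from object needed
                Console.WriteLine(person.Name);

                ((IClientChannel)client).Close();
                factory.Close();
            }
        }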

    Read the article

  • What exactly can "Full Control" with SharePoint Designer accomplish?

    - by Brian L.
    I've been brought in as an intern to develop a SharePoint site. My team won't authorize the budget for Visual Studio, and I don't have physical or remote access to the SharePoint server (running Windows SharePoint Services 3.0, a.k.a. WSS) on the back end. So what exactly can I do? I'm a pretty decent programmer when it comes to web technologies like PHP, JS, and the obvious HTML and CSS. In a locked-down SharePoint environment like this, though, I'm stumped trying to figure out how much control I actually have under MS's definition of "Full Control". If I figured out a way to write some C#, I'm pretty sure I could hold my own, but as I said, no Visual Studio for me. Any good ideas for features that people will actually use on a site built with the limited functionality of WSS and SharePoint Designer with "Full Control"? Can I somehow manipulate the default Web Parts into something cool or useful? Are there Ajax tricks I can use to accomplish something on the back end? Thanks in advance -- I'm new to StackOverflow and very anxious to get involved here!

    Read the article

  • Detect a USB drive being inserted - Windows Service

    - by Tom Bell
    I am trying to detect a USB disk drive being inserted from within a Windows service; I have already done this in a normal Windows application. The problem is that the following code doesn't work for volumes. Registering the device notification:

        DEV_BROADCAST_DEVICEINTERFACE notificationFilter;
        HDEVNOTIFY hDeviceNotify = NULL;

        ::ZeroMemory(&notificationFilter, sizeof(notificationFilter));
        notificationFilter.dbcc_size = sizeof(DEV_BROADCAST_DEVICEINTERFACE);
        notificationFilter.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
        notificationFilter.dbcc_classguid = ::GUID_DEVINTERFACE_VOLUME;

        hDeviceNotify = ::RegisterDeviceNotification(g_serviceStatusHandle,
                                                     &notificationFilter,
                                                     DEVICE_NOTIFY_SERVICE_HANDLE);

    The code from the ServiceControlHandlerEx function:

        case SERVICE_CONTROL_DEVICEEVENT:
            PDEV_BROADCAST_HDR pBroadcastHdr = (PDEV_BROADCAST_HDR)lpEventData;
            switch (dwEventType)
            {
            case DBT_DEVICEARRIVAL:
                ::MessageBox(NULL, "A Device has been plugged in.", "Pounce", MB_OK | MB_ICONINFORMATION);
                switch (pBroadcastHdr->dbch_devicetype)
                {
                case DBT_DEVTYP_DEVICEINTERFACE:
                    PDEV_BROADCAST_DEVICEINTERFACE pDevInt = (PDEV_BROADCAST_DEVICEINTERFACE)pBroadcastHdr;
                    if (::IsEqualGUID(pDevInt->dbcc_classguid, GUID_DEVINTERFACE_VOLUME))
                    {
                        PDEV_BROADCAST_VOLUME pVol = (PDEV_BROADCAST_VOLUME)pDevInt;
                        char szMsg[80];
                        char cDriveLetter = ::GetDriveLetter(pVol->dbcv_unitmask);
                        ::wsprintfA(szMsg, "USB disk drive with the drive letter '%c:' has been inserted.", cDriveLetter);
                        ::MessageBoxA(NULL, szMsg, "Pounce", MB_OK | MB_ICONINFORMATION);
                    }
                }
                return NO_ERROR;
            }

    In a Windows application I am able to get DBT_DEVTYP_VOLUME in dbch_devicetype, but this isn't present in the Windows service implementation. Has anyone seen or heard of a solution to this problem, short of the obvious one -- rewriting it as a Windows application?

    Read the article

  • How should I secure my webapp written using Wicket, Spring, and JPA?

    - by Martin
    So, I have a web-based application that uses the Wicket 1.4 framework, Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative but doesn't require gobs of XML configuration -- I'd prefer annotations. Here are the options so far:

    1. Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
    2. Wicket-Auth-Roles (guide 1 and guide 2) - Most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1","ROLE2",etc). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
    3. SWARM / WASP (guide) - This looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an Authn example.

    Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.

    Read the article

  • Grails - Removing an item from a hasMany association List on data bind?

    - by ecrane
    Grails offers the ability to automatically create and bind domain objects to a hasMany List, as described in the Grails user guide. So, for example, if my domain object "Author" has a List of many "Book" objects, I could create and bind these using the following markup (from the user guide):

        <g:textField name="books[0].title" value="the Stand" />
        <g:textField name="books[1].title" value="the Shining" />
        <g:textField name="books[2].title" value="Red Madder" />

    In this case, if any of the books specified don't already exist, Grails will create them and set their titles appropriately. If there are already books at the specified indices, their titles will be updated and they will be saved. My question is: is there some easy way to tell Grails to remove one of those books from the 'books' association on data bind? The most obvious way to do this would be to omit the form element that corresponds to the domain instance you want to delete; unfortunately, this does not work, as per the user guide: "Then Grails will automatically create a new instance for you at the defined position. If you 'skipped' a few elements in the middle ... then Grails will automatically create instances in between." I realize that a specific solution could be engineered as part of a command object, or as part of a particular controller; however, the need for this functionality appears repeatedly throughout my application, across multiple domain objects and for associations of many different types of objects. A general solution, therefore, would be ideal. Does anyone know if there is something like this included in Grails?

    Read the article

  • Using group_by with fields_for and accepts_nested_attributes_for

    - by Derek
    I have the following Rails models:

        class Release < ActiveRecord::Base
          has_many :release_questionnaires, :dependent => :destroy
          accepts_nested_attributes_for :release_questionnaires
          ...
        end

        class ReleaseQuestionnaire < ActiveRecord::Base
          belongs_to :release
          belongs_to :milestone
          ...
        end

    In my view code, I have the following form:

        <% form_for @release, ... do |f| %>
          ...
          <table class="questionnaires">
            <% f.fields_for :release_questionnaires, @release.release_questionnaires.sort_by{|ra| ra.questionnaire.name} do |builder| %>
              ...
            <% end %>
          </table>
        <% end %>

    This works and allows me to view and edit the questionnaires as desired. However, I have an additional requirement to break the questionnaires out into their own tables, grouped by the milestone they are associated with, rather than in a single table. It appears as though the group_by method is designed to accomplish this, but I cannot get it to work as desired inside the tag. It may be that I'm missing something obvious, as I am a beginner... Any help is appreciated.

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but if you read the Javadocs:

        /**
         * We want fast startup times at the expense of runtime performance and some up front error
         * checking.
         */
        DEVELOPMENT,

        /**
         * We want to catch errors as early as possible and take performance hits up front.
         */
        PRODUCTION

    Assuming a scenario where you have a stateless call to an application server, the initial receiving method (or thereabouts) creates the injector anew on every call. If all of the module bindings are not needed in a given call, then it would seem better to use the Development stage (which is the default) and not take the performance hit up front, because you may never take it at all; here the distinction between "up front" and "runtime performance" is kind of moot, as it is one call. Of course, the downside of this would appear to be that you lose the error checking, allowing potential code paths to cause a problem by surprise. So the question boils down to: are the assumptions above correct? Will you save performance on a large set of modules when the given lifetime of an injector is one call?

    Read the article

  • How can I send rich emails using the user's mail client ?

    - by Brann
    I need my .NET program to send rich emails (usually containing table data, around 20 columns x 10 rows) using the user's mail infrastructure, allowing him to review/edit the mail before sending it, and storing the mail in his 'sent items' folder. mailto: seems the obvious choice, but unfortunately it supports neither attachments nor HTML bodies. It seems some clients support some extra features (e.g. Outlook 97 used to support an &Attach tag, but this is not the case for more recent versions). I could use mailto and try to format the text body to look nice (using tabs, etc.), but this isn't really elegant and wouldn't support large amounts of data. Using automation seems a very big task, as I would need to automate dozens of clients (4 or 5 versions of Outlook, Lotus Notes, Thunderbird, etc.). This would be a huge undertaking and it's not really my core business. I could send emails through code and write my own mail form to let the user edit the mail, but this would have a lot of drawbacks:

    - the user would need to manually configure the mail server settings
    - he wouldn't have access to his contact directory
    - the mail wouldn't end up in his sent items folder

    This seems a quite common issue, but I haven't found any satisfying solution yet; does someone know of a library supporting this (i.e. containing automation logic for most mainstream email clients)? Or an alternative to mailto?
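
    For what it's worth, here is a sketch of just the Outlook-automation branch (it assumes a reference to the Microsoft.Office.Interop.Outlook primary interop assembly and only helps users who actually run Outlook; other clients would still need MAPI or mailto):

        using Outlook = Microsoft.Office.Interop.Outlook;

        static class MailDraft
        {
            public static void ShowDraft(string to, string subject, string htmlTable, string attachmentPath)
            {
                var app = new Outlook.Application();
                var mail = (Outlook.MailItem)app.CreateItem(Outlook.OlItemType.olMailItem);

                mail.To = to;
                mail.Subject = subject;
                mail.HTMLBody = htmlTable;               // e.g. the 20x10 table rendered as HTML
                if (attachmentPath != null)
                    mail.Attachments.Add(attachmentPath);

                // The user reviews/edits and sends; the sent copy lands in Sent Items.
                mail.Display(false);
            }
        }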

    Read the article

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently we have been using topic branches to keep track of features, then merging them into master locally and pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, so a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So we end up with merge commits everywhere and a git log rivaling a friendship bracelet. Rebasing is the obvious choice. What I would like is to:

    - create topic branches holding several commits
    - checkout master and pull (fast-forward because I haven't committed to master)
    - rebase the topic branches onto the new head of master
    - rebase the topics against master (so the topics start at master's head), bringing master up to my topic head

    My way of doing this currently is listed below:

        git checkout master
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git rebase topic_2
        git branch -d topic_1 topic_2

    Is there a faster way to do this?

    Read the article

  • BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about this approach is that I manually have to write code that reads the file, even though the defined structure already describes how the file is stored, and I have to keep the order correct when I read. After looking around a bit I came across the BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I can read and also write the file without creating additional code. However, I wonder if this approach is suitable for files created by other programs, not just serialized .NET objects. Take for example a graphics file format like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there other approaches which suit better? I'm not looking for concrete examples, just advice on the best way to implement this.
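
    As a small illustration of the BinaryReader route for an externally defined format (BinaryFormatter expects its own .NET serialization framing, so it isn't suited to files another program wrote), here is a sketch that pulls a few fields out of a BMP header; the struct and method names are made up for the example:

        using System;
        using System.IO;

        struct BmpHeader
        {
            public uint FileSize;
            public uint PixelDataOffset;
            public int Width;
            public int Height;
        }

        static class BmpReader
        {
            public static BmpHeader ReadHeader(string path)
            {
                using (var reader = new BinaryReader(File.OpenRead(path)))
                {
                    // Fields must be read in the exact order the format defines them.
                    if (reader.ReadByte() != (byte)'B' || reader.ReadByte() != (byte)'M')
                        throw new InvalidDataException("Not a BMP file.");

                    var header = new BmpHeader();
                    header.FileSize = reader.ReadUInt32();   // bytes 2-5
                    reader.ReadUInt32();                     // reserved, bytes 6-9
                    header.PixelDataOffset = reader.ReadUInt32();
                    reader.ReadUInt32();                     // DIB header size
                    header.Width = reader.ReadInt32();
                    header.Height = reader.ReadInt32();
                    return header;
                }
            }
        }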

    Read the article

  • error C2146: syntax error : missing ';' before identifier 'g_App'

    - by numerical25
    I wish C++ was a little more specific in the messages it gives. The following error is being thrown in the file below, main.h:

        #ifndef main_h
        #define main_h

        //includes
        #include <windows.h>
        #include <commctrl.h>
        #include <d3d9.h>
        #include <fstream>
        #include "capplication.h"

        //constants
        #define TITLE "D3D Tut 01: Create Window"
        #define WINDOW_X 350
        #define WINDOW_Y 320

        //Button ID's
        #define ID_START 1
        #define ID_CANCEL 2

        //globals
        extern CApplication g_App;

        //function prototypes
        LRESULT CALLBACK WindowProcedure(HWND,UINT,WPARAM,LPARAM);

        #endif

    The only header file that could possibly cause this error is capplication.h, given below:

        #ifndef capplication_h
        #define capplication_h

        #include "main.h"

        class CApplication
        {
        public:
            CApplication(void);
            ~CApplication(void);

            void InitWindow(void);
            void SaveSettings(void);
            void LoadSettings(void);
            void KillWindow(void);

            inline bool GetWindowStatus(void) { return m_bRunningWindow; }
            inline HWND GetWindowHandle(void) { return m_hWindow; }
            inline void SetWindowStatus(bool bRunningWindow) { m_bRunningWindow = bRunningWindow; }

        private:
            bool m_bRunningWindow;
            HWND m_hWindow, m_hBtnStart, m_hBtnCancel, m_hLblResolution, m_hCbResolution,
                 m_hLblBackBuffer, m_hCbBackBuffer, m_hLblDepthStencil, m_hCbDepthStencil,
                 m_hLblVertexProcessing, m_hCbVertexProcessing, m_hLblMultiSampling, m_hCbMultiSampling,
                 m_hLblAnisotropy, m_hCbAnisotropy;
            DWORD m_dwWidth, m_dwHeight, m_dwVertexProcessing, m_dwAnisotropy;
            D3DFORMAT m_ColorFormat, m_DepthStencilFormat;
            D3DMULTISAMPLE_TYPE m_MultiSampling;
        };

        #endif

    Besides that, the only suspicious thing I see is fstream in the first file. I did have it as fstream.h, but VC++ was not recognizing it, so I was told to remove the .h and I did. Now I am down to this error and I have no clue what it could be -- possibly something obvious.

    Read the article

  • Integer Extensions - 1st, 2nd, 3rd etc [closed]

    - by David Schiefer
    Possible duplicate: NSNumberFormatter and ‘th’ ‘st’ ‘nd’ ‘rd’ (ordinal) number endings

    Hello, I'm building an application that downloads player ranks and displays them. So, say for example you're 3rd out of all the players: I inserted a condition that will display it as 3rd, not 3th, and I did the same for 2nd and 1st. When getting to higher ranks, though, such as 2883rd, it'll display 2883th (for obvious reasons). My question is, how can I get it to reformat the number to XXX1st, XXX2nd, XXX3rd, etc.? To show what I mean, here's how I format my number to add an "rd" if it's 3:

        if ([[container stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]] isEqualToString:@"3"]) {
            NSString *badge = [NSString stringWithFormat:@"%@rd", [container stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]];
            NSString *scoreText = [NSString stringWithFormat:@"ROC Server Rank: %@rd", [container stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]];
            profile.badgeValue = badge;
            rank.text = scoreText;
        }

    I can't do this for every number up to 2000 (there are 2000 ranks in total) -- what can I do to solve this problem?

    Read the article

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer.

    - As a unit test, what value am I gaining by testing the ICustomerRepository in this manner?
    - Under what conditions would the test below fail?
    - For tests of this nature, is it advisable to write tests that I know should fail? i.e. look for id 4 when I know I've only placed 5 in the repository.

    I am probably missing something obvious, but it seems the integration tests of the class that implements ICustomerRepository will be of more value.

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer SetUpCustomerForRepository()
            {
                return new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[]
                    {
                        new Login { LoginCustId = 5, LoginName = "tdude" },
                        new Login { LoginCustId = 5, LoginName = "tdude2" }
                    }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = SetUpCustomerForRepository();
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }
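
    For comparison, here is a sketch of where a stubbed ICustomerRepository usually pays off: testing a class that depends on the repository rather than the stub itself. CustomerService and GetDisplayName are hypothetical stand-ins for whatever consumer exists in the real code base; the test slots into the same test class, reusing SetUpCustomerForRepository and Stub<T>.

        [TestMethod]
        public void GetDisplayName_ReadsCustomerFromRepository()
        {
            // arrange
            var customer = SetUpCustomerForRepository();
            var repository = Stub<ICustomerRepository>();
            repository.Stub(rep => rep.GetById(5)).Return(customer);

            var service = new CustomerService(repository);   // hypothetical class under test

            // act
            string name = service.GetDisplayName(5);

            // assert
            Assert.AreEqual("The Dude", name);
            repository.AssertWasCalled(rep => rep.GetById(5));
        }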

    Read the article

  • C#; On casting to the SAME class that came from another assembly

    - by G. Stoynev
    For complete separation/decoupling, I've implemented a DAL in an assembly that is simply being copied via a post-build event to the website's BIN folder. The website then, on Application Start, loads that assembly via System.Reflection.Assembly.LoadFile. Again using reflection, I construct a couple of instances from classes in that assembly. I then store a reference to these instances in the session (HttpContext.Current.Items). Later, when I try to get the objects stored in the session, I am not able to cast them to their own types (I was trying interfaces initially, but for debugging tried to cast to THEIR OWN TYPES), getting this error:

        [A]DAL_QSYSCamper.NHibernateSessionBuilder cannot be cast to [B] DAL_QSYSCamper.NHibernateSessionBuilder.
        Type A originates from 'DAL_QSYSCamper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'Default' at location 'C:\Users\dull.anomal\AppData\Local\Temp\Temporary ASP.NET Files\root\ad6e8bff\70fa2384\assembly\dl3\aaf7a5b0\84f01b09_b10acb01\DAL_QSYSCamper.DLL'.
        Type B originates from 'DAL_QSYSCamper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\dull.anomal\Documents\Projects\QSYS\Deleteme\UI\MVCClient\bin\DAL_QSYSCamper.DLL'.

    This is happening while debugging in VS -- VS manages to step into the source DAL project even though I've loaded the assembly from a file and the project is not referenced by the website project (they're both in the solution). I do understand the error, but I don't understand how and why the assembly is being loaded from two locations -- I only load it once from the file and there's no reference to the project. I should mention that I also use Windsor for DI. The object that tries to extract the instances from the session is (a) from a class in that DAL assembly, and (b) injected into a website class by Windsor. I will work on adding some sample code to this question, but wanted to put it out there in case it's obvious what I'm doing wrong.
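
    One way to sidestep the two-load-context problem, sketched under the assumption that the post-build copy into bin is kept: resolve the assembly by name through the normal load context (the same one ASP.NET already uses for bin assemblies) instead of calling Assembly.LoadFile on a path, so only one copy of the NHibernateSessionBuilder type ever exists.

        using System;
        using System.Reflection;

        static class DalLoader
        {
            public static object CreateSessionBuilder()
            {
                // Loads (or reuses) the bin copy in the default context -- no second, path-based copy.
                Assembly dal = Assembly.Load("DAL_QSYSCamper");

                Type builderType = dal.GetType("DAL_QSYSCamper.NHibernateSessionBuilder", true);
                return Activator.CreateInstance(builderType);
            }
        }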

    Read the article

  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their results set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year. What in your opinion is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like:

        while ($row = mysql_fetch_assoc($query)) {
            foreach ($_GET as $key => $val) {
                if ($val !== $row[$key]) {
                    continue 2;
                }
            }
            // Output...
        }

    This method should hopefully only query the database once in effect, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside it makes pagination a bit of a headache. The obvious alternative would be to build any additional criteria into the initial query, something like:

        $sql = "SELECT * FROM tbl MATCH (title, description) AGAINST ('$search_term')";
        foreach ($_GET as $key => $var) {
            $sql .= " AND ".$key." = ".$var;
        }

    Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!

    Read the article

  • Is there really such a thing as "being good at math"?

    - by thezhaba
    Aside from gifted individuals able to perform complex calculations in their head, I'm wondering whether proficiency in mathematics, namely calculus and algebra, really comes down to one's natural inclination towards the sciences, if you can put it that way. A number of students in my calculus course pick up material in seemingly no time, whereas I, personally, have to spend time thinking about and understanding most concepts. Even then, if a question that requires a bit more 'imagination' comes up, I don't always recognize the concepts behind it, as is the case with calculus proofs, for instance. Nevertheless, I refuse to believe that I'm simply not made for it. I do very well in programming and software engineering courses where a lot of students struggle. At first I could not grasp what they found to be so difficult, but eventually I realized that having previous programming experience is a great asset -- once I had seen and made practical use of the programming concepts, learning about them in depth in an academic setting became much easier, because by then I had already seen their use "in the wild". I suppose I'm hoping that something similar happens with mathematics -- perhaps once the practical idea behind a concept (which authors of textbooks sure do a great job of concealing) is evident, the seemingly dry and symbolic ideas and proofs would be easier to understand? I'm really not sure. All I'm sure of is I'd like to get better at calculus, but I don't yet understand why some of us pick it up easily while others have to spend considerable amounts of time on it and still lack complete understanding when an unusual problem is given.

    Read the article

  • gcc -finline-functions behaviour?

    - by user176168
    I'm using gcc with the -finline-functions optimization for release builds. To combat code bloat (I work on an embedded system), I want to say "don't inline these particular functions". The obvious way to do this would be through function attributes, i.e. __attribute__((noinline)). The problem is this doesn't seem to work when I switch on the global -finline-functions optimization, which is part of the -O3 switch. It also has something to do with the function being templated, as a non-templated version of the same function doesn't get inlined, which is as expected. Does anybody have any idea how to control inlining when this global switch is on? Here's the code:

        #include <cstdlib>
        #include <iostream>

        using namespace std;

        class Base
        {
        public:
            template<typename _Type_>
            static _Type_ fooT( _Type_ x, _Type_ y ) __attribute__ (( noinline ));
        };

        template<typename _Type_>
        _Type_ Base::fooT( _Type_ x, _Type_ y )
        {
            asm("");
            return x + y;
        }

        int main(int argc, char *argv[])
        {
            int test = Base::fooT( 1, 2 );
            printf( "test = %d\n", test );

            system("PAUSE");
            return EXIT_SUCCESS;
        }

    Read the article
