Search Results

Search found 10859 results on 435 pages for 'raid controller'.

  • iPhone: Implement delegate in class

    - by Nic Hubbard
    I am trying to call up a modal table view controller using presentModalViewController, but I am not sure what to do about the delegate. The following code gives me an error: MyRidesListView *controller = [[MyRidesListView alloc] init]; controller.delegate = self; [self presentModalViewController:controller animated:YES]; [controller release]; Error: Request for member 'delegate' in something not a structure or union Now, I realized there is no delegate property in my MyRidesListView class. So, how would I add a reference to my delegate there? What am I missing here?

    Read the article

  • PHP error: Using $this when not in object context

    - by JasonS
    I have worked out what is causing this problem; I just need to know what to do about it. I get the above error because I just realised that the class is being run as class::function($values) instead of class->function($values). Does anyone know how to convert this function to instantiate the class and then run the function with the values? private function _load($values=null) { define('LOADED_CONTROLLER', $this->controller); define('LOADED_FUNCTION', $this->function); $function = $this->function; $controller = new $this->controller; ($values == null) ? $controller->$function() : call_user_func_array(array($this->controller, $function), $values); }

    Read the article

  • Ruby On Rails Routes

    - by Kezzer
    I can't figure out how to get the following routes working. Here's an extract from my routes.rb file: map.resources :treatments map.root :controller => "home" map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' map.connect ':action', :controller => 'home' # replaces the need to manually map pure actions to a default controller map.resources :bookings map.resource :dashboard map.resource :home Now I do realise that the ordering matters, but I can't seem to get them to work correctly. What I want is for http://localhost:3000/bookings/new to actually take you to the action http://localhost:3000/bookings/signmeup if you're either not signed in or haven't got a login. The problem is that if I change my routes around, then when I attempt to create a new booking after I have logged in, it doesn't POST the form submission and just takes me back to the view page. This is definitely because of the routes, as if I rearrange map.resources :bookings to be before all of them, then it works. Any ideas?

    Read the article

  • Default Controller in CodeIgniter

    - by gregavola
    Hello everyone, I am wondering if there are any other configuration options for a default controller. For example - if I have a controller called "site" and I set the default controller in application/config/routes.php to: $route['default_controller'] = "site"; then I should be able to go to http://localhost and that brings up the index(); function in the site controller. However, if I try to go to http://localhost/index.php/index2 to load the index2(); function, I get a 404 error. If I change the URL to http://localhost/index.php/site/index2 it works fine - but I thought I had already set the default controller. Is there any way around this? Any thoughts?

    Read the article

  • How do I initialize a class when an interface button is clicked? [Objective-C]

    - by seaworthy
    I am having a problem figuring out how to initialize a class when an interface button is clicked. The code is listed below; the line I have a problem with has the "HELP NEEDED HERE" comment above it. // // Controller.m // #import "Controller.h" @implementation Controller - (id)init { self = [super init]; if(self){ numberTotal = 0; //HELP NEEDED HERE [self btnScore_Clicked:(id)sender]; } return self;} - (IBAction) btnScore_Clicked:(id)sender { numberTotal += 1; NSLog(@"Number Total: %d",numberTotal); } - (void)dealloc { [super dealloc]; } @end // // Controller.h // #import <UIKit/UIKit.h> @interface Controller : UIViewController { NSInteger numberTotal; } - (IBAction) btnScore_Clicked:(id)sender; @end Thanks!

    Read the article

  • How to speed up UITableView rendering

    - by pubudu
    My program has two view controllers. The first one has a button, and the second one has a table view with a custom cell; each cell has 5 text views. When I tap the button in the first view controller, it shows the second one. Rendering the table view is very slow even with only 5 or 6 rows. It works well in the simulator, but it is very slow on an actual iPad: when I tap the button I have to wait 2-3 seconds with the button stuck in its pressed state, and once the second view controller appears it also renders very slowly - I can see it drawing the rows. I am already using [tableView dequeueReusableCellWithIdentifier:CellIdentifier]. When I remove the table from my second view, it navigates from the first view controller to the second very fast. How can I solve this issue?

    Read the article

  • MVC Portable Areas – Static Files as Embedded Resources

    - by Steve Michelotti
    This is the third post in a series related to build and deployment considerations as I’ve been exploring MVC Portable Areas:

    #1 – Using Web Application Project to build portable areas
    #2 – Conventions for deploying portable area static files
    #3 – Portable area static files as embedded resources

    In the last post, I walked through a convention for managing static files. In this post I’ll discuss another approach to managing static files (e.g., images, css, js, etc.). With this approach, you *also* compile the static files as embedded resources into the assembly, similar to the *.aspx pages. Once again, you can set this to happen automatically by simply modifying your *.csproj file to include the desired extensions so you don’t have to remember every time you add a file:

      1: <Target Name="BeforeBuild">
      2:   <ItemGroup>
      3:     <EmbeddedResource Include="**\*.aspx;**\*.ascx;**\*.gif;**\*.css;**\*.js" />
      4:   </ItemGroup>
      5: </Target>

    We now need a reliable way to serve up these static files that are embedded in the assembly. There are a couple of ways to do this, but one way is to simply create a Resource controller whose job is dedicated to doing this:

      1: public class ResourceController : Controller
      2: {
      3:   public ActionResult Index(string resourceName)
      4:   {
      5:     var contentType = GetContentType(resourceName);
      6:     var resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);
      7:
      8:     return this.File(resourceStream, contentType);
      9:     return View();
     10:   }
     11:
     12:   private static string GetContentType(string resourceName)
     13:   {
     14:     var extention = resourceName.Substring(resourceName.LastIndexOf('.')).ToLower();
     15:     switch (extention)
     16:     {
     17:       case ".gif":
     18:         return "image/gif";
     19:       case ".js":
     20:         return "text/javascript";
     21:       case ".css":
     22:         return "text/css";
     23:       default:
     24:         return "text/html";
     25:     }
     26:   }
     27: }

    In order to use this controller, we need to make sure we’ve registered the route in our portable area registration (shown in lines 5-6):

      1: public class WidgetAreaRegistration : PortableAreaRegistration
      2: {
      3:   public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
      4:   {
      5:     context.MapRoute("ResourceRoute", "widget1/resource/{resourceName}",
      6:       new { controller = "Resource", action = "Index" });
      7:
      8:     context.MapRoute("Widget1", "widget1/{controller}/{action}", new
      9:     {
     10:       controller = "Home",
     11:       action = "Index"
     12:     });
     13:
     14:     RegisterTheViewsInTheEmbeddedViewEngine(GetType());
     15:   }
     16:
     17:   public override string AreaName
     18:   {
     19:     get { return "Widget1"; }
     20:   }
     21: }

    In my previous post, we relied on a custom Url helper method to find the actual physical path to the static file like this:

      1: <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

    However, since we are now embedding the files inside the assembly, we no longer have to worry about the physical path. We can change this line of code to this:

      1: <img src="<%: Url.Resource("Widget1.images.arrow.gif") %>" /> Hello World!

    Note that I had to fully qualify the resource name (with namespace and physical location) since that is how .NET assemblies store embedded resources. I also created my own Url helper method called Resource which looks like this:

      1: public static string Resource(this UrlHelper urlHelper, string resourceName)
      2: {
      3:   var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
      4:   return urlHelper.Action("Index", "Resource", new { resourceName = resourceName, area = areaName });
      5: }

    This method gives us the convenience of not having to know how to construct the URL – it just lets us refer to the resource name. The resulting html for the image tag is:

      1: <img src="/widget1/resource/Widget1.images.arrow.gif" />

    so we can always request any image from the browser directly. This is almost analogous to the WebResource.axd file, but for MVC. What is interesting though is that we can encapsulate each one of these so that each area has its own set of resources, and they are easily distinguished because the area name is the first segment of the route. This makes me wonder whether something like this ResourceController should be baked into portable areas itself. I’m definitely interested if anyone has any opinions on it or has taken an alternative approach.
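
    A possible refinement, sketched here rather than taken from the post above: since a manifest resource name is just the assembly's default namespace plus the folder path with dots, a small helper can derive it from an area-relative path so views don't have to spell out "Widget1.images.arrow.gif" by hand. The "Widget1" root namespace and the simple '/'-to-'.' mapping are assumptions about the project layout, not something the post prescribes.

        public static string AreaResource(this UrlHelper urlHelper, string relativePath)
        {
            var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
            // Embedded resource names are rooted at the assembly's default namespace
            // (assumed here to match the area name) and use '.' as the path separator.
            var resourceName = areaName + "." + relativePath.Replace('/', '.');
            return urlHelper.Action("Index", "Resource", new { resourceName = resourceName, area = areaName });
        }

    With a helper like that, view markup could stay closer to the original Url.AreaContent("/images/arrow.gif") form while still resolving to the embedded resource route.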

    Read the article

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which have business logic local to the object). The api will likely have many calls as it is a long-term project. While thinking about the design, I recalled a few best practices. 1) Use command objects at the controller layer (I'm using Spring MVC). 2) Use DTOs at the service layer. 3) Validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations. 1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation based validation can be done using this approach, sure. What if I have two requests that take the same parameters, but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh. 2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate these objects. At this point however the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here? 3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more). Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.) I do also have a question about unchecked vs checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with the App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.

    Read the article

  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server... a simple Windows Server box with a few terabytes of disk space to share on the LAN. Pain of current hard drive prices aside :( -- I would like to get some benchmarks for this device under load compared to our old server. The old server was installed in 2005 and had five 136GB 10K disks in RAID 5. The new server has eight 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K rpm, and of course with a much larger cache size. I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be more driven by the network connection? I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when. FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better RAID 10 performance vs RAID 5, in spite of using slower disks... but I'd like to get an idea of how much.

    Read the article

  • SAS vs Near-line SAS vs SATA

    - by David
    I'm unsure about the differences in these storage interfaces. My Dell servers all have SAS RAID controllers in them and they seem to be cross-compatible to an extent. The Ultra-320 SCSI RAID controllers in my old servers were simple enough: One type of interface (SCA) with special drives with special controllers, humming at 10-15K RPM. But these SAS/SATA drives seem like the drives I have in my desktop, only more expensive. Also my old SCSI controllers have their own battery backup and DDR buffer - neither of these things are present on the SAS controllers. What's up with that? "Enterprise" SATA drives are compatible with my SAS RAID controller, but I'd like to know what advantage SAS drives have over SATA drives as they seem to have similar specs (but one is a lot cheaper). Also, how do SSDs fit into this? I remember when RAID controllers required HDDs to spin at the same rate (as if the controller card supplanted the controller in the drive) - so how does that work out now? And what's the deal with Near-line SATA? I apologise about the rambling tone in this message, it's 5am and I haven't slept much.

    Read the article

  • EFI vs MBR - Installing Windows Server 2008 R2 or 2012 on 8TB

    - by Riaan de Lange
    I'm having some difficulty installing Windows Server 2008 R2 and Windows Server 2012 on an Intel server platform. The server specs are as follows: Intel Grizzly Pass Server System - R2308GZ4GC; 2x Intel Xeon E5-2620 - 2.0 GHz - BX80621E52620; 132 GB of REG-DIMM memory - TS1GKR72V6H; 4x Seagate Constellation ES 2TB 3.5" 7200rpm 6Gb/s - ST32000645NS; Intel Big Laurel 4CH 6G SAS RAID 512MB - RS2BL040. In the Intel RAID Controller setup, I have set the HDDs up in RAID-0 for testing purposes (ultimately they will be configured in RAID-5), so the total HDD space I can use is around 7.6 TB. When I install the server OSes, they don't seem to go beyond 2 TB (1.76 TB). I have read up on EFI and UEFI boot, and this seems to work in 2012, but I could not install any drivers for the motherboard... So I also tried EFI for 2008 R2, and this worked while installing the OS; however, it did not work with the Windows Boot Manager option in the BIOS - it kept freezing once it tried to load the partition. My idea was to allocate the complete 8 TB for the OS and load a few VMs on there. I have now started with a new approach where I'll have a 256 GB OS partition and a secondary 7.5 TB data partition. Oh, and I also converted the disk to GPT with diskpart whilst installing 2008 R2, and the whole disk, 7.6 TB, was accessible. Can anyone please clarify whether EFI/UEFI is meant for larger boot volumes, i.e. bigger than 2 TB? Would the ideal situation be to run my OS on a 256GB SSD and attach the 8 TB array to the OS as a normal data disk? Am I correct in saying that if I wanted to boot from an 8 TB partition, I would need to force the BIOS to boot from EFI? The limit for MBR is 2 TB as far as I know... *FYI: The motherboard is EFI-ready.

    Read the article

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following. I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs for e.g. a shell/git server, mail server etc. I'm wondering how to deal with the following issues: (1) I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough? (2) I want to maximize the reliability of the setup. On which disks should I install the hypervisor and keep the server system disks? For (2) I have the option of having a RAID setup on the SAS controller and using that as the system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.

    Read the article

  • SSD/HDD not exceeding 120 MB/s

    - by skiwi
    So here is the situation: first, this was my old PC; it had a 2x 1TB RAID 0 array and a Corsair Force 3 SSD in it. These were the old speeds, measured by HDTune Pro (benchmark screenshots: 2x 1TB RAID 0; Corsair Force 3 SSD). Then my dad got my PC and we had several issues; in the end it turned out both the RAID and SSD controllers were malfunctioning, causing BlueScreens at 100% load. We removed the RAID 0, leaving the HDDs intact, and bought a Samsung 840 EVO 120GB, though the Corsair SSD is still in the system, just not as the system disk anymore (benchmark screenshots: 1TB HDD (one of them); Corsair SSD; Samsung SSD). We did not assemble the PC ourselves, so answering some technical questions might be more difficult, though we will do our best. The first thing we noticed is that the Samsung 840 EVO is nowhere near reaching its advertised speed, even though a Samsung 840 250GB (non-EVO) reaches 350 MB/s in my own PC. Then we noticed that both SSDs are capped at exactly 120 MB/s; this might be caused by HDTune Pro, but that seems very unlikely. And even worse, the Corsair Force 3 was running faster before the system was reassembled. Does anyone have any clue what is going on?

    Read the article

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch - I don't have the Win2k Server Std install media any more as it's been lost. I purchased "Easus Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault). I know the server and PERC RAID card are good because I installed Ubuntu on the logical drive (4 x 72GB disks in RAID 5) with no problems. I've booted from the Easus Todo backup CD (which is WinPE based) and recovered to the logical disk on the RAID (after installing the driver inside the WinPE environment from a NAS drive). The problem is that when I boot the server, I can get the OS selection menu, but any option results in a blank screen, with no errors. I figure this is probably because the driver wasn't installed on the old machine (which is IDE-based (I know, I know!) and doesn't have a RAID controller). I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference. I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. 'this isn't a real disk') - the only disk that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity. Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the 'Add new hardware' wizard, forcing with the 'have disk' option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously done at install time) and the boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1). Any ideas would be appreciated.

    Read the article

  • Data recovery on working hard drive

    - by emgee
    So I have a 5-bay hot swap SATA enclosure that's connected to a Silicon Image-based SATA adapter in a computer. It's running XP Pro. There are two 1.5TB hard drives in slots 1 and 2, set up as RAID 1 using the Silicon Image utility. There are also two 1TB drives in bays 3 and 4, also set to RAID 1 the same way. The partitions for both RAID arrays are Dynamic partitions. A few days back, there was a bare hard drive that some files needed to be copied off of, so it was popped into bay 5, that bay was set to pass-through, and the data was copied off of it. Later, I noticed that my 1.5TB drives no longer showed up in Windows. In the Silicon Image utility, the drives show up fine, with no error. However, in Device Manager, the RAID 1 array shows up as uninitialized. It shows up as the right size, etc., but nothing else. There's no sign of anything wrong with either drive, so I'm not sure what happened exactly. I'm not the only one who has access to that computer, so it is possible something else was done to it that I don't know of. There's quite a lot of data on it still, and if at all possible, I'd prefer not to send it to Ontrack. Does anyone know of software that would restore the partitions, keeping in mind that it's a Windows LDM partition? I have access to a variety of operating systems, so something that works on Mac, Windows or Linux would be acceptable. The programs I usually use are not compatible with LDM.

    Read the article

  • Linux Server partitioning

    - by user1717735
    There's a lot of info about this out there, but there's also a lot of contradictory info… That's why I need some advice about it. So far, on the servers I had at home for test (or even "home production") purposes, I didn't really care about partitioning, and I configured everything in / plus a swap partition, over RAID 0. Nevertheless, this pattern can't apply to production servers. I have found a good starting point here, but it also depends on what the servers will be used for… So basically, I have a server on which there will be Apache, PHP and MySQL. It will have to handle file uploads (up to 2GB) and has 2x 2TB hard drives. I plan to set: / 100GB, /var 1000GB (the Apache files and MySQL files will be here), /tmp 800GB (holds the PHP temp files), /home 96GB, swap 4GB. All of this is of course over RAID 1. But actually, it's not a big deal if I lose data that is still being uploaded, so would it make sense to mount /tmp over RAID 0 while keeping the rest over RAID 1? Sounds complicated…

    Read the article

  • NSString variable out of scope in sub-class (iPhone/Obj-C)

    - by Rich
    I am following along with an example in a book, with the code exactly as it is in the book's example code, and I'm getting an error at runtime. I'll try to describe the life cycle of this variable as well as I can. I have a controller class with a nested array that is populated with string literals (an NSArray of NSArrays, the nested NSArrays initialized with arrayWithObjects: where the objects are all string literals - @"some string"). I access these strings with a helper method added via a category on NSArray (to pull strings out of a nested array). My controller gets a reference to this string and assigns it to an NSString property on a child controller using dot notation. The code looks like this (nestedObjectAtIndexPath is my helper method): NSString *rowKey = [rowKeys nestedObjectAtIndexPath:indexPath]; controller.keypath = rowKey; keypath is a synthesized nonatomic, retain property defined in a base class. When I hit a breakpoint in the controller at the above code, the NSString's value is as expected. When I hit the next breakpoint inside the child controller, the object id of the keypath property is the same as before, but instead of showing me the value of the NSString, Xcode says that the variable is "out of scope", which is also the error I see in the console. This also happens in another sub-class of the same parent. I tried googling, and I saw slightly similar cases where people were suggesting this had to do with retain counts. I was under the impression that by using dot notation on a synthesized property, my class would be using an "auto generated accessor" that would increase the retain count for me, so that I wouldn't have this problem. Could there be any implications because I'm accessing it in a sub-class and the prop is defined in the parent? I don't see anything in the book's errata about this, but the book is relatively new (Apress - More iPhone 3 Dev). I have also double checked, many times, that my code matches the example.

    Read the article

  • ASP.NET MVC 2 matches correct area route but generates URL to the first registered area instead.

    - by Sandor Drieënhuizen
    I'm working on a S#arpArchitecture 1.5 project, which uses ASP.NET MVC 2. I've been trying to get areas to work properly but I ran into a problem: The ASP.NET MVC 2 routing engine matches the correct route to my area but then it generates an URL that belongs to the first registered area instead. Here's my request URL: /Framework/Authentication/LogOn?ReturnUrl=%2fDefault.aspx I'm using the Route Tester from Phil Haack and it shows: Matched Route: Framework/{controller}/{action}/{id} Generated URL: /Data/Authentication/LogOn?ReturnUrl=%2FDefault.aspx using the route "Data/{controller}/{action}/{id}" That's clearly wrong, the URL should point to the Framework area, not the Data area. This is how I register my routes, nothing special there IMO. private static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); AreaRegistration.RegisterAllAreas(); routes.MapRoute( "default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = UrlParameter.Optional }); } The area registration classes all look like this. Again, nothing special. public class FrameworkAreaRegistration : AreaRegistration { public override string AreaName { get { return "Framework"; } } public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute( "Framework_default", "Framework/{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = UrlParameter.Optional }); } }
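
    One hedged workaround sketch, not necessarily the S#arpArchitecture-sanctioned fix: when a URL is generated without an explicit "area" value, the route table is searched in registration order and the first area route whose pattern fits wins - here the "Data" route. Supplying the area route value explicitly keeps the generated link inside the intended area (the call site below is hypothetical):

        // e.g. inside a controller or filter that redirects to the logon page
        var logOnUrl = Url.Action(
            "LogOn",                         // action
            "Authentication",                // controller
            new { area = "Framework", ReturnUrl = "/Default.aspx" });
        // expected to produce "/Framework/Authentication/LogOn?ReturnUrl=%2fDefault.aspx"

    The same applies to Html.ActionLink overloads that accept route values: passing area = "Framework" pins the link to the Framework area's route instead of whichever area happened to be registered first.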

    Read the article

  • UIView rotation, modal view lanscape and portrait, parent fails to render

    - by Ben
    Hi everyone, I've hit a bit of a roadblock with something that I hope someone in here can help me out with. I'll describe the 'state of play' first, and then what the issue is, so here goes: I have a series of view controllers that are chained together with a Navigation Controller (this works just fine). All of these view controllers support portrait mode only (by design). In one of the view controllers (the 'end' one actually) the user can click a table cell to pop up a modal view controller (using presentModalViewController(...) of course). This modal view controller supports portrait and landscape modes (and this works). When the user clicks the 'Done' button on this modal view controller we pop it and pass control back to the parent view controller, however: if the user is in portrait mode when they click 'Done' then the parent displays itself just fine, but if the user is in landscape mode when they click 'Done' then the parent displays a totally white, blank screen (that covers the whole screen). It is as if the controller does not know how to render in landscape and just doesn't bother. I'd like to have this parent view render in portrait no matter what the orientation of the phone is when the user clicks the 'Done' button. Various forum posts suggest using the UIDevice method 'setOrientation' (but this is undocumented and will apparently get our app rejected). Another suggestion was to set the 'statusBarOrientation' to portrait in the 'viewWillAppear' method, but that had no effect. So I am a bit stuck! Has anyone encountered anything like this before? If need be I can provide code, if that will help anyone diagnose the problem for me. Thanks in advance! Cheers, Ben

    Read the article

  • MVC Moq unit test: testing the object before RedirectToAction()

    - by Daoming Yang
    I want to test the data inside the "item" object before it redirects to another action. public ActionResult WebPageEdit(WebPage item, FormCollection form) { if (ModelState.IsValid) { item.Description = Utils.CrossSiteScriptingAttackCheck(item.Description); item.Content = Utils.CrossSiteScriptingAttackCheck(item.Content); item.Title = item.Title.Trim(); item.DateUpdated = DateTime.Now; // Other logic stuff here webPagesRepository.Save(item); return RedirectToAction("WebPageList"); } Here is my test method: [Test] public void Admin_WebPageEdit_Save() { var controller = new AdminController(); controller.webPagesRepository = DataMock.WebPageDataInit(); controller.categoriesRepository = DataMock.WebPageCategoryDataInit(); FormCollection form = DataMock.CreateWebPageFormCollection(); RedirectToRouteResult actionResult = (RedirectToRouteResult)controller.WebPageEdit(webPagesRepository.Get(1), form); Assert.IsNotNull(actionResult); Assert.AreEqual("WebPageList", actionResult.RouteValues["action"]); var item = ((ViewResult)controller.WebPageEdit(webPagesRepository.Get(1), form)).ViewData.Model as WebPage; Assert.NotNull(item); Assert.AreEqual(2, item.CategoryID); } It fails at this line: var item = ((ViewResult)controller.WebPageEdit(webPagesRepository.Get(1), form)).ViewData.Model as WebPage; Is there any way to test the "item" object before it redirects to other actions?
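
    A sketch of one possible approach using Moq: the success path returns a redirect rather than a ViewResult (which is why the cast fails), so instead of invoking the action a second time and reading ViewData, let a mock repository capture the item at the moment it is saved and assert on the captured instance. The IWebPagesRepository name is an assumption - this only works if webPagesRepository is typed as an interface (or a class with virtual members) that Moq can stand in for.

        // using Moq; using NUnit.Framework;   (assumed test stack)
        [Test]
        public void Admin_WebPageEdit_Save_Passes_Trimmed_Item_To_Repository()
        {
            var repository = new Mock<IWebPagesRepository>();      // assumed interface name
            WebPage savedItem = null;
            repository.Setup(r => r.Save(It.IsAny<WebPage>()))
                      .Callback<WebPage>(p => savedItem = p);      // capture the item as it is saved

            var controller = new AdminController();
            controller.webPagesRepository = repository.Object;

            var item = new WebPage { Title = " Some title ", Description = "", Content = "", CategoryID = 2 };
            var form = DataMock.CreateWebPageFormCollection();

            var result = (RedirectToRouteResult)controller.WebPageEdit(item, form);

            Assert.AreEqual("WebPageList", result.RouteValues["action"]);
            Assert.IsNotNull(savedItem);                            // Save was reached, so ModelState was valid
            Assert.AreEqual("Some title", savedItem.Title);         // Title was trimmed before saving
            Assert.AreEqual(2, savedItem.CategoryID);
        }

    If the elided "other logic" also touches categoriesRepository, that dependency would need a similar stub before the action can run.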

    Read the article

  • UINavigationController creating a blank view out of thin air?

    - by Alex Gosselin
    Ok, this one is really weird... I can't show code for it exactly, because it follows a pretty snake-like pattern through subclasses etc; there would be a pile of it. The important parts are that I push a view controller, which during viewWillAppear pushes another view controller onto the nav controller. My nav controller is an item in a tab bar. When I press back twice, I wind up at a blank view with the same title as my root view controller (I have no other views with this title). I even tested by putting an NSLog() in viewWillAppear to make sure it was the same view appearing, but for some reason the mystery blank view is showing up instead of my view. I am able to get the original view back by pressing the button on the tab bar again (the one that corresponds to the nav controller). This confuses me greatly, so any help would be appreciated. I will post code if somebody can narrow down what code to post. Thanks.

    Read the article

  • ASP.NET MVC Using Castle Windsor IoC

    - by Mad Halfling
    I have an app, modelled on the one from Apress Pro ASP.NET MVC, that uses Castle Windsor's IoC to instantiate the controllers with their respective repositories, and this is working fine, e.g. public class ItemController : Controller { private IItemsRepository itemsRepository; public ItemController(IItemsRepository windsorItemsRepository) { this.itemsRepository = windsorItemsRepository; } with using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using Castle.Windsor; using Castle.Windsor.Configuration.Interpreters; using Castle.Core.Resource; using System.Reflection; using Castle.Core; namespace WebUI { public class WindsorControllerFactory : DefaultControllerFactory { WindsorContainer container; // The constructor: // 1. Sets up a new IoC container // 2. Registers all components specified in web.config // 3. Registers all controller types as components public WindsorControllerFactory() { // Instantiate a container, taking configuration from web.config container = new WindsorContainer(new XmlInterpreter(new ConfigResource("castle"))); // Also register all the controller types as transient var controllerTypes = from t in Assembly.GetExecutingAssembly().GetTypes() where typeof(IController).IsAssignableFrom(t) select t; foreach (Type t in controllerTypes) container.AddComponentWithLifestyle(t.FullName, t, LifestyleType.Transient); } // Constructs the controller instance needed to service each request protected override IController GetControllerInstance(Type controllerType) { return (IController)container.Resolve(controllerType); } } } controlling the controller creation. I sometimes need to create other repository instances within controllers, to pick up data from other places. Can I do this using the CW IoC, and if so, how? I have been playing around with creating new controller classes, as they should auto-register with my existing code (if I can get this working, I can register them properly later), but when I try to instantiate them there is an obvious objection, as I can't supply a repository class for the constructor (I was pretty sure that was the wrong way to go about it anyway). Any help (especially examples) would be much appreciated. Cheers MH
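
    A sketch of how this can look (ISuppliersRepository is purely illustrative - any interface registered in the <castle> config section works the same way): because the controllers themselves are resolved through the Windsor container, an additional repository is normally obtained by widening the constructor rather than by calling the container directly from inside the controller.

        public class ItemController : Controller
        {
            private readonly IItemsRepository itemsRepository;
            private readonly ISuppliersRepository suppliersRepository;   // hypothetical second repository

            // Windsor inspects the constructor and supplies every registered dependency,
            // so an extra repository just becomes an extra constructor parameter.
            public ItemController(IItemsRepository windsorItemsRepository,
                                  ISuppliersRepository windsorSuppliersRepository)
            {
                this.itemsRepository = windsorItemsRepository;
                this.suppliersRepository = windsorSuppliersRepository;
            }
        }

    Resolving ad hoc with container.Resolve<ISuppliersRepository>() inside an action works too, but constructor injection keeps the controller easy to construct with fakes in tests.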

    Read the article

  • asp.net mvc - How to create fake test objects quickly and efficiently

    - by Simon G
    Hi, I'm currently testing the controllers in my MVC app and I'm creating a fake repository for testing. However, I seem to be writing more code and spending more time on the fakes than I do on the actual repositories. Is this right? The code I have is as follows: Controller public partial class SomethingController : Controller { IRepository repository; public SomethingController(IRepository rep) { repository = rep; } public virtual ActionResult Index() { // Some logic var model = repository.GetSomething(); return View(model); } } IRepository public interface IRepository { List<Something> GetSomething(); } Fake Repository public class FakeSomethingRepository : IRepository { private List<Something> somethingList; public FakeSomethingRepository(List<Something> somethings) { somethingList = somethings; } public List<Something> GetSomething() { return somethingList; } } Fake Data class FakeSomethingData { public static List<Something> CreateSomethingData() { var somethings = new List<Something>(); for (int i = 0; i < 100; i++) { somethings.Add(new Something { value1 = String.Format("value{0}", i), value2 = String.Format("value{0}", i), value3 = String.Format("value{0}", i) }); } return somethings; } } Actual Test [TestClass] public class SomethingControllerTest { SomethingController CreateSomethingController() { var testData = FakeSomethingData.CreateSomethingData(); var repository = new FakeSomethingRepository(testData); SomethingController controller = new SomethingController(repository); return controller; } [TestMethod] public void SomeTest() { // Arrange var controller = CreateSomethingController(); // Act // Some test here // Assert } } All this seems to be a lot of extra code, especially as I have more than one repository. Is there a more efficient way of doing this? Maybe using mocks? Thanks
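
    One hedged sketch of the same test written with Moq (assuming the IRepository interface above), which removes the hand-written fake repository class entirely; the fake data builder can stay, since it is the only part that knows what a Something looks like:

        // using Moq;  (plus the usual MSTest references)
        [TestMethod]
        public void Index_Returns_View_With_Model_From_Repository()
        {
            // Arrange: stub the repository with canned data instead of writing a fake class
            var testData = FakeSomethingData.CreateSomethingData();
            var repository = new Mock<IRepository>();
            repository.Setup(r => r.GetSomething()).Returns(testData);
            var controller = new SomethingController(repository.Object);

            // Act
            var result = controller.Index() as ViewResult;

            // Assert
            Assert.IsNotNull(result);
            Assert.AreSame(testData, result.ViewData.Model);
            repository.Verify(r => r.GetSomething(), Times.Once());
        }

    With Moq handling the stubbing, adding another repository to a controller usually costs one extra Mock<T> line per test rather than another fake class.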

    Read the article
