Search Results

  • ASP.NET DAL DataSet and TableAdapter not in namespace - Northwind Tutorial

    - by Alan
    I've been attempting to walk through the "Creating a Data Access Layer" tutorial found at http://www.asp.net/learn/data-access/tutorial-01-cs.aspx. I create the DB connection, create the typed dataset and table adapter, specify the SQL, etc. When I add the code to the presentation layer (in this case a page called AllProducts.aspx) I am unable to find the NorthwindTableAdapters.ProductsTableAdapter class. I tried to import the NorthwindTableAdapters namespace, but it is not showing up. Looking in the Solution Explorer Class View confirms that there is a Northwind class, but not the namespace I'm looking for. I've tried several online tutorials that all have essentially the same steps, and I'm getting the same results. Can anyone give me a push in the right direction?

    The error I'm getting is: "Namespace or type specified in the Imports 'NorthwindTableAdapters' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member."

    I think I might need to add a reference, or the tutorials may be creating a separate class library and importing it into their main project. If that's the case, the tutorials do not mention it.

    SuppliersTest2.aspx.vb:

        Imports NorthwindTableAdapters

        Partial Class SuppliersTest2
            Inherits System.Web.UI.Page

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                Dim suppliersAdapter As New SuppliersTableAdapter
                GridView1.DataSource = suppliersAdapter.GetAllSuppliers()
                GridView1.DataBind()
            End Sub
        End Class
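
    For reference, a minimal sketch of what the call site looks like once the generated namespace resolves, qualifying the adapter fully instead of relying on an Imports statement (the type names assume the tutorial's Northwind.xsd defaults, with the .xsd placed in App_Code so the adapters are generated at build time):

        Dim suppliersAdapter As New NorthwindTableAdapters.SuppliersTableAdapter()
        GridView1.DataSource = suppliersAdapter.GetAllSuppliers()
        GridView1.DataBind()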

    Read the article

  • Blind down animation on CALayer using a mask

    - by dgjones346
    Hi there, I want to create a "blind down" effect on an image so the image "blinds down" and appears, sort of like this JavaScript transition: http://wiki.github.com/madrobby/scriptaculous/effect-blinddown The mask is set up correctly, because if I manually change its position it hides and reveals the image behind it, but it doesn't animate! It just ends up in the animation's final position and you never see the actual blind. Please help! Maybe this isn't the best way to achieve a blind-down effect?

        // create a new layer
        CALayer *numberLayer = [[CALayer alloc] init];

        // render the number "7" on the layer
        UIImage *image = [UIImage imageNamed:@"number-7.png"];
        numberLayer.contents = (id)[image CGImage];
        numberLayer.bounds = CGRectMake(0, 0, image.size.width, image.size.height); // width and height are 50
        numberLayer.position = position;

        // create a new mask that is 50x50, the size of the image
        CAShapeLayer *maskLayer = [[CAShapeLayer alloc] init];
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddRect(path, nil, CGRectMake(0, 0, 50, 50));
        [maskLayer setPath:path];
        [maskLayer setFillColor:[[UIColor whiteColor] CGColor]];
        [theLayer setMask:maskLayer];
        [maskLayer setPosition:CGPointMake(0, 0)]; // place the mask over the image

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:3.0];
        [maskLayer setPosition:CGPointMake(0, 50)]; // slide mask off the image
        // this should shift the blind away in an animation
        // but it doesn't animate
        [UIView commitAnimations];

        [maskLayer release];
        [boardLayer addSublayer:numberLayer];
        [numberLayer release];
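
    A hedged aside: standalone CALayers like this mask are not driven by UIView's beginAnimations block, so one common fix is an explicit CABasicAnimation on the mask. A minimal sketch, reusing maskLayer from the snippet above:

        CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position"];
        slide.fromValue = [NSValue valueWithCGPoint:CGPointMake(0, 0)];
        slide.toValue = [NSValue valueWithCGPoint:CGPointMake(0, 50)];
        slide.duration = 3.0;
        maskLayer.position = CGPointMake(0, 50); // set the model value so the mask doesn't snap back
        [maskLayer addAnimation:slide forKey:@"blindDown"];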

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea about the Agile development process, but it seems it does not fit well with an embedded project with significant hardware changes. I will describe below what we are currently doing (an ad-hoc way, no defined process yet). The changes are divided into three categories, and a different process is used for each of them:

    Complete hardware change. Example: use a different video codec IP.
    a) Study the new IP
    b) RTL/FPGA simulation
    c) Implement the legacy interface - go to b)
    d) Wait until hardware (tape-out) is ready
    e) Test on the real hardware

    Hardware improvement. Example: enhance the image display quality by improving the underlying algorithm.
    a) RTL/FPGA simulation
    b) Wait until hardware is ready and test on the hardware

    Minor change. Example: only change the hardware register mapping.
    a) Wait until hardware is ready and test on the hardware

    The worry is that we don't seem to have much control over, or confidence in, software maturity across hardware changes, as the bring-up schedule is always very tight and the customer desires a seamless change when updating to a new version of the hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?
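
    On the HAL question, a minimal sketch in C of the usual shape, assuming a hypothetical video-codec register interface: the driver codes only against an ops table, which can be backed by real memory-mapped I/O or by a host-side simulation, so HAL-level tests can run before tape-out:

        #include <stdint.h>

        /* Hypothetical HAL: drivers code against this ops table only. */
        typedef struct {
            void     (*write_reg)(uint32_t addr, uint32_t value);
            uint32_t (*read_reg)(uint32_t addr);
        } codec_hal_ops;

        /* Real backend: memory-mapped I/O (addresses are placeholders). */
        static void mmio_write(uint32_t addr, uint32_t v) { *(volatile uint32_t *)(uintptr_t)addr = v; }
        static uint32_t mmio_read(uint32_t addr) { return *(volatile uint32_t *)(uintptr_t)addr; }

        /* Simulated backend for host-side tests while silicon is pending. */
        static uint32_t sim_regs[256];
        static void sim_write(uint32_t addr, uint32_t v) { sim_regs[addr & 0xFF] = v; }
        static uint32_t sim_read(uint32_t addr) { return sim_regs[addr & 0xFF]; }

        static const codec_hal_ops hw_ops  = { mmio_write, mmio_read };
        static const codec_hal_ops sim_ops = { sim_write, sim_read };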

    Read the article

  • Rhino Mocks, Dependency Injection, and Separation of Concerns

    - by whatispunk
    I am new to mocking and dependency injection and need some guidance. My application is using a typical N-tier architecture where the BLL references the DAL, and the UI references the BLL but not the DAL. Pretty straightforward. Let's say, for example, I have the following classes, each in a separate assembly:

        class MyDataAccess : IMyDataAccess {}
        class MyBusinessLogic {}

    I want to mock MyDataAccess in the tests for MyBusinessLogic, so I added a constructor to the MyBusinessLogic class that takes an IMyDataAccess parameter for the dependency injection. But now when I try to create an instance of MyBusinessLogic in the UI layer it requires a reference to the DAL. I thought I could define a default constructor on MyBusinessLogic to set a default IMyDataAccess implementation, but not only does this seem like a code smell, it didn't actually solve the problem: I'd still have a public constructor with IMyDataAccess in the signature, so the UI layer would still require a reference to the DAL in order to compile. One possible solution I am toying with is to create an internal constructor for MyBusinessLogic with the IMyDataAccess parameter; then I can use an accessor from the test project to call the constructor. But there's still that smell. What is the common solution here? I must just be doing something wrong. How could I improve the architecture?
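
    For what it's worth, the usual resolution is a composition root: page code depends only on abstractions, and one spot in the UI project (often an IoC container registration) wires the concrete types together. A minimal C# sketch, assuming IMyDataAccess lives in an assembly both tiers can reference:

        public class MyBusinessLogic
        {
            private readonly IMyDataAccess _dataAccess;

            public MyBusinessLogic(IMyDataAccess dataAccess)
            {
                _dataAccess = dataAccess;
            }
        }

        // Composition root (e.g., Global.asax or container bootstrap code):
        // the only place that references both the BLL and the DAL assemblies.
        var logic = new MyBusinessLogic(new MyDataAccess());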

    Read the article

  • Set UIViewController background to transparent

    - by dbonneville
    I'm calling a UIViewController from a button on the main view. I have set the alpha of the view to 50%. When the view animates in, I can see that it's transparent; as soon as the animation stops, it becomes opaque. In the xib, when I set opacity, it sets the opacity over white. So if I set the color to black and the opacity to 50%, this is what I see when I click the button on the main interface to show the view: the view slides up from the bottom at 50% opacity, and I can see the underlying view through the transparent black nicely. When the view stops animating, it becomes gray: it appears that a white color pops in behind the view somehow, making the 50% opacity black over white turn gray. I can no longer see the underlying layer. What I'm trying to do: show 100% white text on a 50% transparent black layer which sits over the view that called this view. How on earth do I do this? I tried a code method that sets the background color when the view is called, but it does exactly the same thing (with another try in there, commented out, with the same effect):

        - (IBAction)gotoCreed {
            Creed *creed = [[Creed alloc] initWithNibName:nil bundle:nil];
            [self presentModalViewController:creed animated:YES];
            self.view.backgroundColor = [UIColor colorWithHue:0.0 saturation:0.0 brightness:1.0 alpha:0.2];
            //self.view.backgroundColor = [UIColor clearColor];
        }
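
    A hedged explanation: presentModalViewController: removes the presenting view from the window once the transition finishes, so there is nothing left behind the modal to see through. A common workaround is to add the overlay controller's view as a subview instead; a minimal sketch, assuming the Creed controller is retained somewhere so its view stays valid:

        - (IBAction)gotoCreed {
            Creed *creed = [[Creed alloc] initWithNibName:nil bundle:nil];
            creed.view.frame = self.view.bounds;
            creed.view.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.5]; // 50% black
            [self.view addSubview:creed.view]; // keeps the underlying view visible
            self.creedController = creed;      // hypothetical retaining property
            [creed release];
        }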

    Read the article

  • Core Animation problem on iPhone

    - by Mac
    I'm new to iPhone development, and doing some experimentation with Core Animation. I've run into a small problem regarding the duration of the animation I'm attempting. Basically, I've got a view with two subviews, and I'm trying to animate their opacity so that one fades in while the other fades out. The problem is, instead of a gradual fade in/out, the subviews simply switch instantly to/from full/zero opacity. I've tried to adjust the animation duration with CATransaction with no noticeable effect. It's also not specific to animating opacity: animating position shows the same problem. The code I'm using (inside a method of the superview) follows:

        CALayer* oldLayer = ((UIView*) [[self subviews] objectAtIndex:0]).layer;
        CALayer* newLayer = ((UIView*) [[self subviews] objectAtIndex:1]).layer;

        [CATransaction begin];
        [CATransaction setAnimationDuration:1.0f];
        oldLayer.opacity = 0.0;
        newLayer.opacity = 1.0;
        [CATransaction commit];

    Does anyone have an idea what the problem might be?
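
    A hedged observation: layers that back UIViews have their implicit actions disabled, which is why the CATransaction duration never takes effect here. An explicit animation sidesteps that; a minimal sketch for one side of the cross-fade, reusing oldLayer from above:

        CABasicAnimation *fadeOut = [CABasicAnimation animationWithKeyPath:@"opacity"];
        fadeOut.fromValue = [NSNumber numberWithFloat:1.0f];
        fadeOut.toValue = [NSNumber numberWithFloat:0.0f];
        fadeOut.duration = 1.0;
        oldLayer.opacity = 0.0; // commit the final model value
        [oldLayer addAnimation:fadeOut forKey:@"fade"];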

    Read the article

  • Running ASP.NET 3.5 and ASP.NET 2.0 in the same site

    - by cori
    We're running ASP.NET 2.0 on our corporate web site, and I'd like to get it up to ASP.NET 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.NET 2.0 web project and a .NET 2.0 data access layer project which is used by the site code. Upon opening the projects in a new VS 2008 solution they seemed to be converted to .NET 3.5 with a minimum of fuss: they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly as I would expect given that .NET 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced DLLs are now the 3.5 versions. What I would like to do is to update the site piecemeal; as I make modifications to a given page, send the 3.5 version of that page over to our web server rather than updating the whole site at once. In testing on our dev box this approach seems to be working fine: the site code is interacting with the .NET 3.5 data access layer without difficulty, a handful of pages are running 3.5 code-behind (by this I mean that they're running assemblies built in VS 2008; the site is using single-page assemblies for code-behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS 2005. Everything looks great. Which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying in wait for me that I haven't considered?
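
    For context on the web.config difference, a sketch of the kind of assembly entries VS 2008 adds for 3.5 (versions and public key tokens as shipped with the framework):

        <compilation debug="false">
          <assemblies>
            <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </assemblies>
        </compilation>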

    Read the article

  • "Pointer being freed was not allocated": complex malloc_history help

    - by Martin KS
    I've followed the guides helpfully linked here: http://stackoverflow.com/questions/295778/iphone-debugging-pointer-being-freed-was-not-allocated-errors but the malloc_history output is really throwing me for a loop. Can anyone shed any light on the following?

        ALLOC 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc

        ---- FREE 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc

        ALLOC 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_start_compress | _cg_jinit_compress_master | _cg_jinit_c_prep_controller | alloc_sarray | alloc_large | malloc | malloc_zone_malloc

        ---- FREE 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_abort | free_pool | free

        ALLOC 0x185c800-0x185ea1f [size=8736]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | -[UIImage initWithData:] | _UIImageRefFromData | CGImageSourceCreateImageAtIndex | makeImagePlus | _CGImagePluginInitJPEG | initImageJPEG | calloc | malloc_zone_calloc

    Read the article

  • Handling reference to pointers/double pointers using SWIG [C++ to Java]

    - by Siddu
    My code has an interface like:

        class IExample {
            ~IExample();
            // pure virtual methods ...
        };

    a class inheriting the interface like:

        class CExample : public IExample {
        protected:
            CExample();
            // implementation of pure virtual methods ...
        };

    and a global function to create an object of this class:

        void createExample( IExample *& obj ) { obj = new CExample(); }

    Now, I am trying to get a Java API wrapper using SWIG. The SWIG-generated interface has a constructor like IExample(long cPtr, boolean cMemoryOwn), and the global function becomes createExample(IExample obj). The problem is, when I do:

        IExample exObject = new IExample(LogFileLibraryJNI.new_plong(), true /*or false*/ );
        createExample( exObject );

    the createExample(...) API at the C++ layer successfully gets called; however, when the call returns to the Java layer, the cPtr (long) variable does not get updated. Ideally, this variable should contain the address of the CExample object. I read in the documentation that typemaps can be used to handle output parameters and pointer references as well; however, I am not able to figure out a suitable way to use typemaps to resolve this problem, or any other workaround. Please suggest if I am doing something wrong, or how to use a typemap in this situation.
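
    One hedged workaround, since mapping an IExample*& out-parameter onto Java is awkward: wrap the factory in a small helper that returns the pointer directly, which SWIG turns into a Java proxy without any custom typemaps. A sketch for the .i file, reusing the names above (the helper function is mine):

        %inline %{
        IExample* createExampleWrapped() {
            IExample *obj = 0;
            createExample(obj);   // the original factory fills in the pointer
            return obj;           // SWIG maps the returned pointer to a Java proxy object
        }
        %}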

    Read the article

  • Cannot update a single field using LINQ to SQL

    - by KallDrexx
    I am having a hard time attempting to update a single field without having to retrieve the whole record prior to saving. For example, in my web application I have an in-place editor for the Name and Description fields of an object. Once you edit either field, it sends the new field value (with the object's ID) to the web server. What I want is for the web server to take that value and ID and only update the one field. There are only two ways Google tells me to do this:

    1) When I get the value I want to change, the value and the ID, retrieve the record from the database, update the field in the C# object, and then send it back to the server. I don't like this method because it includes a completely unnecessary database read call (which touches two tables due to the way my schema is laid out).

    2) Set UpdateCheck for all the fields (but the primary keys) to UpdateCheck.Never. This doesn't work for me (I think) due to my mapping layer between the LINQ to SQL and my Entity/ViewModel layer. When I convert my entity into the LINQ to SQL db object it seems to be updating those fields regardless of the UpdateCheck setting. This might be just because of integers, since not setting an int means it is a zero (and no, I can't use int? instead).

    Are there any other options that I have?
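
    A hedged third option: LINQ to SQL can attach a key-only stub and track the changes made after the attach, so the UPDATE touches just the modified columns. A minimal C# sketch, assuming a table whose non-key columns have UpdateCheck.Never (entity and property names are illustrative):

        using (var db = new MyDataContext())
        {
            var stub = new Product { ProductID = id }; // stub with the primary key only
            db.Products.Attach(stub);                  // attach in an unmodified state
            stub.Name = newName;                       // changes after Attach are tracked
            db.SubmitChanges();                        // issues an UPDATE for changed columns only
        }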

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&As here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, outside the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise repository without doing this? One problem I see is that my repository will blow up into a ton of methods for the various things I need to filter my query on, a bunch of:

        IEnumerable GetProductsSinceDate(DateTime date);
        IEnumerable GetProductsByName(string name);
        IEnumerable GetProductsByID(int ID);

    If I was allowing IQueryable to be passed around, I could easily have a generic repository that looked like:

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IQueryable<T> GetAll();
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

    However, if you aren't using IQueryable then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
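
    One hedged middle ground is the specification pattern: the repository keeps IQueryable internal and accepts small, composable filter objects, so it doesn't sprout a method per filter. A minimal C# sketch (names are illustrative, not from the Storefront code; requires System.Linq.Expressions):

        public interface ISpecification<T>
        {
            Expression<Func<T, bool>> Criteria { get; }
        }

        public class ProductsByName : ISpecification<Product>
        {
            private readonly string _name;
            public ProductsByName(string name) { _name = name; }
            public Expression<Func<Product, bool>> Criteria
            {
                get { return p => p.Name == _name; }
            }
        }

        public interface IRepository<T> where T : class
        {
            // The IQueryable is filtered inside the repository and never escapes.
            IEnumerable<T> Find(ISpecification<T> spec);
        }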

    Read the article

  • LINQ to Entities and POCO foreign key relation mapping (1 to 0..1) problem

    - by brainnovative
    For my ASP.NET MVC 2 application I use Entity Framework 1.0 as my data access layer (repository), but I decided I want to return POCOs. For the first time I have encountered a problem, when I wanted to get a list of Brands with their optional logos. Here's what I did:

        public IQueryable<Model.Products.Brand> GetAll()
        {
            IQueryable<Model.Products.Brand> brands =
                from b in EntitiesCtx.Brands.Include("Logo")
                select new Model.Products.Brand()
                {
                    BrandId = b.BrandId,
                    Name = b.Name,
                    Description = b.Description,
                    IsActive = b.IsActive,
                    Logo = /*b.Logo != null ? */new Model.Cms.Image()
                    {
                        ImageId = b.Logo.ImageId,
                        Alt = b.Logo.Alt,
                        Url = b.Logo.Url
                    }/* : null*/
                };
            return brands;
        }

    You can see in the comments what I would like to achieve. It worked fine whenever a Brand had a Logo; otherwise it threw an exception saying you cannot assign null to the non-nullable type int (for the Id). My workaround was to use a nullable int in the POCO class, but that's not natural: then I have to check not only whether Logo is null in my service layer, controllers and views, but mostly Logo.ImageId.HasValue. It's not justified to have a non-null Logo property if the id is null. Can anyone think of a better solution?
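
    A hedged workaround: project the nullable columns first, then build the POCO in memory, where ordinary null checks are allowed. A sketch assuming the classes above (note it returns IEnumerable rather than IQueryable, since the second half runs as LINQ to Objects):

        var brands = (from b in EntitiesCtx.Brands
                      select new
                      {
                          b.BrandId, b.Name, b.Description, b.IsActive,
                          LogoId = (int?)b.Logo.ImageId, // nullable inside the query
                          LogoAlt = b.Logo.Alt,
                          LogoUrl = b.Logo.Url
                      })
                     .AsEnumerable() // switch to LINQ to Objects
                     .Select(x => new Model.Products.Brand
                     {
                         BrandId = x.BrandId, Name = x.Name,
                         Description = x.Description, IsActive = x.IsActive,
                         Logo = x.LogoId.HasValue
                             ? new Model.Cms.Image { ImageId = x.LogoId.Value, Alt = x.LogoAlt, Url = x.LogoUrl }
                             : null
                     });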

    Read the article

  • How do I keep a CALayer, sublayer of a CATiledLayer, from changing its scale after a zoom?

    - by David
    I have a CATiledLayer that is used to display a PDF page (this CATiledLayer is the layer type of my UIView, which is a subview of a UIScrollView). I want to add overlay markers on this page, so I add a sublayer to my CATiledLayer. This sublayer in turn hosts the different markers' layers and acts as a grouping layer. So graphically I have (keep in mind that I have multiple markers, which are CALayers also; this is ASCII art after all):

        pdf page (CATiledLayer)
         ----------------------
        | CALayer              |
        |     +---------+      |
        |     | +----+  |      |
        |     | |mker|  |      |
        |     | +----+  |      |
        |     +---------+      |
        |                      |
         ----------------------

    I have set up the canonical drawLayer:inContext: in my view for drawing the PDF. When I zoom in for more detail, the PDF gets rendered correctly, but the markers get scaled. No matter what I do to the bounds of the CALayer, my markers always become bigger and appear jagged. I would like the markers to always stay the same size as when they were initialized and first shown when the view was drawn. Is this possible, or am I using a wrong approach? Should I do special drawing for my contained CALayer in the drawLayer:inContext: message? As you can see, there are things I am missing to resolve my problem. Thank you for any help you provide.
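
    A hedged approach: when the scroll view zooms, apply the inverse of the zoom scale to each marker layer, so its position still tracks the page while its rendered size stays constant. A minimal sketch from the UIScrollViewDelegate (markerGroupLayer stands for the grouping layer above; the name is an assumption):

        - (void)scrollViewDidZoom:(UIScrollView *)scrollView {
            CGFloat inverse = 1.0f / scrollView.zoomScale;
            [CATransaction begin];
            [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
            for (CALayer *marker in markerGroupLayer.sublayers) {
                marker.transform = CATransform3DMakeScale(inverse, inverse, 1.0f);
            }
            [CATransaction commit];
        }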

    Read the article

  • Please suggest the interaction model of view models (MVVM design) in the simple scenario discussed

    - by Jack
    Data layer: I have an Order class as an entity. This Order entity is my model object. Orders can be of different types, say A, B, C and D. The Order class has common properties like Name, Time of creation, etc.; based on the order type there are also fields that are not common.

    View layer: the view contains a main menu and a ListView. The main menu contains a drop-down menu button which is used to create an order of the type selected from the drop-down; the drop-down contains the order types (A, B, C and D). There is a different user control for each order type: for example, if the user chooses to create an order of type A, a view with type-A input fields pops up. Hence, there are four user controls, one per order type. If the user selects option A from the drop-down, an Order of type A is created, and so on. Below the menu is the ListView that contains the list of orders created so far by the user. To edit a particular order the user may double-click its ListView row; based on the order type of the clicked row, the view for that order type opens in edit mode. For example, if the user selects an order of type A from the ListView, the view for order type A opens in edit mode.

    Please suggest an interaction model for the view models in the scenario discussed above. Please excuse me if the query is very basic, since I am new to MVVM and WPF.
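
    As a hedged starting point, one common shape for this: a main view model owns the order collection and a command that creates a type-specific child view model, and the view maps each child type to its user control via DataTemplates. A minimal C# sketch (all names are hypothetical):

        public class MainViewModel
        {
            public ObservableCollection<OrderViewModel> Orders { get; private set; }
            public ICommand CreateOrderCommand { get; private set; } // parameter: an OrderType value

            private void CreateOrder(OrderType type)
            {
                OrderViewModel vm;
                switch (type)
                {
                    case OrderType.A: vm = new OrderAViewModel(); break;
                    case OrderType.B: vm = new OrderBViewModel(); break;
                    case OrderType.C: vm = new OrderCViewModel(); break;
                    default:          vm = new OrderDViewModel(); break;
                }
                Orders.Add(vm); // the ListView binds to Orders; double-click opens the matching view
            }
        }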

    Read the article

  • Implementation review for a MVC.NET app with custom membership

    - by mrjoltcola
    I'd like to hear if anyone sees any problems with how I implemented the security in this Oracle-based MVC.NET app, whether security issues, concurrency issues or scalability issues.

    First, I implemented a CustomOracleMembershipProvider to handle the database interface to the membership store. I implemented a custom principal named User which implements IPrincipal, and it has a hashtable of Roles. I also created a separate class named AuthCache which has a simple cache for User objects. Its purpose is simply to avoid return trips to the database, while decoupling the caching from both the web layer and the data layer (so I can share the cache between MVC.NET, WCF, etc.). The MVC.NET stock MembershipService uses the CustomOracleMembershipProvider (configured in web.config), and both MembershipService and FormsService share access to the singleton AuthCache.

    My AccountController.LogOn() method:
    1) Validates the user via the MembershipService.Validate() method, also loads the roles into the User.Roles container and then caches the User in AuthCache.
    2) Signs the user into the web context via FormsService.SignIn(), which accesses the AuthCache (not the database) to get the User and sets HttpContext.Current.User to the cached User principal.

    In global.asax.cs, Application_AuthenticateRequest() is implemented. It decrypts the FormsAuthenticationTicket, accesses the AuthCache by the ticket.Name (username) and sets the principal via Context.User = user from the AuthCache.

    So in short, all these classes share the AuthCache, and for thread synchronization I have a lock() in the cache's store method, with no lock in the read method. The custom membership provider doesn't know about the cache, the MembershipService doesn't know about any HttpContext (so it could be used outside of a web app), and the FormsService doesn't use any custom methods besides accessing the AuthCache to set Context.User for the initial login, so it isn't dependent on a specific membership provider.

    The main thing I see now is that the AuthCache will be sharing a User object if a user logs in from multiple sessions, so I may have to change the key from just UserId to something else (maybe using something in the FormsAuthenticationTicket for the key?).
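
    On the locking concern specifically, a hedged sketch of what AuthCache might look like with a reader/writer lock instead of a write-only lock() (the class shape is an assumption, keyed by name to match the ticket lookup):

        public class AuthCache
        {
            private static readonly Dictionary<string, User> _users = new Dictionary<string, User>();
            private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public static User Get(string userName)
            {
                _lock.EnterReadLock();
                try { User u; return _users.TryGetValue(userName, out u) ? u : null; }
                finally { _lock.ExitReadLock(); }
            }

            public static void Store(User user)
            {
                _lock.EnterWriteLock();
                try { _users[user.Name] = user; }
                finally { _lock.ExitWriteLock(); }
            }
        }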

    Read the article

  • What should be the responsibility of a presenter here?

    - by Achu
    I have a 3-layer design (UI / BLL / DAL), where the UI is ASP.NET MVC. In my view I have a collection of products for a category, for example Product 1, Product 2, etc. A user is able to select or remove products (via checkboxes), and the result is finally saved as a collection when the user submits the changes. With this 3-layer design, how should this product collection be saved? How should the filtering of products (removal and addition) be applied to the category object? Here are my options.

    (A) It is the responsibility of the controller. The pseudocode would be:

        Find products that the user selected or removed and compare with existing records.
        Add or delete that collection on the category object.
        Call SaveCategory(category); // BLL CALL

    Here the first two process steps occur in the controller.

    (B) It is the responsibility of the BLL. The pseudocode would be:

        Collect whatever products the user selected.
        SaveCategory(category, products); // BLL CALL

    Here it's up to SaveCategory (BLL) to decide which products should be removed from and added to the database. Thanks
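
    For what it's worth, a hedged sketch of option (B)'s BLL entry point, which keeps the diffing out of the controller (the repository method names are assumptions):

        public void SaveCategory(Category category, IEnumerable<int> selectedProductIds)
        {
            var current = _repository.GetProductIdsForCategory(category.Id);
            var toAdd = selectedProductIds.Except(current).ToList();
            var toRemove = current.Except(selectedProductIds).ToList();
            _repository.AddProductsToCategory(category.Id, toAdd);
            _repository.RemoveProductsFromCategory(category.Id, toRemove);
            _repository.Save(category);
        }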

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEfaces UI and a server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between, or something?
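
    On the intermediate-layer idea, the usual first step is a read-through cache in the Java tier. A minimal hand-rolled sketch (in practice something like Ehcache would fill this role; under a race the loader may run twice, which is usually acceptable for a cache):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical read-through cache in front of an expensive Oracle lookup.
        public class ReadThroughCache<K, V> {
            public interface Loader<A, B> { B load(A key); } // hits the database

            private final Map<K, V> cache = new ConcurrentHashMap<K, V>();
            private final Loader<K, V> loader;

            public ReadThroughCache(Loader<K, V> loader) { this.loader = loader; }

            public V get(K key) {
                V value = cache.get(key);
                if (value == null) {          // miss: go to Oracle once
                    value = loader.load(key);
                    cache.put(key, value);
                }
                return value;
            }
        }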

    Read the article

  • Using a CALayer in UITableViewCell

    - by Brian
    On each cell of my table view I want to have a bar that the user can slide out from the right; on the bar I plan to have icons. To do this, I subclassed UITableViewCell. On the cell I have implemented drawRect:, and in there I have already drawn a gradient and background color on the cell. From there, I can create a CALayer, give it a frame and background color, and add it as a sublayer to the view's sublayers array. I can do all that and it will display my layer on each UITableViewCell. I have added touch events to the cell so I can detect when the user touches it, and for testing I have made it so that when the user swipes, my CALayer gets wider. But the issue is, when the UITableView scrolls and reuses a cell whose CALayer has been widened, it doesn't recreate the CALayer. I have tried [myLayer setNeedsDisplay] and used the drawLayer:inContext: method of its delegate, and it doesn't get called. I have also tried calling setNeedsDisplay on the cell in my UITableViewController, hoping that will cause a redraw, but it doesn't work. I'm not sure what I'm missing; I am new to Core Graphics and Core Animation. I have read through the Core Animation developer's guide, but I'm assuming I'm missing something. Any help would be great.
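
    A hedged pointer: reused cells keep their old sublayers, so the usual place to reset per-cell layer state is prepareForReuse. A minimal sketch, assuming the custom cell keeps a reference to its slide-out bar layer (the barLayer property is hypothetical):

        - (void)prepareForReuse {
            [super prepareForReuse];
            [CATransaction begin];
            [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
            // collapse the bar back against the right edge
            self.barLayer.frame = CGRectMake(self.bounds.size.width, 0, 0, self.bounds.size.height);
            [CATransaction commit];
        }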

    Read the article

  • Need help with animation on iPhone

    - by Arun Ravindran
    I'm working on an animated clock application for the iPhone, and I have to pivot all three needles in the view, which I have set up in the following code:

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
        clockarm.layer.anchorPoint = CGPointMake(0.0, 0.0);
        [CATransaction commit];

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanFalse forKey:kCATransactionDisableActions];
        [CATransaction setValue:[NSNumber numberWithFloat:50.0] forKey:kCATransactionAnimationDuration];

        CABasicAnimation *animation;
        animation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
        animation.fromValue = [NSNumber numberWithFloat:-60.0];
        animation.toValue = [NSNumber numberWithFloat:2 * M_PI];
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
        animation.delegate = self;
        [clockarm.layer addAnimation:animation forKey:@"rotationAnimation"];
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
        [CATransaction commit];

    The problem is that it only rotates once, i.e. one 360-degree turn, and then stops. I want to rotate the needles indefinitely. How would I do that?
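
    A hedged fix for the repetition specifically: CAAnimation has a repeatCount, and setting it effectively infinite keeps the rotation going. A minimal sketch reworking the animation above (the 60-second duration assumes a second hand; pick per needle):

        animation.fromValue = [NSNumber numberWithFloat:0.0f];
        animation.toValue = [NSNumber numberWithFloat:2 * M_PI];
        animation.duration = 60.0;         // one full revolution per cycle
        animation.cumulative = YES;        // keep adding turns instead of restarting at zero
        animation.repeatCount = HUGE_VALF; // repeat indefinitely
        [clockarm.layer addAnimation:animation forKey:@"rotationAnimation"];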

    Read the article

  • How to Anti-Alias Layers in iPhoneOS

    - by Shannon A.
    We've had Reiner Knizia's Money out for a couple of months now. It's done pretty well, and so we've been updating it as time allows. However, one thing continues to bug me: I've never been able to get my layered cards to anti-alias correctly. Here's a sample of the problem: cards that are laid straight are very clean, but whenever they're angled the black lines around the cards get jagged. I've tried this relying both on lines implicit in the artwork and on lines drawn through drawRect:, and they both do the same thing. I've tried the edgeAntialiasingMask and it doesn't do a thing as far as I can tell. I've tried masksToBounds for the sublayers set to NO and YES. Right now my card is set up as a CALayer that has sub-CALayers for the front and the back, plus a few other things like a lightening mask and a darkening mask. Here are some snippets of the code:

        CArdLayer *theCardLayer = [CArdLayer layer];
        theCardLayer.edgeAntialiasingMask =
            kCALayerLeftEdge | kCALayerRightEdge | kCALayerBottomEdge | kCALayerTopEdge;
        theCardLayer.front = [CALayer layer];
        theCardLayer.front.edgeAntialiasingMask =
            kCALayerLeftEdge | kCALayerRightEdge | kCALayerBottomEdge | kCALayerTopEdge;
        theCardLayer.front.bounds = theCardLayer.bounds;
        theCardLayer.front.masksToBounds = YES;
        theCardLayer.front.contents = (id)[cardDrawing CGImage];
        [theCardLayer addSublayer:theCardLayer.front];
        // Etc ...

    Any ideas on how to make the cards actually anti-alias?
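
    A hedged workaround from that era: the edge antialiasing mask only helps at a layer's own edges, so a common trick is to bake a one-pixel transparent rim into the card texture itself, letting the GPU blend the outline smoothly when the layer is rotated. A minimal sketch (the helper name is mine):

        // Returns a copy of the image with a 1px transparent border on every side.
        UIImage *ImageWithTransparentBorder(UIImage *image) {
            CGSize size = CGSizeMake(image.size.width + 2, image.size.height + 2);
            UIGraphicsBeginImageContext(size);
            [image drawInRect:CGRectMake(1, 1, image.size.width, image.size.height)];
            UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return padded;
        }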

    Read the article

  • Finding distance to the closest point in a point cloud on a uniform grid

    - by erik
    I have a 3D grid of size AxBxC with equal distance, d, between the points in the grid. Given a number of points, what is the best way of finding the distance to the closest point for each grid point (every grid point should contain the distance to the closest point in the point cloud), given the assumptions below?

    Assume that A, B and C are quite big in relation to d, giving a grid of maybe 500x500x500, and that there will be around 1 million points. Also assume that if the distance to the nearest point exceeds a distance D, we do not care about the nearest point distance, and it can safely be set to some large number (D is maybe 2 to 10 times d).

    Since there will be a great number of grid points and points to search from, a simple exhaustive search:

        for each grid point:
            for each point:
                if distance between points < minDistance:
                    minDistance = distance between points

    is not a good alternative. I was thinking of doing something along the lines of:

        create a container of size A*B*C where each element holds a container of points
        for each point:
            define indexX = round((point position x - grid min position x)/d) // same for y and z
            add the point to the correct index of the container

        for each grid point:
            search the container of that grid point and find the closest point
            if no points in container and D > 0.5d:
                search the 26 container indices nearest to the grid point for a closest point
            .. continue with the next layer until a point is found or the distance to that layer is greater than D

    Basically: put the points in buckets and do a radial search outwards until a point is found for each grid point. Is this a good way of solving the problem, or are there better/faster ways? A solution which is good for parallelisation is preferred.
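
    For concreteness, a minimal C++ sketch of the bucketing step described above (grid parameters and the flattened indexing are placeholders; the radial search would then walk outward from each grid cell):

        #include <cmath>
        #include <vector>

        struct Point { float x, y, z; };

        // Bucket points into grid cells of spacing d, origin at gridMin.
        std::vector<std::vector<Point> > bucketPoints(
            const std::vector<Point>& pts, float d,
            Point gridMin, int A, int B, int C)
        {
            std::vector<std::vector<Point> > cells(A * B * C);
            for (std::size_t i = 0; i < pts.size(); ++i) {
                const Point& p = pts[i];
                int ix = (int)std::floor((p.x - gridMin.x) / d + 0.5f); // round to nearest grid index
                int iy = (int)std::floor((p.y - gridMin.y) / d + 0.5f);
                int iz = (int)std::floor((p.z - gridMin.z) / d + 0.5f);
                if (ix < 0 || ix >= A || iy < 0 || iy >= B || iz < 0 || iz >= C) continue;
                cells[(ix * B + iy) * C + iz].push_back(p); // flattened 3D index
            }
            return cells;
        }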

    Read the article

  • Should the Entity Framework + self-tracking entities be saving me time?

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight-to-WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save myself a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the "update from database" option on the Entity Framework model. Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking entity business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I did on my previous project, where I used Linq-to-SQL as the starting point for rolling my own business objects. Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?

    Read the article

  • Validation in n-tier ASP.NET MVC applications

    - by sTodorov
    Dear Stack Overflow gurus, I am looking for some practical/theoretical information regarding best practices for validation in ASP.NET MVC n-tier applications. I am working on a .NET application divided into the following layers:

    UI: MVC 3.
    BLL: all business rules; decoupled from the data access and UI layers through interfaces.
    DAL: data access with the repository pattern, EF4 and POCOs.

    Now, I am looking for a nice, clean and transparent way to specify my validation rules. Here are some thoughts on the matter so far: UI validation should only be responsible for user input and its validity, while BLL validation should handle the validity of the data with regard to the application's business rules. My main concern is how to bind the BLL and UI validation in the most efficient way. One thing I would like to avoid is having the UI walk a collection of validation results and manually add errors to the ModelState; furthermore, I do not want to pass the ModelState to the BLL to be populated in there. I will appreciate any thoughts on the matter.

    P.S. Should this question be marked as a discussion?
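
    For the binding concern, a hedged sketch of one common bridge: the BLL returns plain rule-violation objects, and a small write-once MVC extension copies them into ModelState, so neither layer references the other's types and no controller does the mapping by hand (names are illustrative):

        public class RuleViolation
        {
            public string PropertyName { get; set; }
            public string Message { get; set; }
        }

        // Lives in the MVC layer only:
        public static class ModelStateExtensions
        {
            public static void AddRuleViolations(this ModelStateDictionary modelState,
                                                 IEnumerable<RuleViolation> violations)
            {
                foreach (var v in violations)
                    modelState.AddModelError(v.PropertyName ?? string.Empty, v.Message);
            }
        }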

    Read the article

  • Cannot show scene with Cocos2D when using UITabBarController

    - by gw1921
    Hi, I'm new to Cocos2D, but am having serious issues when trying to load a cocos scene in one of my UIViewControllers, mixed with other normal UIKit UIViewControllers. My project uses a UITabBarController to manage four view controllers: three are normal UIKit view controllers, while for one of them I want to use Cocos2D (to draw some sprites and animate them). The following is what I've done so far. In the applicationDidFinishLaunching method, I initialize Cocos2D to use the window and take the first tab bar controller view:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            if( ! [CCDirector setDirectorType:CCDirectorTypeDisplayLink] ) {
                [CCDirector setDirectorType:CCDirectorTypeDefault];
            }
            [[CCDirector sharedDirector] setPixelFormat:kPixelFormatRGBA8888];
            [CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
            [[CCDirector sharedDirector] setDisplayFPS:YES];
            [[CCDirector sharedDirector] attachInView:window];
            [[[CCDirector sharedDirector] openGLView] addSubview:tabBarController.view];
            [window makeKeyAndVisible];
        }

    Then in the third UIViewController's code (where I want to use Cocos2D) I do this (note: the view is loaded from a NIB file which has a blank UIView and nothing else):

        - (void)viewDidLoad {
            [super viewDidLoad];
            // 'scene' is an autorelease object.
            CCScene *myScene = [CCScene node];
            // 'layer' is an autorelease object.
            MyLayer *myLayer = [MyLayer node];
            // add layer as a child to scene
            [myScene addChild:myLayer];
            [[CCDirector sharedDirector] runWithScene:myScene];
        }

    However, all I see is a blank white view and nothing else. If I call the following in viewDidLoad:

        [[CCDirector sharedDirector] attachInView:self.view];

    the app crashes, complaining that CCDirector is already attached. Please help!
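
    A hedged pointer: CCDirector runs exactly one scene, and runWithScene: is only valid for the very first one; once the director is running (for example when the user revisits the tab), the call has to be replaceScene: instead. A sketch against the cocos2d 1.x-era API used above:

        CCScene *myScene = [CCScene node];
        [myScene addChild:[MyLayer node]];

        if ([[CCDirector sharedDirector] runningScene] == nil) {
            [[CCDirector sharedDirector] runWithScene:myScene]; // first scene ever
        } else {
            [[CCDirector sharedDirector] replaceScene:myScene]; // director already running
        }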

    Read the article
