Search Results

Search found 3387 results on 136 pages for 'n layer'.


  • How do I keep a CALayer, a sublayer of a CATiledLayer, from changing its scale after a zoom?

    - by David
    I have a CATiledLayer that is used to display a PDF page (this CATiledLayer is the layer type of my UIView, which is a subview of a UIScrollView). I want to add overlay markers on this page, so I add a sublayer to my CATiledLayer. This sublayer hosts the individual marker layers and acts as a grouping layer. Graphically, I have the following (keep in mind that I have multiple markers, which are also CALayers; this is ASCII art after all):

        pdf page (CATiledLayer)
        ----------------------
        | CALayer            |
        | +---------+        |
        | | +----+  |        |
        | | |mker|  |        |
        | | +----+  |        |
        | +---------+        |
        |                    |
        ----------------------

    I have set up the canonical drawLayer:inContext: in my view for drawing the PDF. When I zoom in to see more detail, the PDF gets rendered correctly, but the markers get scaled. No matter what I do to the bounds of the CALayer, my markers always become bigger and appear jagged. I would like the markers to stay the same size as when they were initialized and first shown when the view was drawn. Is this possible, or am I using the wrong approach? Should I do special drawing for my contained CALayer in the drawLayer:inContext: method? As you can see, there are things I am missing to resolve my problem. Thank you for any help you can provide.
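
    A widely used way to keep overlays at a constant on-screen size is to counter-scale them whenever the scroll view's zoom changes. The Swift sketch below only illustrates that idea; the markerGroupLayer name and the view controller shape are hypothetical, not taken from the question.

        import UIKit

        class PDFPageViewController: UIViewController, UIScrollViewDelegate {
            // Hypothetical grouping layer that holds the marker sublayers.
            let markerGroupLayer = CALayer()

            func scrollViewDidZoom(_ scrollView: UIScrollView) {
                // Apply the inverse of the zoom to each marker so it keeps a
                // constant apparent size while its position still tracks the page.
                let inverse = 1.0 / scrollView.zoomScale
                CATransaction.begin()
                CATransaction.setDisableActions(true) // no implicit animation while zooming
                for marker in markerGroupLayer.sublayers ?? [] {
                    marker.transform = CATransform3DMakeScale(inverse, inverse, 1)
                }
                CATransaction.commit()
                // If markers still look blurry or jagged when zoomed in, raising their
                // contentsScale along with the zoom can also help.
            }
        }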

    Read the article

  • Please suggest the MVVM design (interaction model of the view model) for the simple scenario discussed

    - by Jack
    Data layer: I have an Order class as an entity; this Order entity is my model object. An Order can be one of several types, say A, B, C or D. The Order class has common properties such as Name, Time of creation, etc., and, depending on the order type, there are additional fields that are not common.

    View layer: The view contains a main menu and a ListView. The main menu contains a drop-down menu button that is used to create an order based on the type selected from the drop-down; the drop-down contains the order types (A, B, C and D). There is a different user control for each order type. For example, if the user chooses to create an order of type A, a view with the input fields for type A is popped up. Hence, there are four user controls, one per order type. If the user selects option A from the drop-down, an order of type A is created, and so on. Below the menu is the ListView, which contains the list of orders created so far by the user. To edit a particular order, the user double-clicks its row in the ListView; based on the order type of that row, the view for that order type opens in edit mode. For example, if the user selects an order of type A from the ListView, the view for order type A opens in edit mode.

    Please suggest an interaction model for the view models in the scenario discussed above. Please excuse me if the query is very basic, since I am new to MVVM and WPF.

    Read the article

  • Implementation review for an MVC.NET app with custom membership

    - by mrjoltcola
    I'd like to hear whether anyone sees any problems with how I implemented the security in this Oracle-based MVC.NET app: security issues, concurrency issues or scalability issues.

    First, I implemented a CustomOracleMembershipProvider to handle the database interface to the membership store. I implemented a custom principal named User which implements IPrincipal and has a hashtable of Roles. I also created a separate class named AuthCache which holds a simple cache of User objects. Its purpose is simply to avoid return trips to the database, while decoupling the caching from both the web layer and the data layer (so I can share the cache between MVC.NET, WCF, etc.). The stock MVC.NET MembershipService uses the CustomOracleMembershipProvider (configured in web.config), and both MembershipService and FormsService share access to the singleton AuthCache.

    My AccountController.LogOn() method:

    1) Validates the user via the MembershipService.Validate() method, loads the roles into the User.Roles container and then caches the User in AuthCache.

    2) Signs the user into the web context via FormsService.SignIn(), which accesses the AuthCache (not the database) to get the User and sets HttpContext.Current.User to the cached User principal.

    In global.asax.cs, Application_AuthenticateRequest() is implemented. It decrypts the FormsAuthenticationTicket, looks up the AuthCache by ticket.Name (the username) and sets the principal via Context.User = the user from the AuthCache.

    So, in short, all these classes share the AuthCache, and for thread synchronization I have a lock() in the cache's store method, with no lock in the read method. The custom membership provider doesn't know about the cache, the MembershipService doesn't know about any HttpContext (so it could be used outside of a web app), and the FormsService doesn't use any custom methods besides accessing the AuthCache to set Context.User for the initial login, so it isn't dependent on a specific membership provider.

    The main thing I see now is that the AuthCache will share a single User object if a user logs in from multiple sessions. So I may have to change the key from just UserId to something else (maybe using something in the FormsAuthenticationTicket for the key?).

    Read the article

  • What should be the responsibility of a presenter here?

    - by Achu
    I have a 3-layer design (UI / BLL / DAL), where the UI is ASP.NET MVC. In my view I have a collection of products for a category, e.g. Product 1, Product 2, etc. A user is able to select or remove products from the view (by checking a check box) and finally save them as a collection when submitting these changes.

    With this 3-layer design, how should this product collection be saved? Where should the filtering of products (removal and addition) against the category object happen? Here are my options.

    (A) It is the responsibility of the controller. The pseudo-code would be:

        Find the products that the user selected or removed and compare with existing records.
        Add or delete that collection on the category object.
        Call SaveCategory(category); // BLL CALL

    Here the first two steps occur in the controller.

    (B) It is the responsibility of the BLL. The pseudo-code would be:

        Collect whatever products the user selected.
        SaveCategory(category, products); // BLL CALL

    Here it is up to SaveCategory (in the BLL) to decide which products should be removed from and added to the database. Thanks

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEfaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well, and as a result the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence.

    So, basically, we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job.

    So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see it as a cure-all. Is it possible to layer a NoSQL DB in between, or something?

    Read the article

  • How to Anti-Alias Layers in iPhoneOS

    - by Shannon A.
    We've had Reiner Knizia's Money out for a couple of months now. It's done pretty well, so we've been updating it as time allows. However, one thing continues to bug me: I've never been able to get my layered cards to anti-alias correctly. Here's a sample: cards that are laid straight are very clean, but whenever they're angled, the black lines around the cards get jagged. I've tried this with lines that are part of the artwork as well as lines drawn through drawRect:, and both do the same thing. I've tried the edgeAntialiasingMask and it doesn't do a thing as far as I can tell. I've tried masksToBounds on the sublayers set to both NO and YES.

    Right now my card is set up as a CALayer that has sub-CALayers for the front and the back, plus a few other things like a lightening mask and a darkening mask. Here are some snippets of the code:

        CArdLayer *theCardLayer = [CArdLayer layer];
        theCardLayer.edgeAntialiasingMask = kCALayerLeftEdge | kCALayerRightEdge | kCALayerBottomEdge | kCALayerTopEdge;
        theCardLayer.front = [CALayer layer];
        theCardLayer.front.edgeAntialiasingMask = kCALayerLeftEdge | kCALayerRightEdge | kCALayerBottomEdge | kCALayerTopEdge;
        theCardLayer.front.bounds = theCardLayer.bounds;
        theCardLayer.front.masksToBounds = YES;
        theCardLayer.front.contents = (id)[cardDrawing CGImage];
        [theCardLayer addSublayer:theCardLayer.front];
        // Etc ...

    Any ideas on how to make the cards actually anti-alias?
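
    Two workarounds that are commonly suggested for jagged edges on rotated layers are to leave a one-pixel transparent border around the layer's content (so the visible edge comes from blended pixels rather than the hard layer boundary), or to rasterize the composed layer at screen scale. A minimal Swift sketch of the rasterization idea follows; it is illustrative only, and cardLayer is a stand-in rather than the CArdLayer class above.

        import UIKit

        // Hypothetical stand-in for the question's card layer.
        let cardLayer = CALayer()

        // Rasterize the composed card at screen scale so rotated edges are smoothed.
        cardLayer.shouldRasterize = true
        cardLayer.rasterizationScale = UIScreen.main.scale

        // On iOS 7 and later, edge anti-aliasing can also be requested directly.
        cardLayer.allowsEdgeAntialiasing = true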

    Read the article

  • Need help with animation on iPhone

    - by Arun Ravindran
    I'm working on an animated clock application for the iPhone, and I have to pivot all 3 needles in the view, which I have set up in the following code:

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
        clockarm.layer.anchorPoint = CGPointMake(0.0, 0.0);
        [CATransaction commit];

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanFalse forKey:kCATransactionDisableActions];
        [CATransaction setValue:[NSNumber numberWithFloat:50.0] forKey:kCATransactionAnimationDuration];

        CABasicAnimation *animation;
        animation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
        animation.fromValue = [NSNumber numberWithFloat:-60.0];
        animation.toValue = [NSNumber numberWithFloat:2 * M_PI];
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
        animation.delegate = self;
        [clockarm.layer addAnimation:animation forKey:@"rotationAnimation"];
        animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
        [CATransaction commit];

    The problem is that it rotates only once, i.e. only 360 degrees, and then stops. I want to rotate the needles indefinitely. How would I do that?
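
    For reference, a continuous rotation is usually achieved by making the rotation animation repeat forever; with a cumulative byValue animation, each repeat adds another full turn. The Swift sketch below only illustrates that idea with a hypothetical handLayer; it is not the poster's code.

        import UIKit

        // Rotate a hypothetical clock-hand layer forever, one full turn per `duration`.
        func spinForever(_ handLayer: CALayer, duration: CFTimeInterval) {
            let spin = CABasicAnimation(keyPath: "transform.rotation.z")
            spin.byValue = 2 * Double.pi          // each repeat adds one full revolution
            spin.duration = duration
            spin.isCumulative = true              // keep accumulating instead of snapping back
            spin.repeatCount = .infinity
            spin.timingFunction = CAMediaTimingFunction(name: .linear)
            handLayer.add(spin, forKey: "rotationAnimation")
        }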

    Read the article

  • Using a CALayer in UITableViewCell

    - by Brian
    On each cell of my table view I want to have a bar that the user can slide out from the right; on the bar I plan to have icons. To do this I subclassed UITableViewCell. On the cell I have implemented drawRect:, and in there I have already drawn a gradient and background color for the cell. From there, I can create a CALayer, give it a frame and background color, and add it as a sublayer to the view's sublayers array. I can do all that, and it displays my layer on each UITableViewCell.

    I have added touch events to the cell so I can detect when the user touches it, and for testing I have made it so that when the user swipes, my CALayer gets wider. But the issue is that when the UITableView scrolls and reuses a cell whose CALayer has been widened, it doesn't recreate the CALayer. I have tried [myLayer setNeedsDisplay] and used the drawLayer:inContext: method of its delegate, and it doesn't get called. I have also tried calling setNeedsDisplay on the cell in my UITableViewController, hoping that would cause a redraw, but it doesn't work. I'm not sure what I'm missing. I am new to Core Graphics and Core Animation; I have read through the Core Animation Programming Guide, but I'm assuming I'm missing something. Any help would be great.
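
    Since reused cells keep whatever state they were last left in, a common pattern is to reset any custom layer geometry in the cell's prepareForReuse (setNeedsDisplay only triggers redrawing of content, not relayout). A small Swift sketch of that pattern is below; the slideOutLayer and collapsedWidth names are hypothetical, not from the question.

        import UIKit

        class SlidingBarCell: UITableViewCell {
            // Hypothetical custom layer and its default (collapsed) width.
            let slideOutLayer = CALayer()
            let collapsedWidth: CGFloat = 0

            override func prepareForReuse() {
                super.prepareForReuse()
                // Put the bar back to its default size before the cell is shown again.
                slideOutLayer.frame.size.width = collapsedWidth
            }
        }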

    Read the article

  • Finding distance to the closest point in a point cloud on a uniform grid

    - by erik
    I have a 3D grid of size AxBxC with equal distance, d, between the points in the grid. Given a number of points, what is the best way of finding, for each grid point, the distance to the closest point in the point cloud (every grid point should end up holding the distance to its closest cloud point), given the assumptions below?

    Assume that A, B and C are quite big in relation to d, giving a grid of maybe 500x500x500, and that there will be around 1 million points. Also assume that if the distance to the nearest point exceeds a distance D, we do not care about the nearest-point distance and it can safely be set to some large number (D is maybe 2 to 10 times d).

    Since there will be a great number of grid points and points to search from, a simple exhaustive search:

        for each grid point:
            for each point:
                if distance between points < minDistance:
                    minDistance = distance between points

    is not a good alternative. I was thinking of doing something along these lines:

        create a container of size A*B*C where each element holds a container of points
        for each point:
            define indexX = round((point position x - grid min position x)/d) // same for y and z
            add the point to the correct index of the container
        for each grid point:
            search the container of that grid point and find the closest point
            if no points in container and D > 0.5d:
                search the 26 container indices nearest to the grid point for a closest point
            .. continue with the next layer until a point is found or the distance to that layer is greater than D

    Basically: put the points in buckets and do a radial search outwards until a point is found for each grid point. Is this a good way of solving the problem, or are there better/faster ways? A solution which is good for parallelisation is preferred.
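
    For what it's worth, here is a compact Swift sketch of the bucketing idea described above (cell-sized buckets plus an outward, shell-by-shell search capped at D). It is only an illustration under the stated assumptions: the names and the early-exit condition are mine, it assumes the grid origin is at (0, 0, 0), and it favours clarity over speed.

        import Foundation

        struct Point3 { var x, y, z: Double }

        // Distance field on an a x b x c grid with spacing d: each grid point gets the
        // distance to its nearest cloud point, capped at maxDistance (the "D" above).
        func nearestDistanceField(points: [Point3], a: Int, b: Int, c: Int,
                                  d: Double, maxDistance: Double) -> [Double] {
            func cell(_ i: Int, _ j: Int, _ k: Int) -> Int { (i * b + j) * c + k }
            func clamp(_ v: Int, _ hi: Int) -> Int { min(max(v, 0), hi - 1) }

            // Bucket every cloud point into the grid cell it falls closest to.
            var buckets = [[Point3]](repeating: [], count: a * b * c)
            for p in points {
                buckets[cell(clamp(Int((p.x / d).rounded()), a),
                             clamp(Int((p.y / d).rounded()), b),
                             clamp(Int((p.z / d).rounded()), c))].append(p)
            }

            let maxShell = Int(ceil(maxDistance / d))
            var field = [Double](repeating: maxDistance, count: a * b * c)

            for i in 0..<a { for j in 0..<b { for k in 0..<c {
                let g = Point3(x: Double(i) * d, y: Double(j) * d, z: Double(k) * d)
                var best = maxDistance
                // Search outward shell by shell; shell s can only contain points at
                // least (s - 1) * d away, so stop once that exceeds the best so far.
                for s in 0...maxShell {
                    if Double(s - 1) * d > best { break }
                    for di in -s...s { for dj in -s...s { for dk in -s...s {
                        guard max(abs(di), abs(dj), abs(dk)) == s else { continue } // surface only
                        let (ni, nj, nk) = (i + di, j + dj, k + dk)
                        guard ni >= 0, ni < a, nj >= 0, nj < b, nk >= 0, nk < c else { continue }
                        for p in buckets[cell(ni, nj, nk)] {
                            let dist = ((p.x - g.x) * (p.x - g.x)
                                      + (p.y - g.y) * (p.y - g.y)
                                      + (p.z - g.z) * (p.z - g.z)).squareRoot()
                            if dist < best { best = dist }
                        }
                    }}}
                }
                field[cell(i, j, k)] = best
            }}}
            return field
        }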

    Read the article

  • Development process for an embedded project with significant hardware change

    - by pierr
    Hi, I have a good idea of the Agile development process, but it seems it does not fit well with an embedded project that has significant hardware changes. I will describe below what we are currently doing (in an ad-hoc way; no defined process yet). The changes are divided into three categories, and a different process is used for each:

    1. Complete hardware change. Example: use a different video codec IP.
       a) Study the new IP
       b) RTL/FPGA simulation
       c) Implement the legacy interface; go to b)
       d) Wait until hardware (tape-out) is ready
       e) Test on the real hardware

    2. Hardware improvement. Example: enhance the image display quality by improving the underlying algorithm.
       a) RTL/FPGA simulation
       b) Wait for the hardware and test on the hardware

    3. Minor change. Example: only change the hardware register mapping.
       a) Wait for the hardware and test on the hardware

    The worry is that we don't seem to have much control over, or confidence in, software maturity for a hardware change, as the bring-up schedule is always very tight and the customer wants a seamless transition when updating to a new version of the hardware.

    How do you manage this kind of hardware change? Do you solve it with a Hardware Abstraction Layer (HAL)? Do you have automated tests for the HAL layer? How do you test when the hardware platform is not even ready? Do you have a well-documented process for this kind of change? Thanks for your insight.

    Read the article

  • Should the Entity Framework + self-tracking entities be saving me time?

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight-to-WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the "update from database" option on the Entity Framework model.

    Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking Entity Framework business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I did on my previous project, where I used LINQ to SQL as the starting point for rolling my own business objects. Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?

    Read the article

  • Cannot show scene with Cocos2D when using UITabBarController

    - by gw1921
    Hi, I'm new to Cocos2D, but I'm having serious issues when trying to load a Cocos2D scene in one of several UIViewControllers mixed with other normal UIKit UIViewControllers. My project uses a UITabBarController to manage four view controllers. Three are normal UIKit view controllers, while for one of them I want to use Cocos2D (to draw some sprites and animate them). The following is what I've done so far.

    In the applicationDidFinishLaunching method, I initialize Cocos2D to use the window and attach the tab bar controller's view:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            if( ! [CCDirector setDirectorType:CCDirectorTypeDisplayLink] ) {
                [CCDirector setDirectorType:CCDirectorTypeDefault];
            }
            [[CCDirector sharedDirector] setPixelFormat:kPixelFormatRGBA8888];
            [CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
            [[CCDirector sharedDirector] setDisplayFPS:YES];
            [[CCDirector sharedDirector] attachInView: window];
            [[[CCDirector sharedDirector] openGLView] addSubview: tabBarController.view];
            [window makeKeyAndVisible];
        }

    Then, in the third UIViewController (where I want to use Cocos2D), I do this (note: the view is loaded from a NIB file which has a blank UIView and nothing else):

        - (void)viewDidLoad {
            [super viewDidLoad];
            // 'scene' is an autorelease object.
            CCScene *myScene = [CCScene node];
            // 'layer' is an autorelease object.
            MyLayer *myLayer = [MyLayer node];
            // add layer as a child to scene
            [myScene addChild: myLayer];
            [[CCDirector sharedDirector] runWithScene: myScene];
        }

    However, all I see is a blank white view and nothing else. If I call the following in viewDidLoad:

        [[CCDirector sharedDirector] attachInView: self.view];

    the app crashes, complaining that CCDirector is already attached. Please help!

    Read the article

  • Validation in n-tier ASP.NET MVC applications

    - by sTodorov
    Dear Stack Overflow gurus, I am looking for some practical/theoretical information regarding best practices for validation in ASP.NET MVC n-tier applications. I am working on a .NET application divided into the following layers:

    UI: MVC 3
    BLL: all business rules; decoupled from the data access and UI layers through interfaces
    DAL: data access with the repository pattern, EF4 and POCOs

    Now, I am looking for a nice, clean and transparent way to specify my validation rules. Here are some thoughts on the matter so far: UI validation should only be responsible for user input and its validity, while BLL validation should handle the validity of the data with respect to the application's business rules.

    My main concern is how to bind the BLL and UI validation in the most efficient way. One thing I would like to avoid is having the UI walk a collection of validation results and manually add errors to the ModelState. Furthermore, I do not want to pass the ModelState to the BLL to be populated in there.

    I will appreciate any thoughts on the matter. P.S. Should this question be marked as a discussion?

    Read the article

  • How to animate the drawing of a CGPath?

    - by Jordan Kay
    I am wondering if there is a way to do this using Core Animation. Specifically, I am adding a sublayer to a layer-backed custom NSView and setting its delegate to another custom NSView. That class's drawInRect method draws a single CGPath:

        - (void)drawInRect:(CGRect)rect inContext:(CGContextRef)context {
            CGContextSaveGState(context);
            CGContextSetLineWidth(context, 12);
            CGMutablePathRef path = CGPathCreateMutable();
            CGPathMoveToPoint(path, NULL, 0, 0);
            CGPathAddLineToPoint(path, NULL, rect.size.width, rect.size.height);
            CGContextBeginPath(context);
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    My desired effect is to animate the drawing of this line. That is, I'd like the line to actually "stretch" in an animated way. It seems like there would be a simple way to do this using Core Animation, but I haven't been able to come across one. Do you have any suggestions as to how I could accomplish this goal?
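
    One way this kind of effect is commonly achieved is to put the path in a CAShapeLayer and animate its strokeEnd from 0 to 1, which makes the stroke appear to grow along the path. The short Swift sketch below only illustrates that idea; the function and the layer it builds are stand-ins, not the poster's code.

        import Cocoa

        // Build a shape layer whose stroke can be "drawn" progressively.
        func makeAnimatedLine(from start: CGPoint, to end: CGPoint) -> CAShapeLayer {
            let path = CGMutablePath()
            path.move(to: start)
            path.addLine(to: end)

            let shape = CAShapeLayer()
            shape.path = path
            shape.lineWidth = 12
            shape.strokeColor = NSColor.black.cgColor
            shape.fillColor = nil

            // Animate strokeEnd 0 -> 1 so the line appears to stretch out.
            let draw = CABasicAnimation(keyPath: "strokeEnd")
            draw.fromValue = 0
            draw.toValue = 1
            draw.duration = 1.0
            shape.strokeEnd = 1           // final (model) value
            shape.add(draw, forKey: "draw")
            return shape
        }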

    Read the article

  • Problem with Unit testing of ASP.NET project (NullReferenceException when running the test)

    - by Alex
    Hi, I'm trying to create a bunch of MS Visual Studio unit tests for my n-tiered web app, but for some reason I can't run those tests and I get the following error: "Object reference not set to an instance of an object". What I'm trying to do is test my data access layer, where I use a LINQ data context class to execute a certain function and return a result. However, during debugging I found out that all the tests fail as soon as they get to the LINQ data context class, and it has something to do with the connection string, but I can't figure out what the problem is.

    The debugging of the tests fails here (on the second line):

        public EICDataClassesDataContext() :
            base(global::System.Configuration.ConfigurationManager.ConnectionStrings["EICDatabaseConnectionString"].ConnectionString, mappingSource)
        {
            OnCreated();
        }

    And my test is as follows:

        [TestMethod()]
        public void OnGetCustomerIDTest()
        {
            FrontLineStaffDataAccess target = new FrontLineStaffDataAccess(); // TODO: Initialize to an appropriate value
            string regNo = "jonh"; // TODO: Initialize to an appropriate value
            int expected = 10; // TODO: Initialize to an appropriate value
            int actual;
            actual = target.OnGetCustomerID(regNo);
            Assert.AreEqual(expected, actual);
        }

    The method which I call from the DAL is:

        public int OnGetCustomerID(string regNo)
        {
            using (LINQDataAccess.EICDataClassesDataContext dataContext = new LINQDataAccess.EICDataClassesDataContext())
            {
                IEnumerable<LINQDataAccess.GetCustomerIDResult> sProcCustomerIDResult = dataContext.GetCustomerID(regNo);
                int customerID = sProcCustomerIDResult.First().CustomerID;
                return customerID;
            }
        }

    So basically everything fails as soon as it reaches the first line of the DAL method and tries to instantiate the LINQ data access class. I've spent around 10 hours trying to troubleshoot the problem with no result. I would really appreciate any help!

    UPDATE: Finally, I've fixed this! :) I don't know why, but for some reason the connection to my database in the app.config file was as follows:

        AttachDbFilename=|DataDirectory|\EICDatabase.MDF

    So what I did is change the path: instead of |DataDirectory| I put the actual path where my MDF file sits, i.e. C:\Users\1\Documents\Visual Studio 2008\Projects\EICWebSystem\EICWebSystem\App_Data\EICDatabase.mdf. After I had done that, it worked! But it's still not entirely clear to me what the problem was... probably an incorrect path to the database? The web.config of my ASP.NET project contains the |DataDirectory|\EICDatabase.MDF path, though.

    Read the article

  • JavaScript working in Chrome but not in Internet Explorer

    - by Greg
    Hello, I am writing this code in HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <script language="javascript" type="text/javascript">
                function setVisibility(id, visibility) {
                    document.getElementById(id).style.display = visibility;
                }
            </script>
            <title>Welcome to the memory game</title>
        </head>
        <body>
            <h1>Welcome to the memory game!</h1>
            <input type="button" name="type" value='Show Layer' onclick="setVisibility('sub3', 'inline');"/>
            <input type="button" name="type" value='Hide Layer' onclick="setVisibility('sub3', 'none');"/>
            <div id="sub3">Message Box</div>
        </body>
        </html>

    It is supposed to make the div disappear and reappear, and it works in Chrome but not in Internet Explorer. Does anyone have any idea how I can make it work in Internet Explorer (I tried allowing blocked content when the message about ActiveX appears)? Thanks, Greg

    Read the article

  • Ninject InThreadScope Binding

    - by e36M3
    I have a Windows service that contains a file watcher that raises events when a file arrives. When an event is raised, I will be using Ninject to create business layer objects, which internally hold a reference to an Entity Framework context that is also injected via Ninject.

    In my web applications I always used InRequestScope for the context; that way, within one request, all business layer objects work with the same Entity Framework context. In my current Windows service scenario, would it be sufficient to switch the Entity Framework context binding to an InThreadScope binding? In theory, when an event handler in the service fires, it executes on some thread; if another file arrives simultaneously, it will be handled on a different thread. Therefore the two events will not share an Entity Framework context; in essence, just like two different HTTP requests on the web.

    One thing that bothers me is the destruction of these thread-scoped objects. The Ninject wiki says:

    .InThreadScope() - One instance of the type will be created per thread.
    .InRequestScope() - One instance of the type will be created per web request, and will be destroyed when the request ends.

    Based on this, I understand that InRequestScope objects will be destroyed (garbage collected?) when, or at some point after, the request ends. It says nothing, however, about how InThreadScope objects are destroyed. To get back to my example: when the file watcher event handler method completes and the thread goes away (back to the thread pool?), what happens to the InThreadScope-d objects that were injected?

    EDIT: One thing is clear now: when using InThreadScope(), the object is not destroyed when the handler for the file watcher exits. I was able to reproduce this by dropping many files into the folder; eventually I got the same thread id, which resulted in exactly the same Entity Framework context as before, so it's definitely not sufficient for my application. In this case, a file that came in 5 minutes later could be using a stale context that had been assigned to the same thread before.

    Read the article

  • New CATransform3DMakeRotation deletes old transformation?!

    - by david
    I added a CATransform3DMakeRotation to a layer. When I add another one, it deletes the old one. The first one:

        [UIView beginAnimations:@"rotaty" context:nil];
        [UIView setAnimationDuration:0.5];
        [UIView setAnimationDelegate:self];
        CGAffineTransform transform = CGAffineTransformMakeRotation(-3.14);
        kuvert.transform = CGAffineTransformRotate(transform, DegreesToRadians(134));
        kuvert.center = CGPointMake(kuvert.center.x-70, kuvert.center.y+100);
        [UIView commitAnimations];

    and the second one:

        CABasicAnimation *topAnim = [CABasicAnimation animationWithKeyPath:@"transform"];
        topAnim.duration=1;
        topAnim.repeatCount=0;
        topAnim.fromValue = [NSValue valueWithCATransform3D:CATransform3DMakeRotation(0.0, 0, 0, 0)];
        float f = DegreesToRadians(180); // -M_PI/1;
        topAnim.toValue=[NSValue valueWithCATransform3D:CATransform3DMakeRotation(f, 0,1, 0)];
        topAnim.delegate = self;
        topAnim.removedOnCompletion = NO;
        topAnim.fillMode = kCAFillModeBoth;
        [topAnim setValue:@"flippy" forKey:@"AnimationName"];
        [[KuvertLasche layer] addAnimation:topAnim forKey:@"flippy"];

    The second one resets the view and applies itself after that. How do I fix this?
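
    As background, a "make" transform replaces whatever transform the layer already has; to keep an earlier rotation, the new rotation has to be composed onto the existing transform (or both rotations expressed as one combined transform). The tiny Swift sketch below shows the general composition idea with a hypothetical layer; it is not a drop-in fix for the code above.

        import QuartzCore

        // Compose a new rotation onto whatever transform the layer already carries,
        // instead of overwriting it with CATransform3DMakeRotation(...).
        func addRotation(to layer: CALayer, angle: CGFloat, x: CGFloat, y: CGFloat, z: CGFloat) {
            layer.transform = CATransform3DRotate(layer.transform, angle, x, y, z)
        }

        // e.g. addRotation(to: someLayer, angle: .pi, x: 0, y: 1, z: 0)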

    Read the article

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have a browser-based management application with many users, so obviously there is a user management module within it. I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that otherwise the design is poor.

    What he meant was: if a user wants to use the application, then a user should be created in the user table AND as a user account on the database server as well. So if I have 50 users using my application, then I should have 50 database server logins.

    I personally think having just one user account on the database server for this database is enough; just grant that user the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed in the database table, as they are more related to the application layer. I don't see, or agree, that there is a need to create a database server account for every user created in the application's user table; a single database server user should be enough to handle all the queries sent by the application.

    I'd really like to hear some suggestions and opinions on whether I'm missing something. Performance or security issues? Thank you very much.

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc.). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion?

    Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc.
    Transport devices: risks include ISPs and Internet backbone operators sniffing data.
    End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth.
    Remote server: many uncontrollable vulnerabilities, including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more.

    My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • Forwarding first responder to UIScrollView unsuccessful

    - by Winston
    Hello, I am trying to create an image cropping box whereby the underlying image can be zoomed and scrolled while the cropping "maskview" layer stays the same and its corners can be dragged in any direction. To achieve this, the image view is added to a UIScrollView, and that is added to the SelectionViewController's view. I then add the cropping "maskview" layer to the SelectionViewController's view as well. I did this instead of adding it to the scroll view so that the cropping mask won't be expanded when the underlying image expands.

        @interface SelectionViewController : UIViewController <UIScrollViewDelegate> {
            UIImageView *photoview;   // Added to self.scrollview
            MaskedView *maskedview;   // Added to self.view
            UIScrollView *scrollview; // Added to self.view
        }

    The maskview has first responder status and responds accordingly. However, the only event I am interested in for the maskview is whether any of the corners are dragged (this works fine). The problem I want to solve is two-fold:

    1) If it is a pinch or zoom gesture, I am trying to forward the responder to the UIScrollView using [self.nextResponder touchesMoved:touches withEvent:event]; but the scroll view does not seem to respond. I have confirmed that self.nextResponder after the maskedview is the SelectionViewController's view itself; from there, the UIScrollView does not seem to respond. It does respond fine if I remove the cropping maskedview.

    2) If I zoom in on the underlying image while the cropping maskview rectangle stays the same, I want to crop the actual zoomed image. Take, say, the upper-left corner of the crop box: how can I find what its coordinates translate to in the underlying UIImageView now that the image has been zoomed?

    Any insight into either of these questions would be appreciated. Thank you, Winston
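
    For the second part, converting a crop-box corner into image coordinates is often done by converting the point from the mask view into the zoomed image view (UIView's convert(_:to:) accounts for the scroll view's zoom transform and content offset) and then scaling from view points to image pixels. The Swift sketch below is only an illustration of that approach; the view names mirror the ones in the question but are otherwise hypothetical, and it assumes the image fills the image view.

        import UIKit

        // Map a corner of the crop box (in maskView coordinates) to a pixel
        // position in the underlying image, taking the current zoom into account.
        func imagePoint(forCropCorner corner: CGPoint,
                        maskView: UIView,
                        photoView: UIImageView) -> CGPoint? {
            guard let image = photoView.image else { return nil }
            // convert(_:to:) walks the view hierarchy, so the scroll view's
            // zoom scale and content offset are already folded in here.
            let pointInPhotoView = maskView.convert(corner, to: photoView)
            // Scale from view points to image pixels (assumes a scale-to-fill mapping).
            let scaleX = image.size.width / photoView.bounds.width
            let scaleY = image.size.height / photoView.bounds.height
            return CGPoint(x: pointInPhotoView.x * scaleX,
                           y: pointInPhotoView.y * scaleY)
        }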

    Read the article

  • Using Autofac in a multi-layered architecture

    - by Kamyar
    I'm fairly new to the DI/IoC concept and would like to use Autofac in a 3-layered ASP.NET WebForms application:

    UI layer: an ASP.NET WebForms website.
    BLL: business logic layer, which calls the repositories in the DAL.
    DAL: .EDMX file (entity model) and ObjectContext, with repository classes that abstract the CRUD operations for each entity.
    Entities: the POCO entities, persistence ignorant, generated by Microsoft's ADO.NET POCO Entity Generator.

    I have asked a more general question here. Basically, I'd like to create an ObjectContext per HttpContext in my DAL, but I don't want to add a reference to the DAL in the UI or access HttpContext in the DAL directly. I guess this is where IoC tools come into play. The answer to my previous question is a very good example of using Castle Windsor. I'd like to use Autofac as my IoC tool and don't know how to achieve this: how do I access the DAL in Application_Start to register the component when I don't want to reference it in my UI, what are the proper references to be able to use the DAL component in the BLL with Autofac, and should I register the BLL as a component with Autofac too?

    Sorry, folks, for not providing an explicit question and requesting a kind of working example, but I'm very unfamiliar with the whole IoC concept and I don't think I can manage to apply it in my current time-limited project without some guidance.

    Read the article

  • How to simulate inner exception in C++

    - by Siva Chandran
    Basically, I want to simulate .NET's Exception.InnerException in C++. I want to catch an exception from the bottom layer, wrap it in another exception, and throw that again to the upper layer. The problem is that I don't know how to wrap the caught exception inside another exception.

        #include <exception>
        #include <iostream>

        struct base_exception : public std::exception
        {
            std::exception& InnerException;

            base_exception() : InnerException(???) { } // <---- what to initialize with
            base_exception(std::exception& innerException) : InnerException(innerException) { }
        };

        struct func1_exception : public base_exception
        {
            const char* what() const throw() { return "func1 exception"; }
        };

        struct func2_exception : public base_exception
        {
            const char* what() const throw() { return "func2 exception"; }
        };

        void func2()
        {
            throw func2_exception();
        }

        void func1()
        {
            try
            {
                func2();
            }
            catch(std::exception& e)
            {
                throw func2_exception(e); // <--- is this correct? will the temporary object stay alive?
            }
        }

        int main(void)
        {
            try
            {
                func1();
            }
            catch(base_exception& e)
            {
                std::cout << "Got exception" << std::endl;
                std::cout << e.what();
                std::cout << "InnerException" << std::endl;
                std::cout << e.InnerException.what(); // <---- how to make sure it has an inner exception?
            }
        }

    In the code listing above, I am not sure how to initialize the InnerException member when there is no inner exception. I am also not sure whether the temporary object thrown from func1 will survive even after func2's throw.

    Read the article

  • C# Delegate under the hood question.

    - by Ted
    Hi guys, I was doing some digging into delegate variance after reading the following question on SO: "delegate-createdelegate-and-generics-error-binding-to-target-method" (sorry, not allowed to post more than one hyperlink as a newbie here!). I found a very nice bit of code from Barry Kelly at https://www.blogger.com/comment.g?blogID=8184237816669520763&postID=2109708553230166434. Here it is (in a sugared-up form :-):

        using System;

        namespace ConsoleApplication4
        {
            internal class Base { }
            internal class Derived : Base { }

            internal delegate void baseClassDelegate(Base b);
            internal delegate void derivedClassDelegate(Derived d);

            internal class App
            {
                private static void Foo1(Base b) { Console.WriteLine("Foo 1"); }
                private static void Foo2(Derived b) { Console.WriteLine("Foo 2"); }

                private static T CastDelegate<T>(Delegate src) where T : class
                {
                    return (T) (object) Delegate.CreateDelegate(
                        typeof (T), src.Target, src.Method, true); // throw on fail
                }

                private static void Main()
                {
                    baseClassDelegate a = Foo1; // works fine
                    derivedClassDelegate b = Foo2; // works fine

                    b = a.Invoke; // the easy way to assign delegate using variance, adds layer of indirection though
                    b(new Derived());

                    b = CastDelegate<derivedClassDelegate>(a); // the hard way, avoids indirection
                    b(new Derived());
                }
            }
        }

    I understand all of it except this one (seemingly very simple) line:

        b = a.Invoke; // the easy way to assign delegate using variance, adds layer of indirection though

    Can anyone tell me: how is it possible to refer to Invoke without passing the parameter required by the static function? What is going on under the hood when you assign the result of that expression? And what does Barry mean by the extra layer of indirection (in his comment)?

    Read the article

  • Business Objects - Containers or functional?

    - by Walter
    Where I work, we've gone back and forth on this subject a number of times and are looking for a sanity check. Here's the question: should business objects be data containers (more like DTOs), or should they also contain logic that can perform some functionality on that object?

    Example: take a customer object. It probably contains some common properties (Name, Id, etc.); should that customer object also include functions (Save, Calc, etc.)?

    One line of reasoning says to separate the object from the functionality (single responsibility principle) and put the functionality in a business logic layer or object. The other line of reasoning says no: if I have a customer object, I just want to call Customer.Save and be done with it. Why do I need to know how to save a customer if I'm consuming the object?

    Our last two projects have had the objects separated from the functionality, but the debate has been raised again on a new project. Which makes more sense?

    EDIT: These results are very similar to our debates. One vote to one side or another completely changes the direction. Does anyone else want to add their 2 cents?

    EDIT: Even though the answer sampling is small, it appears that the majority believe functionality in a business object is acceptable as long as it is simple, but persistence is best placed in a separate class/layer. We'll give this a try. Thanks for everyone's input...

    Read the article
