Search Results

Search found 496 results on 20 pages for 'lifetime'.

Page 14/20

  • Will wear induced by turning computers off in the evening be offset by energy savings?

    - by sharptooth
    I'm asking this here because this is primarily a huge office scenario and administrators will more likely have the answer I'm looking for. Employees' desktop computers can either be left turned on for the whole night or switched off in the evening and turned back on in the morning. The latter will surely save energy. At the same time, turning on and off is very harmful to the equipment - hardware often breaks specifically when turned on. Both energy and hardware replacements cost money. With energy it's quite obvious - you pay every month according to what your power meter shows. With hardware replacements it's worse - you need qualified staff to quickly diagnose the problems, and once something breaks the affected employee will have to wait for some time while his computer is fixed/replaced and the data is recovered. So the company has to choose between saving money on energy and saving money on computer maintenance and lost hours. Such decisions must be well thought through. Is there any detailed study of how turning computers off each evening affects their lifetime?

    Read the article

  • Hard drive degradation from large memory usage and paging files?

    - by Stephen R
    I've had a few questions regarding computer degradation going through my head for a while and haven't found many good resources for researching them. 1) First off, when is the virtual RAM/paging file on a hard drive used by Windows? Is it used when the RAM is full? Or does it use the Virtual RAM/paging file as intermediate caching between the RAM and actual hard drive space all the time? 2) If I were to run many applications on my computer at the same time and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the virtual RAM/paging file than if I were to have fewer programs running? Just to note, the RAM never fills up on my computer but it is used heavily. 3) By extension of question 2, if the virtual RAM/paging file is used more heavily, would that result in rapid hard drive degradation? I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs among other programs which will eat up 40% of my memory typically. Over time my computer will slow down, browsers start crashing, programs start seizing up or crashing themselves, and eventually the computer becomes essentially unusable. I have been trying to rack my brain to come up with a solution other than to purchase a new PC to have it die on me in the next couple years as well. This is the only thought that has come to mind that might have a simple hardware fix...Windows ReadyBoost...Maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.

    Read the article

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy etc. but the drawback that both connections max out at about 6Mb/s. So one individual outbound http request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine over complex web pages and utilises both external connections well. However, single-http-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking, surely I can set up some kind of proxy server to direct all my http requests to. For each incoming http request, the proxy server will issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client's request. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one request. How can I achieve this http request deconstruction/reconstruction? Or am I just barking mad?
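
    For what it's worth, the byte-range half of this is easy to prototype in .NET: HttpWebRequest.AddRange issues the Range header, and the proxy would be responsible for deciding how to split the content and how to reassemble the slices. A minimal sketch (the class name and the split strategy are mine, not part of the question):

        using System.IO;
        using System.Net;

        static class RangeFetcher
        {
            // Fetches one byte range of a resource; the caller decides how to split
            // the full length across connections and how to reassemble the slices.
            public static byte[] FetchRange(string url, long from, long to)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.AddRange(from, to);                 // sends "Range: bytes=from-to"
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                using (var buffer = new MemoryStream())
                {
                    stream.CopyTo(buffer);                  // expects 206 Partial Content
                    return buffer.ToArray();
                }
            }
        }

    Note that the origin server has to support range requests (Accept-Ranges / 206 Partial Content); when it doesn't, the proxy would have to fall back to a single ordinary request.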

    Read the article

  • CA2000 passing object reference to base constructor in C#

    - by Timothy
    I receive a warning when I run some code through Visual Studio's Code Analysis utility which I'm not sure how to resolve. Perhaps someone here has come across a similar issue, resolved it, and is willing to share their insight. I'm programming a custom-painted cell used in a DataGridView control. The code resembles: public class DataGridViewMyCustomColumn : DataGridViewColumn { public DataGridViewMyCustomColumn() : base(new DataGridViewMyCustomCell()) { } It generates the following warning: CA2000 : Microsoft.Reliability : In method 'DataGridViewMyCustomColumn.DataGridViewMyCustomColumn()' call System.IDisposable.Dispose on object 'new DataGridViewMyCustomCell()' before all references to it are out of scope. I understand it is warning me that DataGridViewMyCustomCell (or a class that it inherits from) implements the IDisposable interface and the Dispose() method should be called to clean up any resources claimed by DataGridViewMyCustomCell when it is no longer needed. The examples I've seen on the internet suggest a using block to scope the lifetime of the object and have the system automatically dispose it, but base isn't recognized when moved into the body of the constructor so I can't write a using block around it... which I'm not sure I'd want to do anyway, since wouldn't that instruct the runtime to free the object, which could still be used later inside the base class? My question then, is the code okay as is? Or, how could it be refactored to resolve the warning? I don't want to suppress the warning unless it is truly appropriate to do so.
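
    For this particular pattern (a template cell handed straight to the base DataGridViewColumn constructor, which then keeps it as the column's CellTemplate), an in-source suppression with a written justification is one commonly seen resolution, since there is no scope in which the constructor could safely dispose the cell itself. A sketch of what that looks like - whether suppression is "truly appropriate" here remains a judgment call, and the cell stub below is only there to make the sketch self-contained:

        using System.Diagnostics.CodeAnalysis;
        using System.Windows.Forms;

        public class DataGridViewMyCustomCell : DataGridViewTextBoxCell { }   // stand-in for the question's cell

        public class DataGridViewMyCustomColumn : DataGridViewColumn
        {
            // The new cell is handed straight to the base class, which keeps it
            // alive as the column's CellTemplate, so there is no point at which
            // this constructor could dispose it.
            [SuppressMessage("Microsoft.Reliability",
                "CA2000:DisposeObjectsBeforeLosingScope",
                Justification = "Ownership of the cell transfers to the base DataGridViewColumn.")]
            public DataGridViewMyCustomColumn()
                : base(new DataGridViewMyCustomCell())
            {
            }
        }

    Remember that the CODE_ANALYSIS compilation symbol has to be defined for in-source suppressions to be honoured during analysis.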

    Read the article

  • WPF - LayoutUpdated event firing repeatedly

    - by Drew Noakes
    I've been adding a bit of animation to my WPF application. Thanks to Dan Crevier's unique solution to animating the children of a panel combined with the awesome WPF Penner animations, it turned out to be fairly straightforward to make one of my controls look great and have its children move about with some nice animation. Unfortunately this all comes with a performance overhead. I'm happy to have the performance hit when items are added/removed or the control is resized, but it seems that this perf hit occurs consistently throughout the application's lifetime, even when items are completely static. The PanelLayoutAnimator class uses an attached property to hook the UIElement.LayoutUpdated event. When this event fires, render transforms are animated to cause the children to glide to their new positions. Unfortunately it seems that the LayoutUpdated event fires every second or so, even when nothing is happening in the application (at least I don't think my code's doing anything -- the app doesn't have focus and the mouse is steady.) As the reason for the event is not immediately apparent to the event handler, all children of the control have to be reevaluated. This event is being called about once a second when idle. The frequency increases when actually using the app. So my question is, how can I improve the performance here? Any answer that assists would be appreciated, but I'm currently stuck on these sub-questions: What causes the LayoutUpdated event to fire so frequently? Is this supposed to happen, and if not, how can I find out why it's firing and curtail it? Is there a more convenient way within the handler to know whether something has happened that might have moved children? If so, I could bail out early and avoid the overhead of looping over each child. For now I will work around this issue by disabling animation when there are more than N children in the panel.
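
    On the last sub-question, one cheap way to answer "did anything actually move?" is to cache each child's last known offset; the check still loops over the children, but it only does a position lookup per child, so the handler can skip the expensive animation work on the once-a-second idle firings. A sketch, assuming the handler can reach the panel it is animating:

        using System.Collections.Generic;
        using System.Windows;
        using System.Windows.Controls;

        // Sketch: remember where each child was last time, so an idle LayoutUpdated
        // pass that moved nothing can be detected cheaply and skipped.
        class ChildPositionCache
        {
            private readonly Dictionary<UIElement, Point> _lastOffsets =
                new Dictionary<UIElement, Point>();

            // Returns true only if at least one child has moved since the previous call.
            public bool AnythingMoved(Panel panel)
            {
                bool moved = false;
                foreach (UIElement child in panel.Children)
                {
                    Point current = child.TranslatePoint(new Point(0, 0), panel);
                    Point previous;
                    if (!_lastOffsets.TryGetValue(child, out previous) || previous != current)
                    {
                        moved = true;
                        _lastOffsets[child] = current;
                    }
                }
                return moved;
            }
        }

    The LayoutUpdated handler would call AnythingMoved first and return when it reports false, leaving the render-transform animation work for the passes that actually need it.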

    Read the article

  • Single player 'pong' game

    - by Jam
    I am just starting out learning pygame and livewires, and I'm trying to make a single-player pong game, where you just hit the ball, and it bounces around until it passes your paddle (located on the left side of the screen and controlled by the mouse), which makes you lose. I have the basic code, but the ball doesn't stay on the screen, it just flickers and doesn't remain constant. Also, the paddle does not move with the mouse. I'm sure I'm missing something simple, but I just can't figure it out. Help please! Here's what I have: from livewires import games import random games.init(screen_width=640, screen_height=480, fps=50) class Paddle(games.Sprite): image=games.load_image("paddle.bmp") def __init__(self, x=10): super(Paddle, self).__init__(image=Paddle.image, y=games.mouse.y, left=10) self.score=games.Text(value=0, size=25, top=5, right=games.screen.width - 10) games.screen.add(self.score) def update(self): self.y=games.mouse.y if self.top<0: self.top=0 if self.bottom>games.screen.height: self.bottom=games.screen.height self.check_collide() def check_collide(self): for ball in self.overlapping_sprites: self.score.value+=1 ball.handle_collide() class Ball(games.Sprite): image=games.load_image("ball.bmp") speed=5 def __init__(self, x=90, y=90): super(Ball, self).__init__(image=Ball.image, x=x, y=y, dx=Ball.speed, dy=Ball.speed) def update(self): if self.right>games.screen.width: self.dx=-self.dx if self.bottom>games.screen.height or self.top<0: self.dy=-self.dy if self.left<0: self.end_game() self.destroy() def handle_collide(self): self.dx=-self.dx def end_game(self): end_message=games.Message(value="Game Over", size=90, x=games.screen.width/2, y=games.screen.height/2, lifetime=250, after_death=games.screen.quit) games.screen.add(end_message) def main(): background_image=games.load_image("background.bmp", transparent=False) games.screen.background=background_image paddle_image=games.load_image("paddle.bmp") the_paddle=games.Sprite(image=paddle_image, x=10, y=games.mouse.y) games.screen.add(the_paddle) ball_image=games.load_image("ball.bmp") the_ball=games.Sprite(image=ball_image, x=630, y=200, dx=2, dy=2) games.screen.add(the_ball) games.mouse.is_visible=False games.screen.event_grab=True games.screen.mainloop() main()

    Read the article

  • using Quick Sequence Diagram Editor for sequence diagrams

    - by Jason S
    Anyone have experience with Quick Sequence Diagram Editor? The combination of instant display + text source code + Java implementation is very attractive to me, but I can't quite figure out how to make the syntax do what I want, and the documentation's not very clear. Here's a contrived example: al:Actor bill:Actor atm:ATM[a] bank:Bank[a] al:atm.give me $10 atm:al has $3=bank.check al's account balance al:atm.what time is it atm:al.it's now atm:al.stop bugging me atm:al.you only have $3 atm:bill.and don't you open your mouth bill:atm.who asked you? bill:atm.give me $20 al:atm.hey, I'm not finished! atm:bill has $765=bank.check bill's account balance atm:yes I'm sure, bill has $765=bank.hmm are you sure? atm:bill.here's $20, now go away atm:great, he's a cool dude=bank.I just gave Bill $20 al:atm.what about my $10? atm:al.read my lips: you only have $3 Here's the result from QSDE in single-threaded mode: and in multi-threaded mode: I guess I'm not clear what starts/ends those vertical bars. I have a situation which is single-threaded, but there's state involved, and all the messages are asynchronous. I guess that means I should use an external object to represent that state and its lifetime. What I want is for one timeline to represent the message sequence al:atm.give me $10 atm:bank.check al's account balance bank:atm.al has $3 atm:al.you only have $3 and another timeline to represent the message sequence bill:atm.give me $20 atm:bank.check bill's account balance bank:atm.bill has $765 atm:bank.hmm are you sure? bank:atm.yes I'm sure, bill has $765 atm:bill.here's $20, now go away atm:bank.I just gave Bill $20 bank:atm.great, he's a cool dude with the other "wisecracks" representing other miscellaneous messages that I don't care about right now. Is there a way to do this with QSDE?

    Read the article

  • C++ smart pointers: sharing pointers vs. sharing data

    - by Eli Bendersky
    In this insightful article, one of the Qt programmers tries to explain the different kinds of smart pointers Qt implements. In the beginning, he makes a distinction between sharing data and sharing the pointers themselves: First, let’s get one thing straight: there’s a difference between sharing pointers and sharing data. When you share pointers, the value of the pointer and its lifetime is protected by the smart pointer class. In other words, the pointer is the invariant. However, the object that the pointer is pointing to is completely outside its control. We don’t know if the object is copiable or not, if it’s assignable or not. Now, sharing of data involves the smart pointer class knowing something about the data being shared. In fact, the whole point is that the data is being shared and we don’t care how. The fact that pointers are being used to share the data is irrelevant at this point. For example, you don’t really care how Qt tool classes are implicitly shared, do you? What matters to you is that they are shared (thus reducing memory consumption) and that they work as if they weren’t. Frankly, I just don't understand this explanation. There was a clarification plea in the article comments, but I didn't find the author's explanation sufficient. If you do understand this, please explain. What is this distinction, and how do other shared pointer classes (e.g. from Boost or the new C++ standard) fit into this taxonomy? Thanks in advance

    Read the article

  • Use of qsrand, random method that is not random

    - by Andy M
    Hey everyone, I'm having a strange problem here, and I can't manage to find a good explanation for it, so I thought of asking you guys: Consider the following method: int MathUtility::randomize(int Min, int Max) { qsrand(QTime::currentTime().msec()); if (Min > Max) { int Temp = Min; Min = Max; Max = Temp; } return ((rand()%(Max-Min+1))+Min); } I won't explain to you gurus what this method actually does, I'll instead explain my problem: I realised that when I call this method in a loop, sometimes I get the same random number over and over again... For example, this snippet... for(int i=0; i<10; ++i) { int Index = MathUtility::randomize(0, 1000); qDebug() << Index; } ...will produce something like: 567 567 567 567...etc... I realised too, that if I don't call qsrand every time, but only once during my application's lifetime, it's working perfectly... My question: Why?

    Read the article

  • On Windows, how does console window ownership work?

    - by shroudednight
    When a console application is started from another console application, how does console ownership work? I see four possibilities: 1) The second application inherits the console from the first application for its lifetime, with the console returning to the original owner on exit. 2) Each application has its own console; Windows then somehow merges the content of the two into the "console" visible to the user. 3) The second application gets a handle to the console that belongs to the first application. 4) The console is placed into shared memory and both applications have equal "ownership". It's quite possible that I missed something and none of these four options adequately describe what Windows does with its consoles. If the answer is close to option 4, my follow-up question is: which of the two processes is responsible for managing the window? (Handling graphical updates when the screen needs to be refreshed / redrawn, etc.) A concrete example: Run CMD. Then, using CMD, run [console application]. The [console application] will write to what appears to be the same console window that CMD was using.

    Read the article

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), persistence (NHibernate mapping file) and DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15-developer, 3-year project. I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with the Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code-generate the .edmx files based on the class model? That's the inverse of the common path to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties, so I tend to view it as a better choice than the entity model.

    Read the article

  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way or resources/white papers that may be of benefit would be appreciated. EDIT 7/7: Certainly the NACA approach is interesting; the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the Java version has merit for any organization. The argument for procedural Java in the same layout as the COBOL to give the coders a sense of comfort while familiarizing themselves with the Java language is a valid argument for a large organisation with a large code base. As @Didier points out, the $3mil annual saving gives scope for generous padding on any BAU changes going forward to refactor the code on an ongoing basis. As he puts it, if you care about your people you find a way to keep them happy while gradually challenging them. The problem as I see it with the suggestion from @duffymo - "Best to try and really understand the problem at its roots and re-express it as an object-oriented system" - is that if you have any BAU changes ongoing then during the LONG project lifetime of coding your new OO system you end up coding & testing changes twice over. That is a major benefit of the NACA approach. I've had some experience of migrating Client-Server applications to a web implementation and this was one of the major issues we encountered, constantly shifting requirements due to BAU changes. It made PM & scheduling a real challenge. Thanks to @hhafez, whose experience is nicely put as "similar but slightly different" and who has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing, I'm still studying your approach and if I have any Q's I'll drop you a line.

    Read the article

  • How does Subsonic handle connections?

    - by Quintin Par
    In NHibernate you start a session by creating it during BeginRequest and close it at EndRequest: public class Global: System.Web.HttpApplication { public static ISessionFactory SessionFactory = CreateSessionFactory(); protected static ISessionFactory CreateSessionFactory() { return new Configuration() .Configure(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "hibernate.cfg.xml")) .BuildSessionFactory(); } public static ISession CurrentSession { get{ return (ISession)HttpContext.Current.Items["current.session"]; } set { HttpContext.Current.Items["current.session"] = value; } } protected void Global() { BeginRequest += delegate { CurrentSession = SessionFactory.OpenSession(); }; EndRequest += delegate { if(CurrentSession != null) CurrentSession.Dispose(); }; } } What’s the equivalent in SubSonic? As I understand it, NHibernate will close all the connections at EndRequest. Reason: While troubleshooting some legacy code in a SubSonic project I get a lot of MySQL timeouts, suggesting that the code is not closing the connections: MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. Generated: Tue, 11 Aug 2009 05:26:05 GMT System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. at MySql.Data.MySqlClient.MySqlPool.GetConnection() at MySql.Data.MySqlClient.MySqlConnection.Open() at SubSonic.MySqlDataProvider.CreateConnection(String newConnectionString) at SubSonic.MySqlDataProvider.CreateConnection() at SubSonic.AutomaticConnectionScope..ctor(DataProvider provider) at SubSonic.MySqlDataProvider.GetReader(QueryCommand qry) at SubSonic.DataService.GetReader(QueryCommand cmd) at SubSonic.ReadOnlyRecord`1.LoadByParam(String columnName, Object paramValue) My connection string is as follows <connectionStrings> <add name="xx" connectionString="Data Source=xx.net; Port=3306; Database=db; UID=dbuid; PWD=xx;Pooling=true;Max Pool Size=12;Min Pool Size=2;Connection Lifetime=60" /> </connectionStrings>

    Read the article

  • Problem with sfRemember cookie / sfGuard Remember me

    - by Tom
    I'm using Symfony 1.4 with Doctrine. Sorry if this is a silly question but what exactly does one need to build on top of the sfDoctrineGuardPlugin to get the "remember me" functionality working? When I log in a user, the sfRemember cookie is created with the default 15-day lifetime, and the remember key is saved in the plugin's sf_guard_remember_key table. Without any tweaks to the plugin, the sfGuardSecurityUser SignIn() method creates the cookie, but the Signout() method erases it, leaving no cookie unless you're logged in! Signin(): sfContext::getInstance()->getResponse()->setCookie($remember_cookie, $key, time() + $expiration_age); Signout(): sfContext::getInstance()->getResponse()->setCookie($remember_cookie, '', time() - $expiration_age); I can see that the database table saves the cookie as a relation of sf_guard_user, but that's not much good if the cookie is gone.... I'd be grateful if someone could tell me what I'm missing here, and ideally, if I prevent the Signout() method from removing the cookie, do I need to write code to read the cookie myself or is this automated somewhere/somehow? I've got bog-standard Symfony 1.4 and sfDoctrineGuardPlugin installations. It all just seems totally wrong and the documentation on this is non-existent. Any help would be appreciated.

    Read the article

  • How to use references, avoid header bloat, and delay initialization?

    - by Kyle
    I was browsing for an alternative to using so many shared_ptrs, and found an excellent reply in a comment section: Do you really need shared ownership? If you stop and think for a few minutes, I'm sure you can pinpoint one owner of the object, and a number of users of it, that will only ever use it during the owner's lifetime. So simply make it a local/member object of the owners, and pass references to those who need to use it. I would love to do this, but the problem becomes that the definition of the owning object now needs the owned object to be fully defined first. For example, say I have the following in FooManager.h: class Foo; class FooManager { shared_ptr<Foo> foo; shared_ptr<Foo> getFoo() { return foo; } }; Now, taking the advice above, FooManager.h becomes: #include "Foo.h" class FooManager { Foo foo; Foo& getFoo() { return foo; } }; I have two issues with this. First, FooManager.h is no longer lightweight. Every cpp file that includes it now needs to compile Foo.h as well. Second, I no longer get to choose when foo is initialized. It must be initialized simultaneously with FooManager. How do I get around these issues?

    Read the article

  • Newtonsoft JSON Interface Serialization error

    - by Ben
    I am using C# .NET 4.0, Newtonsoft JSON 4.5.0. public class Recipe { [JsonProperty(TypeNameHandling = TypeNameHandling.All)] public List<IFood> Foods{ get; set; } ... } I want to serialize and deserialize this Recipe object. If I serialize and deserialize the object during the application's lifetime this succeeds, but if I serialize the object, exit the application and then deserialize it, it throws an exception that it cannot instantiate IFood (since it is an interface). The problem is that it does not serialize the implementation of the interface: "$type": "System.Collections.Generic.List`1[[NSM.Shared.Models.IFood, NSMShared]], mscorlib" I tried using TypeNameHandling.Object and Array and Auto, but it didn't help. Is there any way to serialize it properly? Or at least to define the class mapping before deserializing? EDIT: I am using JSON coupled with Hammock ( http://code.google.com/p/relax-net/ ), a C# driver for CouchDB, which internally serializes and deserializes objects. As mentioned, the problem is that it does not serialize the interface implementation.
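
    Assuming the missing piece is the per-item $type (the snippet above has a $type for the List itself but not for its elements), two things may be worth trying, sketched below. ItemTypeNameHandling is the Json.NET attribute property for item-level type names, provided the installed version exposes it, and the IFood stub is only there to make the sketch self-contained:

        using System.Collections.Generic;
        using Newtonsoft.Json;

        public interface IFood { }          // stand-in for the question's interface

        public class Recipe
        {
            // Emit "$type" for every element of the list, not just for the list object,
            // so the concrete food types can be re-created on deserialization.
            [JsonProperty(ItemTypeNameHandling = TypeNameHandling.Auto)]
            public List<IFood> Foods { get; set; }
        }

        // Alternatively, where you control the serializer call directly:
        // var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.Auto };
        // string json = JsonConvert.SerializeObject(recipe, settings);
        // Recipe back = JsonConvert.DeserializeObject<Recipe>(json, settings);

    Since Hammock does the serialization internally, the attribute route is the one you can actually influence from the model classes; the settings route only helps where you call JsonConvert yourself.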

    Read the article

  • How to use boost::bind with non-copyable params, for example boost::promise ?

    - by zhengxi
    Some C++ objects have no copy constructor, but have a move constructor. For example, boost::promise. How can I bind those objects using their move constructors? #include <boost/thread.hpp> void fullfil_1(boost::promise<int>& prom, int x) { prom.set_value(x); } boost::function<void()> get_functor() { // boost::promise is not copyable, but movable boost::promise<int> pi; // compilation error boost::function<void()> f_set_one = boost::bind(&fullfil_1, pi, 1); // compilation error as well boost::function<void()> f_set_one = boost::bind(&fullfil_1, std::move(pi), 1); // PS. I know, it is possible to bind a pointer to the object instead of // the object itself. But it is a weird solution; in this case I will have // to take care of the lifetime of the object instead of delegating that to // boost::bind (by moving object into boost::function object) // // weird: pi will be destroyed on leaving the scope boost::function<void()> f_set_one = boost::bind(&fullfil_1, boost::ref(pi), 1); return f_set_one; }

    Read the article

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists; otherwise it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article) then you have to deal with multiple contexts and the complexity around detaching and attaching entities from contexts. My solution is to share the context between posts by creating a single View Model that includes all required repositories (initialised to share the same context) plus any associated data and store this model in a Session variable, retrieving from Session on subsequent page requests. This maintains the same context across all posts until the profile is saved. This works fine BUT I am concerned that I don't actually know exactly what is stored in the model session variable or, more importantly, the size of the Session variable. So two questions I suppose: firstly should I look for a better solution to handle the shared context across posts issue (any suggestions welcome)? And secondly what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
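
    One middle ground that avoids putting repositories (and therefore a live ObjectContext) into Session is to keep the context strictly per request - the same idea as the NHibernate Global.asax snippet quoted in the SubSonic question above - and to carry only the selected entity keys between posts, re-querying or re-attaching on the next request. A sketch, with the generic parameter standing in for your generated context type:

        using System;
        using System.Web;

        // Sketch: one context per HTTP request, shared by every repository in that request.
        // TContext would be the generated ObjectContext type.
        public static class CurrentContext<TContext> where TContext : class, IDisposable, new()
        {
            private const string Key = "current.objectcontext";

            public static TContext Instance
            {
                get
                {
                    var context = (TContext)HttpContext.Current.Items[Key];
                    if (context == null)
                    {
                        context = new TContext();
                        HttpContext.Current.Items[Key] = context;
                    }
                    return context;
                }
            }

            // Call from Application_EndRequest so the context never outlives the request.
            public static void DisposeCurrent()
            {
                var context = (TContext)HttpContext.Current.Items[Key];
                if (context != null)
                {
                    context.Dispose();
                    HttpContext.Current.Items[Key] = null;
                }
            }
        }

    Storing just the keys keeps the Session payload small and answers the "what is actually in there" worry, at the cost of a re-query per post.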

    Read the article

  • How do I start up an NSRunLoop, and ensure that it has an NSAutoreleasePool that gets emptied?

    - by Nick Forge
    I have a "sync" task that relies on several "sub-tasks", which include asynchronous network operations, but which all require access to a single NSManagedObjectContext. Due to the threading requirements of NSManagedObjectContexts, I need every one of these sub-tasks to execute on the same thread. Due to the amount of processing being done in some of these tasks, I need them to be on a background thread. At the moment, I'm launching a new thread by doing this in my singleton SyncEngine object's -init method: [self performSelectorInBackground:@selector(initializeSyncThread) withObject:nil]; The -initializeSyncThread method looks like this: - (void)initializeSyncThread { self.syncThread = [NSThread currentThread]; self.managedObjectContext = [(MyAppDelegate *)[UIApplication sharedApplication].delegate createManagedObjectContext]; NSRunLoop *runLoop = [NSRunLoop currentRunLoop]; [runLoop run]; } Is this the correct way to start up the NSRunLoop for this thread? Is there a better way to do it? The run loop only needs to handle 'performSelector' sources, and it (and its thread) should be around for the lifetime of the process. When it comes to setting up an NSAutoreleasePool, should I do this by using Run Loop Observers to create the autorelease pool and drain it after every run-through?

    Read the article

  • What is the basic pattern for using (N)Hibernate?

    - by Vilx-
    I'm creating a simple Windows Forms application with NHibernate and I'm a bit confused about how I'm supposed to use it. To quote the manual: ISession (NHibernate.ISession) A single-threaded, short-lived object representing a conversation between the application and the persistent store. Wraps an ADO.NET connection. Factory for ITransaction. Holds a mandatory (first-level) cache of persistent objects, used when navigating the object graph or looking up objects by identifier. Now, suppose I have the following scenario: I have a simple classifier which is an MSSQL table with two columns - ID (auto_increment) and Name (nvarchar). To edit this classifier I create a form which contains a single gridview and two buttons - OK and Cancel. The user can edit the table almost directly in the gridview, and when he hits OK the changes he made are persisted to the DB (or if he hits cancel, nothing happens). Now, I have several questions about how to organize this: What should the lifetime of my ISession be? Should I create a single ISession for my whole application; an ISession for each of my forms (the application is single-threaded MDI); or an ISession for every DB operation/transaction? Does NHibernate offer some kind of built-in dirty tracking or must I do this myself? The manual mentions something like it here and there but does not go into details. How is this done? Is there not a huge overhead? Is it somehow tied to the cache(s) that NHibernate has? What are these caches for? Are they not specific to a single ISession? That is, if I use a separate ISession for every transaction, won't it break the dirty tracking? How does the built-in dirty tracking detect deleted objects?
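
    On the first question, the arrangement most NHibernate material recommends for a desktop app like this is one ISessionFactory for the application's lifetime and a short-lived ISession per unit of work - here, per click of the OK button. A rough sketch, where Classifier stands for the entity mapped to the table described above:

        using System.Collections.Generic;
        using NHibernate;

        public class Classifier                       // the mapped entity from the scenario
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        public static class ClassifierEditor
        {
            // One session + one transaction per "OK" click: open, save, commit, dispose.
            public static void SaveChanges(ISessionFactory factory, IEnumerable<Classifier> edited)
            {
                using (ISession session = factory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    foreach (var classifier in edited)
                        session.SaveOrUpdate(classifier); // insert or update by identifier
                    tx.Commit();                          // Cancel simply never reaches this line
                }
            }
        }

    Dirty tracking is automatic only for objects loaded by that same session; objects edited while detached (as they are between clicks here) are re-associated explicitly via SaveOrUpdate or Merge, so session-per-operation doesn't break anything, it just shifts the work to re-attachment.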

    Read the article

  • Using generics in Unity ... InvalidCastException

    - by Sunny D
    Hi, My interface definition is: public interface IInterface<T> where T : UserControl My class definition is: public partial class App1Control : UserControl, IInterface<App1Control> The unity section of my app.config looks as below: <unity> <typeAliases> <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="myInterface" type="MyApplication.IInterface`1, MyApplication" /> <typeAlias alias="App1" type="MyApplication.App1Control, MyApplication" /> </typeAliases> <containers> <container> <types> <type type="myInterface" mapTo="App1" name="Application 1"> <lifetime type="singleton"/> </type> </types> </container> </containers> </unity> The app runs fine, but the following code gives an InvalidCastException: container.Resolve<IInterface<UserControl>>("Application 1"); The error message is: Unable to cast object of type 'MyApplication.App1Control' to type 'MyApplication.IInterface`1[System.Windows.Forms.UserControl]' I believe there is a minor mistake in my code ... but am not able to figure out what. Any thoughts?
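
    One hedged reading of that error: if App1Control implements IInterface<App1Control>, then using the resolved object as an IInterface<UserControl> requires the interface to be covariant, which C# 4 allows only when the type parameter appears in output positions and is declared with out. A small illustration of that shape (GetControl is an invented member, purely to show the "output positions only" rule):

        using System.Windows.Forms;

        // "out T" makes the interface covariant, so an IInterface<App1Control>
        // reference can be used where an IInterface<UserControl> is expected.
        public interface IInterface<out T> where T : UserControl
        {
            T GetControl();   // invented for illustration; T may only appear as a return type
        }

        public class App1Control : UserControl, IInterface<App1Control>
        {
            public App1Control GetControl() { return this; }
        }

    Without the out modifier, the cast behind container.Resolve<IInterface<UserControl>>(...) fails exactly as the message describes, no matter how the Unity mapping is configured.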

    Read the article

  • Password Cracking in 2010 and Beyond

    - by mttr
    I have looked a bit into cryptography and related matters during the last couple of days and am pretty confused by now. I have a question about password strength and am hoping that someone can clear up my confusion by sharing how they think through the following questions. I am becoming obsessed with these things, but need to spend my time otherwise :-) Let's assume we have an eight-character password that consists of upper and lower-case alphabetic characters, numbers and common symbols. This means we have 96^8 ~= 7.2 quadrillion different possible passwords. As I understand it, there are at least two approaches to breaking this password. One is to try a brute-force attack where we try to guess each possible combination of characters. How many passwords can modern processors (in 2010, e.g. a Core i7 Extreme) guess per second (how many instructions does a single password guess take and why)? My guess would be that it takes a modern processor in the order of years to break such a password. Another approach would consist of obtaining a hash of my password as stored by operating systems and then searching for collisions. Depending on the type of hash used, we might get the password a lot quicker than by the brute-force attack. A number of questions about this: Is the assertion in the above sentence correct? How do I think about the time it takes to find collisions for MD4, MD5, etc. hashes? Where does my Snow Leopard store my password hash and what hashing algorithm does it use? And finally, regardless of the strength of file encryption using AES-128/256, the weak link is still my en/decryption password used. Even if breaking the ciphered text would take longer than the lifetime of the universe, a brute-force attack on my de/encryption password (guess password, then try to decrypt file, try next password...) might succeed a lot earlier than the end of the universe. Is that correct? I would be very grateful if people could have mercy on me and help me think through these probably simple questions, so that I can get back to work.
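
    For a rough sense of scale on the brute-force half (the guess rates below are illustrative assumptions, not measured figures for any particular CPU):

        96^8 ~= 7.2 x 10^15 candidate passwords
        at 10^8 guesses/second:   7.2 x 10^15 / 10^8  = 7.2 x 10^7 s  ~= 2.3 years
        at 10^10 guesses/second:  7.2 x 10^15 / 10^10 = 7.2 x 10^5 s  ~= 8.3 days

    So the honest answer to "how long" depends mostly on how cheap a single guess is - which is exactly what the choice of a slow, salted hash function controls - and on the attacker's hardware, far more than on anything else in the setup.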

    Read the article

  • Unity Configuration and Same Assembly

    - by tyndall
    I'm currently getting an error trying to resolve my IDataAccess class. The value of the property 'type' cannot be parsed. The error is: Could not load file or assembly 'TestProject' or one of its dependencies. The system cannot find the file specified. (C:\Source\TestIoC\src\TestIoC\TestProject\bin\Debug\TestProject.vshost.exe.config line 14) This is inside a WPF Application project. What is the correct syntax to refer to the Assembly you are currently in? Is there a way to do this? I know in a larger solution I would be pulling Types from separate assemblies so this might not be an issue. But what is the right way to do this for a small self-contained test project? Note: I'm only interested in doing the XML config at this time, not the C# (in code) config. UPDATE: see all comments My XML config: <configuration> <configSections> <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" /> </configSections> <unity> <typeAliases> <!-- Lifetime manager types --> <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="IDataAccess" type="TestProject.IDataAccess, TestProject" /> <typeAlias alias="DataAccess" type="TestProject.DataAccess, TestProject" /> </typeAliases> <containers> <container name="Services"> <types> <type type="IDataAccess" mapTo="DataAccess" /> </types> </container> </containers> </unity> </configuration>

    Read the article

  • Why does Microsoft advise against readonly fields with mutable values?

    - by Weeble
    In the Design Guidelines for Developing Class Libraries, Microsoft say: Do not assign instances of mutable types to read-only fields. The objects created using a mutable type can be modified after they are created. For example, arrays and most collections are mutable types while Int32, Uri, and String are immutable types. For fields that hold a mutable reference type, the read-only modifier prevents the field value from being overwritten but does not protect the mutable type from modification. This simply restates the behaviour of readonly without explaining why it's bad to use readonly. The implication appears to be that many people do not understand what "readonly" does and will wrongly expect readonly fields to be deeply immutable. In effect it advises using "readonly" as code documentation indicating deep immutability - despite the fact that the compiler has no way to enforce this - and disallows its use for its normal function: to ensure that the value of the field doesn't change after the object has been constructed. I feel uneasy with this recommendation to use "readonly" to indicate something other than its normal meaning understood by the compiler. I feel that it encourages people to misunderstand the meaning of "readonly", and furthermore to expect it to mean something that the author of the code might not intend. I feel that it precludes using it in places it could be useful - e.g. to show that some relationship between two mutable objects remains unchanged for the lifetime of one of those objects. The notion of assuming that readers do not understand the meaning of "readonly" also appears to be in contradiction to other advice from Microsoft, such as FxCop's "Do not initialize unnecessarily" rule, which assumes readers of your code to be experts in the language and should know that (for example) bool fields are automatically initialised to false, and stops you from providing the redundancy that shows "yes, this has been consciously set to false; I didn't just forget to initialize it". So, first and foremost, why do Microsoft advise against use of readonly for references to mutable types? I'd also be interested to know: Do you follow this Design Guideline in all your code? What do you expect when you see "readonly" in a piece of code you didn't write?
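
    For anyone skimming past: the behaviour the guideline worries about takes two lines to demonstrate (a trivial, made-up example, not taken from the guideline itself):

        using System.Collections.Generic;

        class Order
        {
            // readonly locks the reference in place after construction...
            private readonly List<string> _items = new List<string>();

            public void Demo()
            {
                // _items = new List<string>();   // ...so this would not compile,
                _items.Add("widget");             // ...but mutating the referenced list is fine.
            }
        }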

    Read the article

  • How to use a VB usercontrol on a C# page?

    - by ks78
    Hopefully someone will be able to point me in the right direction. I've created a usercontrol in VB that handles paging more efficiently than the DataPager (at least for very large datasets). I'd like to use it in a C# project, but I've been having trouble getting it to work. I've tried simply adding PagingControl.ascx to the C# project, but when I do that the markup and VB code behind don't seem to see each other. --Is this a namespace issue? I've tried adding the PagingControl.ascx to its own VB project, then adding that project to the C# project's solution, as well as a reference. --That almost works. I can register the PagingControl usercontrol in the markup. I can access the usercontrol's properties in the code behind, but any property that involves the UI of the usercontrol fails. It seems as if the usercontrol's form hasn't had a chance to load by the time the C# page's Page_Load event handler fires. --Maybe this is an "order of operations" problem? At what point in the C# page's lifetime should a usercontrol's form be loaded? If anyone has any ideas or insight, I'd really appreciate it. Thanks in advance.

    Read the article
