Search Results

Search found 10433 results on 418 pages for 'session replication'.

  • What's the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the solution supported by the payments software vendor, but their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write it to the DR site, but this is:
    - a bad solution, and
    - getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB-intensive.
    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:
    - RPO: zero loss. This is transactional data with financial impact, so we can't lose anything.
    - RTO: tending to zero. The business depends on our ability to process transactions; every minute we are down we are losing money.
    I have looked at a few of the other questions/answers but none meet our case exactly:
    - SQL Server 2008 failover strategy - Log shipping or replication?
    - How to achieve the following RTO & RPO with logshipping only using SQL Server?
    - What is the best of two approaches to achieve DB Replication?
    My current thinking is that we should use mirroring, but I am concerned that for RPO: 0 we will need synchronous (delayed) commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to:
    1. Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to the DR site.
    4. Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).
    In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log shipping too), and can anyone suggest other considerations that we should keep in mind?
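
    For step 2 of that DR process, one way to confirm the mirror has fully caught up before redirecting traffic is to query the principal's mirroring state. A minimal C# sketch, assuming a synchronous (high-safety) database mirroring session; the connection string and database name are placeholders:

        using System;
        using System.Data.SqlClient;

        class MirroringCheck
        {
            // True when the session is synchronous (safety FULL) and fully
            // SYNCHRONIZED, i.e. a failover at this point should lose no
            // committed transactions (RPO 0).
            static bool IsMirrorSynchronized(string connectionString, string database)
            {
                const string sql = @"
                    SELECT mirroring_safety_level_desc, mirroring_state_desc
                    FROM sys.database_mirroring
                    WHERE database_id = DB_ID(@db);";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@db", database);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        if (!reader.Read() || reader.IsDBNull(0))
                            return false; // database is not mirrored

                        return reader.GetString(0) == "FULL"
                            && reader.GetString(1) == "SYNCHRONIZED";
                    }
                }
            }

            static void Main()
            {
                // Placeholder connection string pointing at the principal server.
                bool ready = IsMirrorSynchronized(
                    "Server=primary;Integrated Security=true", "Payments");
                Console.WriteLine(ready ? "Safe to fail over" : "Mirror not caught up");
            }
        }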

  • SOAP and NHibernate Session in C#

    - by Anonymous Coward
    In a set of SOAP web services the user is authenticated with a custom SOAP header (username/password). Each time the user calls a WS, the following Auth method is called to authenticate and retrieve the User object from the NHibernate session:

        [...]
        public class Services : Base
        {
            private User user;
            [...]
            public string myWS(string username, string password)
            {
                if (Auth(username, password))
                {
                    [...]
                }
            }
        }

        public class Base : WebService
        {
            protected static ISessionFactory sesFactory;
            protected static ISession session;

            static Base()
            {
                Configuration conf = new Configuration();
                [...]
                sesFactory = conf.BuildSessionFactory();
            }

            private bool Auth(...)
            {
                session = sesFactory.OpenSession();
                MembershipUser luser = null;
                if (UserCredentials != null && Membership.ValidateUser(username, password))
                {
                    luser = Membership.GetUser(username);
                }
                ...
                try
                {
                    user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString());
                }
                catch
                {
                    user = null;
                    throw new [...]
                }
                return user != null;
            }
        }

    When the WS work is done the session is cleaned up nicely and everything works: the WSs create, modify and change objects, and NHibernate saves them in the DB. The problems come when a user (same username/password) calls the same WS at the same time from different clients (machines): the state of the saved objects becomes inconsistent. How do I manage the session correctly to avoid this? I searched, and the documentation about session management in NHibernate is really vast. Should I lock over the user object? Should I set up a "session share" management between WS calls from the same user? Should I use transactions in some savvy way? Thanks.

    Update 1: Yes, mSession is 'session'.

    Update 2: Even with a non-static session object, the data saved in the DB is inconsistent. The pattern I use to insert/save objects is the following:

        var return_value = [...];
        try
        {
            using (ITransaction tx = session.Transaction)
            {
                tx.Begin();
                MyType obj = new MyType();
                user.field = user.field - obj.field; // The field names are examples, but this is actually what happens.
                session.Save(user);
                session.Save(obj);
                tx.Commit();
                return_value = obj.another_field;
            }
        }
        catch ([...])
        {
            // Handling exceptions...
        }
        finally
        {
            // Clean up
            session.Flush();
            session.Close();
        }
        return return_value;

    All new objects (MyType) are correctly saved, but the user.field value is not what I would expect. Even obj.another_field is correct (the field is an ID with a generated-on-save policy). It is as if 'user.field = user.field - obj.field;' were executed more times than necessary.
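
    The usual remedy for this kind of cross-request interference is a session-per-call pattern: only the ISessionFactory is shared, and every web method opens its own ISession and ITransaction. A minimal sketch under that assumption (the WithSession helper is illustrative, not part of the question's code):

        using System;
        using System.Web.Services;
        using NHibernate;
        using NHibernate.Cfg;

        public class Base : WebService
        {
            // The factory is expensive to build and thread-safe: keep it static.
            protected static readonly ISessionFactory SessionFactory =
                new Configuration().Configure().BuildSessionFactory();

            // One session and one transaction per call; an ISession must
            // never be shared between concurrent requests.
            protected T WithSession<T>(Func<ISession, T> work)
            {
                using (ISession session = SessionFactory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    T result = work(session);
                    tx.Commit(); // flushes and commits together
                    return result;
                }
            }
        }

    Even then, two clients updating the same User row can still race at the database level; mapping a <version> column on User (optimistic concurrency) makes NHibernate throw a StaleObjectStateException on the losing update instead of silently applying 'user.field = user.field - obj.field' to stale data.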

  • SQL SERVER – Online Session on What is New in Denali – Today Online

    - by pinaldave
    I will be presenting today on the subject "Inside of Next Generation SQL Server – Denali", online at Zeollar.com. These sessions are really fun as they are online, downloadable, and 100% demo oriented. I will be using SQL Server ‘Denali’ CTP 1 to present on the subject of What is New in Denali. The webcast will start at 12:30 PM sharp and will end at 1 PM India Time. It will be 100% demo oriented, with no slides. I will be covering the following topics in the session:
    - SQL SERVER – Denali Feature – Zoom Query Editor
    - SQL SERVER – Denali – Improvement in Startup Options
    - SQL SERVER – Denali – Clipboard Ring – CTRL+SHIFT+V
    - SQL SERVER – Denali – Multi-Monitor SSMS Windows
    - SQL SERVER – Denali – Executing Stored Procedure with Result Sets
    - SQL SERVER – Performance Improvement with Executing Stored Procedure with Result Sets in Denali
    - SQL SERVER – ‘Denali’ – A Simple Example of Contained Databases
    - SQL SERVER – Denali – ObjectID in Negative – Local TempTable has Negative ObjectID
    - SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative
    - SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison
    - SQL SERVER – Denali – SEQUENCE is not IDENTITY
    - SQL SERVER – Denali – Introduction to SEQUENCE – Simple Example of SEQUENCE
    If time permits we will cover a few more topics as well. The session will be recorded. My earlier session on the topic of Best Practices Analyzer is also available to watch online here: SQL SERVER – Video – Best Practices Analyzer using Microsoft Baseline Configuration Analyzer.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

  • HYUNDAI @ Oracle Open World 2012 General Session (GEN9449): Engineered Systems - From Vision to Game-Changing Results

    - by Sanjeev Sharma
    Why do data centers still demand an “assembly required” approach? This necessity proves costly and complex, forces customers to deal with a wide range of vendors for each application, and fails to deliver performance optimization for application and data workloads. Oracle believes that systems (just like automobiles) should be designed and engineered “at the factory”, with the goal of reducing customers’ costs and complexity and delivering extreme performance, reliability, availability, and simplicity with a higher degree of automation. Hyundai Motor Company was founded in 1967 and has since become a global brand in the automotive industry. Hyundai Motor Company was looking for a solution to manage its intellectual capital by capturing and facilitating re-use of the knowledge of its thousands of employees. To achieve this, Hyundai Motor Company set out to build a centralized document management platform that would allow its 30,000 knowledge workers to collaborate by sharing documents in a secure manner, anytime, anywhere. Furthermore, this new knowledge management platform would bring about significant improvements in employee productivity. Hear senior business leaders from Hyundai speak about the role and benefits of running their knowledge management platform on the Oracle family of engineered systems at the following general session at Oracle Open World 2012:

    Session: GEN9449 - General Session: Engineered Systems—From Vision to Game-Changing Results
    Date: Monday, 1 Oct, 2012
    Time: 1:45 pm - 2:45 pm (PST)
    Venue: Moscone West (2002 / 2004)

  • Session state provider and atomic operations

    - by vtortola
    Hi, I've been thinking about this and it is blowing my mind... How does a session state provider work properly internally? I mean, I tried to write a custom session state provider based on Azure Tables or Blobs, but I quickly realized that because there is no way to ensure an atomic operation or establish a lock, race conditions are likely to happen when several web servers operate on that shared information. I know that there is a SQL Server Session State Provider (SQLS-SSP) and people are happy with it, so I guess it's using some kind of transaction isolation level in order to accomplish some degree of concurrency safety, like checking if the data is locked (a simple column), locking it if not, and returning the data in an atomic operation. But is that so? What happens if the data is locked? Does it return an error? Block the call for a while? Return it in a read-only fashion? Cloud computing paradigms may be somewhat new, but web farms have been here for a while, and I'm pretty new to this... do you recommend any good reading about the topic? Thanks.
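
    For reference, the lock-then-read step guessed at above can be done as a single atomic compare-and-set, which is roughly the pattern the built-in SQL provider follows (when the row is already locked it does not error out: SessionStateModule keeps retrying until the lock is released or times out). A minimal sketch, assuming a hypothetical Sessions table with SessionId, Locked, LockId, LockDate and SessionData columns:

        using System;
        using System.Data.SqlClient;

        static class SessionLock
        {
            // Takes the lock and reads the session in one atomic statement.
            // Returns null when another server already holds the lock; the
            // caller is expected to wait briefly and retry.
            public static byte[] TryAcquire(SqlConnection conn, string sessionId, Guid lockId)
            {
                const string sql = @"
                    UPDATE Sessions
                    SET Locked = 1, LockId = @lockId, LockDate = GETUTCDATE()
                    OUTPUT inserted.SessionData
                    WHERE SessionId = @sessionId AND Locked = 0;";

                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@lockId", lockId);
                    cmd.Parameters.AddWithValue("@sessionId", sessionId);
                    return (byte[])cmd.ExecuteScalar(); // null => row was already locked
                }
            }
        }

    On Azure storage, which has no conditional UPDATE, the equivalent primitive is an optimistic concurrency token (a blob lease or an ETag-guarded write) with the same retry-on-conflict loop around it.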

  • Spotlight on GlassFish 4.1: #7 WebSocket Session Throttling and JMX Monitoring

    - by delabassee
    'Spotlight on GlassFish 4.1' is a series of posts that highlights specific enhancements of the upcoming GlassFish 4.1 release. It could be a new feature, a fix, a behavior change, a tip, etc.

    #7 WebSocket Session Throttling and JMX Monitoring

    GlassFish 4.1 embeds Tyrus 1.8.1, which is compliant with the Maintenance Release of JSR 356 ("WebSocket API 1.1"). This release also brings additional features to the WebSocket support in GlassFish.

    JMX Monitoring: Tyrus now exposes WebSocket metrics through JMX. In GF 4.1, the following message statistics are monitored for both sent and received messages:
    - messages count
    - messages count per second
    - average message size
    - smallest message size
    - largest message size
    Those statistics are collected independently of the message type (global count) and per specific message type (text, binary and control messages). In GF 4.1, Tyrus also monitors, and exposes through JMX, errors at the application and endpoint level. For more information, please check Tyrus JMX Monitoring.

    Session Throttling: To preserve resources on the server hosting WebSocket endpoints, Tyrus now offers ways to limit the number of open sessions. Those limits can be configured at different levels:
    - per whole application
    - per endpoint
    - per remote endpoint address (client IP address)
    For more details, check Tyrus Session Throttling. The next entry will focus on Tyrus' new client-side features.

  • Shutting down Ubuntu 11.10 with power button without x11-session

    - by RJdaMoD
    When pressing the power button inside a (gnome-)session, Ubuntu asks me what to do and shuts down after 60 seconds anyway. No problem so far. But if I'm not logged in to a gnome-session (for example, at the login screen), or I just change to a tty, then the power button won't work. But I remember that it worked in 11.04. So what's changed, and how do I restore it? Background: I use my machine as a print server. If I'm not home and my wife wants to print something, she used to switch on my machine, print via her laptop, and then just shut it down by power button. At the beginning of March I was on a business tour, and she called me to say that she could not shut down my machine anymore. I shut it down via ssh, but this does not seem the favorable way to me. I already had a look in /etc/acpi/powerbtn.sh and think that the line

        if pidof x $PMS > /dev/null; then exit

    is the cause of this, since it aborts the script when no GUI power manager is found. Is that right? But that does not explain why the power button does not work when switching from the x11-session to a tty, although this would not be critical to me. Thanks in advance. Greetings, RJ

  • BUILD 2013 Session – Alive With Activity

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionndashalive-with-activity.aspx

    Live tiles are what really add a ton of value to both Windows 8 and Windows Phone. As a developer it is important that you leverage this capability in order to make your apps more informative and give your users a reason to keep opening the app to find out the details hinted at by tile updates. In this session Kraig Brockschmidt covered a wide array of dos and don’ts for implementing live tiles. I was actually worried whether I would get much out of this session when Kraig started it off with the fact that his background is in HTML5-based apps, which I have little interest in, but the subject almost didn’t come up during his talk. It focused on things like making sure you have all the right-size graphics and implementing all of the tile event handlers. The session went on to discuss the message format for push notifications and implementing lock-screen notifications and badges. As with the other day 1 sessions it was like drinking from a fire hose, but it was good stuff. Check it out when they post it on Channel 9.

    del.icio.us Tags: BUILD 2013, Live Tiles, Windows 8.1

  • How to store Role Based Access rights in web application?

    - by JonH
    Currently working on a web-based CRM type system that deals with various modules such as Companies, Contacts, Projects, Sub Projects, etc. A typical CRM type system (ASP.NET web forms, C#, SQL Server backend). We plan to implement role-based security so that basically a user can have one or more roles. Roles would be broken down first by module type, such as:
    - Company
    - Contact
    And then by the actions for that module. For instance, each module would end up with a table such as this:

    Role1 example:

        Module    Create    Edit          Delete    View
        Company   Yes       Owner Only    No        Yes
        Contact   Yes       Yes           Yes       Yes

    In the above case Role1 has two module types (Company and Contact). For Company, the person assigned to this role can create companies, can view companies, can only edit records he/she created, and cannot delete. For this same role, for the module Contact this user can create contacts, edit contacts, delete contacts, and view contacts (full rights, basically). I am wondering: is it best, upon coming into the system, to store the user's roles in the session with something like:

        List<Role> roles;

    where the Role class would have some sort of List<Module> modules; (which can contain Company, Contact, etc.)? Something to the effect of:

        class Role {
            string name;
            string desc;
            List<Module> modules;
        }

    And the module actions class would have a set of actions (Create, Edit, Delete, etc.) for each module:

        class ModuleActions {
            List<Action> actions;
        }

    And the action has a value of whether the user can perform the right:

        class Action {
            string right;
        }

    Just a rough idea; I know the action could be an enum and ModuleActions could probably be eliminated with a List<x, y>. My main question is what would be the best way to store this information in this type of application: Should I store it in the user's session state (I have a session class where I manage things related to the user)? I generally load this during the initial loading of the application (global.asax), and I can simply tack onto this session. Or should this be loaded at the page load event of each module (page load of Company, etc.)? I eventually need to be able to hide/unhide various buttons/divs based on the user's role, and that is what got me thinking to load this via session. Any examples or points would be great.
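
    A sketch of one way to shape this (an assumption-laden variant of the classes in the question, not the asker's final design): the per-module rights collapse into a flags enum keyed by module name, and the whole role set is computed once at login so page code can hide or show buttons with a single lookup:

        using System;
        using System.Collections.Generic;

        // The four actions from the sample table, as flags so a module's
        // rights fit in one value.
        [Flags]
        enum ModuleRight
        {
            None = 0,
            Create = 1,
            Edit = 2,          // full edit
            EditOwnerOnly = 4, // the "Owner Only" edit from the table
            Delete = 8,
            View = 16
        }

        class Role
        {
            public string Name { get; set; }
            // Module name ("Company", "Contact", ...) -> rights in that module.
            public Dictionary<string, ModuleRight> Modules { get; } =
                new Dictionary<string, ModuleRight>(StringComparer.OrdinalIgnoreCase);
        }

        static class AccessCheck
        {
            // A user may act if any of his roles grants the right for the module.
            public static bool Can(IEnumerable<Role> roles, string module, ModuleRight right)
            {
                foreach (Role role in roles)
                {
                    ModuleRight rights;
                    if (role.Modules.TryGetValue(module, out rights) &&
                        (rights & right) == right)
                        return true;
                }
                return false;
            }
        }

    At login the roles would be loaded from the database once and cached, e.g. Session["roles"] = roles; page code then reads something like deleteButton.Visible = AccessCheck.Can(roles, "Company", ModuleRight.Delete);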

  • Session state provider and global.asax not interacting properly?

    - by yodaj007
    I'm experimenting with creating a crude, proof-of-concept session state store provider in ASP.NET, but I've got a problem and I'm not sure what to do about it. The website works properly when using the InProc provider: Session_Start in global.asax is called on session creation, as it should be. But not if I implement my own provider. The Session_Start method from global.asax isn't being called at all when a new session is created (that is, when I delete the session state file). Am I missing something important here?

        using System;
        using System.Collections.Specialized;
        using System.IO;
        using System.Web;
        using System.Web.Hosting;
        using System.Web.SessionState;

        public class TestSessionProvider : SessionStateStoreProviderBase
        {
            private const string ROOT = "c:\\projects\\sessions\\";
            private const int TIMEOUT_MINUTES = 30;

            public string ApplicationName
            {
                get { return HostingEnvironment.ApplicationVirtualPath; }
            }

            private string GetFilename(string id)
            {
                string filename = String.Format("{0}_{1}.session", ApplicationName, id);
                char[] invalids = Path.GetInvalidPathChars();
                for (int i = 0; i < invalids.Length; i++)
                {
                    filename = filename.Replace(invalids[i], '_');
                }
                return Path.Combine(ROOT, Path.GetFileName(filename));
            }

            public override void Initialize(string name, NameValueCollection config)
            {
                if (config == null) throw new ArgumentNullException("config");
                if (String.IsNullOrEmpty(name)) name = "Sporkalicious";
                base.Initialize(name, config);
            }

            public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
            {
                SessionStateItemCollection items = new SessionStateItemCollection();
                HttpStaticObjectsCollection objects = SessionStateUtility.GetSessionStaticObjects(context);
                return new SessionStateStoreData(items, objects, TIMEOUT_MINUTES);
            }

            /// <summary>
            /// The CreateUninitializedItem method is used with cookieless sessions when the
            /// regenerateExpiredSessionId attribute is set to true, which causes SessionStateModule
            /// to generate a new SessionID value when an expired session ID is encountered.
            /// </summary>
            /// <param name="context"></param>
            /// <param name="id"></param>
            /// <param name="timeout"></param>
            public override void CreateUninitializedItem(HttpContext context, string id, int timeout)
            {
                FileStream fs = File.Open(GetFilename(id), FileMode.CreateNew, FileAccess.Write, FileShare.None);
                BinaryWriter writer = new BinaryWriter(fs);
                SessionStateItemCollection coll = new SessionStateItemCollection();
                coll.Serialize(writer);
                fs.Flush();
                fs.Close();
            }

            public override SessionStateStoreData GetItem(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                return GetItemExclusive(context, id, out locked, out lockAge, out lockId, out actions);
            }

            public override SessionStateStoreData GetItemExclusive(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                locked = false;
                lockAge = TimeSpan.FromDays(1);
                lockId = 0;
                actions = SessionStateActions.None;
                if (!File.Exists(GetFilename(id)))
                {
                    return null;
                }
                FileStream fs = File.Open(GetFilename(id), FileMode.Open, FileAccess.ReadWrite, FileShare.Read);
                BinaryReader reader = new BinaryReader(fs);
                SessionStateItemCollection coll = SessionStateItemCollection.Deserialize(reader);
                fs.Close();
                return new SessionStateStoreData(coll, new HttpStaticObjectsCollection(), TIMEOUT_MINUTES);
            }

            public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }

            public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item)
            {
                File.Delete(GetFilename(id));
            }

            public override void ResetItemTimeout(HttpContext context, string id) { }

            public override void SetAndReleaseItemExclusive(HttpContext context, string id, SessionStateStoreData item, object lockId, bool newItem)
            {
                if (!File.Exists(GetFilename(id)))
                {
                    CreateUninitializedItem(context, id, 10);
                }
                FileStream fs = File.Open(GetFilename(id), FileMode.Truncate, FileAccess.Write, FileShare.None);
                BinaryWriter writer = new BinaryWriter(fs);
                SessionStateItemCollection coll = (SessionStateItemCollection)item.Items;
                coll.Serialize(writer);
                fs.Flush();
                fs.Close();
            }

            public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback)
            {
                return false;
            }

            public override void InitializeRequest(HttpContext context) { }
            public override void EndRequest(HttpContext context) { }
            public override void Dispose() { }
        }
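
    One thing worth flagging in the proof of concept, separate from the Session_Start question: GetItemExclusive never reports a lock, so two concurrent requests for the same session would both receive the data "exclusively". A sketch of what honoring the locking contract could look like, assuming a single server and an in-memory lock table (ReadFromFile stands in for the file-reading code above; both names are hypothetical):

        // Hypothetical lock bookkeeping per session id (single-server only).
        private static readonly Dictionary<string, DateTime> locks =
            new Dictionary<string, DateTime>();

        public override SessionStateStoreData GetItemExclusive(HttpContext context, string id,
            out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
        {
            actions = SessionStateActions.None;
            lockId = id;
            lock (locks)
            {
                DateTime lockedAt;
                if (locks.TryGetValue(id, out lockedAt))
                {
                    // Another request holds the session: report the lock so
                    // SessionStateModule waits and retries instead of reading.
                    locked = true;
                    lockAge = DateTime.UtcNow - lockedAt;
                    return null;
                }
                locks[id] = DateTime.UtcNow;
            }
            locked = false;
            lockAge = TimeSpan.Zero;
            return ReadFromFile(id); // hypothetical helper wrapping the file I/O above
        }

        public override void ReleaseItemExclusive(HttpContext context, string id, object lockId)
        {
            lock (locks) { locks.Remove(id); }
        }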

  • Session Update from IASA 2010

    - by [email protected]
    Below: Tom Kristensen, senior vice president at Marsh US Consumer, and Roger Soppe, CLU, LUTCF, senior director of insurance strategy, Oracle Insurance. Tom and Roger participated in a panel discussion on policy administration systems this week at IASA 2010.

    This week was the 82nd Annual IASA Educational Conference & Business Show, held in Grapevine, Texas. While attending the conference, I had the pleasure of serving as a panelist in one of the many outstanding sessions conducted this year. The session - entitled "Achieving Business Agility and Promoting Growth with a Modern Policy Administration System" - included industry experts Steve Forte from OneShield, Mike Sciole of IFG Companies, and Tom Kristensen, senior vice president at Marsh US Consumer. The session was conducted as a panel discussion and focused on how insurers can leverage best practices to mitigate risk while enabling rapid product innovation through a modern policy administration system. The panelists offered insight into business and technical challenges for both Life & Annuity and Property & Casualty carriers. The session had three primary learning objectives:
    - Identifying how replacing a legacy system with a more modern policy administration solution can deliver agility and growth
    - Identifying how processes and systems should be re-engineered or replaced in order to improve speed-to-market and product support
    - Uncovering how to leverage best practices to mitigate risk during a migration to a new platform
    Tom Kristensen, an industry veteran with over 20 years of experience, was able to offer a unique perspective as a business process outsourcer (BPO). Marsh US Consumer is currently implementing both the Oracle Insurance Policy Administration solution and the Oracle Revenue Management and Billing platform while at the same time onboarding a new BPO customer. Tom offered insight on the need to replace their aging systems and Marsh's ability to drive new products and processes with a modern solution. As a best practice, their current project has empowered their business users to play a major role in both the requirements gathering and configuration phases. Tom stated that working with a modern solution has also enabled his organization to use a more agile implementation methodology and get hands-on experience with the software earlier in the project. He also indicated that Marsh was encouraged by how quickly it will be able to implement new products, which is another major advantage of a modern rules-based system. One of the more interesting issues was raised by an audience member who asked, "With all the vendor solutions available in North America and across Europe, what is going to make some of them more successful than others and help ensure their long-term success?" Panelist Mike Sciole of IFG Companies suggested that carriers do their due diligence and follow a structured evaluation process, focusing on vendors who demonstrate they have the "cash to invest in long term R&D", and evaluate audited annual statements for verification. Other panelists suggested that the vendor space will continue to evolve, and those with a strong strategy focused on the insurance industry and a solid roadmap will likely separate themselves from the rest. The session concluded with the panelists offering advice about not being afraid to evaluate new modern systems. While migrating to a new platform can be challenging and is typically undertaken only every 15+ years by carriers, the ability to rapidly deploy and manage new products, create consistent processes to better service customers, and manage their business more effectively, transparently and securely is well worth the effort.

    Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

  • Can I access Spring session-scoped beans from application-scoped beans? How?

    - by Corvus
    I'm trying to make a 2-player web game application using Spring MVC. I have session-scoped beans Player and an application-scoped bean GameList, which creates and stores Game instances and passes them to Players. One player creates a game and gets its ID from GameList; the other player sends the ID to GameList and gets the Game instance. The Game instance has its players as attributes. The problem is that each player sees only himself instead of the other one. Example of what each player sees:

    First player (Alice) creates a game:
        Creator: Alice, Joiner: Empty
    Second player (Bob) joins the game:
        Creator: Bob, Joiner: Bob
    First player refreshes her browser:
        Creator: Alice, Joiner: Alice

    What I want them to see is Creator: Alice, Joiner: Bob. An easy way to achieve this is saving information about players instead of references to players, but the game object needs to call methods on its player objects, so this is not a solution. I think it's because of the aop:scoped-proxy of the session-scoped Player bean. If I understand this correctly, the Game object has a reference to the proxy, which refers to the Player object of the current session. Can the Game instance save/access the other Player objects somehow?

    Beans in dispatcher-servlet.xml:

        <bean id="userDao" class="authorization.UserDaoFakeImpl" />
        <bean id="gameList" class="model.GameList" />
        <bean name="/init/*" class="controller.InitController" >
            <property name="gameList" ref="gameList" />
            <property name="game" ref="game" />
            <property name="player" ref="player" />
        </bean>
        <bean id="game" class="model.GameContainer" scope="session">
            <aop:scoped-proxy/>
        </bean>
        <bean id="player" class="beans.Player" scope="session">
            <aop:scoped-proxy/>
        </bean>

    Methods in controller.InitController:

        private GameList gameList;
        private GameContainer game;
        private Player player;

        public ModelAndView create(HttpServletRequest request, HttpServletResponse response) throws Exception {
            game.setGame(gameList.create(player));
            return new ModelAndView("redirect:game");
        }

        public ModelAndView join(HttpServletRequest request, HttpServletResponse response, GameId gameId) throws Exception {
            game.setGame(gameList.join(player, gameId.getId()));
            return new ModelAndView("redirect:game");
        }

    Called methods in model.GameList:

        public Game create(Player creator) {
            Integer code = generateCode();
            Game game = new Game(creator, code);
            games.put(code, game);
            return game;
        }

        public Game join(Player joiner, Integer code) {
            Game game = games.get(code);
            if (game != null) {
                game.setJoiner(joiner);
            }
            return game;
        }

  • Error during GENERAL_REQUEST_ENTITY for POST results in ASP .NET session state never getting unlocked

    - by Jesse
    I have been trying to chase down the root cause of a condition where ASP.NET session state remains locked after a web request has been terminated due to an unexpected error. We use the SQL Server session state provider because we have several servers in a web farm. This issue first presented itself in the form of many requests getting stuck on the 'AcquireRequestState' event of their lifecycle for no apparent reason. I was able to find corresponding entries for these requests in the session state database in SQL Server that were all locked (column Locked = 1). I was also able to correlate these requests to entries in the IIS log with HTTP status codes of 500 (with a sub-status of 0). These findings lead me to believe that, in some cases, a request was erroring out but was NOT releasing its lock on session state like it should. I enabled Failed Request Tracing in IIS for the website in question for status code 500, with all available providers selected, each with the 'Verbose' verbosity setting. I've since gathered several failed traces that have caused permanently locked ASP.NET sessions. They all share the same characteristics:
    - They are all 'POST' requests where the browser is posting data to be processed/saved.
    - They all have events indicating that the 'Session' module was invoked during the REQUEST_ACQUIRE_STATE event. At this point the request would have marked the row in the session state database as "locked". This is normal and expected.
    - They all have GENERAL_READ_ENTITY_START, GENERAL_READ_ENTITY_END, and GENERAL_REQUEST_ENTITY entries that appear to be reading in the data that was posted to the server as part of the request. This appears to be a buffered operation, as these events get repeated over and over with each one reading in some subset of the posted data.
    - At some point during the 'read entity' related events an error occurs. Some have the error code "Incorrect function. (0x80070001)" and others have "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)".
    - Once the error has been encountered, they all jump directly to the END_REQUEST events.
    The issue here is that, under normal circumstances, there should be a RELEASE_REQUEST_STATE event that allows the Session module to release the lock it has on the session. This event is being skipped in this scenario. Just to be sure, I enabled failed request tracing for the '200' status code as well and generated several traces of successful requests; they do have the RELEASE_REQUEST_STATE event being handled by the Session module. My theory at this point is that some kind of network issue is causing the 'Incorrect function' and 'I/O operation has been aborted' errors, but I don't understand why this seems to cause request handling to skip over the RELEASE_REQUEST_STATE event. If the request went through REQUEST_ACQUIRE_STATE, it seems like it should also hit RELEASE_REQUEST_STATE. I'm loath to say that this is a bug in IIS or ASP.NET, but it certainly appears that way to me at this point. Are there any configuration changes I could make to help ensure that RELEASE_REQUEST_STATE is fired under all error conditions?
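
    Not a fix for the skipped RELEASE_REQUEST_STATE event, but a common way to shrink the blast radius of a stuck lock while the root cause is chased down: handlers that only read session data can declare IReadOnlySessionState, which tells SessionStateModule not to take the exclusive lock for those requests, so they cannot pile up behind a locked session row. A minimal sketch (the handler name and body are illustrative):

        using System.Web;
        using System.Web.SessionState;

        // IReadOnlySessionState is a marker interface: ASP.NET grants this
        // handler read access to session without acquiring the exclusive lock.
        public class ReportHandler : IHttpHandler, IReadOnlySessionState
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                object userId = context.Session["userId"]; // reading is fine; writes don't belong here
                context.Response.ContentType = "text/plain";
                context.Response.Write("user: " + userId);
            }
        }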

  • Custom LightDM sessions to launch an application

    - by zachtib
    I'm trying to set up Ubuntu to act as a kiosk running a custom application, and am trying to get a LightDM session built to automatically start it. Ideally, I'd like to have two sessions available from LightDM. The default would start my application fullscreened, and the other would open a minimal desktop in case any configuration (mostly connecting to a wireless network) needed to be done. I've done a lot of research over the last week on custom LightDM & GNOME sessions. I've got a custom greeter written for LightDM that can start either session, but I can't figure out how to add a specific application to the GNOME session that is ultimately started, without just putting a launcher in the global startup directory, and I don't want to do that since I don't want the application starting when they open "configuration mode". Another problem I've run into with my current workaround is that the application doesn't fullscreen properly, which makes me think I'm not starting enough of a GNOME session (currently it's just metacity; no panel or anything else).

    Edit: I found a solution. See http://www.webupd8.org/2011/11/make-applications-autostart-only-in.html

  • Forcing DFS replication on Windows 2003 Web Edition SP2

    - by Greg B
    I have a pair of web servers running Windows Server 2003 Web Edition SP2 (build 3790). The servers are on a Gigabit LAN and on the same subnet. I created a new link on the 1st server and added the second box as a target to the link. I configured replication as a ring. The 2nd server gets some of the files on the scheduled replication, but this varies wildly, which makes me think it's not running fully. The folder on server 1 is 1.3GB. Is there some way I can force replication (from a console?) and monitor the progress to see if/when/why it fails?

  • Linux stretch cluster: MD replication, DRBD or Veritas?

    - by PieterB
    For the moment there are a lot of choices for setting up a Linux cluster.

    For the cluster manager, you can use Red Hat Cluster Manager, Pacemaker or Veritas Cluster Server. The first one has the most momentum, the second one comes by default with RH subscriptions, and the last one is very expensive and has a very good reputation ;-)

    For storage:
    - You can replicate LUNs using software RAID / md devices
    - You can use the network with DRBD replication, which offers a bit more flexibility
    - You can use Veritas Storage Foundation technology to talk to your SAN's replication technology

    Does anyone have any recommendations or experience with these technologies?

  • Sql server 2000 replication error

    - by Renato
    Good morning, I have a SQL Server 2000 machine with SP4. I have a transactional replication. I keep receiving this error message: "The process could not execute 'sp_replcmds' on 'servername'", and the log reader stops responding. When I click start it starts fine, and the replication starts fine. Then it works for hours, and the problem comes back. In the beginning I thought it could be timeouts, but I have already set a couple of parameters in the log reader profile, like querytimeout/readbatchsize. Sometimes when the log reader stops, it generates a dump, but not always. In the event viewer, this message appears:

        17066 : SQL Server Assertion: File: , line=1985 Failed Assertion = 'startLSN = m_curLSN'.
        18052 : Error: 3624, Severity: 20, State: 1.
        17066 : SQL Server Assertion: File: , line=2223 Failed Assertion = 'm_noOfScAlloc == 0'.

    I also executed CHECKDB on the databases, and they are fine. Have you ever experienced something similar? Thanks in advance, Renato Alves.

  • SQL Server Replication Agent priority

    - by Wikser
    Every hour a server replicates SQL Server data with an external web server. During this time, which takes about 2-5 minutes, the database slows down seriously. Colleagues who work with the front-end applications of that server on another terminal server even regularly start complaining. The databases are also synchronously mirrored (via SQL Server mirroring, not replication) to a third server. Note that 99% of the data is replicated outgoing, so the server should rarely need to update its own data. As the (merge and transactional) replication tasks are not time-critical, I would like to reduce their priority or somehow slow them down so they don't affect database performance that much. How would you implement that?

  • RODC password replication and A/D sites and subnets

    - by Gregory Thomson
    I work at a school district with about 30 school sites. It's a Windows 2008 A/D setup, all central at the district office. In A/D, everything is under one site, with no subnets defined. There is one A/D forest and only one domain under that. We're now looking to start putting RODCs at the schools to move authentication and DNS out there closer to them. I haven't worked with A/D sites and subnets, and only a little with RODC password replication, but I just got an invite to a meeting to talk about this tomorrow... If we start breaking down the A/D pieces into sites/subnets, can we also use that as a way to help apply an RODC password replication policy that matches, so that only each school site's users' passwords are replicated/cached on their RODC?

  • Exchange 07 to 07 mailbox migration using local continuous replication

    - by tacos_tacos_tacos
    I have an existing Exchange server ex0 and a fresh Exchange server ex1, both 2007 SP3. The servers are in different sites, so users cannot access mailboxes on ex1 since, from my understanding, a standalone CAS is required for this. I am thinking of doing the following:
    1. Enable local continuous replication of the storage group on ex0 to a mapped drive that points to the corresponding storage group folder on ex1.
    2. At some point when the replication is done (small number of users and volume of mail), say late at night on the weekend, disable CAS on ex0 (or otherwise redirect requests on the server side from ex0 to ex1) AND change the public DNS name of the CAS so that it points to ex1.
    Will my plan work? If not, please explain what I can do to fix it.

  • Why do jQuery ajax calls fail after session timeout in ASP.NET MVC?

    - by Pandiya Chendur
    When there is a value in my session variable, my ajax calls work properly... but when the session has timed out, they don't seem to work, returning an empty JSON result....

        public JsonResult GetClients(int currentPage, int pageSize)
        {
            if (Session["userId"] != null)
            {
                var clients = clirep.FindAllClients(Convert.ToInt32(Session["userId"])).AsQueryable();
                var count = clients.Count();
                var results = new PagedList<ClientBO>(clients, currentPage - 1, pageSize);
                var genericResult = new { Count = count, Results = results, isRedirect = false };
                return Json(genericResult);
            }
            else
            {
                var genericResult = new { redirectUrl = Url.Action("Create", "Registration"), isRedirect = true };
                return Json(genericResult);
            }
        }

    However, the else part doesn't seem to work....

        success: function(data) {
            alert(data.Results);
            if (data.isRedirect) {
                window.location.href = data.redirectUrl;
            }
        }

    EDIT: My global.asax has this:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Clients",
                "Clients/{action}/{id}",
                new { controller = "Clients", action = "Index", id = "" }
            );

            routes.MapRoute(
                "Registrations",
                "{controller}/{action}/{id}",
                new { controller = "Registration", action = "Create", id = "" }
            );
        }
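
    For what it's worth, one way to centralize this redirect logic instead of repeating the if/else in every action is an action filter. A sketch assuming ASP.NET MVC 2 or later; SessionExpireFilter is a hypothetical name, and the "userId" key and Registration/Create target mirror the question's code. Note that if forms authentication intercepts the timed-out call first, the ajax success handler receives the login page's HTML rather than JSON, which would also explain the else branch never appearing to run:

        using System.Web.Mvc;

        // Applied to controllers/actions that need a live session.
        // Illustrative, not a built-in MVC attribute.
        public class SessionExpireFilter : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var session = filterContext.HttpContext.Session;
                if (session != null && session["userId"] == null)
                {
                    var urlHelper = new UrlHelper(filterContext.RequestContext);
                    // Short-circuit the action and answer with the redirect JSON.
                    filterContext.Result = new JsonResult
                    {
                        Data = new
                        {
                            redirectUrl = urlHelper.Action("Create", "Registration"),
                            isRedirect = true
                        },
                        JsonRequestBehavior = JsonRequestBehavior.AllowGet
                    };
                }
                base.OnActionExecuting(filterContext);
            }
        }

    Decorating ClientsController with [SessionExpireFilter] would then replace the per-action if/else.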

  • SQLAlchemy: who is in charge of the "session"? (and how to unit-test with sessions)

    - by Nick Perkins
    I need some guidance on how to use session objects with SQLAlchemy, and how to organize unit tests of my mapped objects. What I would like to be able to do is something like this:

        thing = BigThing()          # mapped object
        child = thing.new_child()   # create and return a related object
        thing.save()                # will also save the child object

    In order to achieve this, I was thinking of having the BigThing actually add itself (and its children) to the database -- but maybe this is not a good idea? One reason to add objects as soon as possible is automatic id values that are assigned by the database -- the sooner they are available, the fewer problems there are (right?). What is the best way to manage session objects? Who is in charge of the session? Should it be created only when required, or kept around for a long time? What about unit tests for my mapped objects? How should the session be handled there? Is it ever OK to have mapped objects just automatically add themselves to a database, or is that going to lead to trouble?

  • How to Treat Race Condition of Session in Web Application?

    - by Morgan Cheng
    I work on an ASP.NET application that has heavy traffic of AJAX requests. Once a user logs in to our web application, a session is created to store information about this user's state. Currently, our solution to keep session data consistent is quite simple and brutal: each request needs to acquire an exclusive lock before being processed. This works fine for a traditional web application. But when the web application turns to supporting AJAX, it turns out to be inefficient. It is quite possible that multiple AJAX requests are sent to the server at the same time without reloading the web page. If all AJAX requests are serialized by the exclusive lock, the responses are not so quick. Also, many AJAX requests that don't access the same session variables are blocked as well. If we don't have an exclusive lock for each request, then we need to treat all race conditions carefully to avoid deadlock. I'm afraid that would make the code complex and buggy. So, is there any best practice to keep session data consistent and keep the code simple and clean?
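
    A middle ground between one exclusive lock per request and unprotected access is locking per session key, so concurrent AJAX requests serialize only when they actually touch the same variable. A minimal in-process sketch, assuming a single web server (a web farm would need a distributed equivalent); the names are illustrative:

        using System.Collections.Concurrent;

        // One lock object per (sessionId, key) pair; requests touching
        // different keys in the same session can proceed in parallel.
        public static class SessionKeyLocks
        {
            private static readonly ConcurrentDictionary<string, object> Locks =
                new ConcurrentDictionary<string, object>();

            public static object For(string sessionId, string key)
            {
                return Locks.GetOrAdd(sessionId + "|" + key, _ => new object());
            }
        }

        // Usage inside a request handler:
        //   lock (SessionKeyLocks.For(sessionId, "cart"))
        //   {
        //       // read-modify-write only the "cart" value here
        //   }

    Deadlock stays avoidable with the usual discipline: never hold one key's lock while waiting for another, or always acquire keys in a fixed order. A production version would also need to evict lock entries when sessions expire.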

  • How to store session values with Node.js and mongodb?

    - by Tirithen
    How do I get sessions working with Node.js, [email protected] and MongoDB? I'm now trying to use connect-mongo like this:

        var config = require('../config'),
            express = require('express'),
            MongoStore = require('connect-mongo'),
            server = express.createServer();

        server.configure(function() {
            server.use(express.logger());
            server.use(express.methodOverride());
            server.use(express.static(config.staticPath));
            server.use(express.bodyParser());
            server.use(express.cookieParser());
            server.use(express.session({
                store: new MongoStore({ db: config.db }),
                secret: config.salt
            }));
        });

        server.configure('development', function() {
            server.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
        });

        server.configure('production', function() {
            server.use(express.errorHandler());
        });

        server.set('views', __dirname + '/../views');
        server.set('view engine', 'jade');

        server.listen(config.port);

    Then, in a server.get callback, I'm trying to use req.session.test = 'hello'; to store that value in the session, but it's not stored between requests. It probably takes something more than this to store session values. How? Is there a better-documented module than connect-mongo?
