Search Results

Search found 24784 results on 992 pages for 'process integration packs'.


  • Clint Edmonson Talks Season of Launch | AJI Report #11

    - by Jeff Julian
    We are back in the office for another installment of the AJI Report, where we talk with Clint Edmonson of Microsoft about their Season of Launch events. We get into Windows Azure, Windows 8, and Visual Studio 2012, and how developers and decision makers can learn more about the new products. Clint is an amazing resource for the Central Region and is very responsive if you have questions about products or integration. Clint makes a great offer to help you with your applications during the Hackathon events coming up. Listen to the show. Site: Not So Trivial. Twitter: @ClintEd

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:

    - More easily absorb shocks due to various forms of financial-economic stress
    - Improve risk management and governance
    - Enhance regulatory reporting and transparency

    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short- and long-term impacts on the banking industry. Some of the key features of Basel III include:

    - A stronger capital base: more stringent capital standards, higher capital requirements, and the introduction of capital buffers
    - Additional risk coverage: enhanced quantification of counterparty credit risk, credit valuation adjustments, wrong-way risk, and an asset value correlation multiplier for large financial institutions
    - Liquidity management and monitoring, including the introduction of a leverage ratio
    - Even more rigorous data requirements

    To implement these features, banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models, and compliance.

    Data challenges:

    - Data quality: all standard dimensions of data quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome, and automation has become the norm.
    - Data lineage: data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation quickly outdated.
    - Data dictionaries: a strong and clean business glossary is needed, with proper identification of business owners for the data.
    - Data integrity: a strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
    - Data relevance/coverage: data must be relevant to all portfolios, and storage devices must allow for sufficient data retention. Coverage of both on- and off-balance-sheet exposures is critical.

    Model challenges:

    - Model development: requires highly trained resources with both quantitative and subject matter expertise.
    - Model validation: all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
    - Model documentation: all models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
    - Risk and finance integration: this integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management, and the two need to somehow be equal. This is tricky at best from an implementation perspective.

    Compliance challenges:

    - Rules interpretation: some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
    - Gap identification and remediation: internal identification and remediation of gaps ensures smoother Basel compliance and audit processes. However, business lines are challenged by the competing priorities that arise from regulatory compliance and business-as-usual work.
    - Qualification readiness: providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.

    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider:

    - How will Basel III play into their risk appetite?
    - How will they create project plans for Basel III when they haven't yet finished implementing Basel II?
    - How will new regulations impact capital structure, including profitability and capital distributions to shareholders?

    After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:

    - Increases consumer confidence, which in turn supports banking activity
    - Ensures that adequate funding is available for individuals and companies
    - Puts regulators at ease, allowing bankers to focus on banking

    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor, and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today: some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations.

    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analysis.

    Read the article

  • Using Window Handle to disable Mouse clicks and Keyboard Inputs using c#

    - by srk
    I need to disable mouse clicks, mouse movement, and keyboard input for a specific window in a kiosk application. Is this feasible in C#? I have removed the menu bar and title bar of a specific window; will that be a starting point to achieve the above requirement? The code for removing the menu bar and title bar using the window handle:

        #region Constants
        // Finds a window by class name
        [DllImport("USER32.DLL")]
        public static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

        // Sets a window to be a child window of another window
        [DllImport("USER32.DLL")]
        public static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

        // Sets window attributes
        [DllImport("USER32.DLL")]
        public static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong);

        // Gets window attributes
        [DllImport("USER32.DLL")]
        public static extern int GetWindowLong(IntPtr hWnd, int nIndex);

        [DllImport("user32.dll", EntryPoint = "FindWindow", SetLastError = true)]
        static extern IntPtr FindWindowByCaption(IntPtr ZeroOnly, string lpWindowName);

        [DllImport("user32.dll")]
        static extern IntPtr GetMenu(IntPtr hWnd);

        [DllImport("user32.dll")]
        static extern int GetMenuItemCount(IntPtr hMenu);

        [DllImport("user32.dll")]
        static extern bool DrawMenuBar(IntPtr hWnd);

        [DllImport("user32.dll")]
        static extern bool RemoveMenu(IntPtr hMenu, uint uPosition, uint uFlags);

        // assorted constants needed
        public static uint MF_BYPOSITION = 0x400;
        public static uint MF_REMOVE = 0x1000;
        public static int GWL_STYLE = -16;
        public static int WS_CHILD = 0x40000000;    // child window
        public static int WS_BORDER = 0x00800000;   // window with border
        public static int WS_DLGFRAME = 0x00400000; // window with double border but no title
        public static int WS_CAPTION = WS_BORDER | WS_DLGFRAME; // window with a title bar
        public static int WS_SYSMENU = 0x00080000;  // window menu
        #endregion

        public static void WindowsReStyle()
        {
            Process[] procs = Process.GetProcesses();
            foreach (Process proc in procs)
            {
                if (proc.ProcessName.StartsWith("notepad"))
                {
                    IntPtr pFoundWindow = proc.MainWindowHandle;
                    int style = GetWindowLong(pFoundWindow, GWL_STYLE);

                    // get the menu and its item count
                    IntPtr hMenu = GetMenu(proc.MainWindowHandle);
                    int count = GetMenuItemCount(hMenu);

                    // loop & remove every menu item
                    for (int i = 0; i < count; i++)
                        RemoveMenu(hMenu, 0, (MF_BYPOSITION | MF_REMOVE));

                    // force a redraw, then strip the system menu and caption
                    DrawMenuBar(proc.MainWindowHandle);
                    SetWindowLong(pFoundWindow, GWL_STYLE, (style & ~WS_SYSMENU));
                    SetWindowLong(pFoundWindow, GWL_STYLE, (style & ~WS_CAPTION));
                }
            }
        }
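    One possible next step, sketched under the assumption that blocking input to that one window is enough: the Win32 EnableWindow function makes a window ignore mouse clicks and key presses (it cannot stop the cursor from moving over the window, since cursor movement is global). This is a minimal sketch in the style of the code above, not a tested kiosk solution:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        public static class InputBlocker
        {
            // EnableWindow(hWnd, false) makes the target window ignore all
            // mouse and keyboard input until it is re-enabled.
            [DllImport("user32.dll")]
            static extern bool EnableWindow(IntPtr hWnd, bool bEnable);

            public static void SetInputEnabled(string processName, bool enabled)
            {
                foreach (Process proc in Process.GetProcessesByName(processName))
                {
                    EnableWindow(proc.MainWindowHandle, enabled);
                }
            }
        }

        // Usage: InputBlocker.SetInputEnabled("notepad", false);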

    Read the article

  • Google Checkout, OpenId, and downloadable products

    - by craigmoliver
    I'm going to use Google Checkout to process orders for downloadable content. When the order process is completed via Google Checkout, I'd like the user to be able to come back to my site, authenticate with the same Google credentials (OpenID?) they purchased the item with, and download the goods. The site is written using C# and ASP.NET MVC. Is this possible, or how should I rethink this? Are there open-source libraries to get me started?
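    For the OpenID half, one hedged possibility is the open-source DotNetOpenAuth library. A rough MVC sketch, assuming you store the buyer's claimed identifier against the order when Google Checkout completes it (the controller, action, and lookup below are hypothetical):

        using System.Web.Mvc;
        using DotNetOpenAuth.Messaging;
        using DotNetOpenAuth.OpenId;
        using DotNetOpenAuth.OpenId.RelyingParty;

        public class DownloadController : Controller
        {
            private static readonly OpenIdRelyingParty Openid = new OpenIdRelyingParty();

            public ActionResult Authenticate()
            {
                IAuthenticationResponse response = Openid.GetResponse();
                if (response == null)
                {
                    // First leg: send the user to Google's OpenID endpoint.
                    IAuthenticationRequest request =
                        Openid.CreateRequest("https://www.google.com/accounts/o8/id");
                    return request.RedirectingResponse.AsActionResult();
                }

                if (response.Status == AuthenticationStatus.Authenticated)
                {
                    // Second leg: match the identifier to a purchase record
                    // (hypothetical lookup) and hand over the download.
                    string claimedId = response.ClaimedIdentifier;
                    return RedirectToAction("Download", new { buyer = claimedId });
                }

                return View("AuthenticationFailed");
            }
        }

    One caveat worth checking: the identity Google asserts via OpenID is not automatically the same identity Google Checkout reports for the order, so the link between the two still has to be stored on your side.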

    Read the article

  • Linux ext3 readdir and concurrent updates

    - by Wangnick
    Dear all, we are receiving about 10,000 messages per hour. We store them as individual files in hourly directories on an ext3 filesystem; the file name includes a sequence number. We use rsync to mirror these files every 20 seconds to another location (via a SAN, but that doesn't matter). Sometimes an rsync run picks up files n-3, n-2, n-1, n+1, and then the next rsync run continues with n, n+2, n+3, n+4, and so on. Is it possible that when one process creates files in a certain sequence within a directory, another process using readdir() sees the files appear in a different sequence? Kind regards, Sebastian

    Read the article

  • Hosted version of Yahoo! answers

    - by Neil
    Hi all. Does anyone know of a hosted version of Yahoo! Answers (or Stackoverflow/Superuser) that I could integrate with my site? I know that there are some open-source implementations (see http://meta.stackoverflow.com/questions/2267/stack-overflow-clones?page=1&tab=votes#tab-top), but I'd rather have something hosted if possible. I know there is http://stackexchange.com/ as well, but I really want tight integration with the rest of my site. Failing that, has anyone got any experience with the open-source versions? Some of them look a little, erm, unfinished... Thanks. Neil.

    Read the article

  • java share data between thread

    - by ayush
    I have a Java process that reads data from a socket server, so I have a BufferedReader and a PrintWriter object corresponding to that socket. Now, in the same Java process, I have a multithreaded Java server that accepts client connections. I want to achieve a functionality where all the clients I accept can read data from the single BufferedReader object mentioned above (so that they can multiplex the data). How do I make these individual client threads read data from that single BufferedReader object? Sorry for the confusion.

    Read the article

  • Microsoft Detours

    - by Bruce
    I am new to Microsoft Detours. I have installed it to trace the system calls a process makes. I run the following commands, which I got from the web:

        syelogd.exe /q C:\Users\xxx\Desktop\log.txt
        withdll.exe /d:traceapi.dll C:\Program Files\Google\Google Talk\googletalk.exe

    I get the log file. The problem is I don't fully understand what is happening here. How does Detours work? How does it trace the system calls? Also, I don't know how to read the output in log.txt. Here is one line in log.txt:

        20101221060413329 2912 50.60: traceapi: 001 GetCurrentThreadId()

    Finally, I want to get the stack trace of the process. How can I get that?

    Read the article

  • How can I use Code Contracts in a C++/CLI project?

    - by Daniel Wolf
    I recently stumbled upon Code Contracts and have started using them in my C# projects. However, I also have a number of projects written in C++/CLI. For C# and VB, Code Contracts offer a handy configuration panel in the project properties dialog. For a C++/CLI project, there is no such panel. From the documentation, I got the impression that adding Code Contracts support to a C++/CLI project should be a simple matter of calling some external tools as part of the build process (namely ccrefgen.exe, cccheck.exe, and ccrewrite.exe). However, the number of command line options and restrictions concerning the call sequence have me somewhat intimidated. Can anybody point me to a simple way to run the Code Contracts tools as an automated part of the build process in Visual Studio?

    Read the article

  • architecture python question

    - by tom smith
    Hi. I'm creating a distributed crawling Python app. It consists of a master server and associated client apps that will run on client servers. The purpose of the client app is to run across a targeted site to extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like:

        main:
            parse initial url
            call function level1 (data1)

        function level1 (data1):
            parse the url for data1
            use the required xpath to get the dom elements
            call the next function: level2 (data2)

        function level2 (data2):
            parse the url for data2
            use the required xpath to get the dom elements
            call the next function: level3 (data3)

        function level3 (data3):
            parse the url for data3
            use the required xpath to get the dom elements
            call the next function: level4 (data4)

        function level4 (data4):
            parse the url for data4
            use the required xpath to get the dom elements
            -- at the final function, all the data is output and eventually
            -- returned to the server; at this point the data has elements
            -- from each function

    My question: given that the number of calls made to the child function by the current function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page.

    If I run a client on a single box as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of implementing the child functions as threads spawned from the current function, but that could be a nightmare, as well as quickly bring the "box" to its knees!

    I've thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, so that each client/function could be run directly from the master. This approach requires a bit of a rewrite, but it has a number of advantages: a bunch of redundancy, and speed. It would detect if a section of the process crashed and restart from that point. But I'm not sure if it would be any faster...

    I'm writing the parsing scripts in Python... so... any thoughts/comments would be appreciated... I can get into a great deal more detail, but didn't want to bore anyone!! Thanks! Tom

    Read the article

  • What happens when you run out of ram with mlockall set?

    - by James Dean
    I am working on a C++ application that requires a large amount of memory for a batch run (20 GB). Some of my customers are running into memory limits where sometimes the OS starts swapping and the total run time doubles or worse. I have read that I can use mlockall to keep the process from being swapped out. What would happen when the process memory requirements approach or exceed the available physical memory in this way? I guess the answer might be OS specific, so please list the OS in your answer.

    Read the article

  • How to properly add webapps to unity on 13.10

    - by Germano Lisboa
    I'm new to Ubuntu, and one of the features that made me change from Windows was the integrated webapps. I installed Ubuntu 13.10, then installed the unity-webapps-service and the unity-chromium-extension via the terminal. Once I opened Gmail, Facebook, Google Docs, etc., they all offered to install on Ubuntu. But all I get is the icon in the applications menu; there's no integration with the top bar or the HUD. Does anyone know how I can solve this?

    Read the article

  • Database on the fly with scripting languages

    - by afilatun
    I have a set of .csv files that I want to process. It would be far easier to process them with SQL queries. I wonder if there is some way to load a .csv file and use SQL to look into it with a scripting language like Python or Ruby. Loading it with something similar to ActiveRecord would be awesome. The problem is that I don't want to have to run a database somewhere prior to running my script. I shouldn't need additional installations outside of the scripting language and some modules. My question is which language and what modules should I use for this task. I looked around and can't find anything that suits my needs. Is it even possible?

    Read the article

  • Is the redistributable ReportViewer 2010 RC available in other languages?

    - by pinkmuppet
    I need to deploy the language packs for the ReportViewer 2010 control (the English one is installed and working perfectly). Before, with ReportViewer 2008 and 2005, all the supported languages were available on the MS downloads site. I can't seem to find them for the RC of 2010. Are they available anywhere? From MSDN:

    To use the localized version of the ReportViewer control redistributable that comes with Visual Studio, do the following:

    1. Run ReportViewer.exe.
    2. Navigate to the folder that contains the language pack you want to use. Language pack folders are located at %PROGRAMFILES%\Microsoft SDKs\Windows\v7.0A\BootStrapper\Packages\ReportViewer\.
    3. Run ReportViewerLP.exe.

    Is there a generic language pack for VS 2010 RC that would have the localized report viewers as well?

    Read the article

  • Webcast: Introducing the New Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    The Fastest Way to Virtualize Your Datacenter

    Join our webcast "Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM"
    Tues., January 25, 2011, 9 a.m. PT / 12 p.m. ET

    Presented by:
    - Marc Shelley, Senior Manager, Oracle Blades Product Management
    - Tom Lisjac, Senior Member, Oracle Technical Staff

    Register now for our live webcast!

    The Oracle VM blade cluster reference configuration addresses the key challenges associated with deploying a virtualization infrastructure. It eliminates or significantly reduces, by up to 98%, the assembly and integration of the following components:

    - Servers
    - Storage
    - Network connections
    - Virtualization software
    - Operating systems

    Attend this fact-filled, how-to webcast with Oracle experts to learn the best practices for deploying the reference configuration for Oracle VM Server for x86 and Sun Blade and Sun Fire x86 rack mount servers. Virtualization is easier than ever with this new configuration.

    For more information, see:
    - Oracle white paper: Accelerating deployment of virtualized infrastructures with the Oracle VM blade cluster reference configuration
    - Oracle technical white paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration

    Read the article

  • Test case as a function or test case as a class

    - by GodMan
    I am having a design problem in test automation. Requirements: I need to test different servers (using a Unix console, not a GUI) through the automation framework. Tests I'm going to run: unit, system, integration. Question: while designing a test case, I am thinking that a test case should be part of a test suite (where the test suite is a class), just as we have in Python's pyunit framework. But should we keep test cases as functions for a scalable automation framework, or should we keep test cases as separate classes (each having its own setup, run, and teardown methods)? From an automation perspective, is the idea of having a test case as a class more scalable and maintainable than having it as a function? A minimal sketch of the class-per-test-case shape is below.
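    For illustration only, here is a minimal sketch (in C#, though the shape is the same in pyunit-style frameworks) of the test-case-as-class option being discussed, where each case owns its setup/run/teardown lifecycle; all names are hypothetical:

        // Each test case is its own class with a full lifecycle.
        public abstract class TestCase
        {
            public virtual void Setup() { }     // per-case fixture creation
            public abstract void Run();         // the actual checks
            public virtual void Teardown() { }  // per-case cleanup

            public void Execute()
            {
                Setup();
                try { Run(); }
                finally { Teardown(); }         // cleanup runs even on failure
            }
        }

        public class LoginTest : TestCase
        {
            public override void Run()
            {
                // connect to the server console and assert on the result here
            }
        }

        // A test suite is then just a collection of cases:
        //   foreach (TestCase t in suite) t.Execute();

    The trade-off: classes give each case isolated state and reusable fixtures via inheritance, at the cost of more boilerplate than plain functions.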

    Read the article

  • How can I get elements out of an array with Template Toolkit?

    - by Przemek
    I have an array of paths that I want to read out with Template Toolkit. How can I access the elements of this array? The situation is this:

        my @dirs;
        opendir(DIR, './directory/') || die $!;
        @dirs = readdir(DIR);
        closedir DIR;
        $vars->{'Tree'} = \@dirs;   # store a reference so the template sees a list

    Then I call the template page like this:

        $template->process('create.tmpl', $vars)
            || die "Template process failed: ", $template->error(), "\n";

    In this template I want to make a tree of the directories in the array. How can I access them? My idea was to start with a FOREACH in the template, like this:

        [% FOREACH dir IN Tree %]
            [% dir %]
        [% END %]

    Read the article

  • svnlook always returns an error and no output

    - by Pierre-Alain Vigeant
    I'm running this small C# test program, launched from a pre-commit batch file:

        private static int Test(string[] args)
        {
            var processStartInfo = new ProcessStartInfo
            {
                FileName = "svnlook.exe",
                UseShellExecute = false,
                ErrorDialog = false,
                CreateNoWindow = true,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                Arguments = "help"
            };

            using (var svnlook = Process.Start(processStartInfo))
            {
                string output = svnlook.StandardOutput.ReadToEnd();
                svnlook.WaitForExit();
                Console.Error.WriteLine("svnlook exited with error 0x{0}.", svnlook.ExitCode.ToString("X"));
                Console.Error.WriteLine("Current output is: {0}", string.IsNullOrEmpty(output) ? "empty" : output);
                return 1;
            }
        }

    I am deliberately calling svnlook help and forcing an error so I can see what is going on when committing. When this program runs, SVN displays:

        svnlook exited with error 0xC0000135.
        Current output is: empty

    I looked up the error 0xC0000135 and it means "App failed to initialize properly", although what I found wasn't specific to svnlook. Why is svnlook help not returning anything? Does it fail when executed through another process?

    Read the article

  • Rendering javascript at the server side level. A good or bad idea?

    - by davidhong
    I want to make it clear first: this isn't a question about server-side JavaScript or running JavaScript server side. This is a question about rendering JavaScript code (which will be executed on the client side) from server-side code. Having said that, take a look at the ASP.NET code below, for example:

        hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")

    This prescribes the client-side onclick event on the server side. As opposed to:

        $('a[rel=remove]').bind('click', function(event) {
            return confirm('Are you sure you want to delete this?');
        });

    Now the question I want to ask is: what is the benefit of rendering JavaScript from server-side code, or vice versa? I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements, for the following reasons:

    - The server side already does whatever it needs to, including data validation, event delegation, and so on.
    - What the server side sees as an event is not necessarily the same process on the client side; i.e., there are plenty more events on the client side (just look at custom events).
    - What happens on the client side and on the server side during an event could be completely irrelevant to each other and decoupled.
    - Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not up to the server to decide in the case of client-side events.
    - And so on and so forth.

    These are my thoughts, obviously. I want to know what others think and whether there have been any discussions on this topic. Topics branching from this argument include:

    - Code management: is it easier to render everything from the server side?
    - Separation of concerns: is it easier if client-side logic is separated from server-side logic?
    - Efficiency: which is more efficient, both in terms of coding and running?

    At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

    Read the article

  • Testing install procedure of a program requiring administrative privileges

    - by Lucas Meijer
    I'm trying to write automated tests to ensure that the installer for my program works okay. The program can be installed for all users (requires admin privileges) or for the current user (does not require admin privileges). The program can also auto-update itself, which in some cases requires admin privileges and in some cases doesn't. I'm looking for a way to have an automated test click "Yes, Allow" on the UAC dialogs, so I can write tests for all the different scenarios on many different operating systems, and so I can be confident when I make changes to the installer that I didn't break anything. Obviously, the installer process itself cannot do this. However, I control the complete machine, and could easily start some sort of daemon process with administrative rights that the test program could make a socket connection to, to request it to "please click OK on the UAC now". A sketch of that plumbing is below.
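    Purely as an illustration of the daemon plumbing described above: a minimal C# sketch of an elevated helper listening on a local socket. The port and one-line protocol are made up, and actually dismissing the UAC prompt (which runs on the secure desktop) is the unsolved part this question is really asking about:

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class UacHelperDaemon
        {
            static void Main()
            {
                // Hypothetical port; the daemon itself runs elevated.
                var listener = new TcpListener(IPAddress.Loopback, 9000);
                listener.Start();
                while (true)
                {
                    using (TcpClient client = listener.AcceptTcpClient())
                    using (var reader = new StreamReader(client.GetStream()))
                    using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
                    {
                        string command = reader.ReadLine();
                        if (command == "confirm-uac")
                        {
                            // Elevated work would go here; clicking the real UAC
                            // prompt is the hard part, since it is shown on the
                            // secure desktop and is deliberately unscriptable.
                            writer.WriteLine("ok");
                        }
                    }
                }
            }
        }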

    Read the article

  • git: Is it possible to save the packed objects of a dry run and push them later?

    - by shovavnik
    I'm trying to push a bunch of commits that contain a lot of code and a few thousand MP3 and PDF files besides (ranging from 5-40 MB each). Git successfully packs the objects:

        C:\MyProject> git push
        Counting objects: 7582, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (7510/7510), done.

    But it fails to send the push for some as yet unknown reason. The problem is that it takes a very long time to repack the files (I'm on a battery-powered laptop and it took about 20 minutes to pack). So I guess my question can be phrased thus:

    1. Is it possible to save the packed objects created in a dry run?
    2. Once saved, is it possible to push those packed objects and avoid repacking?

    I looked it up in the git manual and elsewhere and couldn't find anything conclusive. Any help or pointers are appreciated.

    Read the article

  • Automate Testing on future only items business rules

    - by Titan
    I currently have a business object with a validation business rule: it can only be created for the future (tomorrow onwards), and I cannot create new items for today. I have a process which runs the non-future business objects through some steps. Because I have to set things up today and test tomorrow, when a test fails I can only create a new object tomorrow and test it the following day. Are there any easy ways to automate this process in any testing frameworks? One common approach, sketched below, is to abstract the clock. I think our testers are using the Visual Studio 2010 Test Manager. How do you guys manage situations like this? Cheers
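    For illustration, a hedged C# sketch of the clock-abstraction idea, assuming the validation rule can be handed its notion of "today" instead of calling DateTime.Today directly (all names here are hypothetical):

        using System;

        // Hypothetical clock abstraction so tests can control "today".
        public interface IClock
        {
            DateTime Today { get; }
        }

        public class SystemClock : IClock
        {
            public DateTime Today { get { return DateTime.Today; } }
        }

        public class FakeClock : IClock
        {
            public DateTime Today { get; set; }
        }

        public class BusinessItem
        {
            // The future-only rule checks the injected clock, not the real date.
            public static BusinessItem Create(DateTime date, IClock clock)
            {
                if (date <= clock.Today)
                    throw new ArgumentException("Items can only be created for future dates.");
                return new BusinessItem();
            }
        }

        // In a test: create an item for "tomorrow", then advance the fake
        // clock by one day so the same item becomes non-future immediately,
        // with no real waiting:
        //   var clock = new FakeClock { Today = new DateTime(2012, 6, 1) };
        //   var item = BusinessItem.Create(new DateTime(2012, 6, 2), clock);
        //   clock.Today = new DateTime(2012, 6, 2);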

    Read the article

  • Init modules in apache2

    - by user306963
    Hello, I used to write Apache modules for Apache 1.3, but these days I am moving to Apache 2. The module I am writing at the moment has its own binary data, not a database, for performance purposes. I need to load this data into shared memory so every child can access it without making its own copy, and it would be practical to load/create the binary data at startup, as I used to do with Apache 1.3. The problem is that I can't find an init event in Apache 2. In 1.3, in the module struct immediately after STANDARD_MODULE_STUFF, you find a place for a /** module initializer */, in which you can put a function that will be executed early. The body of the function I used to write is something like:

        if (getppid() == 1) {
            // this is the parent process: load the global data here
            void* data = loadGlobalData(someFilePath);
            setGlobalData(config, data);
        } else {
            // this is the init of a child process: do nothing
        }

    I am looking for a place in Apache 2 where I can put a similar function. Can you help? Thanks, Benvenuto

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it gathers a list of batches and recursively calls itself in order to iterate over the batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure
        AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status = 0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table
                SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return.

    From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

        SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds.

    I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But the recursive calls should have the same spid, right?)

    In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait hours while my server sits dead pretending to finish a supposed rollback. Is there some way I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how can I rewrite my query in a better way, or kill the process successfully without restarting the server?

    Read the article

  • The best Windows 7 virtual desktop tool by far… Dexpot

    - by Eric Nelson
    [Oh, and Windows XP, Vista, etc.] Every so often I yearn for the virtual desktop functionality that is implemented so well under Linux. Unfortunately, every time I start looking for a great tool for Windows I ultimately end up disappointed. But... I think this time around I have actually found one that will outlast the first day or two and become a must-have. Check out http://www.dexpot.de/. So far this is 100% stable, 100% sensible, and offers awesome functionality, yet is still very simple to use. There is a detailed look at the many features on the site, but a couple that do it for me:

    - The Desktop Manager and next/previous tray icons make it easy to navigate around.
    - An announcement of the desktop name is shown as a desktop takes focus.
    - And best of all, Windows 7 preview integration.

    And... it is FREE for private use, and you get 30 days to try it out for professional use (e.g. me).

    Read the article
