Search Results

Search found 66801 results on 2673 pages for 'near real time'.

Page 503/2673

  • How to reduce errors in a dynamic language such as Python, and improve my code quality

    - by Martin Luo
    I posted the original question on Stack Overflow, and some people suggested I post it here. I've always had trouble with dynamic languages like Python. Several problems: Typos - I can use pylint to catch some of these errors, but there are still errors that pylint cannot find. Object type errors - I often forget what type a parameter is: int? str? Some object? I also forget the types of some objects in my code. Unit tests might help sometimes, but I don't always have enough time to write them. When I need a script to do a small job, it is 100-200 lines of code - not big - but I don't have time to write unit tests because I need the script as soon as possible. So many errors appear. Any ideas on how to reduce the number of these problems?
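
    One direction often suggested for this class of problem (a minimal sketch, not taken from the question itself): add type annotations so a static checker such as mypy, or an IDE's type inspection, can catch "forgot what type the parameter is" mistakes before the script ever runs. The Order/order_total names below are made up purely for illustration.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Order:
            item: str
            quantity: int
            unit_price: float

        def order_total(orders: List[Order]) -> float:
            """Annotations document the expected types and let a checker verify every call."""
            return sum(o.quantity * o.unit_price for o in orders)

        print(order_total([Order(item="widget", quantity=5, unit_price=1.25)]))  # 6.25

        # A checker such as mypy flags this line before the script runs:
        # order_total([Order(item="widget", quantity="5", unit_price=1.25)])  # "5" is a str, not an int

    The annotations cost a few keystrokes per function but turn a whole class of runtime surprises into pre-run warnings, which is useful precisely when there is no time for unit tests.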

    Read the article

  • Know Thy Operating System?

    - by AdityaGameProgrammer
    As developers, how much time do you spend, if any, learning the hidden features and tricks of your operating system? How important do you feel this is for productivity in day-to-day programming tasks? What do you mean when you list knowledge of an OS on your resume? What are your favorite hidden or lesser-known features? For example: a common problem such as "how can I open the cmd window in a specific location", a do-it-yourself solution in, say, XP, and what to do if something breaks. Are these things you look into only as and when you find the need to do so?

    Read the article

  • How do I get rid of this drive mount confirmation question when booting the computer?

    - by Dave M G
    With help from this site, I was able to set up an SSHFS connection between two computers on my LAN so that one auto-mounts on the other at boot time. Everything works, but this annoying confirmation comes up whenever I boot: "An error occurred while mounting /home/dave/Mythbuntu. Press S to skip mounting or M for manual recovery." If I press S, booting continues and my drive is mounted as hoped, so it seems that even though I "skipped" it, it tried again and succeeded later in the boot process. I followed the instructions here to set up ifup/ifdown scripts, and here is my current /etc/fstab: sshfs#[email protected]:/home/mythbuntu /home/dave/Mythbuntu fuse auto,users,exec,uid=1000,gid=1000,allow_other,reconnect,transform_symlinks,BatchMode=yes 0 0 Although the mounting is working, having to press S every time I boot is obviously a hassle. How do I configure my computer so I don't have to do that, while my other computer still auto-mounts?
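
    One possible direction, offered as an assumption rather than a confirmed fix: on upstart-era Ubuntu, the nobootwait mount option tells mountall not to hold up the boot (or prompt) while a network filesystem such as SSHFS comes up. A hypothetical variant of the fstab line above, with user@host standing in for the real address:

        # Hypothetical /etc/fstab line -- 'nobootwait' (mountall-era Ubuntu) suppresses the boot-time prompt
        sshfs#user@host:/home/mythbuntu /home/dave/Mythbuntu fuse nobootwait,auto,users,exec,uid=1000,gid=1000,allow_other,reconnect,transform_symlinks,BatchMode=yes 0 0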

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic Resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.

    WebLogic Resource Permissions

    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource credentials to control which WLS users are allowed to access JDBC resources. These can be defined on the Policies tab on the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open - anyone can do anything. As soon as you add one policy for a permission, all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and users can get to your database.

    There are three use cases:
    - getConnection() -- Use database credentials: true or false -- User for permission checking: current WLS user
    - getConnection(user, password) -- Use database credentials: false -- User for permission checking: user/password from the API
    - getConnection(user, password) -- Use database credentials: true -- User for permission checking: current WLS user

    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.

    Example

    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.

    On the database side, the following objects are configured.
    - Database users scott; jdbcqa; jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;

    The following WebLogic Data Source objects are configured.
    - Users weblogic, wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"

    Results for each combination of Use DB Credentials and Identity based:

    Use DB Credentials = true, Identity based = true
    - getConnection(scott,***): Identity scott; Client weblogic; Proxy null
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3; Client weblogic; Proxy null
    - getConnection(): Default user jdbcqa; Client weblogic; Proxy null

    Use DB Credentials = false, Identity based = true
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): User scott; Client scott; Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): User scott; Client scott; Proxy null

    Use DB Credentials = true, Identity based = false
    - getConnection(scott,***): Proxy for scott fails
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3; Client weblogic; Proxy jdbcqa
    - getConnection(): Default user jdbcqa; Client weblogic; Proxy null

    Use DB Credentials = false, Identity based = false
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): Default user jdbcqa; Client scott; Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): Default user jdbcqa; Client scott; Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this was not an Oracle thin driver, the one case with the non-null Proxy in the table above would throw an exception, because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver.

    When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().

    Summary

    There are many options to choose from for data source security. Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities. Now that you have the big picture (remember that table in part 1), you can make a more informed choice.

    Read the article

  • graphical interface when using assembly language

    - by Hellbent
    I'm looking to use assembly language to make a great game - not just an average game, but a really great game. I want to learn a framework to use from assembly. I know that's not possible without learning the framework in C first. So I'm thinking of learning SDL in C and then learning, teaching myself, how to interpret the program and run it as assembly language code, which shouldn't be that hard. Then I will have a window and some graphics routines to display the game while using assembly to code everything in. I need to spend some time learning SDL, and then some more time learning how to code all those statements in assembly while calling C functions, knowing which registers the calls return values in, what they leave behind, etc. My question is: is this a good way to go, or is there something better to get a graphical window display using assembly language? Regards, HellBent

    Read the article

  • How to show other characters in online 2D rpg

    - by Loligans
    I have Player 1 and Player 2. I am using JSON to send and retrieve player data between the client and the server, but when another player logs in and is in the same map, how would I send that data to both players so the graphics engine shows two players on the map? About my game: it is a 2D RPG tile-based game; the map is 24x15 tiles; it is real-time action; it should work anywhere between 10-150 ping; players interact with each other when in the same map and can see each other moving around; the game world is persistent, and is saved when the server shuts down. Right now the server sends each player only their own information, inside a JSON object. Here is an example of what I am talking about: if you notice, there are two separate characters in two separate clients, but they are running on the same server. I am trying to get them to show up on both clients, but I don't know how I should accomplish this. Should I send it as an added value in the JSON object? Also, what is the name of this process, so I can look it up and find more info on it?
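
    A minimal sketch of the usual approach (my illustration, not from the post - the game's actual language and field names are unknown, so everything below is hypothetical): the server keeps the authoritative state of every connected player and, each tick or on each movement, sends every client its own state plus the state of every other player on the same map.

        import json

        # Hypothetical in-memory registry of connected players: id -> state
        players = {
            1: {"name": "Loligans", "map": "village", "x": 5, "y": 9},
            2: {"name": "Guest",    "map": "village", "x": 12, "y": 3},
        }

        def snapshot_for(player_id):
            """Build the JSON message for one client: its own state plus
            every other player currently on the same map."""
            me = players[player_id]
            others = [
                {"id": pid, **state}
                for pid, state in players.items()
                if pid != player_id and state["map"] == me["map"]
            ]
            return json.dumps({"you": me, "others": others})

        # Each tick (or on movement), the server sends snapshot_for(pid) to every client.
        print(snapshot_for(1))

    Terms worth searching for (my suggestion): "state replication", "state/snapshot broadcasting", and "interest management" in multiplayer game networking.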

    Read the article

  • Why does a proportional controller have a steady state error?

    - by Qantas 94 Heavy
    I've read about feedback loops, how large this steady-state error is for a given gain, and what to do to remove the steady-state error (add integral and/or derivative gains to the controller), but I don't understand at all why this steady-state error occurs in the first place. If I understand correctly how a proportional controller works, the output is equal to the current output plus the error, multiplied by the proportional gain (Kp). However, wouldn't the error slowly diminish over time as it is added (reaching 0 at infinite time), rather than leaving a steady-state error? Given my confusion, it seems I'm completely misunderstanding how it works - a proper explanation of how this steady-state error comes about would be fantastic.
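
    A minimal numerical sketch of the effect (my illustration, with arbitrary parameter values): with a proportional-only controller, the control output is Kp times the current error, so a nonzero output requires a nonzero error. Driving a simple first-order plant, the loop therefore settles where the plant's needed input and Kp*error balance, not at zero error.

        # Proportional-only control of a first-order plant:
        #   tau * dy/dt = -y + u,   u = Kp * (setpoint - y)
        # At equilibrium dy/dt = 0, so y = Kp*(r - y)  =>  error = r / (1 + Kp) != 0
        Kp, tau, setpoint, dt = 4.0, 1.0, 1.0, 0.001

        y = 0.0
        for _ in range(int(20 / dt)):          # simulate 20 time constants
            error = setpoint - y
            u = Kp * error                     # output is proportional to the *current* error only
            y += dt * (-y + u) / tau           # Euler step of the plant

        print(f"final output = {y:.4f}")                                     # ~0.8
        print(f"steady error = {setpoint - y:.4f} (theory: {setpoint / (1 + Kp):.4f})")  # ~0.2

    Increasing Kp shrinks the steady-state error (1/(1+Kp)) but never removes it; an integral term removes it because its output keeps growing as long as any error remains.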

    Read the article

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools, I'm trying to update the Content Keywords of my site. I'm confused about the relationship between Submit to Index and Content Keywords. Does Fetch as Google -- Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Google bot? Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls for new content on its own? Does Submit to Index have anything to do with Content Keywords or with crawling new content, whether on a previously indexed page or a never-indexed page?

    Read the article

  • Why am I getting domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com, and now I have moved to the real domain www.realdomain.com; www.exampleone.com is now parked by 1and1 (default). When I test to see which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.

    Read the article

  • When do you use float and when do you use double

    - by Jakub Zaverka
    Frequently in my programming experience I need to decide whether I should use float or double for my real numbers. Sometimes I go for float, sometimes I go for double, but really this feels subjective. If I were confronted and asked to defend my decision, I would probably not give sound reasons. When do you use float and when do you use double? Do you always use double, going for float only when memory constraints are present? Or do you always use float unless the precision requirement forces you to use double? Are there some substantial differences regarding the computational complexity of basic arithmetic between float and double? What are the pros and cons of using float or double? And have you ever used long double?
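
    The question is language-agnostic; as a quick illustration of the precision gap (a sketch of mine, assuming NumPy is available), float32 corresponds to a C float and float64 to a C double:

        import numpy as np

        third = 1.0 / 3.0                              # Python's own float is already 64-bit (double)

        print(f"float32: {np.float32(third):.20f}")    # ~7 significant decimal digits
        print(f"float64: {np.float64(third):.20f}")    # ~15-16 significant decimal digits

        # float32 stops representing consecutive integers exactly at 2**24:
        print(np.float32(2**24) + np.float32(1) == np.float32(2**24))           # True: the +1 is lost
        # float64 keeps exact integers all the way up to 2**53:
        print(np.float64(2**53 - 1) + np.float64(1) == np.float64(2**53 - 1))   # False: still exact here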

    Read the article

  • South African MVPs deserve their title.

    - by MarkPearl
    Recently I read a post by someone who felt the Microsoft MVP program had failed. My local experience with the MVP program leads me to disagree. On Saturday I attended a free Windows Phone 7 event organized by Robert MacLean and Rudi Grobler, both of whom are local MVPs. First of all, kudos to them for organizing the event, which included a free lunch and flash stick and had some great content for a free event. Secondly, this is not the first time that either of these two MVPs has organized events. They are active in the community, present at the majority of local events, and are always approachable and give an "honest" opinion. For me, that is what an MVP stands for, and at least in my region I feel that the MVP program is a real success.

    Read the article

  • Dual-boot computer won't boot without external hard drive

    - by FrankP
    I have Ubuntu loaded on my external HDD. I tried to unplug the external drive so that Windows would be the default OS when the computer turns on, but it gives me an error. I need to know how to make it so that when my computer boots it stops saying "Error: no such device: (a whole bunch of numbers and letters)" followed by "grub rescue>_". If I plug the external HDD in and let Ubuntu run the boot process, it gives me a list of OSes/HDDs to choose from, and Windows 7 is there. The only problem is that I want Windows to be my default OS, not the other way around. P.S. I have found that I dislike Ubuntu because I can't even figure out how to install the programs necessary to start learning Ruby on Rails. So installing it was a waste of my time, in my opinion. Now that I have it on the external hard drive I will leave it installed, though. I just don't want to have to keep that external drive plugged in to my computer all the time. Thank you a ton to whoever can help me!

    Thank you for the detailed instructions. I am doing my best to follow you, and it makes sense when I read it, but Rescatux is not doing what you said it would. None of the options you said would appear are there. On my screen there are 4 options when MBR runs; none look familiar, and when I picked the best possible option based on my educated guesses it said success. I tried to restart my computer and it said "Please insert windows recovery disc and hit enter". The problem is I don't have the Windows recovery disc. I bought my computer from a local computer tech and he loads Windows on it for you. I have no time to run my computer over to him, as Sunday is my only free day. I think I just wrecked my computer in the process of this attempted fix: Windows now refuses to boot WITH or WITHOUT the HDD. Please help, this is getting out of hand.

    Read the article

  • removing an ssrs instance from a scale-out deployment

    - by Alex Bransky
    If you're like me you had at one time connected one of your Reporting Services instances to a report server database that was already in use by another instance.  This allows the instance to show up in the Scale-out Deployment section of the Reporting Services Configuration Manager.  My problem was that the server that got joined to the original server was no longer available as it had been repurposed, and when I clicked Remove Server to remove it from my scale-out it would fail because it couldn't contact the server.  After searching for a solution for quite some time I decided to look around in the report server database tables, and voila!  All I had to do was remove the old server from the Keys table.  I can't guarantee there won't be any side effects to this method, but it worked like a charm for me.

    Read the article

  • Steps to create a solution for a problem

    - by Mr_Green
    I am a trainee. My teacher says that to solve a problem we should follow steps like these:
    1. Create an algorithm (optional).
    2. Create a data table: by analyzing the problem, make the main concepts in the problem the columns, and the related issues within each main concept the rows.
    3. Create a flowchart based on the data table. (When creating the flowchart, imagine you are in that situation and design it in your head.)
    4. By following the flowchart, solve the problem.
    These steps should always be considered by a programmer if he/she wants to become a software designer (not just a programmer), because the above approach gives an efficient way of finding a solution to a problem even when the problem is small. According to him, this approach also works in real-world scenarios. My question is: is this really an efficient way? Please also share your thoughts. Beside my question, I just want to share some thoughts from my teacher, who is a good mentor to me.

    Read the article

  • need example sql transaction procedures for sales tracking or financial database [closed]

    - by fa1c0n3r
    Hi, I am making a database for an accounting/sales type system, similar to a car sales database, and would like to make some transactions for the following real-world actions:
    - salesman creates a new product entry for stock shipped onto the floor (itempk, car make, year, price)
    - salesman changes a price
    - salesman creates a sale entry for a product sold (salespk, itemforeignkey, price sold, salesman)
    - salesman cancels an item for a removed product
    - salesman cancels a sale for a cancelled sale
    The examples I have found online are too generic ("this is a transaction..."); I would like something resembling what I am trying to do, so I can understand it. Does anybody have some good similar or related SQL examples I can look at to design these? Do people use transactions for sales databases? Or, if you have done this kind of SQL transaction before, could you make an outline of how these could be made? Thanks. My thread so far on Stack Overflow: http://stackoverflow.com/q/4975484/613799
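
    A minimal sketch of the transaction pattern for the "create a sale entry" action, shown with Python's built-in sqlite3 only to keep it self-contained - the schema, column names, and DBMS are my assumptions, not the poster's. The point is that the sale row and the item status update either both commit or both roll back.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE item (itempk INTEGER PRIMARY KEY, make TEXT, year INTEGER,
                               price REAL, on_floor INTEGER DEFAULT 1);
            CREATE TABLE sale (salespk INTEGER PRIMARY KEY, item_fk INTEGER,
                               price_sold REAL, salesman TEXT);
            INSERT INTO item (make, year, price) VALUES ('Toyota', 2010, 9500.0);
        """)

        def record_sale(conn, item_pk, price_sold, salesman):
            """Insert the sale row and mark the item as sold in a single transaction."""
            with conn:  # opens a transaction; commits on success, rolls back on any exception
                conn.execute(
                    "INSERT INTO sale (item_fk, price_sold, salesman) VALUES (?, ?, ?)",
                    (item_pk, price_sold, salesman))
                conn.execute("UPDATE item SET on_floor = 0 WHERE itempk = ?", (item_pk,))

        record_sale(conn, 1, 9200.0, "some_salesman")
        print(conn.execute("SELECT salespk, item_fk, price_sold, salesman FROM sale").fetchall())

    The same shape (begin transaction, insert/update the related rows, commit or roll back as a unit) applies to the cancel-sale and cancel-item actions in whatever SQL dialect the real system uses.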

    Read the article

  • GUI question : representing large tree

    - by Peter
    I have a tree-like data structure some six levels deep that I would like to represent on a single webpage (it can be tabs, trees, ...). At each level both child nodes and content are possible. Presenting it like a real tree would not be very usable (too big). I was thinking along the lines of hiding parts of the tree as you drill down and presenting breadcrumbs or the like to keep you informed of where you are... I guess my question boils down to: any ideas / examples? Tx!

    Read the article

  • Changing Silverlight application themes at runtime

    We have received a lot of questions about how the application theme can be changed at run time. The most important thing to note here is that each time the application theme is changed, all the controls should be re-drawn. Without going into too much detail, we could describe the application themes as a mechanism to replace the content of the Generic.xaml file in every loaded Telerik assembly at runtime. This does not affect the controls that already have a default style applied, hence the need to create new instances. Because in Silverlight applications the RootVisual cannot be changed at run time, we need a way to reset the application UI. The following code is in App.xaml.cs:

        private void Application_Startup(object sender, StartupEventArgs e)
        {
            // Before:
            // this.RootVisual = new MainPage();

            this.RootVisual = new Grid();
            this.ResetRootVisual();
        }

        public void ResetRootVisual()
        {
            var rootVisual = Application.Current.RootVisual as Grid;
            rootVisual.Children.Clear();
            rootVisual.Children.Add(new MainPage());
        }

    In Application_Startup(), instead of creating a new MainPage UserControl instance as RootVisual, we create a new Grid panel that will contain the MainPage UserControl. In the ResetRootVisual() method we create the instance of MainPage and add it to the RootVisual panel. Then we have to create a method in the code-behind which will set StyleManager.ApplicationTheme and then call the ResetRootVisual() method:

        private void ChangeApplicationTheme(Theme theme)
        {
            StyleManager.ApplicationTheme = theme;
            (Application.Current as App).ResetRootVisual();
        }

    Here you can find an example which illustrates the described implementation of a Silverlight theme. For more information please refer to Telerik's online demos for Silverlight, the demos for WPF, and the help documentation for WPF and for Silverlight.

    Read the article

  • 7-Eleven Mobile App Powered by Oracle SOA Suite

    - by Bruce Tierney
    When you slurp that Slurpee, do you ever think about the sub-100-millisecond processing of 20 million 7-Eleven digital transactions every day, supported by Oracle SOA Suite? Maybe next time. Check out this impressive video of Ronald Clanton, 7-Eleven's Digital Guest Experience Program Manager, describing how 7-Eleven provides a consistent view across all the end points of over 10,000 stores and their digital entities by using Oracle SOA Suite on Oracle Exalogic. Managed by Oracle Enterprise Manager, they were able to provision their "Rapid-Fire" Middleware as a Service (MWaaS) in only "10 minutes" and deliver on time, with testing completed ahead of schedule. So what are you waiting for? Download your Slurpee App to get your free Pillsbury Cinnamon pastry and enjoy your contribution to the 20 million messages/day. When you're done, take a picture of your tongue... red or blue? Watch the video here:

    Read the article

  • Is SVN out of style?

    - by jitbit
    It's been only several years since I migrated from Visual SourceSafe to SVN. And SVN for me is still kind of "WOW! I can do so many things! SVN is so cool!" But many people around me keep saying "SVN? Really? Meh..." And there are so many of them that I'm worried. Should I move my team to Git / Mercurial or some other fancy thing? I know I sound ridiculous, and the obvious answer would be "stay with what works for YOU". SVN does work for me... But every time I create a new project in my repository I keep asking myself - maybe this is the time to move? So... Is SVN really that bad? Am I missing a huge opportunity by sticking with it?

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah! When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial of Service prevention feature. This article is a short follow-up to my last post on this feature. In this post, I will outline some of the lessons learned from testing the behavior of SQL Azure from two machines. From the standpoint of SQL Azure, they look like one machine since they are behind a NAT.

    All logins affected
    The first thing to note is that all the logins are affected. If you lock yourself out of a specific database, none of the logins will work on that database. In fact the database size becomes "--" in the SQL Azure Portal.

    Less than 100 sessions
    I was able to see 50+ sessions being made in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. The DoS feature appears to be triggered in part by the number of open sessions. I could not determine whether the lockout is also triggered by the speed at which connection requests are made, however.

    Other databases unaffected
    This was interesting... the DoS feature works at the database level. Other databases were available for me to use.

    Just wait
    Initially I thought that going through SQL Azure and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time, the database became available again within 30 seconds or so; the second time within 2-3 minutes; and the third time... within 2-3 hours...

    Successful logins
    The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked.

    Read the article

  • Render 3d object to 2d surface (embedded system)

    - by Martin Berger
    I am working on an embedded system of a sort, and in some free time I would like to test its drawing capabilities. The system in question is an ARM Cortex-M3 microcontroller attached to an EasyMX Stellaris board. And I have a small 320x240 TFT screen :) Now, I have some free time each day and I want to create a rotating cube. mikroC PRO for ARM doesn't have 3D drawing capabilities, which means it must be done in software. From the book Introduction to 3D Game Programming with DirectX 10 I know matrix algebra for transformations, but that is easiest when you have DirectX to set up the camera. I guess I could make a 2D object rotate, but how would I go about a 3D one? Any ideas and examples are welcome, although I would prefer advice - I'd like to understand this.
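
    A minimal sketch of the math involved (my illustration in Python purely for readability; on the microcontroller it would be ported to C with fixed- or floating-point arithmetic): rotate each cube vertex with the standard rotation matrices, then perspective-project it to 2D pixel coordinates for the 320x240 screen. The fov and camera_dist values are arbitrary.

        import math

        # Cube vertices centered on the origin (side length 2)
        cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

        def rotate_y_then_x(p, ay, ax):
            """Rotate point p around the Y axis by ay, then around the X axis by ax (radians)."""
            x, y, z = p
            x, z = x * math.cos(ay) + z * math.sin(ay), -x * math.sin(ay) + z * math.cos(ay)
            y, z = y * math.cos(ax) - z * math.sin(ax), y * math.sin(ax) + z * math.cos(ax)
            return x, y, z

        def project(p, screen_w=320, screen_h=240, fov=200.0, camera_dist=4.0):
            """Simple perspective projection onto the 320x240 screen."""
            x, y, z = p
            scale = fov / (z + camera_dist)          # farther points shrink
            return int(screen_w / 2 + x * scale), int(screen_h / 2 - y * scale)

        # One frame: rotate every vertex, then project it to 2D pixel coordinates.
        angle = 0.3
        points_2d = [project(rotate_y_then_x(v, angle, angle * 0.7)) for v in cube]
        print(points_2d)   # these pairs are what the TFT line-drawing routine would connect

    Redrawing the frame with a slightly larger angle each time, and connecting the projected corners with the display's line routine, gives the rotating-cube effect.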

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had, for many years, used some software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', and so the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed. In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time the flag had remained unused, and someone made the fateful decision to reuse it and replace the old 'Power Peg' code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote:

    "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

    "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along, one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • How can I schedule execution of a program?

    - by Bakhtiyor
    Let's say I have a small "Hello World" Java program compiled in my home directory. I can run it with java helloWorld from my home directory and it executes without any problem. Now I need to schedule this program to execute, say, 10 minutes from now. So I execute the following commands in a console: at now+10min warning: commands will be executed using /bin/sh at> java helloWorld Press CTRL+D to finish. It is scheduled properly, as I can see with the at -l command. But at the scheduled time nothing happens. Why? What is wrong with it? If, instead of scheduling my own program, I schedule the gedit command, it opens at the specified time. But with my own program nothing happens. How can I change the situation?

    Read the article

  • What steps to take in resolving/fixing/optimizing a long boot, with possible looping errors as the culprit

    - by Tchalvak
    So my boot time has been slowing and slowing as time has gone on... I am running a number of services (e.g. apache/mysql, postgresql), but it has seen a drastic slowing lately, while I've only been applying updates as normal. I happened to check my /var/log/boot.log and it is spammed with many lines of this:
    init: upstart-udev-bridge main process (2738) terminated with status 1
    init: upstart-udev-bridge main process ended, respawning
    I wasn't able to find any solutions to that issue on Google, or much talk of it at all, and I'm not really certain that error is the problem, but it is the only lead that I have. What steps should I go through to diagnose boot problems / a slow bootup?

    Read the article
