Search Results

Search found 2272 results on 91 pages for 'fire dragon dol'.

Page 19/91 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • How to get scripted programs governing game entities run in parallel with a game loop?

    - by Jim
    I recently discovered Crobot, which is (briefly) a game where each player codes a virtual robot in a pseudo-C language. Each robot is then put in an arena where it fights against other robots. A robot's source code has this shape: /* Beginning file robot.r */ main() { while (1) { /* Do whatever you want */ ... move(); ... fire(); } } /* End file robot.r */ You can see that: the code is totally independent of any library/include; some predefined functions are available (move, fire, etc.); and the program has its own game loop, so it is not called every frame. My question is: how can I achieve a similar result using scripting languages in collaboration with a C/C++ main program? I found a possible approach using Python, multi-threading and shared memory, although I am not sure yet that it is possible this way. TCP/IP seems a bit too complicated for this kind of application.
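
    Since the question already points at Python, one lightweight alternative to threads and shared memory is cooperative scheduling: each robot script is a generator that the engine resumes once per frame, so the script keeps its own while-loop but never blocks the host game loop. A minimal sketch in plain Python (the Robot class, move and fire are illustrative stand-ins, not Crobot's actual API; a real engine would drive this from its C/C++ loop, e.g. via embedded CPython):

      # Each robot "script" looks like it owns its own loop, but every yield
      # hands control back to the engine until the next frame.
      def robot_script(robot):
          while True:
              yield robot.move()
              yield robot.fire()

      class Robot:
          def __init__(self, name):
              self.name = name
          def move(self):
              print(self.name, "moves")
          def fire(self):
              print(self.name, "fires")

      robots = [Robot("alpha"), Robot("beta")]
      scripts = [robot_script(r) for r in robots]

      for frame in range(3):        # the engine's game loop
          for script in scripts:
              next(script)          # advance each robot by one step this frame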

    Read the article

  • Creating Rectangle-based buttons with OnClick events

    - by Djentleman
    As the title implies, I want a Button class with an OnClick event handler. It should fire off connected events when it is clicked. This is as far as I've made it: public class Button { public event EventHandler OnClick; public Rectangle Rec { get; set; } public string Text { get; set; } public Button(Rectangle rec, string text) { this.Rec = rec; this.Text = text; } } I have no clue what I'm doing with regards to events. I know how to use them but creating them myself is another matter entirely. I've also made buttons without using events that work on a case-by-case basis. So basically, I want to be able to attach methods to the OnClick EventHandler that will fire when the Button is clicked (i.e., the mouse intersects Rec and the left mouse button is clicked).
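
    A minimal sketch of one way to raise the event, assuming an XNA/MonoGame-style MouseState handed in from the game's Update loop (the Update method and the subscription lines at the end are illustrative additions, not part of the original class):

      using System;
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Input;

      public class Button
      {
          public event EventHandler OnClick;

          public Rectangle Rec { get; set; }
          public string Text { get; set; }

          public Button(Rectangle rec, string text)
          {
              Rec = rec;
              Text = text;
          }

          // Call once per frame with the current and previous mouse state.
          public void Update(MouseState current, MouseState previous)
          {
              bool clicked = current.LeftButton == ButtonState.Pressed
                          && previous.LeftButton == ButtonState.Released;

              if (clicked && Rec.Contains(current.X, current.Y) && OnClick != null)
              {
                  OnClick(this, EventArgs.Empty);   // notify whoever subscribed
              }
          }
      }

      // Subscribing elsewhere:
      // var start = new Button(new Rectangle(10, 10, 120, 40), "Start");
      // start.OnClick += (sender, e) => BeginGame();   // BeginGame is illustrative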

    Read the article

  • How to Create and Use Templates in Outlook 2010

    - by Taylor Gibb
    If you find yourself replying to emails with the same answer again and again, it will save you a lot of time to create a template you can reuse. We have previously shown you how to create templates in Outlook 2003, so let's take a look at doing it in Outlook 2010. When creating a template you get started as if you were creating a new email; that is, choose New Email from the Home tab. You can now draft your email as normal.

    Read the article

  • Is it possible to give an animated GIF a transparent background?

    - by Phil
    I'm making a Fire Emblem-esque game. There are very cute 2D frames I made for each character, and, as in Fire Emblem, I want these characters to animate constantly. To circumvent the graphics programming involved I came up with a novel idea! I would make each character an animated GIF, and only in special conditions ever halt their constant movement - in that case just change what image is being displayed. Simple enough. But I have a dilemma - I want the background of my GIFs to be transparent (so that the "grass" behind each character naturally shows, as per the screenshot - which has them as still images with transparent backgrounds). I know how to make a background transparent in numerous tools (GIMP, Photoshop). But it seems every GIF creator replaces the transparent background with something, and I can't edit it back to transparent. Is it possible to have a GIF with a transparent "background"? Perhaps my knowledge of file formats is limiting me here.

    Read the article

  • Drawing flaming letters in 3D with OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing this with particles to achieve the flaming effect. Am I supposed to create a path along which I should add many particle emitters that will be emitting flame particles? I understand the concept for 2D, but for 3D, are the particles always supposed to be facing the user? Something else I'm worried about is the performance hit of having that many particle emitters, because there can be many letters and drawings at the same time, and each of these elements will have many particle emitters. More detailed explanation: I have a path of points, which is my model. Imagine a dotted letter "S", for example. I want to make the "S" be on fire. The "S" is just an example; it can be a circle, a triangle, a line, pretty much any path described by my set of points. For achieving this fire effect I thought about using particles. So I am using a program called "Particle Designer" to create a fire-style particle emitter. This emitter looks perfect in 2D at the iPhone screen dimensions. So then I thought that I could probably draw an S or any other figure if I place many particle emitters next to each other following the path described. To move from the 2D version to the 3D version I thought about scaling the emitter (with a scale matrix multiplication in its model matrix) and then moving it to a point in my 3D world. I did this and it works. So now I have one particle emitter in the 3D world. My question is: is this how you would achieve a flaming letter? Is this too inefficient if I expect to have many flaming paths in my world? Am I supposed to rotate each particle's quad so that it's always looking at the user? (The last one is because I noticed that if you look at it from the side, the particles start to flatten out.)
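
    On the last point: particle quads are usually billboarded, i.e. rebuilt each frame from the camera's right and up axes so they face the viewer and never flatten out when seen from the side. A small sketch assuming a column-major 4x4 view matrix as OpenGL ES uses (the function and parameter names are illustrative):

      /* The camera's right and up vectors in world space are the first two rows
         of the view matrix's rotation part (column-major storage). */
      void billboard_axes(const float view[16], float right[3], float up[3])
      {
          right[0] = view[0]; right[1] = view[4]; right[2] = view[8];
          up[0]    = view[1]; up[1]    = view[5]; up[2]    = view[9];
      }

      /* Each particle corner = center + (+/- halfSize) * right + (+/- halfSize) * up,
         so the quad always faces the viewer regardless of where it sits on the path. */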

    Read the article

  • Synchronized Property Changes (Part 4)

    - by Geertjan
    The next step is to activate the undo/redo functionality... for a Node. Something I've not seen done before. I.e., when the Node is renamed via F2 on the Node, the "Undo/Redo" buttons should start working. Here is the start of the solution, via this item in the mailing list and Timon Veenstra's BeanNode class, note especially the items in bold: public class ShipNode extends BeanNode implements PropertyChangeListener, UndoRedo.Provider { private final InstanceContent ic; private final ShipSaveCapability saveCookie; private UndoRedo.Manager manager; private String oldDisplayName; private String newDisplayName; private Ship ship; public ShipNode(Ship bean) throws IntrospectionException { this(bean, new InstanceContent()); } private ShipNode(Ship bean, InstanceContent ic) throws IntrospectionException { super(bean, Children.LEAF, new ProxyLookup(new AbstractLookup(ic), Lookups.singleton(bean))); this.ic = ic; setDisplayName(bean.getType()); setShortDescription(String.valueOf(bean.getYear())); saveCookie = new ShipSaveCapability(bean); bean.addPropertyChangeListener(WeakListeners.propertyChange(this, bean)); } @Override public Action[] getActions(boolean context) { List<? extends Action> shipActions = Utilities.actionsForPath("Actions/Ship"); return shipActions.toArray(new Action[shipActions.size()]); } protected void fire(boolean modified) { if (modified) { ic.add(saveCookie); } else { ic.remove(saveCookie); } } @Override public UndoRedo getUndoRedo() { manager = Lookup.getDefault().lookup( UndoRedo.Manager.class); return manager; } private class ShipSaveCapability implements SaveCookie { private final Ship bean; public ShipSaveCapability(Ship bean) { this.bean = bean; } @Override public void save() throws IOException { StatusDisplayer.getDefault().setStatusText("Saving..."); fire(false); } } @Override public boolean canRename() { return true; } @Override public void setName(String newDisplayName) { Ship c = getLookup().lookup(Ship.class); oldDisplayName = c.getType(); c.setType(newDisplayName); fireNameChange(oldDisplayName, newDisplayName); fire(true); fireUndoableEvent("type", ship, oldDisplayName, newDisplayName); } public void fireUndoableEvent(String property, Ship source, Object oldValue, Object newValue) { ReUndoableEdit reUndoableEdit = new ReUndoableEdit( property, source, oldValue, newValue); UndoableEditEvent undoableEditEvent = new UndoableEditEvent( this, reUndoableEdit); manager.undoableEditHappened(undoableEditEvent); } private class ReUndoableEdit extends AbstractUndoableEdit { private Object oldValue; private Object newValue; private Ship source; private String property; public ReUndoableEdit(String property, Ship source, Object oldValue, Object newValue) { super(); this.oldValue = oldValue; this.newValue = newValue; this.source = source; this.property = property; } @Override public void undo() throws CannotUndoException { setName(oldValue.toString()); } @Override public void redo() throws CannotRedoException { setName(newValue.toString()); } } @Override public String getDisplayName() { Ship c = getLookup().lookup(Ship.class); if (null != c.getType()) { return c.getType(); } return super.getDisplayName(); } @Override public String getShortDescription() { Ship c = getLookup().lookup(Ship.class); if (null != String.valueOf(c.getYear())) { return String.valueOf(c.getYear()); } return super.getShortDescription(); } @Override public void propertyChange(PropertyChangeEvent evt) { if (evt.getPropertyName().equals("type")) { String oldDisplayName = evt.getOldValue().toString(); 
String newDisplayName = evt.getNewValue().toString(); fireDisplayNameChange(oldDisplayName, newDisplayName); } else if (evt.getPropertyName().equals("year")) { String oldToolTip = evt.getOldValue().toString(); String newToolTip = evt.getNewValue().toString(); fireShortDescriptionChange(oldToolTip, newToolTip); } fire(true); } } Undo works when a rename is done, but Redo never does, because Undo is constantly reactivated whenever there is a name change. And why must the UndoRedo.Manager be retrieved from the Lookup (it doesn't work otherwise)? I don't get that part of the code either. Help welcome!
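
    A sketch of one way the redo problem could be attacked (fragments only, reusing the names from the post; this is a reader's guess, not the series' eventual solution): give the node its own UndoRedo.Manager instead of pulling one from the default Lookup, and skip recording a new edit while an undo/redo is replaying, since otherwise every undo calls setName, which registers a fresh edit and throws the redo stack away.

      private final UndoRedo.Manager manager = new UndoRedo.Manager();
      private boolean replaying = false;

      @Override
      public UndoRedo getUndoRedo() {
          return manager;               // same instance for the node's whole lifetime
      }

      @Override
      public void setName(String newDisplayName) {
          Ship c = getLookup().lookup(Ship.class);
          String old = c.getType();
          c.setType(newDisplayName);
          fireNameChange(old, newDisplayName);
          fire(true);
          if (!replaying) {             // only user-initiated renames create edits
              fireUndoableEvent("type", c, old, newDisplayName);
          }
      }

      // inside ReUndoableEdit:
      @Override
      public void undo() throws CannotUndoException {
          super.undo();                 // keeps AbstractUndoableEdit's canUndo/canRedo state sane
          replaying = true;
          try { setName(oldValue.toString()); } finally { replaying = false; }
      }

      @Override
      public void redo() throws CannotRedoException {
          super.redo();
          replaying = true;
          try { setName(newValue.toString()); } finally { replaying = false; }
      }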

    Read the article

  • Android design advice - services & broadcast receivers

    - by basudz
    I'm in the process of learning the Android SDK and creating some projects to get a grasp on the system. The current project I'm working with works just fine but I'd like to get some advice about other ways I can go about designing it. Here's what it needs to do. When a text message is received from a specific number, it should fire off a toast message that repeats at a certain interval for a specific duration. To make this work, I created an SMS BroadcastReceiver and checked the incoming messages for the number I'm looking for. If found, an IntentService would be started that would pull out the interval and duration from saved shared prefs. The IntentService would then fire off a broadcast. The BroadcastReceiver for this would catch it and use the AlarmManager to handle the toast message repetitions. This all works just fine, but I'm wondering if there's a cleaner or more efficient way of going about doing this? Any suggestions or advice?
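
    One flatter arrangement worth considering (a sketch only; the class names, the preference key and the sender check are illustrative) is to let the SMS BroadcastReceiver read the saved prefs and schedule the repeating work with AlarmManager directly, dropping the intermediate IntentService and the second broadcast. Stopping after the configured duration (e.g. a later am.cancel(pi)) is left out here:

      import android.app.AlarmManager;
      import android.app.PendingIntent;
      import android.content.BroadcastReceiver;
      import android.content.Context;
      import android.content.Intent;
      import android.content.SharedPreferences;
      import android.os.SystemClock;
      import android.preference.PreferenceManager;

      public class SmsCommandReceiver extends BroadcastReceiver {
          @Override
          public void onReceive(Context context, Intent intent) {
              if (!isFromWatchedNumber(intent)) {   // hypothetical check of the SMS sender
                  return;
              }
              SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
              long intervalMs = prefs.getLong("interval_ms", 60000L);

              Intent toastIntent = new Intent(context, ToastAlarmReceiver.class);  // illustrative receiver that shows the toast
              PendingIntent pi = PendingIntent.getBroadcast(context, 0, toastIntent, 0);

              AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
              am.setRepeating(AlarmManager.ELAPSED_REALTIME,
                      SystemClock.elapsedRealtime() + intervalMs, intervalMs, pi);
          }

          private boolean isFromWatchedNumber(Intent intent) {
              return true;   // placeholder; real code would parse the SMS PDUs and compare numbers
          }
      }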

    Read the article

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for: Affordable: less than $100/yr or as a one-time cost. Reliable: at least a smaller chance of failing than there is of fire or flood. Easy for the initial backup and for adding to it, and at least semi-easy to recover. I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above. Hard drive - I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to. DVDs - way too much of a hassle for both backup and restore. Tape - no idea of the affordability of this option. Online - given that I have at least 300GB already and ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2Mb/s at best, this just doesn't sound like a good option for me, but I could be convinced. Other - have I missed something? Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two to reduce the likelihood of failure?

    Read the article

  • ItemFocusIn Not Working on Non-Editable DataGrid in Flex

    - by Joshua
    I realize that ItemFocusIn is somehow only applicable to editable DataGrids in Flex; nevertheless I want to fire an event any time the user selects a new row in a non-editable DataGrid. I have successfully used the CLICK event, but that event is not fired when the user uses the keyboard to select a different row in the DataGrid. What do I have to do to cause an event to fire whenever the currently highlighted row in the DataGrid changes, regardless of whether it was changed by the mouse or by the keyboard?

    Read the article

  • IIS7 ISAPI Filter Module & HttpModule Events - How do they line up?

    - by MikeGurtzweiler
    So IIS7 in Integrated Pipeline mode uses an IsapiFilterModule to shim ISAPI filter DLLs and fire off the correct "events" on the filters, which is quite different from previous versions of IIS or IIS7 in classic mode, because it means that HttpModules fire right alongside ISAPI filters in Integrated Pipeline mode. So does anyone happen to know how the ISAPI events (http://msdn.microsoft.com/en-us/library/ms524855.aspx) and the HttpModule events (http://msdn.microsoft.com/en-us/library/ms998536.aspx) line up?

    Read the article

  • How to backup database to disk using JPA?

    - by Nitesh Panchal
    Hello, which query should I write in JPQL to back up the database to disk? If it's not available in JPQL, even a native SQL query will do. Also, I would like to bring one issue to the attention of the Stack Overflow developers: this site doesn't work properly in Opera (Opera 9.63). Whenever I write a question and click "Post Your Question", the button click event doesn't fire at all; maybe the server-side event doesn't fire or something. However, no such problem occurs in IE or Firefox.
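
    JPQL itself has no notion of a backup, so this comes down to whatever the underlying database offers, issued as a native query (or run outside JPA entirely, e.g. mysqldump for MySQL). A sketch assuming Apache Derby, whose SYSCS_UTIL.SYSCS_BACKUP_DATABASE system procedure copies the database to a directory; the persistence-unit name and target path are illustrative, and other providers or databases may need a different call:

      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      public class BackupExample {
          public static void main(String[] args) {
              EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU");
              EntityManager em = emf.createEntityManager();
              em.getTransaction().begin();
              // Derby-specific: copies the running database into the given directory.
              em.createNativeQuery("CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE('/backups/mydb')")
                .executeUpdate();
              em.getTransaction().commit();
              em.close();
              emf.close();
          }
      }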

    Read the article

  • Javascript ContextMenu on a TD

    - by Dave
    If I attach a context menu to a td, it fires okay for text in the td, but if I add a div to the td, the context menu will not fire when right-clicking on the div. How can I make the context menu fire when anything (data or divs) is right-clicked in the td?
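
    A sketch of one way to do it with jQuery-era event bubbling (the selector and the showCustomMenu function are illustrative): bind the contextmenu handler on the td itself; right-clicks on any descendant div bubble up to it, and "this" is still the td.

      $("#myTable td").bind("contextmenu", function (e) {
          e.preventDefault();                       // suppress the browser's own menu
          showCustomMenu(this, e.pageX, e.pageY);   // hypothetical function that opens your menu
          return false;
      });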

    Read the article

  • JBoss Seam - order event listeners

    - by Walter White
    Hi all, I would like to order my event listeners. Is it possible to do this in JBoss Seam 2.x? As a workaround, which is quite simple, I am thinking I will just daisy-chain my events: fire event A; do something on event A and, as part of that, fire event B; then do something on event B. Any comments on this design? Is this good or bad practice? Thanks, Walter
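
    A sketch of that daisy-chain with Seam 2's observer API (the component and event names are illustrative): each observer does its step and only then raises the next event, which makes the ordering explicit instead of depending on observer registration order.

      import org.jboss.seam.annotations.Name;
      import org.jboss.seam.annotations.Observer;
      import org.jboss.seam.core.Events;

      @Name("orderedSteps")
      public class OrderedSteps {

          @Observer("eventA")
          public void onEventA() {
              // ... the step that must run first ...
              Events.instance().raiseEvent("eventB");   // hand off to step two
          }

          @Observer("eventB")
          public void onEventB() {
              // ... the step that must run second ...
          }
      }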

    Read the article

  • issues with live function

    - by Do Good
    I am using the .live function to fire off a function aaa(). I am unable to fire the function because the code never reaches the alert message. The structure of my HTML is: body id="plants" > form id="flower" method="post" > div class="rose" > div class="red" > ul id="colors" > li > a (three list items). Currently I am using $( 'body#plants form#flower div.rose div.red ul#colors li a' ).live('click', function(){ alert('code reaches'); aaa(); }); How can I get this to work?
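
    One thing worth trying (a sketch; it assumes the ids from the markup above): since ids are unique, the long descendant chain adds nothing, and over-specific selectors are a classic source of .live() headaches, so select straight off the ul's id.

      $("#colors li a").live("click", function (event) {
          event.preventDefault();   // keep the link's href from navigating or scrolling
          alert("code reaches");
          aaa();                    // the question's own function
      });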

    Read the article

  • Silverlight Two Way Data Binding on Key Up

    - by kouPhax
    Is there a way to fire a two-way data binding when the KeyUp event fires in Silverlight? Currently I have to take focus off the TextBox to get the binding to fire. <TextBox x:Name="Filter" KeyUp="Filter_KeyUp" Text="{Binding Path=Filter, Mode=TwoWay}"/>
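
    A sketch of the usual approach (Silverlight 4-era API; the handler name comes from the XAML above, and the usual System.Windows.Controls/Data/Input namespaces are assumed): in the KeyUp handler, grab the TextBox's BindingExpression and push the value to the source explicitly.

      private void Filter_KeyUp(object sender, KeyEventArgs e)
      {
          var textBox = (TextBox)sender;
          BindingExpression binding = textBox.GetBindingExpression(TextBox.TextProperty);
          if (binding != null)
          {
              binding.UpdateSource();   // fires the two-way binding immediately
          }
      }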

    Read the article

  • asp.net Multiple Page_Load events for a user control when using URL Routing

    - by Paul Hutson
    Hello, I've recently set up an ASP.net site (not using MVC.net) to use URL Routing (more on the code below) - when using user controls on the site (i.e I've created a "menu" user control to hold menu information) the page_load event for that control will fire twice when URLs have more than one variable passed over. i.e. pageName/VAR1 : will only fire the page_load event once. while pageName/VAR1/VAR2 : will fire the page_load event twice. *Multiple extra VARs added on the end will still only fire the page_load event twice*. Below are the code snippits from the files, the first is the MapPageRoute, located in the Global.asax : // Register a route for the Example page, with the NodeID and also the Test123 variables allowed. // This demonstrates how to have several items linked with the page routes. routes.MapPageRoute( "Multiple Data Example", // Route name "Example/{NodeID}/{test123}/{variable}", // Route URL - note the NodeID bit "~/Example.aspx", // Web page to handle route true, // Check for physical access new System.Web.Routing.RouteValueDictionary { { "NodeID", "1" }, // Default Node ID { "test123", "1" }, // Default addtional variable value { "variable", "hello"} // Default test variable value } ); Next is the way I've directed to the page in the menu item, this is a list item within a UL tag : <li class="TopMenu_ListItem"><a href="<%= Page.GetRouteUrl("Multiple Data Example", new System.Web.Routing.RouteValueDictionary { { "NodeID", "4855" }, { "test123", "2" } }) %>">Example 2</a></li> And finally the control that gets hit multiple times on a page load : // For use when the page loads. protected void Page_Load(object sender, EventArgs e) { // Handle the routing variables. // this handles the route data value for NodeID - if the page was reached using URL Routing. if (Page.RouteData.Values["NodeID"] != null) { nodeID = Page.RouteData.Values["NodeID"] as string; }; // this handles the route data value for Test123 - if the page was reached using URL Routing. if (Page.RouteData.Values["Test123"] != null) { ExampleOutput2.Text = "I am the output of the third variable : " + Page.RouteData.Values["Test123"] as string; }; // this handles the route data value for variable - if the page was reached using URL Routing. if (Page.RouteData.Values["variable"] != null) { ExampleOutput3.Text = "I say " + Page.RouteData.Values["variable"] as string; }; } Note, that when I'm just hitting the page and it uses the default values for items, the reloads do not happen. Any help or guidance that anyone can offer would be very much appreciated! EDIT : The User Control is only added to the page once. I've tested the load sequence by putting a breakpoint in the page_load event - it only hits twice when the extra routes are added. Thanks in Advance, Paul Hutson
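
    A low-tech way to narrow this down (a sketch, not a diagnosis): log the raw URL in the control's Page_Load. If the same URL shows up twice, the control is loading twice within one request; if two different URLs show up, the browser is making a second request that also matches the route, which can happen when relative image/CSS paths get re-resolved against the extra route segments.

      protected void Page_Load(object sender, EventArgs e)
      {
          // Temporary diagnostic: shows in the Output window which request(s) hit the control.
          System.Diagnostics.Debug.WriteLine("Page_Load fired for: " + Request.RawUrl);

          // ... existing route-value handling stays as it is ...
      }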

    Read the article

  • jQuery selector not selecting

    - by Paul Nathan
    I am unable to get this event to fire: $("#about").click(function() { //I have put alert("foo") here, won't fire $("#about_stuff").toggle(); }); snip <li ><a href="#a" id="about">About</a> I've tested the toggle line in Firebug and it successfully works - I am at my wits' end; I've checked it against multiple examples and it persistently refuses to work.
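
    A sketch of the most common fix (assuming the ids in the question): if the script runs before the list markup exists, the selector matches nothing, so attach the handler from a ready callback and stop the "#a" href from jumping.

      $(document).ready(function () {
          $("#about").click(function (e) {
              e.preventDefault();            // keep href="#a" from scrolling the page
              $("#about_stuff").toggle();
          });
      });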

    Read the article

  • onchange of FilteringSelect do not work

    - by vusan
    I'm using the same code as in the Dojo documentation; the downloaded sample works fine, but the onChange event does not fire in my project. I have made it work by firing onBlur instead: onBlur: function(){alert(3)}. What might cause the onChange event not to fire? var filteringSelect = new FilteringSelect({ id: "stateSelect", name: "state", value: "CA", store: stateStore, searchAttr: "name", onChange: function(){alert(3)} }, "stateSelect");
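
    One cross-check worth running (a sketch against the dijit 1.7+ Stateful API, using the widget from the question): watch the value property directly; if this fires while onChange stays silent, the onChange hookup is being lost or overridden rather than the selection logic being broken.

      filteringSelect.watch("value", function (name, oldValue, newValue) {
          console.log("value went from", oldValue, "to", newValue);
      });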

    Read the article

  • C# BackgroundWorker RunWorkerCompleted Event

    - by Jim Fell
    My C# application has several background workers. Sometimes one background worker will fire off another. When the first background worker completes and the RunWorkerCompleted event is fired, on which thread will that event fire, the UI or the first background worker from which RunWorkerAsync was called? I am using Microsoft Visual C# 2008 Express Edition. Any thoughts or suggestions you may have would be appreciated. Thanks.
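
    The short version, as far as I understand BackgroundWorker: RunWorkerCompleted is posted to the SynchronizationContext that was current when RunWorkerAsync was called. Called from the UI thread, it comes back on the UI thread; called from inside another worker's DoWork (a thread-pool thread with no UI context), it simply runs on a thread-pool thread. A console sketch that makes the thread ids visible:

      using System;
      using System.ComponentModel;
      using System.Threading;

      class Program
      {
          static void Main()
          {
              Console.WriteLine("Main thread:      " + Thread.CurrentThread.ManagedThreadId);

              var worker = new BackgroundWorker();
              worker.DoWork += (s, e) =>
                  Console.WriteLine("DoWork thread:    " + Thread.CurrentThread.ManagedThreadId);
              worker.RunWorkerCompleted += (s, e) =>
                  Console.WriteLine("Completed thread: " + Thread.CurrentThread.ManagedThreadId);

              worker.RunWorkerAsync();
              Thread.Sleep(1000);   // crude wait so the demo finishes before the process exits
          }
      }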

    Read the article

  • How do I avoid multiple key up/down/press events when holding a key?

    - by Rammay
    I'm creating a web front end to control a small robot. Ajax calls will be made on a keydown, to start the robot, and keyup to stop it. My problem is that when a key is held down the keyup, keydown, and keypress events seem to cycle continually. Does anybody know of a way to only have keydown fire when the key is first pressed and keyup to fire when it has been released?
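
    A sketch of the usual workaround in plain JavaScript (startRobot/stopRobot are placeholders for the Ajax calls): remember which keys are already down, so the auto-repeat the browser generates while a key is held doesn't re-fire the start request.

      var keysDown = {};

      document.onkeydown = function (e) {
          var key = e.keyCode;
          if (!keysDown[key]) {        // ignore repeats while the key stays held
              keysDown[key] = true;
              startRobot(key);         // e.g. the Ajax "start" request, sent once
          }
      };

      document.onkeyup = function (e) {
          delete keysDown[e.keyCode];
          stopRobot(e.keyCode);        // the Ajax "stop" request
      };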

    Read the article

  • How to handle touch events on UI Controls

    - by Sreelatha
    Hi, I have a query related to touch events on UI controls. I have 4 controls on the screen (UITextField, UISlider, UISwitch, UIButton). If the user touches any of the controls on the screen, I want the touchesBegan and touchesEnded events to fire on those controls, in which I would implement some code. Please let me know how to fire these events. Thanks in advance, Sreelatha.

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if: • It is cost effective• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)• It solves the right problem With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them: • Specific details of security vulnerabilities • Including exploit information or technical details of the vulnerability • Whether or not there is any mitigation available (as in a patch) PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret. 
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerability Enumeration (CVE) and Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist then in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have know they were being asked to tell-all to PCI and put their customers at risk, to do one of the following: • Contact PCI with your concerns• Contact Oracle (we are looking for vendors to sign our statement of concern)• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions. 
By Way of Explanation… Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect.With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article
