Search Results

Search found 31414 results on 1257 pages for 'information bar'.


  • AngularJS service returning promise unit test gives error "No more request expected"

    - by softweave
    I want to test a service (Bar) that invokes another service (Foo) and returns a promise. The test is currently failing with this error:

        Error: Unexpected request: GET foo.json
        No more request expected

    Here are the service definitions:

        // Foo service returns new objects having a get function returning a promise
        angular.module('foo', []).factory('Foo', ['$http', function ($http) {
            function FooFactory(config) {
                var Foo = function (config) {
                    angular.extend(this, config);
                };
                Foo.prototype = {
                    get: function (url, params, successFn, errorFn) {
                        successFn = successFn || function (response) {};
                        errorFn = errorFn || function (response) {};
                        return $http.get(url, {}).then(successFn, errorFn);
                    }
                };
                return new Foo(config);
            }
            return FooFactory;
        }]);

        // Bar service uses Foo service
        angular.module('bar', ['foo']).factory('Bar', ['Foo', function (Foo) {
            var foo = Foo();
            return {
                getCurrentTime: function () {
                    return foo.get('foo.json', {}, function (response) {
                        return Date.parse(response.data.now);
                    });
                }
            };
        }]);

    Here is my current test:

        'use strict';

        describe('bar tests', function () {
            var currentTime, currentTimeInMs, $q, $rootScope, mockFoo, mockFooFactory, Foo, Bar, now;
            currentTime = "March 26, 2014 13:10 UTC";
            currentTimeInMs = Date.parse(currentTime);

            beforeEach(function () {
                // stub out enough of Foo to satisfy the Bar service:
                // create a mock object with get(url, params, successFn, errorFn)
                // that promises to return a response of the form
                // { data: { now: "March 26, 2014 13:10 UTC" } }
                mockFoo = {
                    get: function (url, params, successFn, errorFn) {
                        successFn = successFn || function (response) {};
                        errorFn = errorFn || function (response) {};
                        // set up a resolved deferred promise
                        var deferred = $q.defer();
                        deferred.resolve({data: { now: currentTime }});
                        return (deferred.promise).then(successFn, errorFn);
                    }
                };
                // create the mock Foo service
                mockFooFactory = function (config) {
                    return mockFoo;
                };
                module(function ($provide) {
                    $provide.value('Foo', mockFooFactory);
                });
                module('bar');
                inject(function (_$q_, _$rootScope_, _Foo_, _Bar_) {
                    $q = _$q_;
                    $rootScope = _$rootScope_;
                    Foo = _Foo_;
                    Bar = _Bar_;
                });
            });

            it('getCurrentTime should return currentTimeInMs', function () {
                Bar.getCurrentTime().then(function (serverCurrentTime) {
                    now = serverCurrentTime;
                });
                $rootScope.$apply(); // resolve Bar promise
                expect(now).toEqual(currentTimeInMs);
            });
        });

    The error is being thrown at $rootScope.$apply(). I also tried using $rootScope.$digest(), but it gives the same error. Thanks in advance for any insight you can give me.
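    [Editor's note] One detail worth checking (an assumption about the cause, not a confirmed diagnosis): ngMock applies module registrations in order, so here the real Foo factory from the 'foo' module (pulled in by 'bar') is registered after the $provide.value override and wins, which would leave Bar calling the real $http-backed service and hitting $httpBackend. Registering the override after module('bar') should let the mock take effect; a minimal sketch:

        beforeEach(function () {
            module('bar'); // load the real modules first...
            module(function ($provide) {
                // ...then register the override, so the mock Foo wins
                $provide.value('Foo', mockFooFactory);
            });
            // inject(...) as before
        });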


  • How to get Firefox to not continue to show "Transferring data from..." in browser status bar after an async download completes

    - by Edward Tanguay
    The following Silverlight demo loads and displays a text file from a server. However, in Firefox (but not Explorer or Chrome), after you click the button and the text displays, the status bar continues to show "Transferring data from test.development..." which erroneously gives the user the belief that something is still loading. I've noticed that if you click on another Firefox tab and then back on the original one, the message goes away, so it seems to be just a Firefox bug that doesn't clear the status bar automatically. Is there a way to clear the status bar automatically in Firefox? Or a way to explicitly tell it that the async loading is finished so it can clear the status bar itself?

    XAML:

        <UserControl x:Class="TestLoad1111.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <StackPanel Margin="10">
                <StackPanel HorizontalAlignment="Left" Orientation="Horizontal" Margin="0 0 0 10">
                    <Button Content="Load Text" Click="Button_LoadText_Click"/>
                </StackPanel>
                <TextBlock Text="{Binding Message}" />
            </StackPanel>
        </UserControl>

    Code behind:

        using System;
        using System.Net;
        using System.Windows;
        using System.Windows.Controls;
        using System.ComponentModel;

        namespace TestLoad1111
        {
            public partial class MainPage : UserControl, INotifyPropertyChanged
            {
                #region ViewModelProperty: Message
                private string _message;
                public string Message
                {
                    get { return _message; }
                    set
                    {
                        _message = value;
                        OnPropertyChanged("Message");
                    }
                }
                #endregion

                public MainPage()
                {
                    InitializeComponent();
                    DataContext = this;
                }

                private void Button_LoadText_Click(object sender, RoutedEventArgs e)
                {
                    WebClient webClientTextLoader = new WebClient();
                    webClientTextLoader.DownloadStringAsync(new Uri("http://test.development:111/testdata/test.txt?" + Helpers.GetRandomizedSuffixToPreventCaching()));
                    webClientTextLoader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(webClientTextLoader_DownloadStringCompleted);
                }

                void webClientTextLoader_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
                {
                    Message = e.Result;
                }

                #region INotifiedProperty Block
                public event PropertyChangedEventHandler PropertyChanged;

                protected void OnPropertyChanged(string propertyName)
                {
                    PropertyChangedEventHandler handler = PropertyChanged;
                    if (handler != null)
                    {
                        handler(this, new PropertyChangedEventArgs(propertyName));
                    }
                }
                #endregion
            }

            public static class Helpers
            {
                private static Random random = new Random();
                public static int GetRandomizedSuffixToPreventCaching()
                {
                    return random.Next(10000, 99999);
                }
            }
        }
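    [Editor's note] One thing worth ruling out (an assumption, not a confirmed fix for the Firefox throbber behavior): the completion handler above is attached after DownloadStringAsync is already running. Silverlight raises the event on the UI thread, so this usually still works, but subscribing before starting the request is the safer ordering; a minimal sketch:

        WebClient webClientTextLoader = new WebClient();
        // subscribe first, then start the request, so the completion event can't be missed
        webClientTextLoader.DownloadStringCompleted +=
            new DownloadStringCompletedEventHandler(webClientTextLoader_DownloadStringCompleted);
        webClientTextLoader.DownloadStringAsync(new Uri("http://test.development:111/testdata/test.txt?"
            + Helpers.GetRandomizedSuffixToPreventCaching()));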


  • How do I make faint words in a search bar that go away when you click/type?

    - by Razor Storm
    So, for instance, Facebook's search bar has a faint word that says "search", but when you click on the bar it becomes blank and you may begin typing; when you click away, the "search" comes back. Similarly, SO's "ask a question" title box has faint words that go away when you start typing. I'm not too sure what this effect is called, but I'm wondering if there's a jQuery plugin that helps to achieve it. This isn't particularly difficult to program, but I thought: why reinvent the wheel if someone already made a plugin for it?
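    [Editor's note] The effect is usually called placeholder (or watermark) text, and newer browsers support it natively via the HTML5 placeholder attribute. A minimal hand-rolled sketch in jQuery, assuming an input with id="search" and a CSS class "faint" for the gray styling (both names are illustrative):

        var placeholder = 'search';
        $('#search')
            .val(placeholder)
            .addClass('faint')
            .focus(function () {
                // clear the hint when the user clicks in, but only if it is still the hint
                if ($(this).val() === placeholder) {
                    $(this).val('').removeClass('faint');
                }
            })
            .blur(function () {
                // restore the hint when the user clicks away leaving the field empty
                if ($.trim($(this).val()) === '') {
                    $(this).val(placeholder).addClass('faint');
                }
            });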


  • Tab Bar and Nav Controller: Where did I go wrong in my Interface Builder wiring?

    - by editor
    Even if you don't know how I've shot myself in the foot, a story which I've tried to lay out below, if you think I've done a good job showing the parameters of my problem I'd appreciate an upvote so that I might be able to grab some attention for my question.

    I've been working on an iPhone application in Xcode and Interface Builder, of the Tab Bar project type. After getting a table view of topics (business sectors) working fine, I realized that I would need to add a Navigation Controller to allow the user to drill into a subtopics (subsectors) table. As a green Objective-C developer, that was confusing, but I managed to get it working by reading various documentation and trying out a few different IB options.

    My current setup is a Tab Bar Controller with Tab 1 as a Navigation Controller and Tab 2 a plain view with a Table View placed into it. The wiring works: I can log when table rows are selected, and I'm ready to push a new View Controller onto the stack so that I can display the subtopics Table View.

    My problem: for some reason the first tab's Table View is a delegate and dataSource of the second tab. It doesn't make sense to me, and I can't figure out why that's the only setup that works. Here is the wiring:

    • Navigation Controller (Sectors) is a delegate of Tab Bar
    • Navigation Bar is a delegate of Navigation Controller (Sectors)
    • View Controller (Sectors) has a view of Table View
    • Table View (in Navigation Controller (Sectors)) is a delegate of First View Controller (Companies)
    • Table View (in Navigation Controller (Sectors)) is a dataSource outlet of First View Controller (Companies)
    • First View Controller (Companies) has a view of Table View
    • Table View (in First View Controller (Companies)) is not hooked up to a dataSource outlet and is not a delegate

    When I click the tab buttons and look at the Inspector, I see that the first tab is correctly hooked up to my MainWindow.xib and the second tab has selected a nib called SecondView.xib. It's in the File's Owner of MainWindow.xib where I inherit UITableViewDataSource and UITableViewDelegate (and also UITabBarControllerDelegate) in the .h, and in the .m where I implement the delegate methods.

    Why does this setup only work when the Table View in my first tab (View Controller (Sectors)) is a delegate and dataSource of the second tab? I'm confused: why wouldn't it need to be hooked up to the Navigation Controller-enabled tab in which the Table View is seen (Navigation Controller (Sectors))? The Table View seen on the second tab has neither a dataSource nor a delegate.

    I'm having trouble getting pushViewController to fire (self.navigationController is not nil, but the new View Controller still doesn't load), and I suspect that I need to work out this IB wiring issue before I can troubleshoot why the Nav Controller won't push a new View Controller onto the stack.

        if (nil == self.navigationController) {
            NSLog(@"self.navigationController is nil.");
        } else {
            NSLog(@"self.navigationController is not nil.");
            SectorList *subsectorViewController = [[SectorList alloc] initWithNibName:@"SectorList" bundle:nil];
            subsectorViewController.title = @"Subsectors";
            [[self navigationController] pushViewController:subsectorViewController animated:YES];
            [subsectorViewController release];
        }
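    [Editor's note] One way to take Interface Builder out of the equation while debugging (a sketch under assumed class and nib names, not a diagnosis of the wiring above): build the tab/navigation hierarchy in code in the app delegate, and have each view controller declare itself the delegate and dataSource of its own table in viewDidLoad, so no cross-tab connections exist at all:

        // sketch: programmatic hierarchy (class and nib names are illustrative)
        SectorsViewController *sectors =
            [[SectorsViewController alloc] initWithNibName:@"SectorsView" bundle:nil];
        UINavigationController *nav =
            [[UINavigationController alloc] initWithRootViewController:sectors];

        CompaniesViewController *companies =
            [[CompaniesViewController alloc] initWithNibName:@"SecondView" bundle:nil];

        UITabBarController *tabs = [[UITabBarController alloc] init];
        tabs.viewControllers = [NSArray arrayWithObjects:nav, companies, nil];

        [window addSubview:tabs.view]; // in the app delegate
        [sectors release];
        [nav release];
        [companies release];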


  • Can a tab bar configuration view be customized?

    - by Dirty Henry
    I have an application with 8 tab bar items in the tab bar controller. Is there a way I can customize the layout of the "... (More)" view in which you can configure which tab bar items should appear in the main tab bar? It seems to be a table view controller, but I'd like to use custom cell views and a background image.
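    [Editor's note] A possible starting point, with the caveat that the More screen's internals are not formally documented, so treat the table-view assumption as something that can change between OS versions: UITabBarController exposes the More screen through its moreNavigationController property, and in practice the root controller's view there is a UITableView you can restyle (or whose dataSource you can wrap to substitute custom cells):

        UINavigationController *moreNav = self.tabBarController.moreNavigationController;
        UIViewController *moreRoot = moreNav.topViewController;
        if ([moreRoot.view isKindOfClass:[UITableView class]]) {
            UITableView *moreTable = (UITableView *)moreRoot.view;
            // e.g. make the table transparent so an image behind it shows through
            moreTable.backgroundColor = [UIColor clearColor];
            // wrapping moreTable.dataSource would allow custom cell views
        }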


  • Does waitpid yield valid status information for a child process that has already exited?

    - by dtrebbien
    If I fork a child process, and the child process exits before the parent even calls waitpid, then is the exit status information that is set by waitpid still valid? If so, when does it become not valid; i.e., how do I ensure that I can call waitpid on the child pid and continue to get valid exit status information after an arbitrary amount of time, and how do I "clean up" (tell the OS that I am no longer interested in the exit status information for the finished child process)?

    I was playing around with the following code, and it appears that the exit status information is valid for at least a few seconds after the child finishes, but I do not know for how long or how to inform the OS that I won't be calling waitpid again:

        #include <assert.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main()
        {
            pid_t pid = fork();
            if (pid < 0) {
                fprintf(stderr, "Failed to fork\n");
                return EXIT_FAILURE;
            } else if (pid == 0) {
                // code for child process
                _exit(17);
            } else {
                // code for parent
                sleep(3);
                int status;
                waitpid(pid, &status, 0);
                waitpid(pid, &status, 0); // call `waitpid` again just to see if the first call had an effect
                assert(WIFEXITED(status));
                assert(WEXITSTATUS(status) == 17);
            }
            return EXIT_SUCCESS;
        }
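    [Editor's note] For what it's worth, these are standard POSIX semantics rather than anything specific to this snippet: the kernel keeps the exit status in the child's zombie process entry until it is reaped, so the status remains retrievable indefinitely; the first successful waitpid both fetches it and is itself the "clean up" step. After that, the second waitpid above should fail with -1 and errno == ECHILD, leaving status untouched from the first call, which would explain why both asserts pass. A sketch of the parent branch that makes this visible (add #include <errno.h> to the includes):

        // the first call reaps the zombie and releases the kernel's record
        pid_t r1 = waitpid(pid, &status, 0);
        // with the zombie gone, there is no child left to wait for
        pid_t r2 = waitpid(pid, &status, 0);
        if (r2 == -1 && errno == ECHILD) {
            // status still holds the value stored by the first call
            printf("second waitpid: no more children to reap (r1=%d)\n", (int)r1);
        }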


  • How to use javascript to get information from the content of another page (same domain)?

    - by hlovdal
    Let's say I have a web page (/index.html) that contains the following:

        <li>
          <div>item1</div>
          <a href="/details/item1.html">details</a>
        </li>

    and I would like to have some javascript on /index.html load that /details/item1.html page and extract some information from it. The page /details/item1.html might contain things like:

        <div id="some_id">
          <a href="/images/item1_picture.png">picture</a>
          <a href="/images/item1_map.png">map</a>
        </div>

    My task is to write a Greasemonkey script, so changing anything server-side is not an option. To summarize: javascript is running on /index.html, and I would like the javascript code to add some information on /index.html extracted from both /index.html and /details/item1.html. My question is how to fetch information from /details/item1.html. I have currently written code to extract the link (e.g. /details/item1.html) and pass this on to a method that should extract the wanted information (at first just the .innerHTML from the some_id div is OK; I can process it further later). The following is my current attempt, but it does not work. Any suggestions?

        function get_information(link)
        {
            var obj = document.createElement('object');
            obj.data = link;
            document.getElementsByTagName('body')[0].appendChild(obj);

            var some_id = document.getElementById('some_id');
            if (! some_id) {
                alert("some_id == NULL");
                return "";
            }
            return some_id.innerHTML;
        }
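    [Editor's note] Two likely problems with the <object> approach (an observation, not a tested diagnosis): the object loads asynchronously, so some_id is queried before the page has arrived, and getElementById searches the host document rather than the embedded one. Since the page is same-domain, a plain asynchronous XMLHttpRequest with a callback tends to be simpler; a sketch (createHTMLDocument is the modern route for parsing the response; on older engines, assigning responseText to a detached div's innerHTML and querying that also works):

        function get_information(link, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', link, true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return; // wait until the response is complete
                // parse the fetched HTML into a detached document and query it there
                var doc = document.implementation.createHTMLDocument('');
                doc.documentElement.innerHTML = xhr.responseText;
                var some_id = doc.getElementById('some_id');
                callback(some_id ? some_id.innerHTML : '');
            };
            xhr.send(null);
        }

        // usage: get_information('/details/item1.html', function (html) { /* add to /index.html */ });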


  • Hibernate Persistence problems with Bean Mapping (Dozer)

    - by BuffaloBuffalo
    I am using Hibernate 3, and having a particular issue when persisting a new entity which has an association with an existing detached entity. The easiest way to explain this is via code samples. I have two entities, FooEntity and BarEntity, of which a BarEntity can be associated with many FooEntity instances:

        @Entity
        public class FooEntity implements Foo {
            @Id
            private Long id;

            @ManyToOne(targetEntity = BarEntity.class)
            @JoinColumn(name = "bar_id", referencedColumnName = "id")
            @Cascade(value = {CascadeType.ALL})
            private Bar bar;
        }

        @Entity
        public class BarEntity implements Bar {
            @Id
            private Long id;

            @OneToMany(mappedBy = "bar", targetEntity = FooEntity.class)
            private Set<Foo> foos;
        }

    Foo and Bar are interfaces that loosely define getters for the various fields. There are corresponding FooImpl and BarImpl classes that are essentially just the entity objects without the annotations.

    What I am trying to do is construct a new instance of FooImpl, and persist it after setting a number of fields. The new Foo instance will have its 'bar' member set to an existing Bar (at runtime a BarEntity) from the database (retrieved via session.get(..)). After the FooImpl has all of its properties set, Apache Dozer is used to map between the 'domain' object FooImpl and the entity FooEntity. What Dozer does in the background is instantiate a new FooEntity and set all of the matching fields. BarEntity is cloned as well via instantiation and set as the FooEntity's 'bar' member. After this occurs, passing the new FooEntity object to persist throws the exception:

        org.hibernate.PersistentObjectException: detached entity passed to persist: com.company.entity.BarEntity

    Below are, in code, the steps that occur:

        FooImpl foo = new FooImpl();

        // returns at runtime a persistent BarEntity through session.get()
        Bar bar = BarService.getBar(1L);
        foo.setBar(bar);
        ...
        // This constructs a new instance of FooEntity, with a member 'bar' which
        // itself is a new instance that is detached
        FooEntity entityToPersist = dozerMapper.map(foo, FooEntity.class);
        ...
        session.persist(entityToPersist);

    I have been able to resolve this issue by either removing or changing the @Cascade annotation, but that limits future use, for, say, adding a new Foo with a new Bar already attached to it. Is there some solution here I am missing? I would be surprised if this issue hasn't been solved somewhere before, either by altering how Dozer maps the children of Foo or how Hibernate reacts to a detached child entity.
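    [Editor's note] One targeted way out (a sketch, not a verified fix): the exception arises because Dozer deep-copies the BarEntity into a new instance with an id but no session, which the ALL cascade then hands to persist as if it were new. Dozer's copy-by-reference feature can pass the existing Bar reference through unchanged instead of cloning it; a mapping sketch with assumed package names:

        <mapping>
          <class-a>com.company.domain.FooImpl</class-a>
          <class-b>com.company.entity.FooEntity</class-b>
          <field copy-by-reference="true">
            <a>bar</a>
            <b>bar</b>
          </field>
        </mapping>

    Alternatively, calling session.merge(entityToPersist) instead of persist() reattaches the detached BarEntity rather than treating it as transient, at the cost of merge's copy-and-return semantics.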


  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, last 30 days?

    - by Kirzilla
    Hello,

    Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, and all information about videos is stored in MySQL. The number of daily video views is about 1.5KK (1.5 million). As instruments we have a hard disk drive (text files), MySQL, and Redis.

    Views:
    • top viewed
    • top viewed last 24 hours
    • top viewed last 7 days
    • top viewed last 30 days
    • top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate the views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update the statistics tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me because we would have to store information about the last 365 days for each video to make the calculations correct.

    Are there any other good methods? Probably we have to choose other instruments for this? Thank you.
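    [Editor's note] One pattern that fits the instruments already listed (a sketch, not a benchmarked recommendation; key names are illustrative): keep one Redis sorted set of view counts per hour, and build each window offline by unioning the relevant hourly sets, so no per-video 365-day history has to live in MySQL:

        # count a view in the current hourly bucket
        ZINCRBY views:2008010100 1 video:1234

        # top viewed, last 24 hours: union the 24 hourly sets (scores are summed),
        # then read the head of the result (remaining keys elided for brevity)
        ZUNIONSTORE views:last24h 24 views:2008010100 views:2008010101 ... views:2008010123
        ZREVRANGE views:last24h 0 9 WITHSCORES

        # let buckets expire once no window needs them, so memory stays bounded
        EXPIRE views:2008010100 2678400

    The 7- and 30-day lists can be built the same way from daily rollups (a ZUNIONSTORE of each day's hourly sets into one per-day set), which keeps the number of keys per union small.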


  • Firefox all but freezes during large file upload; Ajax progress bar infeasible; IE6 works fine

    - by Sean
    I want to provide a progress bar for my users who upload very large files. I did some reading and implemented what should be a pretty straightforward solution:

    • I have a <form> element that contains a file input element; its target is set to the ID of a hidden iframe.
    • On the server side, there's some Spring magic that attaches an object to the user's session; the progress of the upload can be queried from this object.
    • After submitting the form, I start a repeating Ajax call using setInterval that queries the server for the percent-complete using the aforementioned session object. The call repeats every half-second, skipping the Ajax call if the previous call has not yet completed. I use the data from the call to update the width of an onscreen element (a sketch of this loop appears after this entry).
    • When the server call reports that the upload is complete, I clear the interval timer.

    I created a 100-megabyte file and uploaded it using my interface. This is using Firefox 3.6.3. What I found is that although the upload takes 20-25 seconds, the progress bar doesn't get updated until the very end. Moreover, the entire browser is basically frozen until the upload completes. I assumed that my method must be flawed, but I tried the same page using IE6 and was utterly amazed when it behaved as I had designed it to: the progress bar got updated every half-second, and the whole upload only took about 15 seconds, much faster than Firefox.

    I don't have many add-ons installed, but I tried disabling Firebug and restarting my browser. This marginally improved the performance (I got perhaps a single additional progress bar update mid-upload) but still far from acceptable. Can anyone tell me what I can do to bring Firefox's performance up to the level of IE6? Ugh, I can't believe I actually typed that.

    EDIT: I just tried uploading a large file from a Firefox 3.6.3 browser on a different machine than the one that's running my web server, and it worked fine. Huh.
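    [Editor's note] For reference, a minimal sketch of the polling loop as described (the endpoint URL and element id are assumptions; the original script isn't shown):

        var inFlight = false;
        var timer = setInterval(function () {
            if (inFlight) return; // skip this tick if the previous call hasn't returned
            inFlight = true;
            var xhr = new XMLHttpRequest();
            xhr.open('GET', '/uploadProgress', true); // assumed endpoint backed by the session object
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return;
                inFlight = false;
                var percent = parseInt(xhr.responseText, 10);
                document.getElementById('progress').style.width = percent + '%';
                if (percent >= 100) clearInterval(timer); // upload finished; stop polling
            };
            xhr.send(null);
        }, 500);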


  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn't foresee or, worse, don't care about. After all, the logic goes, we Did Something.

    Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

    • It is cost effective
    • It is "current" (meaning, the guidance doesn't require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem

    With the above in mind, heading up the list of "you must be joking" regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I'd like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of "unfortunate side effects of your requirements" – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

    A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

    Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say "tell all?") to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)

    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can't be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to "unsecret" the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get "EZHack" handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the "Hacking Trifecta!" It's fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

    Established industry practice concerning vulnerability handling avoids the risks created by the VRA's vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

    Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

    So, what's a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

    I like to be charitable and say "PCI meant well," but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn't enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
    By Way of Explanation…

    Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as "chick lit meets geek scene." Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

    From our book jacket:

    Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness's body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh's murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma's mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai'i. And everyone, even Shaun the barista, knows a good lawyer.

    Book 2, Denial of Service, is coming out this summer.

    * Given the rate of change in technology, today's "thou shalts" are easily next year's "buggy whip guidance."

