Search Results

Search found 12193 results on 488 pages for 'odi technical feature overviews'.

  • Explaining the training method for the AdaBoost algorithm

    - by konzti8
    Hi, I'm trying to understand the Haar feature method used in the training step of the AdaBoost algorithm. I don't understand the math that well, so I'd appreciate more of a conceptual answer (as much as possible, anyway). Basically, what does it do? How do you choose the positive and negative sets for what you want to detect? Can it be generalized? What I mean is: can you use it to find any kind of feature you want, no matter what the background is? So, for example, if I want to find some kind of circular blob, can I do that? I've also read that it is used on small image patches around the possible feature - does that mean you have to manually select the image patch, or can it be automated to process the entire image? Is there MATLAB code for the training step? Thanks for any help...
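
    Conceptually, a single Haar-like feature is just the difference between the pixel sums of two adjacent rectangles, computed in constant time from an integral image; each AdaBoost training round then keeps whichever feature/threshold best separates the labeled positive patches from the negative ones. A minimal MATLAB sketch of one such feature (my own illustration, not from the post; I is assumed to be a grayscale image matrix, and the coordinates are arbitrary examples kept greater than 1 so the indexing stays simple):

        ii = cumsum(cumsum(I, 1), 2);                 % integral image of I
        rectSum = @(r1, c1, r2, c2) ...               % sum of I(r1:r2, c1:c2)
            ii(r2, c2) - ii(r1-1, c2) - ii(r2, c1-1) + ii(r1-1, c1-1);
        % Two-rectangle feature: left half minus right half (a vertical edge).
        f = rectSum(10, 10, 20, 15) - rectSum(10, 16, 20, 21);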

  • Using ScriptCombining through a ScriptManager on a Master Page

    - by Hmobius
    ASP.NET 3.5 SP1 adds a great new ScriptCombining feature to the ScriptManager object, as demonstrated in this video. However, it only demonstrates how to use the feature with the ScriptManager on the same page. I'd like to use this feature on a site where the ScriptManager is on the master page, but I can't figure out how to programmatically add the scripts each page needs to the manager. I've found this post to use as a starting point, but I'm not really getting very far. Can anyone give me a helping hand? Thanks, Dan
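
    For what it's worth, a ScriptManager declared on a master page can be reached from any content page through ScriptManager.GetCurrent, and scripts can be added to its CompositeScript there. A sketch (C#, ASP.NET 3.5 SP1; the script paths are hypothetical):

        // In a content page's code-behind:
        protected void Page_Load(object sender, EventArgs e)
        {
            ScriptManager sm = ScriptManager.GetCurrent(this.Page); // finds the master page's manager
            if (sm != null)
            {
                sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/pageSpecific1.js"));
                sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/pageSpecific2.js"));
            }
        }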

  • WCF on Win Server 2008 and IIS7 with only net.tcp binding hides IIS features

    - by Nicola Celiento
    Hi all, I've installed the HTTP Activation and Non-HTTP Activation IIS features for .NET Framework 3.0, under the WCF Activation feature. I'm trying to remove the http and https bindings (under Default Web Site) from IIS Manager and leave the others (net.tcp, net.msmq, etc.), but if I close and re-open IIS Manager I no longer find any icons in the right panel (Features View). The only feature I see is IIS Manager Permissions. Is it expected that I don't see them? I hope you can help me. Thank you in advance!

  • ASP.NET - Manual authentication system

    - by Gal V
    Hello all, We're developing an ASP.NET C# application which will contain an authentication system that authenticates users at multiple levels (user, admin, super-admin, etc.). Our idea is NOT to use the built-in ASP.NET Forms authentication feature. Our plan is to create a whole 'new' system for it, based on the Session object and an SQL database containing user info such as usernames & passwords. Is there any SERIOUS difference between our idea and the Forms authentication feature? What security risks do we take, and how do we solve them? Is this a good alternative to the Forms authentication feature? Thanks in advance!
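
    To make the comparison concrete, here is a sketch of the per-request gate such a hand-rolled scheme needs on every protected page (C#; the session key and role names are made up). Forms authentication gives you the equivalent, plus the encrypted ticket, timeout and redirect plumbing, for free, which is the usual argument against rolling your own:

        protected void Page_Load(object sender, EventArgs e)
        {
            object role = Session["Role"];          // set at login, after verifying a
                                                    // salted password hash in SQL
            if (role == null)
            {
                Response.Redirect("~/Login.aspx");  // not authenticated
                return;
            }
            if ((string)role != "Admin")            // authenticated but not authorized
            {
                Response.StatusCode = 403;
                Response.End();
            }
        }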

  • Integer variables in WiX

    - by Hila
    I would like to install a feature according to the brand. So in my brand.wxi I defined:

        <?define brand.FeatureLevel = 1 ?>

    And in my wxs I wrote:

        <Feature Id="FF" Title="FF" Level="$(var.brand.FeatureLevel)">
          <ComponentRef Id="..." />
          <ComponentRef Id="..." />
        </Feature>

    This definition works fine (whether I've placed 0 or 1 as the FeatureLevel). My only problem is a warning I get at compile time:

        The 'Level' attribute is invalid - The value '$(var.brand.FeatureLevel)' is invalid according to its datatype 'http://www.w3.org/2001/XMLSchema:integer' - The string '$(var.brand.FeatureLevel)' is not a valid Integer value.

    Is there a way to fix this warning? Can I define an integer variable? I couldn't find a way...

  • Defining - and dealing with - Evil

    - by Chris Becke
    As a software developer one sometimes gets feature requests that seem to be in some kind of morally grey area. Sometimes one can deflect them, or implement them in a way that feels less 'evil'; sometimes, on reflection, while the feature request 'feels' wrong, there's no identifiable part of it that actually causes harm. Sometimes one feels a feature is totally innocent, but various anti-virus products start tagging one as malware. For example, I personally consider EULAs to be (a) hopefully unenforceable and (b) a means by which rights are REMOVED from consumers. However, anti-virus scanners frequently mark as malware any kind of download agent that does not display a EULA - which to me is the result of a curious kind of doublethink. What I want to know is: are there any online (or offline) resources that cover evil software development practices? How can I know if a software practice that I consider dodgy is in fact evil enough to consider fighting?

  • Amazon design doubt

    - by praveen
    I was looking at the Amazon website and was wondering how one of its features might have been implemented. The feature: what customers buy after viewing a particular item. If I were to develop such a feature, I would probably generate a session ID for each user session and store the session-ID/page-ID combination in a log file, and if a book is bought, set a separate flag for that session-ID/page-ID pair. A separate program can then be run on the log file periodically to identify the groups that were bought together/viewed together, and that information can be stored in a persistent file. This is of course a simple solution that doesn't take into consideration the distributed nature of the servers - but would this suffice, or can you help me identify a better design?
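
    The periodic aggregation step described above could be as simple as a co-occurrence count. A sketch in SQL (assuming the log has been loaded into two tables; all names are illustrative, not from the post):

        -- views(session_id, page_id): every item viewed in a session
        -- purchases(session_id, page_id): every item bought in a session
        SELECT v.page_id AS viewed_item,
               p.page_id AS bought_item,
               COUNT(*)  AS n
        FROM views v
        JOIN purchases p ON p.session_id = v.session_id
        WHERE p.page_id <> v.page_id
        GROUP BY v.page_id, p.page_id
        ORDER BY n DESC;   -- top rows: "customers who viewed X then bought Y"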

  • Move a sequential set of commits from one (local) branch to another

    - by jpswain09
    Is there a way to move a sequential set of commits from one (local) branch to another? I have searched quite a bit and still haven't found a clear answer for what I want to do. For example, say I have this:

        master     A---B---C
                            \
        feature-1            M---N---O---P---R---Q

    And I have decided that the last 3 commits would be better off like this:

        master     A---B---C
                            \
        feature-1            M---N---O
                                      \
        f1-crazy-idea                  P---R---Q

    I know I can do this, and it does work:

        $ git log --graph --pretty=oneline    # copying down the SHA-1 IDs of P, R, Q
        $ git checkout feature-1
        $ git reset --hard HEAD^^^
        $ git checkout -b f1-crazy-idea
        $ git cherry-pick <P sha1>
        $ git cherry-pick <R sha1>
        $ git cherry-pick <Q sha1>

    I was hoping there would be a more concise way to do this, possibly with git rebase, although I haven't had much luck. Any insight would be greatly appreciated. Thanks, Jamie
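
    For reference, a more concise route that should produce the same result (a sketch against the branch names above): no cherry-picking is needed, because a new branch created at Q can simply keep the commits that feature-1 then abandons.

        $ git checkout feature-1
        $ git branch f1-crazy-idea     # new branch still points at Q
        $ git reset --hard HEAD~3      # feature-1 moves back to O; P, R, Q remain on f1-crazy-idea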

  • How to create a new variant in bjam

    - by steve jaffe
    I've tried reading the documentation, but it is rather impenetrable, so I'm hoping someone may have a simple answer. I want to define a new 'variant', based on 'debug', which just adds some macro definitions to the compiler command line, e.g. "-DSOMEMACRO". I think I may be able to do this as a "sub-variant" of debug, or else just define a new variant copying 'debug', but I'm not even sure where to do this. It looks like feature.jam in $BOOST_BUILD_DIR/build may be the place. Perhaps what I really want is simply a new 'feature', but it's still not clear to me exactly what I need to do and where, and I don't know whether a 'feature' allows me to direct the build products to a different directory than the 'debug' build. Any suggestions will be appreciated. (In case you're wondering, I have to use bjam since it has been adopted as our corporate standard.)
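
    For reference, Boost.Build v2 does allow deriving a new variant from an existing one in a Jamroot or Jamfile, and build products land in a directory named after the variant. A sketch (the variant name below is made up):

        # In Jamroot:
        variant debug-macros : debug : <define>SOMEMACRO ;

        # Build with:  bjam variant=debug-macros
        # Outputs go under e.g. bin/<toolset>/debug-macros/, separate from plain debug.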

  • Exception while trying to deserialize JSON into EntityFramework using JavaScriptSerializer

    - by Barak
    I'm trying to deserialize JSON which I'm getting from an external source into an Entity Framework entity class, using the following code:

        var serializer = new JavaScriptSerializer();
        IList<Feature> obj = serializer.Deserialize<IList<Feature>>(json);

    The following exception is thrown:

        Object of type 'System.Collections.Generic.List`1[JustTime.Task]' cannot be converted to type 'System.Data.Objects.DataClasses.EntityCollection`1[JustTime.Task]'.

    My model is simple: the Feature class has a one-to-many relation to the Tasks class. The problem appears to be that the deserializer is trying to create a generic List to hold the collection of tasks instead of an EntityCollection. I've tried implementing a JavaScriptConverter to handle System.Collections.Generic.List, but it didn't get called by the deserializer.
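
    A common workaround is to deserialize into plain DTO classes first and then copy the data into the entities, so the serializer never has to instantiate an EntityCollection<T>. A sketch (C#; the DTO types and property names are hypothetical):

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        public class FeatureDto { public string Name; public List<TaskDto> Tasks; }
        public class TaskDto { public string Name; }

        public static IList<Feature> MapFeatures(string json)
        {
            var dtos = new JavaScriptSerializer().Deserialize<IList<FeatureDto>>(json);
            var result = new List<Feature>();
            foreach (var dto in dtos)
            {
                var feature = new Feature { Name = dto.Name };     // EF entity
                foreach (var t in dto.Tasks)
                    feature.Tasks.Add(new Task { Name = t.Name }); // fills the EntityCollection<Task>
                result.Add(feature);
            }
            return result;
        }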

  • Android PayPal one-time payment

    - by Ameya
    Hi All, Does anyone know whether it is possible to do a one-time payment with PayPal on Android? Consider the scenario: a buyer installs a free application, clicks on the PayPal module, makes an in-app payment, and successfully purchases a feature. The buyer then deletes the application, and all information, including the feature-purchase entry in the database, is deleted. The buyer reinstalls the application. Here is the catch: if he wants to use the feature, he will have to re-purchase it, which I want to avoid in my application. This is taken care of in iPhone in-app purchase: if a buyer has already purchased a feature or in-app item and tries to repurchase it (with the item set to one-time payment), the transaction succeeds without the user actually having to pay for the item again. Is there a solution for this? Can anyone help?

  • Android developers: are you adding App 2 SD in a future app release, and if so, for which applications?

    - by Anthony
    For Android application developers, regarding 2.2 and the new App 2 SD feature: Android 2.2 now allows your applications to be installed on the SD card instead of using the phone's internal memory. Will any of you be adding this feature in your next release, and if so, what is your app? I know applications built with the App 2 SD function cannot be used while the SD card is mounted. Maybe two versions of each app on the market would work out great for those that need an app while the phone is mounted. What do you think about this idea? Are you aware of any other negative issues that arise from an application built for this feature?
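
    For anyone looking for the mechanics: opting in to App 2 SD is a single manifest attribute introduced with Android 2.2 (API level 8). A sketch (the package name is a placeholder):

        <!-- AndroidManifest.xml: "auto" lets the system decide,
             "preferExternal" asks for the SD card outright. -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.example.myapp"
            android:installLocation="preferExternal">
            <!-- ... -->
        </manifest>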

  • MATLAB: draw centroids

    - by Myx
    Hello - my main question is: given a feature centroid, how can I draw it in MATLAB? In more detail, I have an NxNx3 image (an RGB image) from which I take 4x4 blocks and compute a 6-dimensional feature vector for each block. I store these feature vectors in an Mx6 matrix, on which I run the kmeans function to obtain the centroids in a kx6 matrix, where k is the number of clusters and 6 is the number of features for each block. How can I draw these cluster centers in my image in order to visualize whether the algorithm is performing the way I wish it to? Or if anyone has any other ways/suggestions for visualizing the centroids on my image, I'd greatly appreciate it. Thank you.
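
    One simple visualization is to map each block's cluster label back onto the block grid and show it next to the original image. A sketch (assuming X is the Mx6 feature matrix, blocks were taken column-major from the NxN image rgbImage so that M = (N/4)^2, and k is an arbitrary example):

        k = 5;
        [idx, C] = kmeans(X, k);               % idx: cluster label for each block
        labelMap = reshape(idx, N/4, N/4);     % labels back on the block grid
        figure;
        subplot(1,2,1); image(rgbImage);  axis image; title('original');
        subplot(1,2,2); imagesc(labelMap); axis image; title('cluster per block');
        colormap(jet(k)); colorbar;            % one color per centroid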

  • Which JMS broker implementations allow resending messages saved in a dead message queue?

    - by marabol
    I wonder if there is a JMS broker that allows administrators to resend (via a GUI or any tool) messages saved in a dead message queue or dead letter queue, after the causing problem has been solved (e.g. the database was down, not enough space...). WebSphere provides a feature to resend messages saved in the dead letter queue. Glassfish 2.1.1 using Sun Java System Message Queue 4.4 has no feature to do this, I think. What are the options with other JMS brokers? Or is it best not to rely on the DMQ/DLQ feature if you depend on a message? Thanks a lot

  • How to avoid automatic renaming of sub signature parameters in Visual Basic 6

    - by systempuntoout
    In Visual Basic 6, I declare a sub like this:

        Private Sub test1(ByRef XmlFooOutput As String)
            ...
        End Sub

    After that, I declare another sub like the following one:

        Private Sub test2(ByRef xmlFooOutput As String)
            ...
        End Sub

    Automagically, the first method is transformed into:

        Private Sub test1(ByRef xmlFooOutput As String)
            ...
        End Sub

    so the XmlFooOutput parameter is renamed to xmlFooOutput. This is a pretty dangerous feature, because methods like these could be mapped to different XSL presentation files that read XML values through XPath. So when the test1 parameter is renamed, XSL bound to the test1 method breaks, because its XPath points to XmlFooOutput but the correct value is now in xmlFooOutput. Is it possible to remove this weird feature? I'm using Microsoft Visual Basic 6.0 (SP6). This question has some duplicates:

        http://stackoverflow.com/questions/1064858/stop-visual-basic-6-from-changing-my-casing
        http://stackoverflow.com/questions/248760/vb6-editor-changing-case-of-variable-names

    From what I see, there's no practical solution to disable this evil IntelliSense feature.
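
    One commonly cited workaround (a sketch, not an IDE setting): the editor applies the most recently typed casing of an identifier everywhere, so you can pin the casing you want in a block that is never compiled.

        #If False Then
            Dim XmlFooOutput   ' never compiled, but the IDE still honours this casing
        #End If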

  • Dynamic searchable fields: best practice?

    - by boblu
    I have a Lexicon model, and I want users to be able to add dynamic features to every lexicon. And I have a complicated search interface that lets users search on every single feature (including the dynamic ones) belonging to the Lexicon model. I could have used a serialized text field to save all the dynamic information if it were not meant for searching. Since I want to let users search on all fields, I have created a DynamicField model to hold all dynamically created features. But imagine I have 1,000,000,000 lexicons: if someone creates a dynamic feature for every lexicon, this will result in 1,000,000,000 rows in the DynamicField model. So the SQL search function will become quite inefficient once a lot of dynamic features have been created. Is there a better solution for this situation? Which way should I take: searching for a better DB design for dynamic fields, or trying to tune MySQL (add cache fields, add indexes, ...) with the current DB design?
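
    If the per-feature rows stay, the usual mitigation is an entity-attribute-value table whose index matches the search pattern, so each lookup is an index range scan rather than a table scan. A sketch (MySQL; table and column names are illustrative, not from the post):

        CREATE TABLE dynamic_fields (
            lexicon_id BIGINT UNSIGNED NOT NULL,
            name       VARCHAR(64)     NOT NULL,
            value      VARCHAR(255)    NOT NULL,
            PRIMARY KEY (lexicon_id, name),
            KEY idx_name_value (name, value)  -- serves "lexicons where feature X = Y"
        ) ENGINE=InnoDB;

        -- A search on a dynamic feature then becomes:
        SELECT lexicon_id FROM dynamic_fields
        WHERE name = 'color' AND value = 'red';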

  • Code reviews for larger ASP.NET MVC team using TFS

    - by Parrots
    I'm trying to find a good code review workflow for my team. Most questions similar to this on SO revolve around using shelved changes for the review; however, I'm curious how this works for people with larger teams. We usually have 2-3 people working a story (UI person, Domain/Repository person, sometimes a DB person). I've recommended the shelveset idea, but we're all concerned about how to manage that with multiple people working the same feature. How could you share a shelveset between multiple programmers at that point? We worry it would be clunky and that we might easily have unintended consequences moving to this workflow. Of course, moving to shelvesets for each feature avoids having 10 or so check-ins per feature (as developers need to share code), which make seeing the diffs at code review time painful. Has anyone else been able to deal with this successfully? Are there any tools people have found useful aside from shelvesets in TFS (preferably open-source)?

  • Git workflow idea to push an unfinished local branch to remote for backup purposes

    - by Zubin
    Say I'm currently working on a new feature which I've branched off of the 'dev' branch. I've been working on it for several days, and it's not yet ready to be merged with 'dev' and pushed, although I have made several commits and have been pulling changes to dev and then merging dev into my feature branch to keep myself updated. Here's my question: is it a good idea to push my feature branch to a new branch (with the same name as my local branch) on origin (say GitHub), just for backup purposes, and later on, when it's merged into 'dev' and/or 'master', delete it from origin?
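
    The round trip itself is short. A sketch (assuming the remote is 'origin' and 'feature-x' is a hypothetical branch name):

        $ git push -u origin feature-x         # publish the work-in-progress branch
        # ... later, after it has been merged into dev and/or master ...
        $ git push origin --delete feature-x   # drop the remote backup copy
        $ git branch -d feature-x              # and the local branch, if desired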

  • MySQL storage engine dilemma

    - by burntblark
    There are two MySQL database features that I want to use in my application: FULL-TEXT SEARCH and TRANSACTIONS. Now, the dilemma here is that I cannot get both features in one storage engine. Either I use MyISAM (which has the full-text search feature) or I use InnoDB (which supports transactions). I can't have both. My question is: is there any way I can have both features in my application before I am forced to make a choice between the two storage engines?
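
    A workaround often suggested for exactly this dilemma is to keep the transactional data in InnoDB and mirror only the searchable text into a MyISAM table carrying the FULLTEXT index, kept in sync by the application or by triggers. A sketch (table names are illustrative):

        CREATE TABLE articles (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            body TEXT NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE articles_search (
            article_id INT UNSIGNED PRIMARY KEY,
            body       TEXT NOT NULL,
            FULLTEXT KEY ft_body (body)
        ) ENGINE=MyISAM;

        -- Searches hit the MyISAM mirror only:
        SELECT article_id FROM articles_search
        WHERE MATCH(body) AGAINST ('some words');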

  • SVN tool to rebase a branch in git style

    - by timmow
    Are there any tools available that will let me rebase an SVN branch, git-style, onto a new parent? So, in the following situation, I create a feature branch, and then there are further commits to the trunk:

                     E---F---G     Feature
                    /
        A---B---C---D---H---I      trunk

    I'm looking for a tool which copies the trunk and applies the branch commits one by one, letting me resolve any conflicts if any exist - but each commit retains the same commit message and is still a separate commit:

                                 E'---F'---G'   Feature
                                /
        A---B---C---D---H---I                   trunk

    So commit E' will be a commit with the same changes as E, except in the case of E causing a conflict, in which case E' will differ from E in that E' has the conflicts resolved, and the same commit message as E. I'm looking for this as it helps in keeping branches up to date with the trunk - the svnmerge.py / mergeinfo way does not help, as you still need to resolve your changes when you merge back to trunk.

  • Privacy, C++, Firefox... big bug!!!

    - by Delirium tremens
    How to reproduce:

    1. Open Firefox.
    2. Visit a good TGP.
    3. Click History.
    4. Click Show All History.
    5. Select the name of the good TGP. You already know Delete This Page, but there is another feature, a super-secret feature: click Forget All About This Page --- if you had cookies, cache, active logins etc. that came from the good TGP, they are correctly deleted, because it's a different feature from Delete This Page.
    6. Now visit TWO good TGPs.
    7. Click History, click Show All History, and select the names of the TWO good TGPs --- where is Forget All About These Pages???

    That is the bug... It used to be all-or-nothing, but now... now??? Oh, now there's a bug and it's still all-or-nothing.

  • Getting the number of days between [NSDate date] and the string @"2010-11-12"

    - by grobald
    Hello, I have a question. In short: I need a function which receives [NSDate date] and the string @"2010-11-12" and returns the number of days between those two dates. More explanation: I need to store a date from a server, in the format @"2010-11-12", in my NSUserDefaults. The meaning of this date is the expiry date of a feature in an iPhone app. Every time I press a button for this feature I need to check whether the difference in days between the current time, [NSDate date], and @"2010-11-12" is greater than 0 - which would mean that the feature is disabled. It's making me crazy; maybe it's dead simple.
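
    A sketch of such a function (Foundation only, pre-ARC as was current at the time; the function name is made up, and a negative return value means the expiry date has passed):

        #import <Foundation/Foundation.h>

        NSInteger daysUntilExpiry(NSString *expiryString) {
            NSDateFormatter *fmt = [[[NSDateFormatter alloc] init] autorelease];
            fmt.dateFormat = @"yyyy-MM-dd";
            NSDate *expireDate = [fmt dateFromString:expiryString];

            NSDateComponents *diff =
                [[NSCalendar currentCalendar] components:NSDayCalendarUnit
                                                fromDate:[NSDate date]
                                                  toDate:expireDate
                                                 options:0];
            return [diff day];   // e.g. daysUntilExpiry(@"2010-11-12") < 0 => feature disabled
        }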

  • Reorganising git commits into different branches

    - by user1425706
    I am trying to reorganise my git tree so that it is structured a bit better. Basically, at the moment I have a single master branch with a couple of small feature branches that split from it. I want to go back and reorder it so that the only commits in the main branch are the ones corresponding to new version numbers, and then have all the in-between commits reside in a separate develop branch, from which the feature branches split too. Basically I'm looking for a tool that will let me completely manually reorganise the tree. I thought interactive rebasing might be what I was looking for, but trying to do it in SourceTree makes it seem like it is not the right tool. Can anyone give me some advice on how best to proceed? Below is a diagram of my current structure:

        featureA           x-x-x
                          /     \
        master   A-x-x-x-x-B-x-x-x-C   D

    Desired structure:

        feature               x-x-x
                             /     |
        develop     x-x-x-x-x-x-x-x
                   /      |      |  \
        master    A ----- B ---- C - D

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror.

    In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy, so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend them slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in.

    The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective.
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

    • It is cost effective
    • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem

    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

    A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)

    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
    Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of the vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you.

    Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge it. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium?

    Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!)

    Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that the VRA’s current requirement for the widespread distribution of security vulnerability exploit details – at any time, but particularly before a vendor can issue a patch or a workaround – is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits – actually, any benefits – to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers.
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

    Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

    So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

    I like to be charitable and say “PCI meant well”, but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
    By Way of Explanation… Unrelated to PCI whatsoever, and the explanation for why I have not been blogging much recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket:

    Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer.

    Book 2, Denial of Service, is coming out this summer.

    * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further.

    Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison

    In Anne's book, Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include:

    • Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help
    • The Microsoft Development Network Community Center
    • The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs
    • Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches
    • The efforts of Ubuntu, Mozilla, and the FLOSS community generally

    Adobe Writer Bob Bringhurst's Blog

    Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said.
    There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced.
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.
