Search Results

Search found 3492 results on 140 pages for 'subject'.

  • Advice and resources on collaborative environments

    - by Tjaart
    I need some advice on collaborative software environments. More specifically, I am looking for books and reference materials that can help me understand team and code structures and how they interact: in other words, books, blogs or white papers explaining the different strategies for structuring teams that share common code but still have distinct individual functions. To summarise my question: what would be a good source of knowledge if I were to set up teams in an organisation that shared code while each unit still remained autonomous? I have done some research on this subject and explored code review tools, distributed VCS, continuous integration tools and unit testing automation. The tough part about implementing these tools is determining where a good place to start would be, which tools are low-hanging fruit, and which tools or methods have higher success rates. If someone asks me for a code quality reference I point them to Code Complete; I am looking for an equivalent guide on software team structures and the tools that make this work. I realise that this question is quite vague, but it arose as "we need to share code between teams without breaking each other's stuff and causing management headaches and reams of red tape". The answer is definitely not simple and requires changes on many levels, hence the question. If the question is too vague please vote to close or delete. I would accept any good starting point as an answer.

    Read the article

  • Must all AI states be able to react to any event?

    - by Prog
    FSMs implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. How is this used in games to design AI agents? Consider a simplified class Monster, representing an AI agent:

        class Monster {
            State state;
            // other fields omitted

            public void update(){ // called every game-loop cycle
                state.execute(this);
            }

            public void setState(State state){
                this.state = state;
            }
            // irrelevant stuff omitted
        }

    There are several State subclasses implementing execute() differently. So far, classic State pattern. AI agents are subject to environmental effects and other objects communicating with them. For example, an AI agent might tell another AI agent to attack (i.e. agent.attack()). Or a fireball might tell an AI agent to fall down. This means that the agent must have methods such as attack() and fallDown(), or commonly some message-receiving mechanism to understand such messages. With an FSM, the current State of the agent should be the one taking care of such method calls, i.e. the agent delegates to the current state upon every event. Is this correct? If correct, how is this done? Are all states obligated by their superclass to implement methods such as attack(), fallDown() etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?
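    One common answer, sketched below with assumed names rather than taken from the linked article: give the State type one method per event with a do-nothing (or sensible shared) default, so concrete states override only the events they care about and the agent can always delegate safely.

        // Hypothetical sketch: event methods with no-op defaults on the State type,
        // so every concrete state can be delegated to without null checks or casts.
        interface State {
            void execute(Monster self);                       // called every update
            default void onAttackOrdered(Monster self) { }    // ignored by default
            default void onHitByFireball(Monster self) {
                self.setState(new FallenState());             // shared default reaction
            }
        }

        class FallenState implements State {
            public void execute(Monster self) { /* lie on the ground, get up after a while */ }
            public void onHitByFireball(Monster self) { /* already down: ignore */ }
        }

        class Monster {
            private State state = new FallenState();
            void update() { state.execute(this); }
            void setState(State s) { state = s; }
            // the agent's public API just forwards events to the current state
            void attack()   { state.onAttackOrdered(this); }
            void fallDown() { state.onHitByFireball(this); }
        }

    An alternative often discussed for FSMs is a single handleMessage(Message m) method on the state instead of one method per event, which keeps the State interface stable as new event types are added.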

    Read the article

  • Dirt compression from vehicle tires

    - by Mungoid
    So I kind of have this working, but it's not correct because it just averages, so I wanted to know if anyone here has any ideas. I'm trying to simulate loose dirt compression under the tires of a vehicle to reduce the potential bumpiness of 'chunky' terrain. Currently I have a bounding box shape around each tire, set a little lower so it intersects with the terrain. Each frame, I average the heights of all the terrain points within the box bounds of that tire, and then set them all to that average. Clearly this won't work in most cases because, for example, if I'm on a hill the terrain will deform way too much. One idea was to have a maximum and minimum amount the points could rise and fall, but that still doesn't seem to work properly and sometimes looks more like steps than smooth dirt. I suspect there is probably a bit more to this than what I'm currently doing, but I am not sure where to look. Could anyone here shed some light on this subject? Would I benefit from looking up a smoothing algorithm or something similar?
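    One common variation on the averaging approach described above, shown as a minimal sketch with assumed names (a plain 2D height grid, not any particular engine's terrain API): blend each height only part of the way toward the footprint average and clamp the per-frame change, which avoids both flattening a hill in one step and the stair-stepping effect.

        // Hypothetical helper: softly compresses terrain heights inside a tire's footprint.
        class DirtCompression {
            static void compressUnderTire(float[][] heights, int x0, int z0, int x1, int z1,
                                          float blend, float maxDeltaPerFrame) {
                // average height inside the footprint
                float sum = 0f;
                int count = 0;
                for (int x = x0; x <= x1; x++)
                    for (int z = z0; z <= z1; z++) { sum += heights[x][z]; count++; }
                float avg = sum / count;

                for (int x = x0; x <= x1; x++) {
                    for (int z = z0; z <= z1; z++) {
                        // move only a fraction of the way toward the average...
                        float delta = (avg - heights[x][z]) * blend;
                        // ...and never more than maxDeltaPerFrame in a single step
                        delta = Math.max(-maxDeltaPerFrame, Math.min(maxDeltaPerFrame, delta));
                        heights[x][z] += delta;
                    }
                }
            }
        }

    On slopes, fitting a plane to the footprint (or using the tire's contact plane) instead of a single average keeps the hill's overall shape while still smoothing out the bumps.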

    Read the article

  • Web dev/programmer with 4.5 yrs experience. Better for career: self-study or master's degree? [closed]

    - by Anonymous Programmer
    I'm a 28-year-old web developer/programmer with 4.5 years of experience, and I'm looking to jump-start my career. I'm trying to decide between self-study and a 1-year master's program in CS at a top school. I'm currently making 65K in a high cost-of-living area that is NOT a hot spot for technology firms. I code almost exclusively in Ruby/Rails, PHP/CodeIgniter, SQL, and JavaScript. I've slowly gained proficiency with Git. Roughly half the time I am architecting/coding, and half the time I am pounding out HTML/CSS for static brochureware sites. I'd like to make more money while doing more challenging/interesting work, but I don't know where to start. I have an excellent academic record (math major with many CS credits, 3.9+ GPA), GRE scores, and recommendations, so I am confident that I could be admitted to a great CS master's program. On the other hand, there is the tuition and opportunity cost to consider. I feel like there are a number of practical languages/tools/skills worth knowing that I could teach myself - shell scripting, .NET, Python, Node.js, MongoDB, natural language processing techniques, etc. That said, it's one thing to read about a subject and another thing to have experience with it, which structured coursework provides. So, on to the concrete questions: What programming skills/knowledge should I develop to increase my earning potential and make me competitive for more interesting jobs? Will a master's degree in CS from a top school help me develop the above skills/knowledge, and if so, is it preferable to self-study (possibly for other reasons, e.g., the degree's value as a credential)?

    Read the article

  • Advanced Exalytics, Endeca and OBI Training for Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    In Sweden and in Saudi Arabia there will be the next set of hands-on advanced Exalytics, Endeca and OBI Training workshops for Partners; partners from any country may attend these. These are free of charge to OPN Specialised member partners, but are subject to availability (please do not attend unless you have received a confirmation from Oracle to do so). Check the workshop agendas and pre-requisites, and register from the links below:

    Exalytics and Business Intelligence Hands-on Workshop (3-days)
    August 24-26, 2013: Oracle Riyadh, Saudi Arabia
    September 3-5, 2013: Oracle Kista, Sweden

    Endeca Information Discovery Hands-on Workshop (3-days)
    October 22-24, 2013: Oracle Kista, Sweden

    Advanced Oracle Business Intelligence Hands-on Workshop (3-days)
    August 19-21, 2013: Oracle Riyadh, Saudi Arabia

    Read the article

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm building it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: well, why not just reference BL.dll (and DAL.dll) from the other UI, and whenever I make a change re-copy the dll files? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? {I know something is wrong in my approach, and I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because as it is many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because in several functions I actually have LOGIC inside the BL?} So I'm looking for more motivation for creating ONE MORE layer; surely it's not just for the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?) or any enlightenment on the subject?

    Read the article

  • Object design problem for a simple school application

    - by Aragornx
    I want to create a simple school application that provides grades, notes, presence, etc. for students, teachers and parents. I'm trying to design objects for this problem and I'm a little bit confused, because I'm not very experienced in class design. Some of my present objects are:

        class PersonalData {
            private String name;
            private String surename;
            private Calendar dateOfBirth;
            [...]
        }

        class Person {
            private PersonalData personalData;
        }

        class User extends Person {
            private String login;
            private char[] password;
        }

        class Student extends Person {
            private ArrayList<Counselor> counselors = new ArrayList<>();
        }

        class Counselor extends Person {
            private ArrayList<Student> children = new ArrayList<>();
        }

        class Teacher extends Person {
            private ArrayList<SchoolClass> schoolClasses = new ArrayList<>();
            private ArrayList<Subject> subjects = new ArrayList<>();
        }

    This is of course a general idea, but I'm sure it's not the best way. For example, I want one person to be able to be a Teacher and also a Parent (Counselor), and the present approach forces me to have two Person objects. I want the user, after successfully logging in, to get all the roles that it has (Student, or Teacher, or (Teacher & Parent)). I think I should make and use some interfaces, but I'm not sure how to do this right. Maybe like this:

        interface Role { }

        interface TeacherRole extends Role {
            void addGrade( Student student, Grade grade, [...] );
        }

        class Teacher implements TeacherRole {
            private Person person;
            [...]
        }

        class User extends Person {
            ArrayList<Role> roles = new ArrayList<>();
        }

    Please, could anyone help me make this right, or maybe just point me to some literature/articles that cover practical object design?
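    One way to avoid the duplicate-Person problem, along the lines already hinted at above: keep a single Person and attach roles by composition, so the same person can hold a teacher role and a parent role at once. A minimal sketch with assumed names (not a prescribed design):

        import java.util.*;

        interface Role { }

        class TeacherRole implements Role {
            private final List<String> subjects = new ArrayList<>();
            void addGrade(String studentName, int grade) { /* record the grade */ }
        }

        class ParentRole implements Role {
            private final List<String> children = new ArrayList<>();
        }

        class Person {
            private final String name;
            private final Map<Class<? extends Role>, Role> roles = new HashMap<>();

            Person(String name) { this.name = name; }

            void addRole(Role role) { roles.put(role.getClass(), role); }

            // after login, ask the person which roles it holds
            <T extends Role> Optional<T> getRole(Class<T> type) {
                return Optional.ofNullable(type.cast(roles.get(type)));
            }
        }

        // usage: the same person is both a teacher and a parent
        // Person p = new Person("Alex");
        // p.addRole(new TeacherRole());
        // p.addRole(new ParentRole());
        // p.getRole(TeacherRole.class).ifPresent(t -> t.addGrade("Sam", 5));

    The User/login part can then return the person's set of roles instead of relying on the inheritance hierarchy to decide what they are.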

    Read the article

  • Can you use programming for a greater good? [closed]

    - by jdoig
    What are the paths one could take to use their programming skills to benefit mankind (good causes, scientific or medical advancement, etc.)? Problem: I dropped out of school and learnt programming on my own, from textbooks and the internet. I have 7+ years of commercial experience, from web applications to big data to mobile apps. But all I seem to do is make rich people richer, with the vain hope that one day I'll be the guy with the good idea using other people to make myself richer. I googled for similar posts on the subject and saw a lot of people saying "Just do your 9-5 job and donate a lot to charity"... I'm sorry to sound selfish, but that's not what makes me tick; I need to be invested in and excited about the project at hand. It's not only got to be for a greater good, it's got to kick arse and feel good doing it too... Does that kind of job exist? Does it involve programming? What other skills do I need? (Apologies if this question is too 'fluffy' or 'wishy-washy'; if it is, a pointer to where else I could ask would be appreciated.)

    Read the article

  • Merging similar graphs based solely on the graph structure?

    - by Buttons840
    I am looking for (or attempting to design) a technique for matching nodes from very similar graphs based on the structure of the graph*. In the examples below, the top graph has 5 nodes, and the bottom graph has 6 nodes. I would like to match the nodes from the top graph to the nodes in the bottom graph, such that the "0" nodes match, and the "1" nodes match, etc. This seems logically possible, because I can do it in my head for these simple examples. Now I just need to express my intuition in code. Are there any established algorithms or patterns I might consider? (* When I say based on the structure of the graph, I mean the solution shouldn't depend on the node labels; the numeric labels on the nodes are only for demonstration.) I'm also interested in the performance of any potential solutions. How well will they scale? Could I merge graphs with millions of nodes? In more complex cases, I recognize that the best solution may be subject to interpretation. Still, I'm hoping for a "good" way to merge complex graphs. (These are directed graphs; the thicker portion of an edge represents the head.)
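    One family of approaches worth looking up is colour/signature refinement, the idea behind the Weisfeiler-Lehman test: give each node a signature built from its neighbours' signatures, refine for a few rounds, then pair up nodes whose signatures are unique and identical across the two graphs. The sketch below uses an assumed adjacency-map representation and is not tied to any library; ambiguous nodes are simply left unmatched, which fits the point above that the best merge may be open to interpretation.

        import java.util.*;

        class StructuralMatcher {
            // adjacency: every node id maps to its list of successor ids
            // (nodes with no outgoing edges still need an entry with an empty list)
            static Map<Integer, String> signatures(Map<Integer, List<Integer>> adj, int rounds) {
                Map<Integer, String> sig = new HashMap<>();
                for (Map.Entry<Integer, List<Integer>> e : adj.entrySet())
                    sig.put(e.getKey(), "d" + e.getValue().size());      // round 0: out-degree
                for (int r = 0; r < rounds; r++) {
                    Map<Integer, String> next = new HashMap<>();
                    for (Map.Entry<Integer, List<Integer>> e : adj.entrySet()) {
                        List<String> neighbourSigs = new ArrayList<>();
                        for (int n : e.getValue()) neighbourSigs.add(sig.get(n));
                        Collections.sort(neighbourSigs);                  // independent of node labels
                        next.put(e.getKey(), sig.get(e.getKey()) + "|" + neighbourSigs);
                    }
                    sig = next;
                }
                return sig;
            }

            // Pair nodes of g1 with nodes of g2 whose signatures are identical and unambiguous.
            static Map<Integer, Integer> match(Map<Integer, List<Integer>> g1,
                                               Map<Integer, List<Integer>> g2) {
                Map<Integer, String> s1 = signatures(g1, 3);
                Map<Integer, String> s2 = signatures(g2, 3);
                Map<String, List<Integer>> bySig = new HashMap<>();
                for (Map.Entry<Integer, String> e : s2.entrySet())
                    bySig.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
                Map<Integer, Integer> matches = new HashMap<>();
                for (Map.Entry<Integer, String> e : s1.entrySet()) {
                    List<Integer> candidates = bySig.get(e.getValue());
                    if (candidates != null && candidates.size() == 1)     // unique match only
                        matches.put(e.getKey(), candidates.get(0));
                }
                return matches;
            }
        }

    Each refinement round is near-linear in the number of edges, so this style of matching scales to very large graphs; this sketch only hashes out-neighbours, whereas a directed version would typically combine in- and out-neighbourhoods. The established keywords for the broader problem are graph isomorphism, graph edit distance and graph alignment.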

    Read the article

  • Motivation for a service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm building it with a service layer, which I understood is appropriate for such a scenario. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: why not just reference BL.dll (and DAL.dll) from the other UI, and whenever I make a change re-copy the dlls? It might not be so 'neat', but still less hassle than one more layer? {I know something is wrong in my approach, and I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because as it is many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because in several functions I actually have LOGIC inside the BL?} So I'm looking for more motivation for creating ONE MORE layer; surely it's not just for the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?) or any enlightenment on the subject?

    Read the article

  • Thinking about open-sourcing quiz project [closed]

    - by user72727
    I was thinking about starting an open source project. I have a few projects that might work OK as an open project, but thought I might dip my toe into the water with a simple quiz project. The idea is that you can add questions to a quiz and arrange questions by topic, difficulty or location. Users would hopefully get an interesting quiz, tuned to their ability. At the end they'd get a score, and hopefully they might provide either some feedback on the questions or even supply a few questions of their own. I couldn't see a similar project (famous last words). I have a basic version of the project that gives the user a bunch of questions to answer in 10 minutes. It doesn't currently group the questions into topics, and no feedback is taken. I've also been told the graphical questions don't work on iPads for some reason. Would this be a suitable project to go open source? I did find various quizzes out there, but all seemed rather narrowly focused. I really wanted something that could cover any type of question on any type of subject. I prefer to keep the questions in MySQL, but I can see how this might make it more difficult for others to get on board; should I move to data files? How do I proceed? http://www.checkmypages.com/numbers

    Read the article

  • What online games would let me practice AI development?

    - by Myn
    I am working on a project experimenting with Artificial Intelligence design methodologies for online world avatars. Online world here is quite open to interpretation; Second Life is just as applicable as Counter Strike, for example. To carry out these experiments, I must first develop an intelligent agent for the world in question. However, I am honestly quite stuck as to which game I could use for this. My preference was to develop an intelligent "bot" to play an MMORPG, but the legal restrictions of such games prevent me. Likewise, with most FPS games, the use of an intelligent agent in place of a human player is considered cheating. The alternative, of course, is to create an NPC bot: an intelligent agent that populates the world alongside the player(s) rather than replacing a particular player. However, I'm struggling to find a game that would enable me to create an intelligent opponent either. I suppose the main requirement would be a game that allows a third-party program to use the function calls usually utilised by players and to read feedback on the state of the world. Quake III and Unreal Tournament have been suggested before, but they have already been the subject of work on this research project. Short of writing my own online game from scratch, what games would allow me, through middleware, an API, or otherwise, to create either an artificially intelligent player or a bot?

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating those spambots later, but right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps by my ineptitude at entering the correct search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site." How do forum spambots work? How do they find the 'new user registration' page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or even other methods invisible to the URL bar.) How do they know what to enter into each 'new user registration' field? How do they determine which pages they can spam / enter data into and which they cannot? Do they even 'view' the page at all? If not, then I'd assume they're communicating with the server directly - how is this possible? How do they do it? Can forum spambots break CAPTCHAs? Can they solve logic questions (how?)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid to prevent them? Where do spambots come from? Is someone sitting behind the computer snickering as they watch their bot destroy site after site? Or are they snickering as they simply 'release' it onto the internet somehow? Are spambots 'run' by an infected computer somewhere? Do they replicate themselves? etc.
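    On the "communicating with the server directly" point: a bot never needs to render or 'view' the page. Once it has scraped the form's action URL and field names from the HTML (or had them hard-coded), registering is just an ordinary HTTP POST, the same request a browser would send. A minimal illustration in Java (hypothetical URL and field names, Java 11+ HttpClient):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class DirectFormPost {
            public static void main(String[] args) throws Exception {
                // the same form-encoded body a browser submits when the user clicks "Register"
                String form = "username=foo&email=foo%40example.com&password=hunter2";
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://forum.example.com/register.php"))   // hypothetical endpoint
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();
                HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode());   // a bot only checks success or failure
            }
        }

    This is also why client-side validation alone stops nothing: a script simply skips the JavaScript. Only server-side checks (CAPTCHAs, honeypot fields, rate limiting) actually get exercised by this kind of request.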

    Read the article

  • What is the best way to manage large 3D worlds (i.e. Minecraft style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at its large worlds, but at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because:

    A) It's written in Java, and as most of the actual spatial partitioning and other memory management activities happen in there, it would be slower than a native C++ version.
    B) They are not partitioning their world very well.

    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now I am assuming that to make this sort of world performant you would need to:

    a) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level.
    b) Use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user could occlude the blocks behind, making it pointless to render them.
    c) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
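    On point c), a common trick (sketched below with assumed names, not Minecraft's actual implementation) is not to have block objects at all: the world is split into fixed-size chunks, and each chunk stores block type ids in one flat array, with a cheap "is any neighbour empty" test as the first step of visibility culling.

        // Hypothetical chunk: block data as a flat byte array instead of per-block objects,
        // so adding/removing a block is a single array write and memory stays compact.
        class Chunk {
            static final int SIZE = 16;             // 16 x 16 x 16 blocks per chunk
            static final byte EMPTY = 0, DIRT = 1;  // block type ids, not objects

            private final byte[] blocks = new byte[SIZE * SIZE * SIZE];

            private static int index(int x, int y, int z) {
                return x + SIZE * (y + SIZE * z);   // flatten (x, y, z) into one array index
            }

            byte get(int x, int y, int z)            { return blocks[index(x, y, z)]; }
            void set(int x, int y, int z, byte type) { blocks[index(x, y, z)] = type; }

            // A block only needs rendering when it touches the chunk boundary or has an
            // empty neighbour - the cheap first step of the "can this block be seen" test.
            boolean isExposed(int x, int y, int z) {
                return x == 0 || y == 0 || z == 0 || x == SIZE - 1 || y == SIZE - 1 || z == SIZE - 1
                    || get(x - 1, y, z) == EMPTY || get(x + 1, y, z) == EMPTY
                    || get(x, y - 1, z) == EMPTY || get(x, y + 1, z) == EMPTY
                    || get(x, y, z - 1) == EMPTY || get(x, y, z + 1) == EMPTY;
            }
        }

    Chunks also give a natural unit for the spatial tree in point a) and for streaming distant parts of the world in and out of memory.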

    Read the article

  • There's Not an App for That (Yet)

    - by Mark Hesse
    With an earlier-than-normal departure this morning to avoid the stalemate known as traffic congestion, I suddenly realized what I had failed to grab on my way out the door...  my company ID badge.  Unfortunately, at the time of my epiphany, I was far enough into commuter no-man's land where turning back would completely negate my early departure and increase my overall drive time exponentially.  Not being one to retrace my steps, I decided to press on. Upon arrival at the office and with an hour to go before a security guard would be on duty, I started thinking about the number of times I had forgotten my ID vs. the number of times I had forgotten my phone.  While rare on both accounts, my ID was most likely the missing artifact. I then wondered why there isn't an app for my smartphone that allows me to verify my credentials with my employer and then, provided with a secure token for the day, have the ability to access my building's card entry system.  On many levels, this seems much more secure than an ID card which can be lost, stolen or even forged and then used simply by tailgating into and around buildings at facilities where card scanning can generally be avoided.   As it turns out, another building on the campus has 24 x 7 guard coverage, so I was able to gain access in a relatively short time and secure a temporary ID badge.  Once inside and online, a quick internet search on the subject of smartphone badge access shows that efforts are underway to do exactly what I was thinking needed to be done. Having not spent any time studying about the technology, I discovered that it relies on Near Field Communications (NFC) enabled smartphones (of which, mine does not provide).  The only other option would require modifications to the security infrastructure to support alternative authentication technologies, such as barcode readers, which would be extremely costly to implement. For now, my best option is to put my corporate ID under my car keys... 

    Read the article

  • Partner Spotlight: Compasso

    - by Joe Diemer
    EVERY MONDAY at 2 PM Eastern Time (USA), join Compasso, one of Oracle's Enterprise Manager partners Specialized in Application Quality Management, for a series of presentations focused on Oracle end-to-end performance management. The objective is to provide a comprehensive overview of Oracle's Enterprise Manager 12c software. Attendees will learn how Oracle can provide a complete end-to-end management solution, not only for Oracle Databases but also for other aspects of Oracle technology, such as Middleware and Oracle applications. Key areas include:

    1. Next-Generation Management Framework, covering: better performance and scalability; a modular, extensible architecture; easier management and diagnosis; enhanced security; a Web 2.0 UI.
    2. Enhanced Application-to-Disk Management: end-to-end application performance management, Fusion Applications management, Exadata and Exalogic management.
    3. Complete Lifecycle Management for Enterprise Private Cloud: self-service provisioning, policy-based resource and workload management, chargeback.

    There are two ways you can sign up: 1. Call Nicole Wakefield of Compasso at +1 (971) 245-5042, or 2. Email [email protected] with a subject line of "Monday Enterprise Manager Presentations". WebEx conference details will be provided with your confirmation. For more information about partner opportunities with Enterprise Manager, including becoming Specialized, click here. For more information about Compasso, click here.

    Read the article

  • How to create repeatable table with unique ID's using jQuery

    - by milbert
    I need to create a table structure that can be "copied" and populated with a new set of data. However, each table must have unique IDs for functions that must access them later. For example:

        <table class="main">
          <thead><tr><th class="header"></th></tr></thead>
          <tbody>
            <tr class="row"><td class="col0"></td><td class="col1"></td></tr>
          </tbody>
        </table>

    My current thought is to use jQuery to load the table from a separate HTML file into a variable. Using this saved table I could then create a function that copies it, traverses the table to add an ID to each section where information will need to be appended from a separate data source, and returns this new table. I am new to jQuery and feel like I may be missing an easier/better way to accomplish this. Any help on this subject would be appreciated.

    Read the article

  • MVC 4 Authentication

    - by Aligned
    First: After searching for awhile to figure out what’s new/different with MVC 4 and forms authentication, this is the best article I've found on the subject: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx Some quotes from the article: “The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership” "WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles Create a web admin using the above APIs Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)" “If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:” “Universal Providers do not work with Simple Membership.” ~ this post (look for Bob.at.SBS’s answer) says Universal Providers is not needed for MVC 4 to work in Azure)   I've been trying to figure out the Forms Authentication in MVC4. It's different than the past approach (aspnet_regsql). If you do file new project -> MVC 4 -> internet application, you get a really nice template with the controller and model setup for you. However, the tables are different than using aspnet_regsql and the ASP.Net Configuration tool (WSAT) wasn’t connecting to the data I had (it was creating an App_Data/aspnet.mdf file, which I didn’t see right away). Points of Note The database tables are created in the SimpleMembershipInitializer class, when you first run your app using Entity Framework 5 migration functionality. The tables created are webpages_Membership, webpages_OAuthMembership, webpages_Roles, webpages_UsersInRoles, UserProfile. Web.config settings don’t seem to be needed.   Scott Hanselman on Universal Providers was also useful if not somewhat out dated. Universal Providers and SimpleMembership are not compatible. http://www.asp.net/web-pages/tutorials/security/16-adding-security-and-membership – walk-through

    Read the article

  • How to prevent computer from automatically sleeping and/or hibernating?

    - by mehaase
    I'm running Ubuntu 12.04, and my laptop* won't wake from sleep/suspend/hibernate. (Is sleep the same thing as suspend?) I'm not even sure which of these things it's doing. When I am done working for the day, I lock my screen (Control-Alt-L). When I come back the next day, the screen is in power saving mode, and no amount of typing or clicking (on the usb keyboard/mouse or the builtin keyboard/trackpad) nor tapping the power button will bring it back to life. The only way I can get my machine to work is to hold down the power button until it shuts off, then press the power button again to turn it back on. Obviously, anything I had open from the previous day is pretty much gone -- in particular, my VMs all get rudely shut down without any warning. This is driving me INSANE. I spend the first hour of every work day trying to figure out how to get my computer to stop locking up over night. What I've tried: Editing the org.freedesktop.upower.policy to disable suspend and hibernate. Setting power management options in "Power" section of "System Settings". Looking at all power management options in the BIOS (none appear to be relevant to sleep/suspend/hibernate). Reading every forum post/askubuntu post that I can find that's even tangentially related to the subject. My question: how to disable the automatic sleep and/or hibernate (and/or anything similar) in Ubuntu 12.04. I don't care if it's still possible to sleep/suspend/hibernate/whatever by pushing buttons or running some command or reciting led zeppelin lyrics backwards. I just want my laptop to be ready for work in the morning. *The laptop is a Dell Latitude something or other. I don't want to get too specific because I've seen a lot of similar questions get closed for being too specific. I think my question is generic enough to stand -- it's a question about the latest, stable version of Ubuntu.

    Read the article

  • What is the standard approach to creating a responsive website using JavaScript, PHP, Ajax and perhaps Zend Framework? [closed]

    - by shawndreck
    I am currently working on a web system and plan to use JavaScript with Ajax heavily to make the user interface friendlier, though not fancy as such. The JavaScript will be used for client-side form validation, loading data from the server and creating the proper content from the result, and also for floating windows during add/edit or for external references. Here is a scenario that could clarify my question. A user wants to update a card, but instead of jumping to another page to check the available colors, sizes and prices of a product, that information is shown in a floating window, and changes in the floating window can affect the underlying one. My questions are: 1. What are some of the approaches to handling this situation? 2. Are there any helpful tips, tricks and links on this subject? I am comfortable with JS, PHP and Zend. I would appreciate any advice, tips and tricks, or problem-solving approaches for handling a situation like this! Thanks in advance. Hope this makes sense.

    Read the article

  • How can I convert XML files to one CSV file in C#?

    - by TruMan1
    I have a collection of strings that are XML content. I want to iterate thru my collection and build a CSV file to stream to the user for download (sometimes it can be hundreds in the collection). This is my loop: foreach (string response in items.Responses) { string xmlResponse = response; //BUILD CSV HERE } This is what my XML content looks like for each iteration (xmlResponse). I want to put it in a flat file including the "properties" attributes: <?xml version="1.0"?> <response> <properties id="60375c90-9dd7-400f-aafb-a8726df409a9" name="Account Request" date="Thursday, March 04, 2010 2:14:07 PM" page="http://mydomain/sitefinity/CreateAccount.aspx" ip="192.168.1.255" browser="Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8" referrer="http://mydomain/sitefinity/CreateAccount.aspx" confirmation="True" subject="Email from website: Account Request Form" sender="[email protected]" recipients="[email protected], , " /> <fields> <field> <label>Personal Details</label> <value>Personal Details</value> </field> <field> <label>Name</label> <value>Tim Wales</value> </field> <field> <label>Email</label> <value>[email protected]</value> </field> <field> <label>Website</label> <value></value> </field> <field> <label>Password</label> <value></value> </field> <field> <label>Phone</label> <value></value> </field> <field> <label>Years in Business</label> <value></value> </field> <field> <label>Background</label> <value>Background</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Date of Birth</label> <value></value> </field> <field> <label>Some Label</label> <value>Some Label</value> </field> <field> <label>Industry</label> <value> Technology Other</value> </field> <field> <label>Pets</label> <value>Dog</value> </field> <field> <label>Your View</label> <value>Positive</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value></value> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> <?xml version="1.0"?> <response> <properties id="60375c90-9dd7-400f-aafb-a8726df409a9" Form="Account Request" Date="Tuesday, March 16, 2010 6:21:07 PM" Page="http://mydomain/sitefinity/Home.aspx" IP="fe80::1c0f57:9ee3%10" Browser="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30729)" Referrer="http://mydomain/sitefinity/Home.aspx" Subject="Email from website: Account Request Form" Sender="[email protected]" Recipients="[email protected]" Confirmation="True" /> <fields> <field> <label>Personal Details</label> <value>Personal Details</value> </field> <field> <label>Name</label> <value>erger</value> </field> <field> <label>Email</label> <value></value> </field> <field> <label>Website</label> <value></value> </field> <field> <label>Password</label> <value></value> </field> <field> <label>Phone</label> <value></value> </field> <field> <label>Years in Business</label> <value></value> </field> <field> <label>Background</label> <value>Background</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Date of Birth</label> <value></value> </field> <field> <label>Some Label</label> <value>Some Label</value> </field> <field> <label>Industry</label> <value> Technology Service</value> </field> <field> <label>Pets</label> <value>Dog</value> </field> <field> <label>Your View</label> 
<value>Positive</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value></value> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> <?xml version="1.0"?> <response> <properties id="60375c90-9dd7-400f-aafb-a8726df409a9" Form="Account Request" Date="Tuesday, March 16, 2010 4:50:17 PM" Page="http://mydomain/sitefinity/Home.aspx" IP="fe80::1c0f:ee3%10" Browser="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30729)" Referrer="http://mydomain/sitefinity/Home.aspx" Subject="Email from website: Account Request Form" Sender="[email protected]" Recipients="[email protected]" Confirmation="True" /> <fields> <field> <label>Personal Details</label> <value>Personal Details</value> </field> <field> <label>Name</label> <value>esfs</value> </field> <field> <label>Email</label> <value></value> </field> <field> <label>Website</label> <value></value> </field> <field> <label>Password</label> <value></value> </field> <field> <label>Phone</label> <value></value> </field> <field> <label>Years in Business</label> <value></value> </field> <field> <label>Background</label> <value>Background</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Date of Birth</label> <value></value> </field> <field> <label>Some Label</label> <value>Some Label</value> </field> <field> <label>Industry</label> <value> Technology Service</value> </field> <field> <label>Pets</label> <value>Dog</value> </field> <field> <label>Your View</label> <value>Positive</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value></value> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> Can anyone help with this?

    Read the article

  • Blank Mail from PHP application

    - by brettlwilliams
    Problem: Blank email from PHP web application. Confirmed: App works in Linux, has various problems in Windows server environment. Blank emails are the last remaining problem. PHP Version 5.2.6 on the server I'm a librarian implementing a PHP based web application to help students complete their assignments.I have installed this application before on a Linux based free web host and had no problems. Email is controlled by two files, email_functions.php and email.php. While email can be sent, all that is sent is a blank email. My IT department is an ASP only shop, so I can get little to no help there. I also cannot install additional libraries like PHPmail or Swiftmailer. You can see a functional copy at http://rpc.elm4you.org/ You can also download a copy from Sourceforge from the link there. Thanks in advance for any insight into this! email_functions.php <?php /********************************************************** Function: build_multipart_headers Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Creates email headers for a message of type multipart/mime This will include a plain text part and HTML. **********************************************************/ function build_multipart_headers($boundary_rand) { global $EMAIL_FROM_DISPLAY_NAME, $EMAIL_FROM_ADDRESS, $CALC_PATH, $CALC_TITLE, $SERVER_NAME; // Using \n instead of \r\n because qmail doubles up the \r and screws everything up! $crlf = "\n"; $message_date = date("r"); // Construct headers for multipart/mixed MIME email. It will have a plain text and HTML part $headers = "X-Calc-Name: $CALC_TITLE" . $crlf; $headers .= "X-Calc-Url: http://{$SERVER_NAME}/{$CALC_PATH}" . $crlf; $headers .= "MIME-Version: 1.0" . $crlf; $headers .= "Content-type: multipart/alternative;" . $crlf; $headers .= " boundary=__$boundary_rand" . $crlf; $headers .= "From: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Sender: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Reply-to: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Return-Path: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Date: $message_date" . $crlf; $headers .= "Message-Id: $boundary_rand@$SERVER_NAME" . $crlf; return $headers; } /********************************************************** Function: build_multipart_body Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Builds the email body content to go with the headers from build_multipart_headers() **********************************************************/ function build_multipart_body($plain_text_message, $html_message, $boundary_rand) { //$crlf = "\r\n"; $crlf = "\n"; $boundary = "__" . $boundary_rand; // Begin constructing the MIME multipart message $multipart_message = "This is a multipart message in MIME format." . $crlf . $crlf; $multipart_message .= "--{$boundary}{$crlf}Content-type: text/plain; charset=\"us-ascii\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}"; $multipart_message .= $plain_text_message . $crlf . $crlf; $multipart_message .= "--{$boundary}{$crlf}Content-type: text/html; charset=\"iso-8859-1\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}"; $multipart_message .= $html_message . $crlf . 
$crlf; $multipart_message .= "--{$boundary}--$crlf$crlf"; return $multipart_message; } /********************************************************** Function: build_step_email_body_text Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Returns a plain text version of the email body to be used for individually sent step reminders **********************************************************/ function build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $teacher_info ,$name, $class, $project_id) { global $CALC_PATH, $CALC_TITLE, $SERVER_NAME; $step_email_body =<<<BODY $CALC_TITLE Step $stepnum: {$arr_instructions["step$stepnum"]["title"]} Name: $name Class: $class BODY; $step_email_body .= build_text_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info); $step_email_body .= "\n\n"; $step_email_body .=<<<FOOTER The $CALC_TITLE offers suggestions, but be sure to check with your teacher to find out the best working schedule for your assignment! If you would like to stop receiving further reminders for this project, click the link below: http://$SERVER_NAME/$CALC_PATH/deleteproject.php?proj=$project_id FOOTER; // Wrap text to 78 chars per line // Convert any remaining HTML <br /> to \r\n // Strip out any remaining HTML tags. $step_email_body = strip_tags(linebreaks_html2text(wordwrap($step_email_body, 78, "\n"))); return $step_email_body; } /********************************************************** Function: build_step_email_body_html Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Same as above, but with HTML **********************************************************/ function build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $teacher_info, $name, $class, $project_id) { global $CALC_PATH, $CALC_TITLE, $SERVER_NAME; $styles = build_html_styles(); $step_email_body =<<<BODY <html> <head> <title> $CALC_TITLE </title> $styles </head> <body> <h1> $CALC_TITLE Schedule </h1> <strong>Name:</strong> $name <br /> <strong>Class:</strong> $class <br /> BODY; $step_email_body .= build_html_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info); $step_email_body .=<<<FOOTER <p> The $CALC_TITLE offers suggestions, but be sure to check with your teacher to find out the best working schedule for your assignment! 
</p> <p> If you would like to stop receiving further reminders for this project, <a href="http://{$SERVER_NAME}/$CALC_PATH/deleteproject.php?proj=$project_id">click this link.</a> </p> </body> </html> FOOTER; return $step_email_body; } /********************************************************** Function: build_html_styles Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Just returns a string of <style /> for the HTML message body **********************************************************/ function build_html_styles() { $styles =<<<STYLES <style type="text/css"> body { font-family: Arial, sans-serif; font-size: 85%; } h1 { font-size: 120%; } table { border: none; } tr { vertical-align: top; } img { display: none; } hr { border: 0; } </style> STYLES; return $styles; } /********************************************************** Function: linebreaks_html2text Author: Michael Berkowski Last Modified: October 2007 *********************************************************** Purpose: Convert <br /> html tags to \n line breaks **********************************************************/ function linebreaks_html2text($in_string) { $out_string = ""; $arr_br = array("<br>", "<br />", "<br/>"); $out_string = str_replace($arr_br, "\n", $in_string); return $out_string; } ?> email.php <?php require_once("include/config.php"); require_once("include/instructions.php"); require_once("dbase/dbfunctions.php"); require_once("include/email_functions.php"); ini_set("sendmail_from", "[email protected]"); ini_set("SMTP", "mail.qatar.net.qa"); // Verify that the email has not already been sent by checking for a cookie // whose value is generated each time the form is loaded freshly. if (!(isset($_COOKIE['rpc_transid']) && $_COOKIE['rpc_transid'] == $_POST['transid'])) { // Setup some preliminary variables for email. // The scanning of $_POST['email']already took place when this file was included... $to = $_POST['email']; $subject = $EMAIL_SUBJECT; $boundary_rand = md5(rand()); $mail_type = ""; switch ($_POST['reminder-type']) { case "progressive": $arr_dbase_dates = array(); $conn = rpc_connect(); if (!$conn) { $mail_success = FALSE; $mail_status_message = "Could not register address!"; break; } // Sanitize all the data that will be inserted into table... // We need to remove "CONTENT-TYPE:" from name/class to defang them. // Additionall, we can't allow any line-breaks in those fields to avoid // hacks to email headers. $ins_name = mysql_real_escape_string($name); $ins_name = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_name); $ins_name = str_replace("\n", "", $ins_name); $ins_class = mysql_real_escape_string($class); $ins_class = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_class); $ins_class = str_replace("\n", "", $ins_class); $ins_email = mysql_real_escape_string($email); $ins_teacher_info = $teacher_info ? "YES" : "NO"; switch ($format) { case "Slides": $ins_format = "SLIDES"; break; case "Video": $ins_format = "VIDEO"; break; case "Essay": default: $ins_format = "ESSAY"; break; } // The transid from the previous form will be used as a project identifier // Steps will be grouped by project identifier. $ins_project_id = mysql_real_escape_string($_POST['transid'] . md5(rand())); $arr_dbase_dates = dbase_dates($dates); $arr_past_dates = array(); // Iterate over the dates array and build a SQL statement for each one. 
$insert_success = TRUE; // $min_reminder_date = date("Ymd", mktime(0,0,0,date("m"),date("d")+$EMAIL_REMINDER_DAYS_AHEAD,date("Y"))); for ($date_index = 0; $date_index < sizeof($arr_dbase_dates); $date_index++) { // Make sure we're using the right keys... $ins_date_index = $date_index + 1; // The insert will only happen if the date of the event is in the future. // For dates today and earlier, no insert. // For dates today or after the reminder deadline, we'll send the email immediately after the inserts. if ($arr_dbase_dates[$date_index] > (int)$min_reminder_date) { $qry =<<<QRY INSERT INTO email_queue ( NOTIFICATION_ID, PROJECT_ID, EMAIL, NAME, CLASS, FORMAT, TEACHER_INFO, STEP, MESSAGE_DATE ) VALUES ( NULL, '$ins_project_id', '$ins_email', '$ins_name', '$ins_class', '$ins_format', '$ins_teacher_info', $ins_date_index, /*step number*/ {$arr_dbase_dates[$date_index]} /* Date in the integer format yyyymmdd */ ) QRY; // Attempt to do the insert... $result = mysql_query($qry); // If even one insert fails, bail out. if (!$result) { $mail_success = FALSE; $mail_status_message = "Could not register address!"; break; } } // For dates today or earlier, store the steps=>dates in an array so the mails can // be sent immediately. else { $arr_past_dates[$ins_date_index] = $arr_dbase_dates[$date_index]; } } // Close the connection resources. mysql_close($conn); // SEND OUT THE EMAILS THAT HAVE TO GO IMMEDIATELY... // This should only be step 1, but who knows... //var_dump($arr_past_dates); for ($stepnum=1; $stepnum<=sizeof($arr_past_dates); $stepnum++) { $email_teacher_info = ($teacher_info && $EMAIL_TEACHER_REMINDERS) ? TRUE : FALSE; $boundary = md5(rand()); $plain_text_body = build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info ,$name, $class, $ins_project_id); $html_body = build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info ,$name, $class, $ins_project_id); $multipart_headers = build_multipart_headers($boundary); $multipart_body = build_multipart_body($plain_text_body, $html_body, $boundary); mail($to, $subject . ": Step " . $stepnum, $multipart_body, $multipart_headers, "[email protected]"); } // Set appropriate flags and messages $mail_success = TRUE; $mail_status_message = "Email address registered!"; $mail_type = "progressive"; set_mail_success_cookie(); break; // Default to a single email message. case "single": default: // We don't want to send images in the message, so strip them out of the existing structure. // This big ugly regex strips the whole table cell containing the image out of the table. // Must find a better solution... 
//$email_table_html = eregi_replace("<td class=\"stepImageContainer\" width=\"161px\">[\s\r\n\t]*<img class=\"stepImage\" src=\"images/[_a-zA-Z0-9]*\.gif\" alt=\"Step [1-9]{1} logo\" />[\s\r\n\t]*</td>", "\n", $table_html); // Show more descriptive text based on the value of $format switch ($format) { case "Video": $format_display = "Video"; break; case "Slides": $format_display = "Presentation with electronic slides"; break; case "Essay": default: $format_display = "Essay"; break; } $days = (int)$days; $html_message = ""; $styles = build_html_styles(); $html_message =<<<HTMLMESSAGE <html> <head> <title> $CALC_TITLE </title> $styles </head> <body> <h1> $CALC_TITLE Schedule </h1> <strong>Name:</strong> $name <br /> <strong>Class:</strong> $class <br /> <strong>Email:</strong> $email <br /> <strong>Assignment type:</strong> $format_display <br /><br /> <strong>Starting on:</strong> $date1 <br /> <strong>Assignment due:</strong> $date2 <br /> <strong>You have $days days to finish.</strong><br /> <hr /> $email_table_html </body> </html> HTMLMESSAGE; // Create the plain text version of the message... $plain_text_message = strip_tags(linebreaks_html2text(build_text_all_steps($arr_instructions, $dates, $query_string, $teacher_info))); // Add the title, since it doesn't get built in by build_text_all_steps... $plain_text_message = $CALC_TITLE . " Schedule\n\n" . $plain_text_message; $plain_text_message = wordwrap($plain_text_message, 78, "\n"); $multipart_headers = build_multipart_headers($boundary_rand); $multipart_message = build_multipart_body($plain_text_message, $html_message, $boundary_rand); $mail_success = FALSE; if (mail($to, $subject, $multipart_message, $multipart_headers, "[email protected]")) { $mail_success = TRUE; $mail_status_message = "Email sent!"; $mail_type = "single"; set_mail_success_cookie(); } else { $mail_success = FALSE; $mail_status_message = "Could not send email!"; } break; } } function set_mail_success_cookie() { // Prevent the mail from being resent on page reload. Set a timestamp cookie. // Expires in 24 hours. setcookie("rpc_transid", $_POST['transid'], time() + 86400); } ?>

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.

Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation.
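A footnote to the calculation lineage discussion above: the scenario I had in mind is the common one where PowerPivot measures are layered on top of one another. The sketch below is purely illustrative - the table, column, and measure names (Sales[Amount], 'Date'[Date], Total Sales, and so on) are made up, and it assumes a model with a proper date table - but it shows the kind of chain a lineage view lets you trace:

    Total Sales := SUM(Sales[Amount])
    Sales Last Year := CALCULATE([Total Sales], SAMEPERIODLASTYEAR('Date'[Date]))
    Sales YoY % := IF([Sales Last Year] = 0, BLANK(), ([Total Sales] - [Sales Last Year]) / [Sales Last Year])

Change the definition of Total Sales and both of the other measures are affected; that impact analysis is exactly what a graphical lineage display makes visible.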

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
“Is Code Review Important and Effective?”

There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
- Bugs are found faster
- Forces developers to write readable code (code that can be read without explanation or introduction!)
- Optimization methods/tricks/productive programs spread faster
- Programmers as specialists "evolve" faster
- It's fun

“Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 – Code Review before Check in

The developer completes the code and feels the code quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

Image 1 – code review before check in

Pros
- Everything that gets committed to source control is reviewed.
- Minimizes the chances of smelly code making its way into the code base.
- Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

Cons
- Development code freeze – since the changes aren't in source control yet, further development can only be done off-line.
- The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
- Inconsistent! Cumbersome to track the actual code review process.
- Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

Approach 2 – Code Review after Check in

The developer checks in, and random code reviews are performed on the checked-in code.

Image 2 – Code review after check in

Pros
- The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server.
- Instruct the developer to ensure zero FxCop, StyleCop and static code analysis issues before check-in. Code is cleaner and smell-free even before the code review.
- No offline development; developers can continue to develop against source control.

Cons
- Bad code can easily make its way into the code base.
- Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

Approach 3 – Hybrid Approach

The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

Image 3 – Hybrid Approach

1. Code review high-impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either.

2. Tooling.

Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame most often.

3. During Merge.

I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a Scrum project this is still easier because cherry-picking the merges is a possibility and the size of code being reviewed is still limited.

Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

What do the experts say?

So I asked a few experts what they thought of a “Code Review quality gate before checking in code?”

Terje Sandstrom | Microsoft ALM MVP

You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no check-ins. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI's – but…., so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger

I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
- The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The "post-checkin" may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play….

Robert MacLean | Microsoft ALM MVP

I do not think there is one right answer for the industry as a whole. In short, the question is why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with it after check-in, while in high-risk areas you need to do it before check-in. An example is that those new to a team or juniors need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

Abhimanyu Singhal | Visual Studio ALM Ranger

Depends on a per-scenario basis. We recommend post-check-in reviews when:
1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
2. We need to trace all changes and track history.
3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches. Or it can be rejected.

Pre-check-in reviews are used when:
1. There is a high risk factor associated.
2. Reviewers generally (most of the time) have immediate availability.
3. The team does not have strict tracking needs.

Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

Thomas Schissler | Visual Studio ALM Ranger

This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out.
1.) If you do reviews per check-in, this is not very practical as a hard rule because it will disturb the flow of the team very often or it will lead to reducing the check-in frequency of the devs, which I would not accept.
2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed with a later check-in and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first, but then you might also review changes of other PBIs.

Jakob Leander | Sr. Director, Avanade

In my experience, manual code review:
1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project)
2. When a project actually does it, they often do not do it right away = errors pile up
3. Requires a lot of time discussing/defining the standard and for the team to learn it

However, code review is very important since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach for code review:
- Architects up front do "at least one best practice example" of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc.
- Dev lead on project continuously browses code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.
- Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUUGE benefits:
- You can easily tweak to reach the level you desire together with the customer
- It is easy to measure for both developers/management
- It is 100% consistent across the code base
- It gets validated all the time so you never end up getting hammered by a customer review in the end
- It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication

You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the issue count to grow uncontrolled. On the project I run, I require code analysis to have run on code before check-in (check-in rule). This means:
- You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues which you REALLY do not want in your app

Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). (See the sample ruleset sketch at the end of this post.)

Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the three issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

Tirthankar Dutta | Director, Avanade

I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design.

Bruno Capuano | Microsoft ALM MVP

For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

David Jobling | Global Sr. Director, Avanade

Peer review is the only way to scale and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

Mikkel Toudal Kristiansen | Manager, Avanade

If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches. It offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

Jesse Houwing | Visual Studio ALM Ranger

I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html

I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
- Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn't make it into the important branch.

So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten process; it's under stress that the problems of late reviews will really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

---

I would love to hear what you think!
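To make the tooling advice above a bit more concrete - in particular Jakob's point about compiling selected Code Analysis rules as errors and keeping the rules file in the solution - here is a minimal sketch of a Visual Studio ruleset and the project setting that references it. The file name, relative path, and choice of rule (CA2000, "Dispose objects before losing scope", for the "missing dispose" case) are my own assumptions for illustration, not something prescribed in the post:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Team.ruleset: kept in the solution so the relative path survives branching -->
    <RuleSet Name="Team Rules" Description="Shared code analysis rules" ToolsVersion="10.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <!-- Elevate the "missing dispose" rule from warning to error -->
        <Rule Id="CA2000" Action="Error" />
      </Rules>
    </RuleSet>

And in the .csproj, code analysis is switched on and pointed at the ruleset with a relative path:

    <PropertyGroup>
      <RunCodeAnalysis>true</RunCodeAnalysis>
      <CodeAnalysisRuleSet>..\Build\Team.ruleset</CodeAnalysisRuleSet>
    </PropertyGroup>

Treat this as a sketch of the approach rather than a drop-in configuration; the rule IDs and severities should come from whatever coding standard the team has agreed with the customer.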

    Read the article

  • Pain Comes Instantly

    - by user701213
When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone's favorite new pastime: coining terms for the Coming Cyberpocalypse: "digital Pearl Harbor" is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn't it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I'm 10 pounds thinner. The horror.

In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn't play to my strengths. The first rule of writing is "write what you know."

The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in "thrust and parry" activity involving emerging regulations of some sort or other. I've opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend them slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about "approved dragon slaying procedures and requirements" wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of "dragon-slaying theorists.")

Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the "focus groups on dragon dispatching" realm, which actual dragon slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked.

More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective.
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn't foresee or worse, don't care about. After all, the logic goes, we Did Something.

Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:
• It is cost effective
• It is "current" (meaning, the guidance doesn't require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem

With the above in mind, heading up the list of "you must be joking" regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I'd like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of "unfortunate side effects of your requirements" – which is as tactful as I can be for reasons that I believe will become obvious below – have gone, to date, unanswered and more importantly, unchanged.

A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say "tell all?") to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:
• Specific details of security vulnerabilities
• Including exploit information or technical details of the vulnerability
• Whether or not there is any mitigation available (as in a patch)

PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can't be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to "unsecret" the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There's a shocker for you.

Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That's the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only "vetting" is that they are members of a PCI consortium?

Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is "need to know" (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don't provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember "insider information always leaks" creates problems in the general case, and has created problems for us specifically.

A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can't actually fix the problem. Thank you!)

Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA's current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers.
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get "EZHack" handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the "Hacking Trifecta!" It's fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

Established industry practice concerning vulnerability handling avoids the risks created by the VRA's vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

So, what's a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:
• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

I like to be charitable and say "PCI meant well" but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn't enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
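For what it's worth, here is a sketch of the kind of post-patch, high-level notice described above. Every detail is hypothetical - the product name, the CVE identifier and the score are placeholders invented for illustration, not taken from any real advisory - but it shows what such a notice contains and, just as importantly, what it omits:

    Product affected : ExamplePay Payment Module 3.x (versions prior to 3.2.1)
    CVE identifier   : CVE-2012-0000 (placeholder)
    Severity         : CVSS v2 base score 6.8 (AV:N/AC:M/Au:N/C:P/I:P/A:P)
    Remediation      : Upgrade to version 3.2.1 or apply the corresponding patch
    Not included     : exploit code, technical vulnerability details, or the names of affected customers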
By Way of Explanation…

Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as "chick lit meets geek scene." Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

From our book jacket:

Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness's body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh's murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma's mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai'i. And everyone, even Shaun the barista, knows a good lawyer.

Book 2, Denial of Service, is coming out this summer.

* Given the rate of change in technology, today's "thou shalts" are easily next year's "buggy whip guidance."

    Read the article
