Search Results


  • CAM v2.0 ships – all new foundation version

    - by drrwebber
    The latest release of the CAM editor toolset is now available on Sourceforge.net (search for NIEM). In this all-new version, support from Oracle has enabled a transformation of the editor's underlying Java framework, yielding a 3x performance improvement and 50% better memory utilization. The results of nearly six months of improvements are catalogued in the release notes: http://sourceforge.net/projects/camprocessor/files/CAM%20Editor/Releases/2.0/CAM_Editor_2-0_Release_Notes.pdf/download

    Here, however, I’d like to talk about the strategic vision and highlight specific new go-to features that make a difference for exchange schema designers, with a focus on the NIEM community.

    So why is this a foundation version? Basically, the new drag-and-drop designer tool allows you to tailor your own dictionary collection of components and then simply select and position those into your resulting exchange structure. This is true global reuse enabled from a canonical domain dictionary collection. So instead of grappling with XSD Schema syntax or UML model nuances, this is straightforward, direct WYSIWYG visual engineering using familiar sets of business components. The toolkit then writes the complex XSD Schema for you, along with test samples, documentation, XMI/UML models, mindmaps and more.

    So how do you get a set of business components? The toolkit allows you to harvest these from existing schema collections or enterprise data models, or, as in the case of NIEM, from existing domain dictionary collections. I’ve been using this for the latest IEEE/OASIS/NIST initiative on a Common Data Format (CDF) for elections management systems. You can download those from OASIS and see how this can transform how you build actual business exchanges: improving quality, consistency and usability, and enabling automated generation of artifacts you only dreamed of before, such as a model of the components of your entire major exchange collection. http://www.oasis-open.org/committees/documents.php?wg_abbrev=election

    So what we have here is a foundation version, setting the scene and the basis for changing how people can generate and manage information exchanges. It is a foundation built using the OASIS CAM standard, combined with aspects of the NIEM Naming and Design Rules, the UN/CEFACT Core Components specifications, and emerging work on OASIS CIQ name and address and ANSI/ISO code list schema. We still have a raft of work to do to integrate this into SOA best practices and extend the dictionary capabilities to assist true community development, answering questions such as:

      - How good is my canonical component collection?
      - How much reuse is really occurring?
      - What inconsistencies and extensions are there in the dictionary components?

    Expect us to begin tackling these areas now that the foundation is in place. The immediate need is to develop training and self-start materials, so we will be focusing there for the next couple of months, especially leading up to the IJIS industry event in July in New Jersey and the NIEM NTE event in August in Philadelphia. http://sourceforge.net/projects/camprocessor

    Read the article

  • Chalk Talk with John: How Does SOA Add Value to Your Enterprise?

    - by John Brunswick
    In this episode of Chalk Talk with John we revisit our town of Middleware Fields from "What Does User Experience Mean to You?" to look at demystifying the business value of SOA. Middleware Fields is an extremely eco-conscious community and has been trying to set up a commuting program for its employees. Though a good idea, they soon run into challenges ensuring that people are able to use the commuting services easily. Take a look below to see how SOA is like a transit pass for your enterprise and how it addresses common issues you may have with your enterprise systems.

    About me: Hi, I am John Brunswick, an Oracle Enterprise Architect. I focus on the alignment of technical capabilities in support of business vision and objectives, as well as the overall business value of technology. Before coming to Oracle, I was a Practice Manager within BEA Systems' Business Interaction Division consulting organization, orchestrating enterprise systems in support of line-of-business goals. Follow me on Twitter and visit my site for Oracle Fusion Middleware related tips.

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to:

      - refactoring of the code changed by the work item
      - refactoring code neighboring the code changed by the item
      - re-architecting the larger code area around the ticket (for example, if an item has you changing a single function, you realize the entire class could now be redone to better accommodate this change)
      - improving the UI on a form you just modified

    When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5-point item will actually take 13 points of time. In one case we had a 13-point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this.

    We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included:

      - We plan for "padding" at the end of the sprint to account for this sort of thing.
      - Always leave the code in better shape than you found it. Don't check in half-assed work.
      - If we leave refactoring for later, it's hard to schedule and may never get done.
      - You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later.

    Or, we draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include:

      - Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible.
      - The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves". If you want to schedule refactoring, just put it at the top of the backlog.
      - There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?", which is like an hour. But they might say "Well, if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?", and that takes a week.
      - It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless.
      - In some cases, the extra work postpones a check-in, causing blocking between devs.

    Some of us are now saying that we should decide some cut-off, like if the additional stuff is less than 2 FP it goes in the same ticket, and if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find that it caused more headaches than it cured. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help with more complex code that controls the flow of logic. For example, an if-then-else with many nested levels can be near impossible to debug without tracing through it when using CoffeeScript.

    With IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading the code is not that hard, especially with extensions that show vertical lines in the code editor to help you see what is nested within which part of the code. With CoffeeScript, however, that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you first of all need to remove ALL non-CoffeeScript coding constructs for it to even compile (js2coffee can help with that). But keeping track of nested levels became simply not manageable in CoffeeScript.

    Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent code, spaces or tabs, is not reliable, as the programmer has no easy way of knowing which parts of the code will get hit when the code spans a page.

    When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be redesigned! Needless to say, that is absurd. When I included a piece of the code, he asked me if it was legacy code. It's like saying to a Java programmer: sorry, you cannot use Java because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them, as they use braces to denote objects. Makes sense, but I would still love to see some way to replace flow control by spaces and indentation with something more concrete and human-readable.
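    To make the indentation hazard concrete, here is a minimal sketch in Python, which, like CoffeeScript, uses significant whitespace (this is illustrative only, not CoffeeScript syntax, and all the names are made up). Re-indenting one line by one level silently changes the program's logic with no compiler error:

        # classify() and its inputs are hypothetical; the point is that the
        # final assignment runs for every total > 100, member or not.
        def classify(order_total, is_member):
            label = "standard"
            if order_total > 100:
                if is_member:
                    label = "vip"
                label = "bulk"  # outside the inner if
            return label

        # The same tokens, with that one line indented one level deeper:
        # now "bulk" only ever overwrites "vip" for members.
        def classify_deeper(order_total, is_member):
            label = "standard"
            if order_total > 100:
                if is_member:
                    label = "vip"
                    label = "bulk"  # inside the inner if
            return label

        print(classify(150, False))         # bulk
        print(classify_deeper(150, False))  # standard

    In a brace-delimited language the two versions would differ visibly in their delimiters; here the only difference is whitespace, which is exactly the complaint above.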

    Read the article

  • Looking into the JQuery Overlays Plugin

    - by nikolaosk
    I have been using jQuery for a couple of years now and it has helped me to solve many problems on the client side of web development. You can find all my posts about jQuery in this link. In this post I will be providing you with a hands-on example of the jQuery Overlays plugin. If you want, you can have a look at this post, where I describe the jQuery Cycle plugin. You can find another post of mine talking about the jQuery Carousel Lite plugin here, and a post regarding the jQuery Image Zoom plugin here. I will be writing more posts regarding the most commonly used jQuery plugins.

    With the jQuery Overlays plugin we can show the user an overlay with more information about an image when the user hovers over the image. I have been using this plugin extensively in my websites.

    In this hands-on example I will be using Expression Web 4.0. This application is not free, and you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here. You can download the plugin from this link.

    I launch Expression Web 4.0 and then I type the following HTML markup (I am using HTML 5):

        <html lang="en">
        <head>
            <link rel="stylesheet" href="ImageOverlay.css" type="text/css" media="screen" />
            <script type="text/javascript" src="jquery-1.8.3.min.js"></script>
            <script type="text/javascript" src="jquery.ImageOverlay.min.js"></script>
            <script type="text/javascript">
                $(function () {
                    $("#Liverpool").ImageOverlay();
                });
            </script>
        </head>
        <body>
            <ul id="Liverpool" class="image-overlay">
                <li>
                    <a href="www.liverpoolfc.com">
                        <img alt="Liverpool" src="championsofeurope.jpg" />
                        <div class="caption">
                            <h3>Liverpool Football club</h3>
                            <p>The greatest club in the world</p>
                        </div>
                    </a>
                </li>
            </ul>
        </body>
        </html>

    This is a very simple markup. I have added references to the jQuery library (current version is 1.8.3) and the jQuery Overlays plugin. Then I add one image inside the element with id="Liverpool". There is a caption class as well, where I place the text I want to show when the mouse hovers over the image. The caption class and the Liverpool id element are styled in the ImageOverlay.css file that can also be downloaded with the plugin. You can style the various elements of the HTML markup in the .css file.

    The JavaScript code that makes it all happen follows; I am just calling the ImageOverlay function for the Liverpool id element:

        <script type="text/javascript">
            $(function () {
                $("#Liverpool").ImageOverlay();
            });
        </script>

    The contents of the ImageOverlay.css file follow:

        .image-overlay { list-style: none; text-align: left; }
        .image-overlay li { display: inline; }
        .image-overlay a:link, .image-overlay a:visited, .image-overlay a:hover, .image-overlay a:active { text-decoration: none; }
        .image-overlay a:link img, .image-overlay a:visited img, .image-overlay a:hover img, .image-overlay a:active img { border: none; }
        .image-overlay a {
            margin: 9px;
            float: left;
            background: #fff;
            border: solid 2px;
            overflow: hidden;
            position: relative;
        }
        .image-overlay img {
            position: absolute;
            top: 0;
            left: 0;
            border: 0;
        }
        .image-overlay .caption {
            float: left;
            position: absolute;
            background-color: #000;
            width: 100%;
            cursor: pointer;
            /* The way to change overlay opacity is the following properties.
               Opacity is a tricky issue due to longtime IE abuse of it, so opacity
               is not officially supported - use at your own risk.
               To play it safe, disable overlay opacity in IE. */
            /* For Firefox/Opera/Safari/Chrome */
            opacity: .8;
            /* For IE 5-7 */
            filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=80);
            /* For IE 8 */
            -MS-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=80)";
        }
        .image-overlay .caption h1, .image-overlay .caption h2, .image-overlay .caption h3,
        .image-overlay .caption h4, .image-overlay .caption h5, .image-overlay .caption h6 {
            margin: 10px 0 10px 2px;
            font-size: 26px;
            font-weight: bold;
            padding: 0 0 0 5px;
            color: #92171a;
        }
        .image-overlay p {
            text-indent: 0;
            margin: 10px;
            font-size: 1.2em;
        }

    It couldn't be any simpler than that. I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article

  • SQL Authority News – Presenting at SQL Bangalore on May 3, 2014 – Performing an Effective Presentation

    - by Pinal Dave
    SQL Bangalore is a wonderful community and we always have a great response when we present on technology. It is a SQL user group and we discuss everything SQL there. This month we have a SQL Server 2014 theme and we are going to have a community launch on this subject. We have the best of the best speakers presenting on SQL Server 2014 technology. Looking at the whole line-up of celebrity speakers, I have decided not to present on SQL Server itself. I will be presenting on the performance tuning subject, but with a twist of soft skills: I will be presenting on "Performing an Effective Presentation". Trust me, you do not want to miss this presentation; I will be presenting on how to present effectively when presenting SQL Server topics.

    What this session will NOT have: I personally believe that we are all good presenters most of the time, and we can all easily call out if someone is a bad presenter. There is no point talking about basics like using bigger bullet points, talking loudly, talking with confidence, using better analogies, etc. In simple words, this is not going to be some philosophy session with boring notes.

    What this session will have: Well, this session will tell stories of my life. It will tell how we can present about technology and SQL Server with the help of stories and personal experience. I am going to tell stories about two legends who have inspired me. Right after that we will be doing two exercises together where we will learn quickly and effectively how to become a better speaker - instantly! There is no video recording of this session. If you want to get resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn

    Here are the details about the event and location. Venue: Microsoft Corporation, Signature Building, Embassy Golf Links Business Park, Intermediate Ring Road, Domlur, Bangalore - 560071

    The agenda is amazing - we have top-line SQL speakers. Everyone is welcome, and don't forget to bring a friend along to this event. Loads to learn and tons to share!!!

      - Keynote (20 mins) by Anupam Tiwari - Business Program Manager, GTSC
      - Backup Enhancements with SQL Server 2014 by Amit Banerjee - PFE, Microsoft
      - Performance Enhancements with SQL Server 2014 by Sourabh Agarwal - PFE, Microsoft
      - LUNCH BREAK
      - Performing an Effective Presentation by Pinal Dave - Community Member (SQLAuthority.com)
      - InMemory Enhancements with SQL Server 2014 by Balmukund Lakhani - Support Escalation Engineer, Microsoft
      - Some more lesser-known enhancements with SQL Server 2014 by Vinod Kumar - Technical Architect, Microsoft MTC
      - Power Packed - Power BI with SQL Server by Kane Conway - Support Escalation Engineer, Microsoft

    I am a very big fan of Amit, Balmukund and Vinod - I have always watched their sessions, and this time I am going to once again attend their sessions without missing a single minute. They are SQL legends; I am going to be there and learn while they share their knowledge.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee. When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it.

    Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups." Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore. Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
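    The central "interrogate every server" script the author mentions was written in PowerShell. Purely as a sketch of the same idea, here is a minimal Python version, assuming the pyodbc package, trusted connections, and a hypothetical server list; it asks each instance's msdb catalog which databases lack a recent full backup:

        # Flag databases on each server whose last full backup ('D' in
        # msdb.dbo.backupset) is missing or older than 24 hours.
        import pyodbc

        SERVERS = ["SQLPROD01", "SQLPROD02"]  # hypothetical instance names

        QUERY = """
        SELECT d.name, MAX(b.backup_finish_date) AS last_full
        FROM sys.databases d
        LEFT JOIN msdb.dbo.backupset b
               ON b.database_name = d.name AND b.type = 'D'
        WHERE d.name <> 'tempdb'
        GROUP BY d.name
        HAVING MAX(b.backup_finish_date) IS NULL
            OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())
        """

        for server in SERVERS:
            conn = pyodbc.connect(
                f"DRIVER={{ODBC Driver 17 for SQL Server}};"
                f"SERVER={server};DATABASE=master;Trusted_Connection=yes;"
            )
            try:
                for name, last_full in conn.execute(QUERY):
                    print(f"{server}/{name}: last full backup {last_full or 'NEVER'}")
            finally:
                conn.close()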

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director, I'm especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

    Virtual Chapters. In the first six months of 2013, VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their "Summer Performance Palooza", an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013; be sure to ask your VC leader for the code to save $200 on Summit registration.

    24 Hours of PASS. The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP as well.

    Business Analytics Conference. April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world's growing collection of data. Overall, the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago, the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I'm very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

    Global SQL Community. Over the last couple of years, PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

    Budgets. The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall, I'm satisfied with the decisions we made and think we are investing in the right activities and programs.

    Next Up. The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24. If you are considering running for the Board, you can visit the PASS elections site to learn more about the election process, and I encourage anyone considering running to reach out to current and past Board members to learn what the role entails.

    Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages), stored procedures were viewed as poor practice, mainly because they were seen as less maintainable, less testable, highly coupled, and locking a system into one vendor (this question covers some other reasons). Although when I was working, projects were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as:

      - No client/application had direct access to the database tables.
      - All access to all tables was through views (and all updates to the base tables were done through triggers).
      - All data items had a domain specified.
      - No data item was permitted to be nullable; this had implications that had the DBAs grinding their teeth on occasion, but it was enforced.
      - Roles and permissions were set up appropriately; for instance, a restricted role gave only views the right to change the data.

    So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database (only) specific measures?

    EDIT: Emphasis of the question changed slightly, in the light of the initial responses received. Base question unchanged.

    EDIT 2: The approach of relying on parametrized queries seems to be only a peripheral step in defense against attacks on systems. It seems to me that more fundamental defenses are both desirable, and may render reliance on such queries unnecessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database, and I had no idea whether that was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach:

      http://database-programmer.blogspot.com
      http://thehelsinkideclaration.blogspot.com

    The principal features I have taken from these sources are:

      - An extensive data dictionary, combined with an extensive security data dictionary
      - Generation of triggers, queries and constraints from the data dictionary
      - Minimize code and maximize data

    While the answers I have had so far are very useful and point out difficulties arising from disregarding parametrized queries, ultimately they do not answer my original question(s) (now emphasised in bold).
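    For readers who want the baseline technique made concrete, here is a minimal sketch of a parametrized query using Python's standard-library sqlite3 module (the table and data are hypothetical); the same placeholder idea carries over to most database drivers and to stored procedure parameters:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        user_input = "nobody' OR '1'='1"  # a classic injection payload

        # Vulnerable: the payload is spliced into the SQL text, changing its meaning.
        unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
        print(conn.execute(unsafe).fetchall())  # [('admin',)] - leaked a row

        # Parametrized: the driver passes the value separately from the SQL text,
        # so the payload is treated as a literal string, never as SQL syntax.
        safe = "SELECT role FROM users WHERE name = ?"
        print(conn.execute(safe, (user_input,)).fetchall())  # []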

    Read the article

  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications. While I agree the consumer side of mobile is huge, I believe it's also important to arm store associates with mobile tools. There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful. This does not, however, signal the demise of traditional registers. Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future. Even Apple retains at least one fixed register in every store; you just have to know where to look.

    The business benefits for mobile POS are pretty straightforward:

      1. Faster checkout. Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor. I think it's more likely that labor will be redeployed to enhance the customer experience.
      2. Smarter associates. The sales associates on the floor need the same access to information that consumers have, if not more. They need ready access to product details, reviews, inventory, etc. to meet consumer expectations. In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website.
      3. Lower costs. Mobile POS hardware (iPod touch + sled) costs about a fifth of fixed registers, not to mention the reclaimed space that can be used for product displays.

    But almost all mobile POS solutions can claim those benefits equally. Where there's differentiation is on the technical side. Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market:

      1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware, so that iOS, Android, and other platforms can be supported. Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to Objective-C, which is more difficult to find. The screen can adjust to different form factors and sizes, just like you see with browsers. In the future, when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they used a native app.
      2. Flexible. Our Mobile POS is free with the Oracle Retail Point-of-Service product. Retailers can use any combination of fixed and mobile registers, and those ratios can change as required. Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses. Our scalable solution supports lots of combinations.
      3. Consistent. Because our Mobile POS is fully integrated with our traditional POS, the same business logic is reused. Third-party mobile POS solutions often handle pricing, promotions, and tax calculations separately, leading to possible inconsistencies within the store. That won't happen with Oracle's solution.

    For many retailers, mobile POS can lower costs, increase customer service, and generally enhance a consumer's in-store experience. Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores. Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.

    Read the article

  • Software Architecture and MEF composition location

    - by Leonardo
    Introduction

    My software (a bunch of Web APIs) consists of 4 projects: Core, FrontWebApi, Library and Administration.

    Library is a code-library project that consists of only interfaces and enumerators. All my classes in other projects inherit from at least one interface, and this interface is in the Library. Generally speaking, my interfaces define either Entities, Repositories or Controllers. This project references no other project or any special DLLs... just the regular .Net stuff.

    Core is a class-library project containing the concrete implementations of Entities and Repositories. In some cases I have more than one implementation for a Repository (e.g. one for Azure table storage and one for regular SQL). This project handles the intelligence (business rules mostly) and persistence, and it references only the Library.

    FrontWebApi is an ASP.NET MVC 4 WebApi project that implements the controller interfaces to handle web requests (from a mobile native app). It references the Core and the Library.

    Administration is a code-library project that represents an "optional module", meaning: if it is present, it provides extra features (such as Access Control Lists) to the application, but if it's not, no problem. Administration also references only the Library, implementing concrete classes for a few interfaces such as "IAccessControlEntry". I intend to make this available with a "setup" that will create any required database table or anything like that. But it is important to notice that the Core has no reference to this project.

    Development

    Now, in order to have decoupled code I decided to use IoC, and because this is a small project I decided to do it using MEF, especially because of its advertised "composition" capabilities. I arranged all the imports/exports and constructors and everything, but something is not quite right in my "mental visualisation".

    Main Question

    Where should I "compose" the objects? I mean: technically, the only place where access to a real implementation is required is in the Repositories, because entity instances are needed wherever data is retrieved. The Repositories could also provide a public "GetCleanInstanceOf()", right? Then all other places would be just fine working with the interfaces instead of concrete classes.

    Secondary Question

    Should "Administration" implement the concrete object for "IAccessControlGeneralRepository", or should the Core?

    Read the article

  • Appointment & Booking Calendar for 2,500 Members

    - by D. K. Mason
    For this job I need a booking or appointment calendar for the WP website, so that each of the 2500+ members can manage his/her own appointments calendar. Members will set their schedules and available hours. Buyers can select one member, book when they want to visit the member, and reserve time online. Let's say our WP membership site has thousands of members, each offering one service, across 25 categories. On each member's profile page is his or her appointment or booking calendar, along with a personally created profile video (uploaded into S3). Website visitors should be able to easily book hours and days as desired. For now all members have a free membership. We earn money by bringing customers to the registered members and collecting one dollar for our service. Therefore, we need an affordable script, and one that handles our members' needs. We already have a paid copy of Jrox.

    What are those needs? Well, please register for a test account in the ENTERTAINMENT category at http://asianhighway26.com/?page_id=140. Next, pretend you are a buyer: start on the index page and select your ENTERTAINMENT category. Click on image #3 and you will receive a list of others in your traveling area. When you click VIEW DETAILS, that is where a potential customer will see the calendar, and whether you are available on the dates and times they are interested in doing business. We know that you, the member, will want to list your unavailable dates and hours. There may be other features desired, but this is the most important one, we believe.

    Our WP administrator (me) can install the main script, but each member must have some login or dashboard to manage his/her own calendar and work hours. Can you create or modify any appointment calendar software to do this? Can the jam.jrox.com software we own do this? As you can see here, we offer to pay for ideas you present that we use: asianhighway26.com/?page_id=126

    D.K. Mason, Chief Development Officer, Asian Highway Network, Tourism Division, Asia-Pacific Region (http://www.forums.doctormason.us/b-ah/)

    Read the article

  • Need help with a complex 3d scene (using Ogre and bullet)

    - by Matthias
    In my setup there is a box with a hole on one side, and a freely movable "stick" (or bar, or tube). This stick can be inserted/moved through the hole into the box. The hole is exactly as wide as the diameter of the stick. In reality, if you held the end of the stick in your hand and moved the hand left/right or up/down, the other end of the stick, which is inside the box, would move in the opposite direction of your hand movement, because the stick pivots at the point where it enters the box through the hole. (I hope you understand what I mean so far.)

    Now I need to simulate such a setup in a 3D program. I have already successfully developed an Ogre3D framework for this application, including bullet. But what I don't know is how I can implement in my program what I have described above. This application must include two more features:

      - The scene camera is attached to the end of the stick that is inserted into the box. So when the user moves the mouse (to control "his" end of the stick outside the box), the camera attached to the stick moves in the opposite direction, as described above.
      - The stick has some length, and the user can push it further into the box, or pull it closer to him again. That means, of course, that the maximum radius on which the end of the stick inside the box can move depends on how far the stick is pushed into the box. Thus, the more the stick is pushed into the box, the larger the maximum radius of the end of the stick with the camera will be.

    I understand this is maybe quite a complex thing, so I don't expect any real source code here. As said, I already have the Ogre and bullet parts up and running, as well as a camera attached to the stick. This works fine. What I don't know is how I can simulate the setup described above, especially the requirement that the stick is affixed at the position of the hole in the box where it is inserted. Any ideas how I could approach implementing the described setup?
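    Setting the Ogre/bullet specifics aside, the lever geometry the question describes is compact enough to sketch. Here is a rough Python version (all names are hypothetical) that treats the hole as a fixed pivot at the origin, with the stick's rest axis along +z into the box; it maps the hand's lateral offset to the in-box end's position for a given insertion depth:

        import math

        def inside_end_position(hand_dx, hand_dy, stick_length, inserted):
            """Position of the in-box stick end; the pivot (the hole) is the origin.

            hand_dx/hand_dy: lateral offset of the hand from the rest axis.
            inserted: how much of the stick is inside the box.
            The in-box end lies on the line from the hand through the pivot,
            at distance `inserted` beyond it, so it moves opposite to the hand,
            scaled (for small angles) by inserted / (stick_length - inserted).
            """
            outside = stick_length - inserted
            hand = (hand_dx, hand_dy, -outside)          # hand position, outside the box
            norm = math.sqrt(sum(c * c for c in hand))
            direction = tuple(-c / norm for c in hand)   # from the hand through the pivot
            return tuple(inserted * d for d in direction)

        # The same hand movement sweeps a larger radius the deeper the stick goes:
        print(inside_end_position(0.1, 0.0, stick_length=1.0, inserted=0.2))
        print(inside_end_position(0.1, 0.0, stick_length=1.0, inserted=0.6))

    In a physics engine this roughly corresponds to constraining the stick at the hole with a pivot that allows rotation plus sliding along the stick's own axis, then parenting the camera to the in-box end.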

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, which launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by W3C until around 2022. One might have thought that it would take years from now to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit.

    The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way.

    Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 video and audio tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain by Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share.

    Why do we care? It is because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes.

    HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities.

    Cheers, Laila

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails.

    Assume we have an application logic (a program) running on a computer, and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated. What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with and break the software's attempt to switch the LED color.

    Stripping this example of its physical outfit, I dare say that we have two communicating state machines A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G.

    What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle from A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial communication, A not always active, etc.).

    What I currently see as a general solution is:

      - the application logic A as one thread of execution
      - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
      - a communication object between the two (A and PG) which is transactionally safe to exchange

    The two execution contexts (threads) on the computer may be multi-core, or just interrupt driven, or tasks in an RTOS. The com object contains the following data:

      - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
      - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
      - operation status (written by PG): operation_pending, success, wrong_state, link_broken
      - new state (written by PG): red, green, blue, off

    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command. (Example: suspected state = "red_or_green", command: "switch_to_blue".) Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object, try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome. The com object could of course be separated into two objects, one for each direction, but I think it is convenient in nearly all instances to have the command close to the result.

    I would like to have major flaws pointed out, or to hear an entirely different view on such a situation.
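    The post targets C, but just to make the com object's shape concrete, here is a rough sketch of its fields in Python (names are invented from the description above; the transactional hand-off between the two threads, which is the hard part, is elided):

        from dataclasses import dataclass
        from enum import Enum, auto

        class GState(Enum):
            RED = auto(); GREEN = auto(); BLUE = auto(); OFF = auto()

        class Command(Enum):
            TEST_IF_OFF = auto()
            SWITCH_TO_RED = auto(); SWITCH_TO_GREEN = auto(); SWITCH_TO_BLUE = auto()

        class OpStatus(Enum):
            OPERATION_PENDING = auto(); SUCCESS = auto()
            WRONG_STATE = auto(); LINK_BROKEN = auto()

        @dataclass
        class ComObject:
            # The suspected state is a SET of gadget states, since A may only
            # know "red or green"; this models the power set mentioned above.
            suspected_state: frozenset            # of GState, written by A
            command: Command                      # written by A
            status: OpStatus = OpStatus.OPERATION_PENDING  # written by PG
            new_state: GState = None              # written by PG

        req = ComObject(frozenset({GState.RED, GState.GREEN}), Command.SWITCH_TO_BLUE)
        # ...PG forwards the command over the serial link, then fills in the result:
        req.status, req.new_state = OpStatus.WRONG_STATE, GState.OFF  # user pressed the button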

    Read the article

  • VS2010 Launch Presentations

    Last week I was in Vegas to present at the DevConnections / VS2010 Launch event. The show was well attended, and everybody I spoke to agreed it was educational and enjoyable. My three talks were all on Wednesday, 14 April 2010, including one at 8am for which I was impressed to see a large turnout.

    Pragmatic ASP.NET Tips, Tricks, and Tools

    My first session was on tips, tricks, and tools for ASP.NET developers. This is a talk I've given in past years, but which I refine every time. I usually like to have a full session to devote to tools, and a separate talk just for tips and tricks, but for this show I was only given the one 75-minute slot, so I had to cut some material to make things fit. The talk went well, all the demos worked, the attendees seemed to enjoy it, and I like giving it, so hopefully I can continue to present on this topic at future DevConnections shows. Download the ASP.NET Tips, Tricks, and Tools slides and demos.

    What's New in ASP.NET MVC 2

    My second talk of the day followed immediately after the tips and tricks talk, and was a brand new talk for me. I have to throw out a thank-you to Phil for letting me see his MIX slide deck before he gave his talk, as that was a big help. The official what's-new document online is also worth checking out if you're interested in this subject. Download the What's New in ASP.NET MVC 2 slides and demos.

    SOLIDify Your ASP.NET MVC 2 Application

    Just because you're using ASP.NET MVC doesn't mean your code can't still end up being a big ball of mud. This session describes a number of principles of software design that can help ensure applications remain loosely coupled and malleable even as they age and increase in features and complexity. This was my last talk of the day and did have one minor demo failure involving a database constraint. I've given this talk many times before, and in this case I had to fit it into a 60-minute timeslot, so I'm not sure I had quite enough time to drive home all of the concepts to everyone in the audience. That said, I did hear a number of positive comments on how the talk went, so that's encouraging. Download the SOLIDify Your ASP.NET MVC 2 Application slides and demos.

    In my sessions, I promised to have these posted by the end of the weekend; they're going up at 10pm Sunday night (my time), with 2 hours to spare! Enjoy!

    Read the article

  • JavaOne User Group Sunday

    - by Tori Wieldt
    Before any "official" sessions of JavaOne 2012, the Java community was already sizzling. User Group Sunday was a great success, with several sessions offered by Java community members for anyone wanting to attend. Sessions were both about Java and about best practices for running a JUG. Technical sessions included "Autoscaling Web Java Applications: Handle Peak Traffic with Zero Downtime and Minimized Cost," "Using Java with HTML5 and CSS3," and "Gooey and Sticky Bits: Everything You Ever Wanted to Know About Java." Several sessions were about how to start and run a JUG, like "Getting Speakers, Finding Sponsors, Planning Events: A Day in the Life of a JUG" and "JCP and OpenJDK: Using the JUGs' "Adopt" Programs in Your Group."

    Badr ElHouari and Faiçal Boutaounte presented the session "Why Communities Are Important and How to Start One." They used the example of the Morocco JUG, which they started. Before the JUG, there was no "Java community," they explained. They shared their best practices, including:

      - have fun, enjoy what you are doing
      - get a free venue to hold regular meetings; a university is a good choice
      - run a conference; it gives you visibility and brings in new members
      - students are a great way to grow a JUG

    Badr was proud to mention JMaghreb, a first-time conference that the Morocco JUG is hosting in November. They have secured sponsors and international speakers, and are able to offer a free conference for Java developers in North Africa. The session also included a free-flowing discussion about recruiters (OK to come to meetings, but not to dominate them), giving out email addresses (NEVER do it without permission), no-show rates (50% for free events) and the importance of good content (good speakers really help!).

    Trisha Gee, a member of the London Java Community (LJC), was one of the presenters for the session "Benefits of Open Source." She explained how open-sourcing the LMAX Disruptor (a high-performance inter-thread messaging library) gave her company LMAX several benefits, including more users, more really good quality new hires, and more access to third-party companies. Being open source raised the visibility of the company and the product, which was good in many ways. "We hired six really good coders in three months," Gee said. They also got community contributors for their code and more cred with technologists. "We had been unsuccessful at getting access to executives from other companies in the high-performance space. But once we were open source, the techies at the company had heard of us, knew our code was good, and that opened lots of doors for us." So, instead of "giving away the secret sauce," by going open source LMAX gained many benefits.

    "It was a great day," said Bruno Souza, AKA The Brazilian Java Man. "The sessions were well attended and there was lots of good interaction." Sizzle and steak!

    Read the article

  • Introducing Windows Azure Mobile Services

    - by Clint Edmonson
    Today I'm excited to share that the Windows Azure Mobile Services public preview is now available. This preview provides a turnkey backend cloud solution designed to accelerate connected client app development. These services streamline the development process by enabling you to leverage the cloud for common mobile application scenarios such as structured storage, user authentication and push notifications. If you're building a Windows 8 app and want a fast and easy path to creating backend cloud services, this preview provides the capabilities you need, enabling you to take advantage of the cloud to build and deploy modern apps for Windows 8 devices in anticipation of general availability on October 26th. Subsequent preview releases will extend support to iOS, Android, and Windows Phone.

    Features. The preview makes it fast and easy to create cloud services for Windows 8 applications within minutes. Here are the key benefits:

      - Rapid development: configure a straightforward and secure backend in less than five minutes.
      - Create modern mobile apps: common Windows Azure plus Windows 8 scenarios that the Windows Azure Mobile Services preview supports include automated service API generation providing CRUD functionality and dynamic schematization on top of structured storage; structured storage with powerful query support, so a Windows 8 app can seamlessly connect to a Windows Azure SQL database; integrated authentication, so developers can configure user authentication via Windows Live; and push notifications to bring your Windows 8 apps to life with up-to-date and relevant information.
      - Access structured data: connect to a Windows Azure SQL database for simple data management and dynamically created tables, with permissions that are easy to set and manage.

    Pricing. One of the key things we've consistently heard from developers about using Windows Azure with mobile applications is the need for a low-cost and simple offer. The simplest way to describe the pricing for Windows Azure Mobile Services at preview is that it is the same as Windows Azure Websites during preview.

    What's free?

      - Run up to 10 Mobile Services for free in a multitenant environment
      - Free with a valid Windows Azure free trial: 1GB SQL database, unlimited ingress, 165MB/day egress

    What do I pay for?

      - Scaling up to dedicated VMs
      - Once the Windows Azure free trial expires: SQL database and egress

    Getting Started. To start using Mobile Services, you will need to sign up for a Windows Azure free trial, if you have not done so already. If you already have a Windows Azure account, you will need to request to enroll in this preview feature. Once you've enrolled, this getting started tutorial will walk you through building your first Windows 8 application using the preview's services. The developer center contains more resources to teach you how to:

      - Validate and authorize access to data using easy scripts that execute securely, on the server
      - Easily authenticate your users via Windows Live
      - Send toast notifications and update live tiles in just a few lines of code

    Our pricing calculator has also been updated to calculate costs for these new mobile services. Questions? Ask in the Windows Azure Forums. Feedback? Send it to [email protected].
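    To give a feel for the "automated service API" item above: during the preview, each Mobile Services table was exposed over REST. The sketch below (Python with the requests package) inserts a record; the service name, table name and application key are placeholders, and the /tables/ URL shape and X-ZUMO-APPLICATION header are recalled from the preview documentation, so treat them as assumptions to verify:

        import requests

        SERVICE = "https://yourservice.azure-mobile.net"   # hypothetical service
        headers = {
            "X-ZUMO-APPLICATION": "your-application-key",  # hypothetical key
            "Content-Type": "application/json",
        }

        # POST a JSON document to the table endpoint; with dynamic schematization
        # the service adds columns for any new properties it sees.
        resp = requests.post(f"{SERVICE}/tables/TodoItem",
                             json={"text": "buy milk", "complete": False},
                             headers=headers)
        resp.raise_for_status()
        print(resp.json())  # the stored item, echoed back with its generated id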

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .NET Framework 4.5 and Visual Studio 2012, since they were released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the support for multiple diagrams per model in Entity Framework 5.0.
    In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed on my machine SQL Server 2012 Developer Edition, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the CodePlex website.
    Before I start the demo I want to say that I strongly believe Entity Framework is maturing really fast, and now, at version 5.0, it can be used as your data access layer in all your .NET projects. I have posted extensively about Entity Framework in my blog. Please find all the EF-related posts here.
    In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view; in earlier versions we could have only one diagram per EDMX file, but in EF 5.0 we can split the model into more diagrams.
    1) Launch VS 2012. The Express edition will work fine.
    2) Create a new project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.NET Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an entity (e.g., Customer) from my diagram and right-click on it, a new option appears, "Move to new Diagram". Make sure you have the Model Browser window open. Have a look at the picture below.
    5) When we do that, a new diagram is created and our entity is moved there. Have a look at the picture below.
    6) We can also right-click and include the related entities. Have a look at the picture below.
    7) When we do that, the related entities are copied to the new diagram. Have a look at the picture below.
    8) Now we can cut (Ctrl+X) the entities from Diagram2 and paste them back into Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change the colors of the various entities that make up the model. Select the entities you want to change, then in the Properties window choose the color of your choice. Have a look at the picture below.
    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy in EF models that have a large number of entities. Hope it helps!

    Read the article

  • SharePoint 2010 Hosting :: How to Enable Office Web Apps on SharePoint 2010

    - by mbridge
    Office Web Apps is the online version of Microsoft Office 2010, which is very helpful if you are going to use SharePoint 2010 in your organization, as it allows you to do basic editing of Word documents without installing the Office suite on the client machine.
    Prerequisites:
    - Microsoft Windows Server 2008 R2
    - Microsoft SharePoint Server 2010 or Microsoft SharePoint Foundation 2010
    - Microsoft Office Web Apps
    If you have installed all of the above products, just follow these steps:
    1. Go to Central Administration > click on Manage Service Applications.
    2. All the menus are now displayed in the ribbon menu format first introduced in Office 2007. Click on New > Word Viewing Service (you can also choose PowerPoint or Excel; the steps are the same). This will open a pop-up window.
    3. Give it a proper name, which can include your company or project name.
    4. Under Application Pool, select: SharePoint Web Services Default.
    5. Next, keep the check box checked that says: "Add this service application's proxy to the farm's default proxy list." Click OK.
    6. This will install all the Office Web Apps services required. You can see the name you gave in the step above.
    How to activate Office Web Apps in a site collection?
    1. Go to the site for which you want to activate this feature.
    2. Click on Site Actions > Site Settings > Site Collection Administration > Site Collection Features.
    3. Activate Office Web Apps.
    How to make sure Office Web Apps is working for your site collection?
    1. Locate any Office document you have and click on the smart menu which appears when you hover your mouse over it. Don't double-click, as this will launch the document in the Office client if it is installed (that default behavior can be changed).
    2. If you see "View in Browser" or "Edit in Browser" as a menu item, your Office Web Apps is configured correctly.
    Other posts related to SharePoint 2010:
    1. How to Configure SharePoint Foundation 2010 for SharePoint Workspace 2010
    2. Integrating SharePoint 2010 and SQL 2008 R2

    Read the article

  • Is CX a new concept?

    - by Isabel F. Peñuelas
    The marketing industry and the Web industry have been talking about CX for some time. However, it is only very recently that the concept has reached a common meaning accepted by analysts and the IT community. The new CX model depends on two previous facts: the expansion of social media, and the impact of the new advanced features of mobile devices on brand-customer interaction.
    CXers vs. UXers
    First, there is some need to disambiguate User Experience and Customer Experience. User Experience (UX) is the much better established concept, related to the design of user interactions for particular devices. UX people are interested in the multiple touch points of digital interfaces, while CX people are interested in all kinds of interfaces, including physical ones. UX is an evolution of Web usability, while CX is a marketing concept; UX is an instrument of CX. CX, in fact, is all about connections and interactions.
    Connections
    Don Draper, the creative director in Mad Men, understands very well that to market effectively means to connect with people, and the best way to connect with people is to use the connections people have with other people. Understanding social media connections and taking the pulse of customers on those media are strong facilitators of CX strategies.
    Interactions
    We can very simply define CX as the relationship that a customer establishes with a brand through multiple touch points (interactions, channels) over the entire life cycle of his relationship - direct or indirect - with the brand. Interactions can be grouped into customer journeys through multiple touch points, where a customer journey is defined as the path a customer follows to achieve a goal.
    Processes
    A customer journey today usually starts at the moment the customer surfs the Web; then he takes a purchase decision, purchases the product, requests a particular service, and finally recommends - or does not recommend - the product. Customer journeys are processes, and to analyze customer journeys there exists today a broad offering of modern customer journey tools, actually very similar to the use cases or UML activity diagrams used for IT systems design.
    In summary, CX is nothing more and nothing less than applying process analysis methods to better understand how to create value through customer interactions across a user's multiple touch points with the brand.

    Read the article

  • Code Coverage for Maven Integrated in NetBeans IDE 7.2

    - by Geertjan
    In NetBeans IDE 7.2, JaCoCo is supported natively, i.e., out of the box, as a code coverage engine for Maven projects, since Cobertura does not work with JDK 7 language constructs. (Note, though, that Cobertura is supported as well in NetBeans IDE 7.2.) It isn't part of NetBeans IDE 7.2 Beta, so don't even try there; you need a development build from after that. I downloaded the latest development build today.
    To enable JaCoCo features in NetBeans IDE, you need to do nothing different from what you'd do when enabling JaCoCo in Maven itself, which is rather wonderful. In both cases, all you need to do is add this to the "plugins" section of your POM:

        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.5.7.201204190339</version>
            <executions>
                <execution>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
                <execution>
                    <id>report</id>
                    <phase>prepare-package</phase>
                    <goals>
                        <goal>report</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    Now you're done and ready to examine the code coverage of your tests, whether they are JUnit or TestNG. At this point, i.e., for no other reason than that you added the above snippet to your POM, you will have a new Code Coverage menu when you right-click on the project node:
    If you click Show Report above, the Code Coverage Report window opens. Here, once you've run your tests, you can actually see how many classes have been covered by your tests, which is pretty useful, since 100% of tests passing doesn't mean much when you've only tested one class, as you can see very graphically below:
    Then, when you click the bars in the Code Coverage Report window, the class under test is shown, with the methods for which tests exist highlighted in green and those that haven't been covered in red:
    (Note: Of course, striving for 100% code coverage is a bit nonsensical. For example, writing tests for your getters and setters may not be the most useful way to spend one's time. But being able to measure, and visualize, code coverage is certainly useful regardless of the percentage you're striving to achieve.)
    Best of all, everything you see above is available out of the box in NetBeans IDE 7.2. Take a look at what else NetBeans IDE 7.2 brings for the first time to the world of Maven: http://wiki.netbeans.org/NewAndNoteworthyNB72#Maven
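    If you want a tiny project to watch the green/red highlighting in action, the following sketch (my own example, assuming JUnit 4 on the test classpath; none of these names come from the article) is enough: the test exercises add, so the report paints it green, while subtract stays red because nothing calls it.

        // src/main/java/Calculator.java -- deliberately small class under test
        public class Calculator {
            public int add(int a, int b) { return a + b; }
            public int subtract(int a, int b) { return a - b; } // no test calls this
        }

        // src/test/java/CalculatorTest.java
        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class CalculatorTest {
            @Test
            public void addIsCovered() {
                assertEquals(5, new Calculator().add(2, 3));
            }
        }

    Run the project's tests, then open the Code Coverage Report to see the split for this class.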

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this: my apologies for taking such a long time to write the next part. Without further ado, here goes.
    This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing, and about the distance implications of setting up such a configuration.
    So, Why Good?
    Why is Data Guard SYNC a good thing? Because, at the end of the day, it gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware/software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face, fail over to the standby system, and not lose a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:
    IT Manager: Well, what's the status?
    You: John is doing the trace analysis on the storage array.
    IT Manager: So? How long is that gonna take?
    You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
    IT Manager: So, no root cause yet?
    You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
    IT Manager: Darn it - the site is down!
    You: Not really ...
    IT Manager: What do you mean?
    You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
    IT Manager: Whoa, whoa - wait! Failover means we lost some data. Why did you do this without letting the Business group know?
    You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production side - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
    IT Manager: Wow! Are we great or what!!
    You: I guess ...
    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.
    How Far Can We Go?
    Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear - "It depends!" It depends on your application design, application response time and throughput requirements, network topology, and so on. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper "Data Guard Redo Transport & Network Configuration" and the "Oracle Database 11.2 High Availability Best Practices" manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10 ms latency showed an impact of less than 5% for a busy OLTP application. (I do a quick back-of-envelope check on those numbers at the end of this post.)
    Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here's another nifty way to boost your HA: have a local standby, configured SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds.
    Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait for another year for this.
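    About that back-of-envelope check (my arithmetic, not from the papers): assume signal propagation in fiber at about two-thirds the speed of light, roughly 200 km/ms. Then

    \[
    330\ \text{mi} \approx 531\ \text{km}, \qquad
    t_{\text{one-way}} \approx \frac{531\ \text{km}}{200\ \text{km/ms}} \approx 2.7\ \text{ms}, \qquad
    t_{\text{round-trip}} \approx 5.3\ \text{ms}.
    \]

    Add routing hops, switching equipment, and a non-straight fiber path, and the ~10 ms latency in the cited test is about what you'd expect at that distance.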

    Read the article

  • Removing Menu Items from Window Tabs

    - by Geertjan
    So you're working on your NetBeans Platform application and you notice that when you right-click the tabs of the predefined windows, e.g., the Projects window, you see a long list of popup menus. Whatever the reason, you decide you don't want those popup menus. You right-click the application and go to the Branding dialog. There you uncheck the checkboxes that are unchecked below:
    As you can see above, you've removed three features, all of them related to closing the windows in your application. Therefore, "Close" and "Close Group" are now gone from the list of popup menus:
    But that's not enough. You also don't want the popup menus that relate to maximizing and minimizing the predefined windows, so you uncheck the checkboxes that relate to that:
    And, hey, now they're gone too:
    Next, you decide to remove the feature for floating, i.e., undocking the windows from the main window:
    And now they're gone too:
    However, even when you uncheck all the remaining checkboxes, as shown here...
    You're still left with those last few pesky popup menu items that just will not go away no matter what you do:
    The reason for the above? Those actions are hardcoded into the action list, which is a bug. Until it is fixed, here's a handy workaround. Set an implementation dependency on "Core - Windows" (core.window). That is, set a dependency and then specify that it is an implementation dependency, i.e., that you'll be using an internal class, not one of the official APIs. In one of your existing modules, or in a new one, make sure you have (in addition to the above) a dependency on the Lookup API and the Window System API. And then, add the class below to the module:

        import javax.swing.Action;
        import org.netbeans.core.windows.actions.ActionsFactory;
        import org.openide.util.lookup.ServiceProvider;
        import org.openide.windows.Mode;
        import org.openide.windows.TopComponent;

        // Registers a replacement ActionsFactory that supplies an empty action
        // list, so no popup menu items are contributed to window tabs at all.
        @ServiceProvider(service = ActionsFactory.class)
        public class EmptyActionsFactory extends ActionsFactory {

            // Popup actions for an individual window tab (TopComponent).
            @Override
            public Action[] createPopupActions(TopComponent tc, Action[] actions) {
                return new Action[]{};
            }

            // Popup actions for a whole docking mode, e.g., a tab group.
            @Override
            public Action[] createPopupActions(Mode mode, Action[] actions) {
                return new Action[]{};
            }
        }

    Hurray. Farewell to superfluous popup menu items on your window tabs. In the screenshot below, the tab of the Projects window is being right-clicked and no popup menu items are shown, which is true for all the other windows, those that are predefined as well as those that you add afterwards:

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx
    A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven't done before - build a PC from the ground up.
    I've been using PCs for more than two decades. I've replaced a component here and there, but for the last 10 years or so I've only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new to building. I'm not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself.
    Component List Research
    There was a lot of research necessary, because building a PC is completely new to me and I haven't kept up with what's out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and by personal choice. Don't neglect the second one, because if you're doing this for fun you want to get what you want.
    In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop, and it will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don't want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn't have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets, and that played into at least one purchase for this build.
    I used PC Part Picker to put together my component list. I found it invaluable during the process and I'd recommend it to everyone. One caveat is that I wouldn't trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research.
    The rest of it isn't really sexy. I started out with what appealed to me and then I made changes and additions as I dove deep into researching each component and interaction I could find. The resources I used are innumerable: reviews, product descriptions, forum posts (praises and problems), and more. I also asked friends who are into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and firsthand experiences.
    Some of the resources I used:
    - PC Part Picker
    - Tom's Hardware
    - bit-tech
    - Reddit
    Purchase
    PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I'd want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order.
    Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn't have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I've found items over $100 ship quickly). Ultimately though, the price wasn't always the best, and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a "shipping soon" state. I purchased the mouse locally.
    Best Buy is not my favorite place to do business. There's a lot of history with poor, uninterested sales representatives, and they used to have a lot of bad anti-consumer policies. That's a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them, including the mouse (locally) and headphones.
    NCIX is a company that I'd never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn't do a lot of research on the company, because their policy of having you buy insurance for your orders turned me off. That policy makes it clear to me that the company considers me responsible for the shipment once it leaves their dock. That's not right, and may run afoul of state laws. Regardless, they shipped my CPU cooler quickly and I didn't have a problem.
    NewEgg.com is a well-known company. I had never done business with them, but I'm glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange-only return policies on components. To their credit, those policies are listed in the cart underneath each item. That visibility tells me that they're not playing any shenanigans, and it made me comfortable dealing with that risk. The vast majority of what I ordered came from them.
    Coming Next
    In the next part I'll tackle my build experience.

    Read the article

< Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >