Search Results

Search found 395 results on 16 pages for 'truth'.


  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than on the addict. So often nowadays, we hear the sad stories of Twitter-Widows; tales of long lonely evenings spent whilst their partners are engrossed in their twittering into their 'mobiles' or indulging in their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts even take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, even stooping to complain to their followers about how boring the meeting is. No; the best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

  • Exitus Acta Probat: The Post-Processing Module

    - by Phil Factor
    Sometimes, one has to make certain ethical compromises to ensure the success of a corporate IT project. Exitus Acta Probat (literally 'the result validates the deeds', meaning that the ends justify the means). It was a while back, whilst working as a Technical Architect for a well-known international company, that I was given the task of designing the architecture of a rather specialized accounting system. We'd tried an off-the-shelf (OTS) Windows-based solution which crashed with dispiriting regularity, and didn't quite do what the business required. After a great deal of research and planning, we commissioned a Unix-based system that used X-terminals for the desktops of the participating staff. X terminals are now obsolete, but were then hot stuff; stripped-down Unix workstations that provided client GUIs for networked applications long before the days of AJAX, Flash, Air and DHTML. I've never known a project go so smoothly: I'd been initially rather nervous about going the Unix route, believing then that Unix programmers were excitable creatures who were prone to indulge in role-play enactments of elves and wizards at the weekend, but the programmers I met from the company that did the work seemed to be rather donnish, earnest people who quickly grasped our requirements and were faultlessly professional in their work. After thinking lofty thoughts for a while, there was considerable pummeling of keyboards by our suppliers, and a beautiful robust application was delivered to us ahead of schedule. Soon, the department who had commissioned the work received shiny new X Terminals to replace their rather depressing lavatory-beige PCs. I modestly hung around as the application was commissioned and deployed to the department in order to receive the plaudits. They didn't come. Something was very wrong with the project. I couldn't put my finger on the problem, and the users weren't doing any more than desperately and futilely searching the application to find a fault with it. Many times in my life, I've come up against a predicament like this: the roll-out of an application goes wrong and you are hearing nothing that helps you to discern the cause but nit-*** noise. There is a limit to the emotional heat you can pack into a complaint about text being in the wrong font, or an input form being slightly cramped, but they tried their best. The answer is, of course, one that every IT executive should have tattooed prominently where they can read it in emergencies: In Vino Veritas (literally, 'in wine the truth'; alcohol loosens the tongue. A Roman proverb). It was time to slap the wallet and get the department down the pub with the tab in my name. It was an eye-watering investment, but hedged with an over-confident IT director who relished my discomfort. To cut a long story short, the real reason gushed out with the third round. We had deprived them of their PCs, which had been good for very little from the pure business perspective, but had provided them with many hours of happiness playing computer-based minesweeper and solitaire. There is no more agreeable way of passing away the interminable hours of wage-slavery than minesweeper or solitaire, and the employees had applauded the munificence of their employer who had provided them with the means to play it. I had, unthinkingly, deprived them of it. I held an emergency meeting with our suppliers the following day. I came over big with the notion that it was in their interests to provide a solution.
They played it cool, probably knowing that it was my head on the block, not theirs. In the end, they came up with a compromise: they would temporarily descend from their lofty, cerebral stamping grounds in order to write a server-based Minesweeper and Solitaire game for X Terminals, and install it in a concealed place within the system. We'd have to pay for it, though. I groaned. How could we do that? "Could we call it a 'post-processing module'?" suggested their account executive. And so it came to pass. The application was a resounding success. Every now and then, the staff were able to indulge in some 'post-processing', with what turned out to be a very fine implementation of both minesweeper and solitaire. There were several refinements: a single click on a 'boss' button turned the games into what looked just like a financial spreadsheet. They even threw in a multi-user version of Battleships. The extra payment for the post-processing module went through the change-control process without anyone noticing anything untoward, and peace once more descended. Only one thing niggles. Those games were good. Do they still survive, somewhere in a Linux library? If so, I'd like to claim a small part in their production.

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
    Some mild consternation on the Twittersphere yesterday. Marcus Lagergren not being able to find the ASM classes in JDK 8 in NetBeans IDE: And there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all? The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file lib/ct.sym with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes. You shouldn't want to use them, at all. However, what if you're Marcus Lagergren who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed anytime and one wouldn't want all unknown clients of those classes to start breaking once changes are made to the implementation, i.e., this is the rt.jar's internal class protection mechanism. But, again, we're now Marcus Lagergren and not the general Java population. For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this: AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above. Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath. Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file: Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8, doesn't make a difference for these purposes, create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. And below you can see that not only are the classes found but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code, where normally, if I hadn't removed "ct.sym", I would have seen red error marking instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work.
Simply deleting the "ct.sym" file (or backing it up somewhere and putting it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". Probably you'll get a justification back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym". So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to be changing those classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of the "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.
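To make the mechanism concrete, here is a minimal, hypothetical probe class. The package name jdk.internal.org.objectweb.asm is where early JDK 8 builds placed the repackaged ASM library, but treat the package and class names as assumptions to verify against your own build; the point is only that the same source is rejected while "ct.sym" is in place and compiles once it has been removed.

    import jdk.internal.org.objectweb.asm.ClassWriter;
    import jdk.internal.org.objectweb.asm.Opcodes;

    // Probe that only compiles when javac can see the internal ASM classes,
    // i.e. after "ct.sym" has been deleted (or with a compiler that ignores it).
    public class InternalAsmProbe {
        public static void main(String[] args) {
            // Emit a trivial empty class purely to prove the internal API is reachable.
            ClassWriter cw = new ClassWriter(0);
            cw.visit(Opcodes.V1_7, Opcodes.ACC_PUBLIC, "Probe", null, "java/lang/Object", null);
            cw.visitEnd();
            System.out.println("Emitted " + cw.toByteArray().length + " bytes of bytecode");
        }
    }

With "ct.sym" present, javac typically rejects the two imports with a "package does not exist" error; after deleting the file, the same source compiles against rt.jar directly, which matches the NetBeans behavior described above.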

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined. By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager. I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing and what does that have to do with ERP? Well if you grew up in the USA during the 50’s, 60’s and maybe a bit in the early 70’s there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on. Of course comic books like most media contained advertisements. Ninety pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the “Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!” These ads set the standard for exaggeration and half-truth; “…they love attention…so eager to please, they can even be trained…” The cartoon picture on the ad is of a family of royal looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print, “Caricatures shown not intended to depict Artemia.” Ok, what ten-year-old knows what the heck artemia is? Well you grow up fast once you’ve been separated from your buck twenty five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don’t take commands or do tricks. Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don’t like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff. So here is the latest one – “Oracle’s JD Edwards is NOT tier one”. Really? Definition please! Well a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh and part of the world’s largest business software and hardware corporation? Check and check; JD Edwards has that and that. Well known, enjoying national and international recognition? Oracle’s JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries that support some of the world’s largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years and not from vacations. So if you don’t mind I am going to mark national and international recognition in the 'got it' column. So what else is there? Well let me offer a few criteria. Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here. Vision & innovation – JD Edwards is the first full suite ERP to run on the iPad, as just one example. Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new apps modules, tool releases, integrations….
Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined, independent tools layer that helps enable you to scale your business without an ERP reimplementation. A continuation plan – Oracle’s JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle’s next generation of applications. Oh, I almost forgot that the expert sources agree on one additional thing: tier one may be a preferred vendor that offers products and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is the tier one solution with the lowest TCO. Oh and if you get an offer to buy an ERP for no license charge, remember the picture on the box might not be the actual size.

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant downside consequences from poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management. The Problem of Constituent Data Management The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again. Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle Solution for mastering Constituent Data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role-based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems.
With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out-of-the-box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service-oriented applications, all built around a SOA governance model that encourages effective design and reuse. I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric. Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization. Our Guest Speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe some different Hub deployment styles, which must be supported by a best of breed MDM solution in order to guarantee the success of the deployment project.   Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the Foreign Keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.   Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated, "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.   Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the consolidation style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.   Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style hub, as well.   Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. 
On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.   Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each Domain level Hub can be implemented using any of the previously described styles, but normally the Central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store in a single central hub all the relevant constituent information from all departments.   Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
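For illustration only, here is a rough sketch of the minimal data a Registry Style hub (as described above) might keep for each matched party: the references into the source systems, the match key produced by the cleansing and matching algorithms, and the global identifier the hub assigns. The class and field names are hypothetical and are not taken from any Oracle product's data model.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.UUID;

    // Hypothetical sketch of a Registry Style hub entry: the hub stores identifiers
    // and match keys only; the "virtual" golden view is built by federating the sources.
    public class RegistryHubEntry {

        // A pointer into one connected source system (names are illustrative).
        public static final class SourceRef {
            final String sourceSystemId;   // e.g. "CRM" or "BILLING"
            final String foreignKey;       // the record ID inside that source system

            SourceRef(String sourceSystemId, String foreignKey) {
                this.sourceSystemId = sourceSystemId;
                this.foreignKey = foreignKey;
            }
        }

        private final UUID globalId;       // unique global identifier assigned by the hub
        private final String matchKey;     // normalized key used by the matching algorithm
        private final List<SourceRef> sources = new ArrayList<>();

        public RegistryHubEntry(String matchKey) {
            this.globalId = UUID.randomUUID();
            this.matchKey = matchKey;
        }

        // Registering a source only records a reference; nothing is written back to it.
        public void registerSource(String sourceSystemId, String foreignKey) {
            sources.add(new SourceRef(sourceSystemId, foreignKey));
        }

        public UUID getGlobalId()           { return globalId; }
        public String getMatchKey()         { return matchKey; }
        public List<SourceRef> getSources() { return new ArrayList<>(sources); }
    }

In the Consolidation, Coexistence and Transaction styles an entry like this would also carry physically mastered attributes (and, in the Transaction style, be published back to the spoke systems); in the Registry Style it never does, which is what keeps that style lightweight.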

    Read the article

  • Do you know your ADF "grace period?"

    - by Chris Muir
    What does the term "support" mean to you in the context of vendors such as Oracle giving your organization support with our products? Over the last few weeks I've taken a straw poll to discuss this very question with customers, and I've received a wide array of answers much to my surprise (which I've paraphrased): "Support means my staff can access dedicated resources to assist them in solving problems" "Support means I can call Oracle at any time to request assistance" "Support means we can expect fixes and patches to bugs in Oracle software" The last expectation is the one I'd like to focus on in this post; keep it in mind while reading this blog. From Oracle's perspective, as we're in the business of support, we in fact offer numerous services which are captured in the table on the following page. As the text under the table indicates, you should consult the relevant Oracle Lifetime Support brochures to understand the length of time Oracle will support Oracle products. As I'm a product manager for ADF that sits under the FMW tree of Oracle products, let's consider ADF in particular. The FMW brochure is found here. On pages 8 and 9 you'll see the current "Application Development Framework 11gR1 (11.1.1.x)" and "Application Development Framework 11gR2 (11.1.2)" releases are supported out to 2017 for Extended Support. This timeframe is pretty standard for Oracle's current released products, though as new releases roll in we should see those dates extended. On page 8 of the PDF note the comment at the end of this page that refers to the Oracle Support document 209768.1: For more-detailed information on bug fix and patch release policies, please refer to the “Error Correction Support Policy” on MyOracle Support. This policy document is important as it introduces Oracle's Error Correction Support Policy which addresses "patches and fixes". You can find it attached to the previous Oracle Support document 209768.1. Broadly speaking, while Oracle does provide "generalized support" up to 2017 for ADF, the Error Correction Support Policy dictates when Oracle will provide "patches and fixes" for Oracle software, and this is where the concept of the "grace period" comes in. As Oracle releases different versions of Oracle software, say 11.1.1.4.0, you are fully supported for patches and fixes for that specific version. However, when we release the next version, say 11.1.1.5.0, Oracle provides a minimum of 3 months to a maximum of 1 year "grace period" where we'll continue to provide patches and fixes for the previous version. This gives you time to move from 11.1.1.4.0 to 11.1.1.5.0 without being unsupported for patches and fixes. The last paragraph does generalize, as I've attempted to highlight the concept of the grace period rather than the specific dates for any version. For specific ADF and FMW versions and their respective grace periods and when they terminated, you must visit Oracle Support Note 1290894.1. I'd like to include a screenshot here of the relevant table from that Oracle Support Note but as it will be frequently updated it's better I force you to visit that note. Be careful to heed the comment in the note: According to policy, the Grace Period has passed because a newer Patch Set has been released for more than a year. It's important to note that the Lifetime Support Policy and Error Correction Support Policy documents are the single source of truth, subject to change, and will provide exceptions when required.
This My Oracle Support document provides a summary of the Grace Period dates and timelines for planning purposes. So remember to return to the policy document for all definitions; note 1290894.1 is a summary only and not guaranteed to be up to date or correct. A last point from Oracle's perspective. Why doesn't Oracle provide patches and fixes for all releases as long as they're supported? Amongst other reasons, it's a matter of practicality. Consider JDeveloper 10.1.3, released in 2005. JDeveloper 10.1.3 is still currently supported to 2017, but since that version was released there have been just under 20 newer releases of JDeveloper. Now multiply that across all Oracle's products and imagine the number of releases Oracle would have to provide fixes and patches for, and maintain environments to test them, build them, staff to write them and more; it's simply beyond the capabilities of even a large software vendor like Oracle. So the "grace period" restricts that patches-and-fixes window to something manageable. In conclusion, does the concept of the "grace period" matter to you? If you define support as "getting assistance from Oracle" then maybe not. But if patches and fixes are important to you, then you need to understand the "grace period" and operate within the bounds of Oracle's Error Correction Support Policy. Disclaimer: this blog post was written in July 2012. Oracle Support policies do change from time to time so the emphasis is on you to double check the facts presented in this blog.

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Conference (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled on April 22, 25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions which I presented there. Pluralsight Shades This was a great event and I had fantastic fun presenting technology over here. I was indeed very excited that along with me, I had many of my friends presenting at the event as well. I want to thank all of you for attending my sessions and making them standing room only every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here. Indexing is an Art I was amazed by the crowd present in the sessions at GIDS. There was a great interest in the subject of SQL Server and Performance Tuning. Audience at GIDS I believe events like this provide a great platform to meet and share knowledge. Pinal at Pluralsight Booth Here are the abstracts of the sessions which I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subject on Pluralsight. Indexes, the Unsung Hero Relevant Pluralsight Course Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written, and how indexes have been set up. The session will focus on the ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and Developer when it is about performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with the same. We will cover various aspects of Indexing such as Duplicate Index, Redundant Index, Missing Index as well as best practices around Indexes. SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions Relevant Pluralsight Course Many believe Performance Tuning and Troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and Memory. In this session we will focus on the detection of high-CPU scenarios and their resolutions. If time permits, we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with the same. We will discuss performance tuning in this session with the help of demos.
Pinal Dave at GIDS MySQL Performance Tuning – Unexplored Territory Relevant Pluralsight Course Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and Performance Tuning as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try and cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with the same. You will also witness some impressive performance tuning demos in this session. Hidden Secrets and Gems of SQL Server We Bet You Never Knew Relevant Pluralsight Course SQL Trio Session! It really amazes us every time when someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child’s play for some, the realities are far from this notion. Though the basics and fundamentals are simple and uniform across databases, understanding the behavior and the nuts and bolts of SQL Server is something we need to master over a period of time. With a collective experience of more than 30 years on databases amongst the speakers, we will try to take a unique tour of various aspects of SQL Server and bring to you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an Administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server. I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi. Summary: If you missed this event, here are two action items: 1) sign up for the Resource Newsletter, 2) watch my video courses on Pluralsight. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL Tagged: GIDS

    Read the article

  • How to Achieve Real-Time Data Protection and Availability....For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss has a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts – people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line, storage remote-mirroring lacks both the smarts and isolation level necessary to provide true data protection.
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block for block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database using a different Oracle code path than the primary to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical check sum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern. An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible to do using storage remote-mirroring. So don’t be fooled by appearances.  Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity, to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: Active Data Guard Technical White Paper Active Data Guard vs Storage Remote-Mirroring Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • The Customer Experience Imperative: A Game Changer for Brands

    - by Jeri Kelley
    By Anthony Lye, SVP, Cloud Applications Strategy, Oracle. We know that customer experience has emerged as a primary differentiator for businesses today. I’ve talked a lot about the new age of the empowered consumer. At Oracle we’ve spent a lot of time developing technologies and practices that our customers can implement to greatly improve their customer experience strategies. Of course I’m biased, but I think that we have created a portfolio of the best solutions on the planet to help organizations deal with the challenges of providing great customer experiences. We’ve done this because we started to witness some trends over the last few years. As the average person began to utilize social and mobile technologies more frequently and products commoditized, customer experience truly remained the only sustainable differentiator for businesses. In fact, we have seen that customer experience is often driving the success or the failure of a product or a brand. And as end customers have become more vocal about their experiences with companies on social and mobile channels, they now have the power to decide which brands will win and which brands will lose. To address this customer experience imperative, I believe that business today must do three things really well: Connect with your customers. You have to connect with customers whenever, wherever and however they want. Organizations must provide a great experience on their existing channels— the call center, the brick and mortar store, the field sales organizations, the websites and social properties. Businesses must also be great at managing and delivering journeys on these channels, while quickly adapting to embrace the new channels that emerge. You have to understand mobile. You have to understand social. You have to understand kiosks. These are all new routes to market, new channels where your customers may or may not show up. You have to interact with them where they are. You have to present information in a way that's meaningful to them. As well as providing what we would call a multichannel experience. We have to recognize that customers may start their experience on one channel, but end it on a different channel. It’s important that an organization’s technology solutions enable, not just a multichannel strategy, but a strategy that can power new channels and create customer journeys that cross these channels. Get to know your customers. Next, companies need to get to know the customer as intimately as the customer will allow. Today most customer interactions are anonymous, but it’s important for brands to know which customers drive value. Customers want to provide feedback. They want to share their opinions, but they want to know that those opinions are being heard and acted upon. For this to occur, we need to know much more about the customer and then reward them for their loyalty and for their advocacy. Enable connections. The last thing is to enable people to connect or transact with your brand. We've got to make it really, really simple for customers to do business with us. We can't make them repeat the steps; we can't make them tell us their identity for the fifth time as they move between organizations. These silos can no longer sustain or deliver a good customer experience. It's extremely important that companies be where customers want them to be—that we create profitable journeys for us and for them. Organizations have to make sure that there is a single source of truth that defines the customer.
We have to make sure that the technology applications that we rely on understand not just the dimensions of multichannel, but of cross-channel too. We have to enable social at the very core of the overall architecture. We have to use historical analytics, real-time decisioning as well as predictive analytics to help personalize and drive an experience. And these are all technologies that IT needs, that IT is familiar with, but needs to enable for the line of business that in turn can enable for the end customer. This means that we've got to make our solutions available to the customers in the cloud. In this new age of the empowered consumer, businesses have to focus on delivery mechanisms that reduce the overall TCO, while driving a rapid rate of innovation and a more rapid rate of deployment. At the Oracle Customer Experience Summit @ OpenWorld, I’ll discuss these issues and more. I hope that you can join us for what promises to be an unforgettable experience.

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction: Recently a client I worked for had 2 major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at such short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn’t seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, which was provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the ‘test rig’ proved to be less than if hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post. In this series of posts, I’ll try and cover, start to end, everything you need to know to use Azure to build your Test Rig in the cloud. But Why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre: - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched. - How fast can you scale the environments up or down (keeping the enterprise processes in mind)? Administration, Cost, Flexibility and Scalability are the areas you would want to think around when taking the decision between your own Data Centre and Azure! How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering? Microsoft's offering of the Cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS). PaaS – Platform as a Service: Fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications. IaaS – Infrastructure as a Service: Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment; you can focus all your time on solving the business problems with your solution. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. A nice blog post here on the difference between SaaS, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud, and why Azure in particular, let’s discuss the Test Rig. The Load Test Rig – Topology: Now the moment of truth. Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a load testing rig where the Agents are running on Windows Azure. I’ll talk you through the above topology: - User: The user kick-starts the load test run from the developer workstation on premise. This passes the request to the Test Controller. - Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms. - Test Agents: The Test Agents are on the Windows Azure Public Cloud; as soon as the test controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, and the results are captured by the test agents. And finally the results are passed over to the controller. - Servers: The Web Server and DB Server are hosted on premise in the datacentre; this is usually the case with business-critical applications, where you probably want to manage them yourself. Recap and What’s next? So, in this introduction to the series of blog posts on Load Testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the Test Rig topology that I will be using. I would also like to mention that I stumbled upon this [Video] on Azure in a nutshell, a great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to the test agent machine types that will be used in the test rig. Hope you enjoyed this post. If you have any recommendations on things that I should consider, or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about halfway through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~ mid 2008). Overall, my memory of the interview was very positive, but after close to a three-month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc, and I'm starting to get pretty discouraged. Even though I could technically get 'back on track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take for them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well-paying work. I.e., I'm confident I could switch fully back into C++ mode with a couple of weeks' work (I mostly use C, Python, and C# daily) but I don't list myself as being 'proficient' in C++ on my CV, or apply for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV and I would like feedback on them: I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took an undergrad and a grad-level CS course at UCSD and completely killed them, but I don't know how to translate that to my CV effectively. I have a PhD, but it's not in CS…
I've been debating if I should remove it from my CV, and whether or not it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was). I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years I can still take marching orders when needed? Do I just say so outright? Should I just become a lot less scrupulous about the whole process? Anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine though.

    Read the article

  • Yes, I did it - Skydiving in Mauritius

    Finally, I did it – or better said, we did it. Already back in November last year I saw the big billboard advertisement for Skydive Austral Mauritius near Caudan Waterfront in Port Louis and decided for myself that this was going to be the perfect birthday gift for my wife. Simply out of curiosity I would join her tandem jump with a second instructor. Due to her pregnancy with our son I had to be patient... But then finally her birthday had arrived, and during our midnight celebration session I showed her her netbook with the website preloaded. Actually, it was the "perfect" timing... Recovery from her cesarean was fine, local weather conditions were gorgeous and the children were under the surveillance of my mum - spending her annual holidays on the island. So, after a late wake-up in the morning, we packed our stuff and off we went. According to Google Maps directions we had to drive roughly 50 km (only), but traffic here in Mauritius is always challenging. The dropzone is at the Zone Industrielle Mon Loisir Sugar Estate near Riviere du Rempart on the north-east coast. Anyway, we were not in a hurry and arrived there shortly after noon. The access roads to the airfield are just small, worn-down paths through sugar cane fields and, according to our daughter, "it's bumpy!". True true true... The facilities at Skydive Austral Mauritius are complete except for food. Enough space for parking, easy handling at the reception and a lot to see for the kids. There's even a big terrace with several sets of tables and chairs and a small bar for soft drinks, strictly non-alcoholic. The team over there is all welcoming and warm-hearted! Having the kids with us was no issue at all. Quite the opposite: our daughter was allowed to discover a lot more things than we adults did. Even visiting the small airplane was on the menu for her. Really great stuff! While waiting for our turn we enjoyed watching other people getting ready in the jump gear, taking off with the Cessna, and finally coming back down on the tandem parachute. Actually, the different expressions on their faces were one of the best parts of the wait. Great mental preparation, as my wife was getting more anxious about her first jump... First, we got some information about the procedures on the plane: how to get seated, strapped to our instructors, and ready for the jump off the plane as soon as we reached the height of 10,000 ft. All well explained and easy to understand. Next, we met our jumpers Chris and Lee aka "Rasta" to get dressed and ready for take-off. Those guys are really cool and relaxed for their job. From that point on, the DVD session / recording for my wife's birthday started and we really had a lot of fun... The difference between that small Cessna and a commercial flight with an Airbus or a Boeing is astronomic! The climb up to 10,000 ft took us roughly 25 minutes and we enjoyed the magnificent view over the turquoise lagoons near Poste de Flacq, Lafayette and Isle d'Ambre on the north-east coast. After flying through the clouds we sun-bathed and looked over "icing-sugar covered" Mauritius. You might have a look at the picture gallery of Skydive Mauritius for a better impression. The moment of truth, or better said the point of no return, came after approximately 25 minutes. The door opens, we move into position on the side on top of the wheel and... out! Back flip and free fall! Slight turns and Wooooohooooo! through the clouds... It's so amazing and breathtaking!
So indescribable! You have to experience this yourself! Some seconds later the parachute opened and we glided smoothly, with some turns and spins, back down to the dropzone. The rest of the family could soon hear and see us, and the landing was easy going. We never had any doubts or fear about our instructors. They did a great job and we are looking forward to booking our next jump. I might even consider taking skydiving classes and earning a license. By the way, feel free to get in touch with Skydive Austral Mauritius, either via the contact details on their website or by tweeting a little bit with them. Follow the tweets of Chris and fellows on SkydiveAustral.

    Read the article

  • Is Data Science “Science”?

    - by BuckWoody
    I hold the term "science" in very high esteem. I grew up on the Space Coast in Florida, and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog – "Data Scientist". Since it's not a standard term, it has a lot of definitions, and in fact has been disputed as a correct term. After all, the reasoning goes, if there's no such thing as "Data Science" then how can there be a Data Scientist? This argument has been made before, albeit with a different term – "Computer Science". In Peter Denning's excellent article "Is Computer Science Science?" (Communications of the ACM, April 2005, Vol. 48, No. 4) there are many points that separate "science" from "engineering" and even "art". I won't repeat the content of that article here (I recommend you read it on your own) but will leverage the points he makes there. Definition of Science To ask the question "is data science 'science'?" we need to start with a definition of terms. Various references put the definition into the same basic areas: study of the physical world; systematic and/or disciplined study of a subject area ...and then they include the things studied, the bodies of knowledge and so on. The word itself comes from Latin, and means merely "to know" or "to study to know". Greek divides knowledge further into "truth" (episteme), and practical use or effects (tekhne). Normally computing falls into the second realm. Definition of Data Science And now a more controversial definition: Data Science. This term is so new and perhaps so niche that the major dictionaries haven't yet picked it up (my OED reference is older – can't afford to pop for the online registration at present). Researching the term's general use I created an amalgam of the definitions this way: "Studying and applying mathematical and other techniques to derive information from complex data sets." Using this definition, data science certainly seems to be science - it's learning about and studying some object or area using systematic methods. But implicit within the definition is the word "application", which makes the process more akin to engineering or even technology than science. In fact, I find that using these techniques – and data itself – is part of science, not science itself. I leave out the concept of studying data patterns or algorithms as part of this discipline. That is actually a domain I see within research, mathematics or computer science. That of course is a type of science, but it does not seek practical applications. As part of the argument against calling it "Data Science", some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact makes heavy use of it) in the process, rather than starting with an assumption and following on with it. So, is Data Science "Science"? I'm uncertain – and I'm uncertain it matters. Even if we are facing rampant "title inflation" these days (does anyone introduce themselves as a secretary or supervisor anymore?) I can tolerate the term, at least from the intent that we use data to study problems across a wide spectrum, rather than restricting it to a single domain.
And I also understand those who have worked hard to achieve the very honorable title of “scientist” who have issues with those who borrow the term without asking. What do you think? Science, or not? Does it matter?

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now that we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop. One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns. The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But if it was simple to do? That was our inspiration for SQL Backup 7. So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time?
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris
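The whole point of the post is "restore your backups and run an integrity check on them". SQL Backup 7 drives that from its own wizard and scheduled jobs, but purely as an illustration of the underlying idea, here is a rough C# sketch of the restore-then-verify step done by hand with SQL Server Management Objects (SMO). The server name, file paths, logical file names and database names are all placeholders, and this is a sketch of the habit being described, not the product's implementation.

```csharp
using System;
using Microsoft.SqlServer.Management.Smo;   // Server, Restore, RelocateFile, RepairType

class VerifyLastBackup
{
    static void Main()
    {
        // Placeholder server and paths -- substitute your own.
        var server = new Server("localhost");

        // Restore the latest full backup to a scratch database...
        var restore = new Restore
        {
            Database = "Accounts_Verify",
            ReplaceDatabase = true
        };
        restore.Devices.AddDevice(@"D:\Backups\Accounts_Full.bak", DeviceType.File);

        // Logical file names must match the ones recorded in the backup itself.
        restore.RelocateFiles.Add(new RelocateFile("Accounts_Data", @"D:\Verify\Accounts_Verify.mdf"));
        restore.RelocateFiles.Add(new RelocateFile("Accounts_Log", @"D:\Verify\Accounts_Verify.ldf"));
        restore.SqlRestore(server);

        // ...then run an integrity check on the restored copy.
        server.Databases.Refresh();
        var errors = server.Databases["Accounts_Verify"].CheckTables(RepairType.None);
        Console.WriteLine(errors.Count == 0
            ? "Backup restored and verified - no corruption reported."
            : "Integrity errors found - investigate before you need this backup!");
    }
}
```

Scheduling something like this (or its T-SQL equivalent) against a scratch server is exactly the routine the post argues for; the "reminder" feature described above simply removes the excuse for not setting it up.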

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable - and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It's everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head. "How do they find anything?" and then the even more alarming, "So much for information security!" It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well-paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where the capture of, and access to, the right information could be significantly improved through automation. As you evaluate how content flows through the processes in your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information or, worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient, and to manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Make Bluetooth on Android 2.1 discoverable indefinitely

    - by kanov-baekonfat
    Hello all. I'm working on a research project which involves Bluetooth and the Android OS. I need to make Bluetooth discoverable indefinitely in order for the project to continue. The Problem: Android limits discoverability to 300 seconds. I cannot ask the user every 300 seconds to turn discoverability back on, as my application is designed to run in the background without disturbing the user. As far as I am aware, there is no way to increase the time through Android's GUI. Some sources have called this a safety feature, others have called this a bug. There may be a bit of truth in both... What I'm Trying / Have Tried: I'm trying to edit a stable release of cyanogenmod to turn the discoverability timer off (it's possible; there's a configuration file that needs to have a single number changed). This isn't working because I'm having verification problems with the resulting package. During the past week, I downloaded the cyanogenmod source code, changed a relevant class in the hope that it would make Bluetooth discoverable indefinitely, and tried to recompile. This did not work because (a) the repo is frequently changed, leading to an unstable code base which fails to compile (OR, it could be that I'm using it incorrectly; just because it looked like it was the code's fault in many instances doesn't mean I should blame it for all the problems I encountered!) and (b) the repo decides to periodically "ignore" me (but not always, as I have gotten the code base before!), replying to my synchronization/connection attempts with: "fatal: The remote end hung up unexpectedly". As you might imagine, the above two issues are problematic and very frustrating to deal with. More Info: I'm running Android 2.1 via cyanogenmod (v5 I believe). This means the phone is also rooted. I have a developer phone, which means that the bootloader is unlocked. My phone is an HTC Magic (32B). The Big Question: How can I make Bluetooth indefinitely discoverable on Android? Thanks for your time and input. I feel like I'm spinning my tires on this issue and I'd like to move past it.

    Read the article

  • Contract developer trying to get outsourcing contract with current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process and an absolutely sickening amount of waste due to bureaucratic overhead. Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works", a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data! Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software. This is my plan: (1) Write a report explaining in depth all the problems with their current system. (2) Explain why their software practices fail and generate a tremendous amount of error and waste; use this as the basis for claiming the project must be developed externally. (3) Write a high-level development plan, including what resources I will require. (4) Hand 1, 2 and 3 to my manager and hopefully he passes it up the chain (worst case he fires me, but this isn't so bad). (5) A convinced executive decides to award my company a contract for the new system. I have 8 years' experience as a software contractor and have delivered my share of successful software products, but all working in-house for small/medium-sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time I'm doing something this bold, I have my doubts. My question is, is this a good idea? If you think not, please spare no detail.

    Read the article

  • Corner Cases, Unexpected and Unusual Matlab

    - by Mikhail
    Over the years, reading others' code, I encountered and collected some examples of Matlab syntax which can at first be unusual and counterintuitive. Please feel free to comment on or complement this list. I verified these on R2006a. set([], 'Background:Color','red') Matlab is very forgiving sometimes. In this case, setting properties on an array of objects also works with nonsense properties, at least when the array is empty. myArray([1,round(end/2)]) This use of the end keyword may seem unclean but is sometimes very handy instead of using length(myArray). any([]) ~= all([]) Surprisingly, any([]) returns false and all([]) returns true. And I always thought that all is stronger than any. EDIT: with a non-empty argument, all() returns true for a subset of the values for which any() returns true (e.g. truth table). This means that any() false implies all() false. This simple rule is violated by Matlab with [] as the argument. Loren also blogged about it. Select(Range(ExcelComObj)) Procedural-style COM object method dispatch. Don't be surprised that exist('Select') returns zero! [myString, myCell] In this case Matlab makes an implicit cast of the string variable myString to the cell type {myString}. It works, although I would not have expected it to. [double(1.8), uint8(123)] => 2 123 Another cast example. Everybody would probably expect the uint8 value to be cast to double, but MathWorks has another opinion. a = 5; b = a(); It looks silly, but you can call a variable with round brackets. Actually it makes sense, because this way you can execute a function given its handle. a = {'aa', 'bb' 'cc', 'dd'}; Surprisingly, this code neither returns a vector nor raises an error but defines a matrix, using just the code layout. It is probably a relic from ancient times. set(hobj, {'BackgroundColor','ForegroundColor'},{'red','blue'}) This code does what you probably expect it to do. That the function set accepts a struct as its second argument is a known fact and makes sense, and this syntax is just a cell2struct away. Equivalence rules are sometimes unexpected at first. For example 'A'==65 returns true (although for C experts it is self-evident). Which further unexpected/unusual Matlab features are you aware of?

    Read the article

  • What does N years of experience with a language really mean?

    - by marcgg
    I've been looking at job descriptions since I'm graduating soon and looking for a job, and what always comes back - I'm not teaching you anything - is the "N years of experience in this language". It has been discussed in this question: if you work professionally with, let's say, Ruby for 2 years, but during these two years you also did some C# and PHP and were actually coding in Ruby 50% of the time, do you say you have 1 year of experience in Ruby? 2 years? Another issue that hasn't been reviewed in the other post is "non-professional experience". I'll give you a personal example: I've been working with Ruby on Rails since 2004, while at school. I did a lot of personal projects and school projects using this technology. I also used Rails in two 6-month internships. Do I have 5 years of Rails experience (2004-now)? Do I have 1 year (2 internships)? Do I have nothing? I feel like I don't deserve the credit for 5 years, because in the first years I wasn't working a lot with Rails, but since last year I launched some websites and invested myself a lot in this technology, and just saying 1 year doesn't really reflect how well I know the technology... Another example: I learned C++ at school and did 1 big project with it (2-3 months of work and a semester of classes). I never used it in a company, but I'd be able to be productive fairly quickly if I had to work on a C++ project and I have a good grasp of the concepts. Do I have no experience? 3 months? 6 months? ... something else? What I'm really trying to do is find a way to present my skill set in a way that is consistent with what recruiters expect. I also don't want to end up at an interview that would go something like this... Recruiter (finding out the horrible truth): Oh but you said that you had 2 years of experience with this when you have none! / slaps me in the face / Me (in pain): Oh! The irony! Recruiter (yelling): Get out of my office / calls security, punches me in the throat /

    Read the article

  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi, I don't have any formal qualifications in computer science; rather, I taught myself classic ASP back in the days of the dotcom boom, managed to get myself a job, and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3, but as others have observed, one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of HTTP, so you could become quite competent as a programmer on the basis of a relatively poor understanding of the technology you were working with. When I moved on to .NET, at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words, I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design along the lines of the classic "car" class example that's so often used to teach OO - breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work, the whole setup was classic client/server stuff. That suited me, and I gradually got to grips with .NET and started using it far more in the manner that it should be used, and I began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3. In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture which is modelled on a lot of concepts that are new to me and which, in truth, I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy, and the architecture is all loosely coupled, using a lot of interfaces, factories and the like. They use nHibernate a lot too. Shortly after I joined, both these guys left, and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand, and I have no-one to ask questions of. Except you, the internet. Frankly I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great - my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try and learn it. So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material simply presupposes a level of understanding that I just don't have. Your advice is much appreciated. Help a middle-aged, stuck-in-the-mud developer get enthusiastic again! Please!

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad practice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office: Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication). An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere. A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything. Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement. Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first.
Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 2 of 2

    - by pinaldave
    This is the second part of the two-part series on the Practices for Software Startup Pluralsight Course. Please read the first part of this series over here. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture-backed company Telerik. Personal Learning Schedule After these three sessions it was 6:30 am and time to do my own blog. But for the rest of the day, I kept thinking about the course, and wanted to go back and finish. I was wishing that I had woken up at 3 am so I could finish it all in one go. All day long I was digesting what I had learned. At 10 pm, after my daughter had gone to bed, I signed on again. I was not disappointed by the long wait. As I mentioned before, Stephen has started four to six companies, and all of them are very successful today. Here is the video I promised yesterday – it discusses the importance of Right Sizing Your Startup. The Heartbeat of Startup – Technology Stephen has combined all technology knowledge into one 30-minute session. He discussed how to start your project, how to deal with opinions, and how to deal with multiple ideas – every start-up has multiple directions it can go. He spent a lot of time emphasizing deciding which direction to go and how to decide which will be the best for you. He called it a continuous development cycle. One of the biggest hazards for a start-up company is one person deciding the direction the company will go, until down the road another team member announces that there is a glitch in their part of the work and that everyone will have to start over. Even though a team of two or five people can move quickly, often the decision has gone on too long and cannot be easily fixed. Stephen used an example from his own life: he was biased towards one type of technology, and his teammate towards another. They opted for his teammate's choice, and in the end it was a good decision, even though he was unfamiliar with that particular program. He argues that technology should not be a barrier to progress, and that you cannot rely on your experience only. This really spoke to me because I am a big fan of SQL, but I know there is more out there, and I should be more open to it. I give my thanks to Stephen; I learned something in this module besides startups. Money, Success and Epic Win! The longest, but most interesting, module was funding your start-up. You need to fund the start-up right at the very beginning; if it's not done right you will run into trouble. The good news is that a few years ago start-ups required a lot more money – think millions of dollars – but now start-ups can get off the ground for thousands. Stephen used an example of a company that years ago would have needed a million dollars, but today could be started for $600. It is true that things have changed, but you still need money. For $600 you can start small and add dynamically, as needed. But the truth is that if you have $600, $6000, or $6 million, it will be spent. Don't think of it as trying to save money, think of it as investing in your future. You will need money, and you will need to (quickly) decide what you do with the money: shares, stakeholders, investing in a team, hiring a CEO. This is so important because once you have money and start the company, the company IS your money. It is your biggest currency – having a percentage of ownership in the company.
Investors will want percentages as repayment for their investment, and they will want a say in the business as well. You will have to decide how far you will dilute your shares, and how the company will be divided, if at all. If you don't plan in advance, you will find that after gaining three or four investors, suddenly you are the minority owner in your own dream. You need to understand funding carefully. This single module alone is worth all the money you would have spent on the whole course. I encourage everyone to listen to this single module even if they don't watch any of the others. Press End to Start the Game – Exits! The final module is exit strategies. You did all this work, dealt with all the political and legal issues. What are you going to get out of it? The answer is simple: money. Maybe you want your company to be bought out, for your talent to bring you a profit. You can sell the company to someone and still head it. Many options are available. You could sell and still work as an employee but no longer own the company. There are many exit strategies. This is where all your hard work comes into play. It is important not to feel fooled at any step. There are so many good ideas that end up in the garbage because of poor planning, so if you find yourself successful, you don't want to blow it at this step! The exit is important. I thought that this aspect of the course was completely unique, and I loved Stephen's point of view. I was lost deep in thought after this module ended. I actually took two hours' worth of notes on this section alone – and it was only a three-hour course. I am planning on attending this course one more time next week, just to catch up on all the small bits of wisdom I'm sure I missed. Thank you, Stephen, for sharing your real-world experience with us! I recommend that everyone attend this course, even if they don't want to begin their own start-up company. It was indeed a long day for me. Do not forget to read part 1 of this story and attend the Practices for Software Startup Pluralsight Course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • You are probably NOT a SharePoint Development Expert if&hellip;

    - by Mark Rackley
    So, all you aspiring SharePoint experts out there (especially those of you who put "expert" in your resumes): it's time for a cold splash of reality. More than likely you are NOT an expert (I know I'm not). Yes, you may have some expertise in certain aspects of SharePoint (it's questionable if I have THAT some days), but make sure you've got the basics down before you start throwing that word "expert" around. I know that it becomes frustrating to those looking to hire SharePoint people who have to sift through all the resumes of those who think very highly of themselves and their skills, only to find those gaping holes in common best practices. I'm much more willing to hire a decent dev who KNOWS they are not an expert than to hire a decent+ dev who THINKS they are an expert. So… I've compiled a small reality check for you SharePoint devs and a "red flag" check for those of you wishing to hire a SharePoint developer. If any of these apply to you, you are probably not a SharePoint Development Expert. You are not a SharePoint Development Expert if you manually copy your DLLs Seriously, I don't care if you write the best code in the world. If you are manually copying files to each web front end you are NOT a SharePoint Development expert. Yes, I realize the admins are generally the ones who do the actual deployments, but if you don't know how to create solution packages for your admins, you are going to end up doing more damage than good some day. There are TONS of tools out there to help generate deployable solutions for you. You have ZERO excuse. You are not a SharePoint Development expert if you can't tell me the main artifacts of a solution package Directly related to the first one. If you don't know what the Manifest, DDF, WSP, and Feature files are and how they are used in a solution package, you are NOT a SharePoint development expert. I'm not asking you to be able to write them all from scratch (heck, I can't even do that), but you MUST know what they are and how to tweak them if necessary. You are not a SharePoint Development expert if you don't know what a Content Type or a Site Column is You would be absolutely amazed at how many "Expert" SharePoint Developers have NEVER EVER created a Content Type or Site Column or even know what they are. I mean, why would you ever want to create those when you can just do everything as a custom list or custom field? right???? (that's sarcasm). You also need to know how to package a Content Type and a Site Column into a deployable package, by the way. You are not a SharePoint Development expert if you have not created at least one Web Part, Workflow, Timer Job, and Event Handler. If you haven't written at least one of each, you don't fully understand what they do or their limitations. Again, I expect NO ONE to be able to write these things blind. I think the last time I wrote an application from scratch without copying and pasting from another project I had done before was back in 1994? Seriously, coding is like a sourdough starter: you get it from someone else and keep adding to it. You are not a SharePoint Development expert if you don't know how to properly dispose of objects Another biggie with zero excuse for getting it wrong. It is so well known that you must dispose of your SPWeb and SPSite objects that if you aren't doing it then you are not an expert.
Heck, if you utilize "using" when handling SPWeb and SPSite objects and don't realize that it disposes of those objects for you, then you are not a SharePoint Development expert. You are not a SharePoint Development expert if you do not know how to properly elevate privileges Just one of those development basics that any decent SharePoint developer has got to have down, along with understanding how and why it's used (a quick sketch of both disposal and elevation follows at the end of this post). You are not a SharePoint Development expert if you don't know all of the development options available to SharePoint and when they should be used Okay… so all you hard core .NET SharePoint dev geeks, take a moment to listen. You may be the most top-notch SharePoint .NET developer in the world, but if you are opening Visual Studio to solve every problem in SharePoint, then you are NOT a SharePoint development expert. The SharePoint developer's toolkit is growing every day with tools like Visual Studio, Data View Web Parts, XSL, jQuery, SPServices, etc. etc… If you don't have the ability to at least recognize that "hey, you can basically do the same thing here by just dropping in Easy Tabs instead of writing some weird web part", then you are NOT a SharePoint Development expert AND you are doing a huge disservice to your clients and customers. You are probably NOT a SharePoint Development expert if you call yourself an Expert So, truth-telling time. I'm not an expert. There, I said it. I feel so much better. Now, I realize the word "expert" has been used with my name before, but I am quick to point out that I KNOW the experts and know that they will help me if I need it, but I'm not an expert in all things SharePoint. The minute you take on that moniker you are setting yourself up for a fall. It's too big, there's too much to know, and there's WAY too much you can do wrong. You are not a SharePoint Development expert if you are not involved in the community I expect to get the most flack for this one, but it's always a huge red flag for me when someone says they are an expert and has ZERO knowledge of the SharePoint community. The SharePoint community is ABSOLUTELY CRITICAL to being an effective SharePoint developer, admin, architect, power user or whatever the heck you are!! The community keeps you sane, tells you when you are NOT using a best practice, recommends the best practice, and even knows when Microsoft is giving you the wrong information (*gasp* it does happen). If you can't tell me who you are following on twitter, whose blogs you read, what conferences you attend, or name the experts who you monitor to make sure you are not doing something stupid, then you are probably doing something stupid. Again, I'm not asking you to be a speaker, blogger, or the least bit extroverted, but you should be at LEAST stalking the experts. So… what's the point? So… yeah… what's my point in all this? Well, first of all let me point out that this is far from a finished list and I could come up with a LOT more specific "deep dive" questions, but these should be high enough level that even non-experts can recognize and ask them. If you have some common ones you run into, let me know and add them in the comments below. Also, keep in mind I'm not saying you as a developer HAVE to know EVERYTHING, but you DO need to know what you don't know and proudly and honestly state "I don't know, but I'll learn and find out". Those of us who hire SharePoint developers, and know and have a passion for SharePoint, are not looking for that elusive "expert" who knows everything.
We are looking for someone who “gets it”, has a similar passion, great attitude, an understanding that they DON’T know everything, and a desire to do it right.  I would bet money that most SharePoint development disasters happen because of “experts” who think they know everything rather than the developer who is cautious and knows he doesn’t. Lastly, I know there’s a raging debate over what a “SharePoint Developer” is (I should know, as I keep bringing it up). So, obviously this blog post is more closely tied to the .NET side of SharePoint development and less towards the client side, middle tier, or whatever you want to call it. So, let’s please not get that argument going here as well…  Thanks
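To make the disposal and elevation points above concrete, here is a minimal C# sketch of the pattern the post is describing. The method name and the title update are purely hypothetical, but SPSecurity.RunWithElevatedPrivileges and the using-based disposal of newly created SPSite/SPWeb objects are the standard server object model approach.

```csharp
using System;
using Microsoft.SharePoint;

public static class ElevationSample
{
    // Hypothetical example: update the web title as the system account.
    public static void UpdateTitleAsSystem(string newTitle)
    {
        // Capture IDs outside the elevated block; never dispose SPContext objects.
        Guid siteId = SPContext.Current.Site.ID;
        Guid webId = SPContext.Current.Web.ID;

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            // SPSite/SPWeb objects created *inside* the elevated delegate run with
            // elevated rights; the using blocks guarantee they are disposed.
            using (SPSite site = new SPSite(siteId))
            using (SPWeb web = site.OpenWeb(webId))
            {
                web.AllowUnsafeUpdates = true;
                web.Title = newTitle;
                web.Update();
                web.AllowUnsafeUpdates = false;
            }
        });
    }
}
```

Two details worth calling out: the IDs are captured outside the elevated delegate and new SPSite/SPWeb objects are created inside it (objects created outside keep the original user's security context), and the using blocks only wrap objects you created yourself - you never dispose SPContext.Current.Site or SPContext.Current.Web.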

    Read the article

< Previous Page | 10 11 12 13 14 15 16  | Next Page >