Search Results

Search found 5123 results on 205 pages for 'functional dependencies'.

Page 71/205 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy the two in phase, at compatible versions. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing. When a single database supports more than one application, the problem gets more interesting.
    I'll need to be more precise here. It is actually the application-interface definition of the database that needs to be at a compatible 'version'. Most databases that get into production have no separate application-interface; in other words, they are 'close-coupled'. For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free. If you've spurned the perceived wisdom of application architects to have a defined application-interface within the database, based on views and stored procedures, any version-mismatch will be as sensitive as a kitten. A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping database and application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana.
    I've been in countless tense meetings with application developers who instinctively bridle at the apparent restriction of being 'banned' from the base tables or routines of a database. There is no good technical reason for needing that sort of access that I've ever come across. Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: this is the application-interface. If more than zero developers are creating a database-driven application, then the project will benefit from the loose coupling that an application-interface brings. What is important here is that the database development role is separated from the application development role, even if the same developer performs both.
    The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We'd be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module.
    If you have a stable, defined application-interface with the database (yes, usually one for each application), you need to keep the external definitions of the components of this interface in version control, linked with the application source, and carefully track and negotiate any changes between database developers and application developers. Essentially, the application development team owns the interface definition, and the onus is on the database developers to implement and maintain it in conformance. Internally, the database can then undergo all sorts of changes and refactoring, as long as source control is maintained.
    If the application-interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance testing can 'hang' on the same interface, since databases are judged on the performance of the application, not on an 'internal' database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis, since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously changing database, passes all tests for the version of the application.
    Normally, if all goes well, a database with a well-designed application-interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface and which, hopefully, seldom need to be altered. If the application is rapidly changing its 'domain model' in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the new interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests, which are, of course, 'written to' the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and need only take care that the application-interface is never broken. Testing is easy, since your automated scripts to test the interface do not need to change.
    The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application-interfaces should be within the application source. Changes to it have to be subject to change-control procedures, as they will require a chain of tests.
    Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn't necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn't often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding that an official walk with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be 'versioned', so that an application version can be matched with a database version to produce a working build without errors.
    There is no natural process to 'version' databases. Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn't necessarily look good, and vice versa.
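    As a minimal sketch of what such an application-interface might look like (the schema, table, view, and procedure names here are invented for illustration, not taken from the article), the application sees only a view and a stored procedure, while the base table stays private to the database team:

      -- Sketch only: assumes the 'app' and 'internal' schemas already exist.
      -- Base table: owned by the database team, free to be refactored.
      CREATE TABLE internal.CustomerOrder (
          OrderID    INT IDENTITY PRIMARY KEY,
          CustomerID INT NOT NULL,
          OrderedAt  DATETIME NOT NULL,
          StatusCode CHAR(1) NOT NULL
      );
      GO
      -- The application-interface: a stable view the application reads from...
      CREATE VIEW app.OpenOrders
      AS
          SELECT OrderID, CustomerID, OrderedAt
          FROM internal.CustomerOrder
          WHERE StatusCode = 'O';
      GO
      -- ...and a stored procedure it writes through.
      CREATE PROCEDURE app.PlaceOrder
          @CustomerID INT,
          @OrderedAt  DATETIME
      AS
          INSERT INTO internal.CustomerOrder (CustomerID, OrderedAt, StatusCode)
          VALUES (@CustomerID, @OrderedAt, 'O');
      GO

    With rights granted only on the 'app' schema, the database team can later split or rename internal.CustomerOrder without the application noticing, as long as the view and procedure keep their contract.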

    Read the article

  • Twitter API for Java - Hello Twitter Servlet (TOTD #178)

    - by arungupta
    There are a few Twitter APIs for Java that allow you to integrate Twitter functionality into a Java application. This is yet another API, built using the JAX-RS and Jersey stack. I started this effort earlier this year and kept delaying sharing it because I wanted to provide a more comprehensive API. But I've delayed enough and am releasing it as a work-in-progress. I'm happy to take contributions in order to evolve this API and make it complete, useful, and robust. Drop a comment on the blog if you are interested or ping me at @arungupta. How do you get started? Just add the following to your "pom.xml":
      <dependency>
          <groupId>org.glassfish.samples</groupId>
          <artifactId>twitter-api</artifactId>
          <version>1.0-SNAPSHOT</version>
      </dependency>
    The implementation of this API uses the Jersey OAuth filters for authentication with Twitter, so the following dependencies are required for any API call that requires authentication, which is pretty much all of them ;-)
      <dependency>
          <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
          <artifactId>oauth-client</artifactId>
          <version>${jersey.version}</version>
      </dependency>
      <dependency>
          <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
          <artifactId>oauth-signature</artifactId>
          <version>${jersey.version}</version>
      </dependency>
    Once the dependencies are added to your project, inject the Twitter API into your Servlet (or any other Java EE component) as:
      @Inject Twitter twitter;
    Here is a simple non-secure invocation of the API to get you started:
      SearchResults result = twitter.search("glassfish", SearchResults.class);
      for (SearchResultsTweet t : result.getResults()) {
          out.println(t.getText() + "<br/>");
      }
    This code returns the tweets that match the query "glassfish". The source code for the complete project can be downloaded here. Download it, unzip it, and "mvn package" will build the .war file. Then deploy it on GlassFish or any other Java EE 6 compliant application server! The source code for the API also acts as the javadocs and can be checked out from here. A more detailed sample using security and several other APIs from this library is coming soon!
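    Putting the fragments above together, a complete "Hello Twitter" servlet might look like the sketch below. The Twitter, SearchResults, and SearchResultsTweet types and the search(...) call are taken from the post; the servlet scaffolding around them is my own assumption:

      import java.io.IOException;
      import java.io.PrintWriter;
      import javax.inject.Inject;
      import javax.servlet.ServletException;
      import javax.servlet.annotation.WebServlet;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // Minimal sketch: inject the Twitter API and echo matching tweets.
      @WebServlet("/hello-twitter")
      public class HelloTwitterServlet extends HttpServlet {

          @Inject
          Twitter twitter; // injected exactly as shown in the post

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              resp.setContentType("text/html");
              PrintWriter out = resp.getWriter();
              // Non-authenticated search invocation from the post
              SearchResults result = twitter.search("glassfish", SearchResults.class);
              for (SearchResultsTweet t : result.getResults()) {
                  out.println(t.getText() + "<br/>");
              }
          }
      }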

    Read the article

  • How to install postgresql-8.4?

    - by ted
      sudo add-apt-repository ppa:pitti/postgresql # ... OK
      sudo apt-get install postgresql-8.4
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Package postgresql-8.4 is not available, but is referred to by another package.
      This may mean that the package is missing, has been obsoleted, or
      is only available from another source
      E: Package 'postgresql-8.4' has no installation candidate
    Should I download it from http://packages.debian.org/squeeze/postgresql-8.4 and manually install all the dependencies?
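    Before resorting to a manual install, note that freshly added repositories are not consulted until the package lists are refreshed. A hedged first step (assuming the PPA actually publishes postgresql-8.4 builds for your Ubuntu release):

      # Refresh the package lists so the new PPA is actually consulted
      sudo apt-get update
      sudo apt-get install postgresql-8.4
      # If it is still unavailable, check what candidates the repositories offer
      apt-cache policy postgresql-8.4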

    Read the article

  • error installing python wrapper for openkinect

    - by auraham
    I tried to install the Python wrappers for OpenKinect on Ubuntu 12.04, but I can't due to this error:
      $ sudo apt-get install python2.7-dev
      python2.7-dev : Depends: libexpat1-dev but it is not going to be installed
                      Depends: libssl-dev but it is not going to be installed
      E: Unable to correct problems, you have held broken packages.
    The Python wrapper requires these dependencies: Cython, python-dev (error above), and python-numpy. How can I install python-dev?
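    A hedged sketch of the usual way to untangle "held broken packages" (assuming no third-party repository is pinning libexpat1-dev or libssl-dev at an incompatible version):

      # Refresh lists and let apt try to repair the dependency state
      sudo apt-get update
      sudo apt-get install -f
      # Installing the blocking packages explicitly often surfaces the real conflict
      sudo apt-get install libexpat1-dev libssl-dev
      # aptitude can propose alternative resolutions for held packages
      sudo aptitude install python2.7-dev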

    Read the article

  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding database projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Fortunately, VS2012 does that for you, and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even though I had converted them in the previous session and saved the solution.
    Converting the project(s)
    When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: "Functional changes required: Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1." If you accept, your project is converted. It should compile with no errors right away, unless you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file, which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion that references to dbschema files could be added to VSDB projects.
    Cleaning up
    You will notice that your project file is now a sqlproj file, but the old dbproj is still there. In fact, at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If, like me, you plan on using VS2012 exclusively, you can get rid of the following files, which are still on your disk and in your source control:
    - the dbproj and dbproj.vspscc files
    - Properties/Database.sqlcmdvars
    - Properties/Database.sqldeployment
    - Properties/Database.sqlpermissions
    - Properties/Database.sqlsettings
    You might wonder where the information which used to be in the Properties files is now stored:
    - Permissions: a Permissions.sql file was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes).
    - SQLVars: they are defined inside the publish.xml files (see the sketch after this list).
    - Deployment: these settings are also in the publish.xml files.
    - Settings: I was unable to find where those are now. I suppose they are not defined anymore.
    But Visual Studio still says my database projects should be converted!
    I had this error upon closing and then re-opening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files).
    More
    For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx
    Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx
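    As a rough reference, here is a minimal publish profile sketch showing where the SQLCMD variables and deployment options now live. The element names follow the standard SSDT publish.xml layout, but the values are placeholders of my own, not taken from a real project:

      <?xml version="1.0" encoding="utf-8"?>
      <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <!-- Deployment options that used to live in Database.sqldeployment -->
          <TargetDatabaseName>MyDatabase</TargetDatabaseName>
          <TargetConnectionString>Data Source=.;Integrated Security=True</TargetConnectionString>
          <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
        </PropertyGroup>
        <ItemGroup>
          <!-- SQLCMD variables that used to live in Database.sqlcmdvars -->
          <SqlCmdVariable Include="Environment">
            <Value>Test</Value>
          </SqlCmdVariable>
        </ItemGroup>
      </Project>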

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview
    Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.
    Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:
    1. Identify an Executive Champion
    2. Put together a winning team
    3. Assign ownership
    4. Centralize publishing
    5. Establish the document maintenance process up front
    6. Document critical activities only
    7. Document actual practice
    8. Minimize documentation
    9. Support continuous improvement
    10. Keep it simple
    1. Identify an Executive Champion
    Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.
    2. Put Together a Winning Team
    Choose a strong project management leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility - and the authority - to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.
    3. Assign Ownership
    It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.
    4. Centralize Publishing
    Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gate keeper) to manage the updates and distribution of the procedures library.
    5. Establish a Document Maintenance Process Up Front (and stick to it)
    Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.
    6. Document Critical Activities Only
    The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes it is the interaction between co-workers that necessitates documentation; sometimes it is the complexity or the diversity of the activity.
    7. Document Actual Practice
    The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.
    8. Minimize Documentation
    One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself: what is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.
    • What questions are the end users likely to have?
    • What level of detail is required?
    • Is any of this information extraneous to the document's purpose?
    Short, concise documents are user friendly and they are easier to keep up to date.
    9. Support Continuous Improvement
    Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.
    10. Keep It Simple
    Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simple by:
    • Minimizing the skills and training required
    • Following the established Tutor document format and layout
    • Avoiding technology just for technology's sake
    No other rule has as major an impact on the success of your internal documentation as: keep it simple.
    Learn More
    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.
    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • What are the canonical problem sets or problem domains for the different types of languages?

    - by SnOrfus
    I'm just wondering what some of the canonical problem sets are for certain types of languages. That is, fill in the blanks: __ is the perfect language to use for solving __. The question arose from reading or hearing people make statements like "such and such module in our codebase would be much easier to implement in a functional language." Feel free to include examples that would seem obvious to you.

    Read the article

  • c# class naming standards/guidelines

    - by Ben
    Over the years I've used various naming conventions for services in my applications, for example:
    [ClassName]Service
    [ClassName]Manager
    [ClassName]Factory
    [ClassName]Provider
    [ClassName]Helper
    I generally only use the "Helper" suffix for utility classes that have no external dependencies. However, I find that there is a bit of a cross-over between the others, and wondered if there are any recommendations/standards/guidelines on what to use and when?

    Read the article

  • Does the Huawei E398 LTE Modem work with a Netgear Router?

    - by Paul Smith
    I've been using a Netgear router, sharing the signal through my Huawei E160 dongle, but following the speed upgrade and given the device limit, it no longer satisfies the demand. Through research, I've found that the Huawei E398 is the latest functional dongle so far. I'm going to upgrade, but here is my confusion: is it compatible with my Netgear router? Will it work? Any information will be appreciated. I found the unlocked E398 here.

    Read the article

  • Software installation problem

    - by user91155
    I just installed Ubuntu today and went to the Ubuntu Software Center to install Adobe Flash Player. When the install button is pressed, I receive the following message:
      Package dependencies cannot be resolved
      This error could be caused by required additional software packages which are missing or not installable. Furthermore, there could be a conflict between software packages which are not allowed to be installed at the same time.
    Please help me.

    Read the article

  • How to install Monitorix?

    - by blade19899
    I followed a tutorial at unixmen on how to install Monitorix on Ubuntu. I installed the dependencies and downloaded the monitorix_2.5.0-izzy1_all.deb package, but when I try to install it I get this error (Google-translated; I have a Dutch Ubuntu, so the error message was in Dutch):
      dpkg: error processing monitorix_2.5.0-izzy1_all.deb (--install):
        parsing file '/var/lib/dpkg/tmp.ci/control' near line 14 package 'monitorix':
        blank line in the value of field 'Description'
      Errors were encountered while processing:
        monitorix_2.5.0-izzy1_all.deb
    My question is: how do I get Monitorix up and running on Ubuntu 11.10 amd64?
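    The message points at a malformed control file inside the .deb itself. One hedged workaround (a sketch; the directory name is illustrative) is to unpack the package, delete the offending blank line from the Description field, and repack:

      # Unpack the package, including its DEBIAN/ control files
      dpkg-deb -R monitorix_2.5.0-izzy1_all.deb monitorix-fixed
      # Edit DEBIAN/control and remove the blank line inside the Description field
      nano monitorix-fixed/DEBIAN/control
      # Rebuild the package and install the repaired version
      dpkg-deb -b monitorix-fixed monitorix-fixed.deb
      sudo dpkg -i monitorix-fixed.deb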

    Read the article

  • How do you avoid being a "blowhard"?

    - by Conrad Frix
    When I'm passionate about something (particularly programming), I find it really easy to come off like the guy Peter G. was talking about in "Dealing with the programming blowhard". So what techniques do you use to 1) identify when you are indeed being a blowhard, and 2) communicate something "important" without seeming self-important? Specific examples help, like: when giving criticism, ask "have you considered what happens when XXX changes?" instead of "never take dependencies on implementation details"; when giving advice, showing with code is better than talking, or use a reference.

    Read the article

  • Three Highlights For Your Medical Practice Website

    Medical practice websites can provide many benefits for your medical practice promotion efforts, depending on what they say and how they appear. That being the situation, an increasing trend among doctors' practices is to promote themselves via a website on the internet. A well-designed and functional website gives a practice an online vehicle where it can promote the physicians' skills, as well as advertise their services.

    Read the article

  • ASP.NET web-application example for newbies

    - by A-Cube
    I want to learn ASP.NET web-application development by example, from an already developed web application that works well as a tutorial for newbies: a fully functional application that is small, but substantial enough to teach the development effort required. I am looking for an application that is built using software engineering principles, and not just code written haphazardly.

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon
    Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be "figured out," accommodated and engaged - at a safe distance, of course. At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential. The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve. From our frustrations with empty careers that did not fulfill us, from our opposition to "the man," from our sharp memories of our parents' toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better.
    One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible. They are the "Experiential Employee," with a passion for growing in diverse ways and expanding personal and professional horizons. Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world. How much richer is the organization that nurtures and leverages this inclination? Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.
    Accelerating Professional Development
    In spite of our Boomer grumblings about Millennials' "unrealistic" expectations, the truth is that we have a well-matched set of circumstances. We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years. Since we need to rapidly skill up these heirs to the corporate kingdom, isn't it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations? So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development. We have already evolved from classroom-based models to diverse instructional methods. The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.
    Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes. This might take the form of two or more employees owning aspects of what once fell under a single role. While one might argue this would mean duplication of resources, it could be a short-term cost while employees come up to speed. And the potential benefits would be numerous: leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson, R., 1989, 1999). This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention.
    Alternately, short-term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit. Again, a short-term cost, certainly. However, the gains realized in shortening the learning curve and strengthening engagement are substantial and lasting. Ultimately, creative thinking needs to be applied in each organization on how to accelerate the capabilities of our future leaders in unique ways that mesh with the current culture. The manner in which performance is evaluated must finally shift as well. Employees will need to be assessed on how well they have developed key skills and capabilities versus end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies versus tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change.
    Revisiting Succession
    For many years, organizations have failed to deliver desired succession planning outcomes. According to CEB's 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes. We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well. Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination. Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping. And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee's level of learning agility.
    While this may seem daunting, it is necessary and long overdue. We have talked about the criticality of competency-based succession; however, we have not lived up to our own rhetoric. Many Boomers have experienced the same frustration in our careers: knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements. These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things. How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified?
    Realizing Diversity
    Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity. Millennial employees assume a diverse workforce, and are startled by anything less. Their social tolerance, nurtured by wide and diverse networks, is unprecedented. College graduates expect a similar landscape in the "real world" to what they experienced throughout their lives. They appreciate and seek out divergent points of view and experiences without needing any persuasion. The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion. This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations.
    Future Perfect
    The Experiential Employee is operating more as a free agent than a long-term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest. As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that. Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let's support and revel in the future that we've helped invent, rather than lament what we think has been lost. After all, the future is always connected to the past. And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico: "Nothing is Lost, Nothing is Created, and Everything is Transformed."
    Christine has over 25 years of diverse HR experience. She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm successfully acquired by Intel. Christine is a resource to Oracle clients, assisting in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle.

    Read the article

  • Can't Install Wine, held Packages

    - by Joao Seara
    I am getting this error:
      The following packages have unmet dependencies:
       udev : Depends: libudev1 (= 204-5ubuntu20.2) but 204-5ubuntu20.4 is to be installed
      E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    I've tried apt-get dist-upgrade and I got the same message:
      The following packages have been kept back:
        libudev1 udev
      0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
    I am using 64-bit Ubuntu 14.04. Thank you, guys!
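    A hedged sketch of the usual remedy (assuming the held libudev1 simply needs to move in lock-step with udev, and no third-party source is pinning either package):

      # Refresh lists, then upgrade the held pair together so the versions match
      sudo apt-get update
      sudo apt-get install libudev1 udev
      # If they remain kept back, dist-upgrade is allowed to change dependencies
      sudo apt-get dist-upgrade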

    Read the article

  • Macaw DualLayout for SharePoint 2010 WCM released!

    - by svdoever
    A few months ago I wrote a blog post about the DualLayout component we developed for SharePoint Server 2010 WCM. DualLayout enables advanced web design on SharePoint WCM sites. See the blog post "DualLayout - Complete HTML freedom in SharePoint Publishing sites!" for background information. DualLayout is now available for download. Check out DualLayout for SharePoint 2010 WCM and download your fully functional trial copy! Enjoy the freedom!

    Read the article

  • Titanium SDK gives a "file too short" error

    - by Dananjaya
    I'm using Ubuntu 11.04 and recently installed Appcelerator Titanium Studio version 1.7. When I load up a demo project to run, I get an error like this:
      Couldn't load file:/home/dananjaya/.titanium/runtime/linux/1.1.0/libkhost.so, error:
      /home/dananjaya/.titanium/runtime/linux/1.1.0/libwebkittitanium-1.0.so.2: file too short
    Am I missing some dependencies here, or is it a bug in the application? Thanks in advance.
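    A "file too short" error on a shared library usually means the file is truncated, often from an interrupted download, rather than a missing dependency. A hedged guess at a fix (a sketch; the paths come from the error message itself) is to move the cached runtime aside so it is fetched again:

      # Check whether the library really is truncated (a healthy .so is not a few bytes)
      ls -l ~/.titanium/runtime/linux/1.1.0/libwebkittitanium-1.0.so.2
      # Move the suspect runtime aside so Titanium Studio re-downloads it on next launch
      mv ~/.titanium/runtime/linux/1.1.0 ~/.titanium/runtime/linux/1.1.0.bak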

    Read the article

  • How would you rank these programming skills in order of learning them? [closed]

    - by mumtaz
    As a general-purpose programmer, what should you learn first and what should you learn later on? Here are some skills I wonder about:
    • SQL
    • Regular Expressions
    • Multi-threading / Concurrency
    • Functional Programming
    • Graphics
    • Mastery of your primary programming language's syntax/semantics/feature set
    • Mastery of your base class framework libraries
    • Version Control Systems
    • Unit Testing
    • XML
    Do you know other important ones? Please specify them. On which skills should I focus first?

    Read the article

  • White Paper: Internet Explorer 8 and the Security Development Lifecycle

    Creating a functional and more secure Web browser is a tremendous challenge that all browser vendors face. Learn how Microsoft has confronted this challenge by proactively embedding security into every stage of the Windows Internet Explorer 8 software engineering process with the Security Development Lifecycle (SDL).

    Read the article

  • Working with Legacy code #3 : Build a safety net.

    - by andrewstopford
    The first port of call in changing legacy code is a safety net; without one, your fingers will get burnt. Make your safety net a high-level functional test over the major areas of the application. Automate the test, plug it into your CI builds, and run it every night. The test should act as a final fail-safe as you work.
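    As a rough illustration of such a safety net (a sketch only; the URLs and the NUnit wiring are my own assumptions, not taken from the post), a high-level functional test might simply drive the application's major areas and fail loudly if any stops responding:

      using System.Net;
      using NUnit.Framework;

      // Coarse-grained safety-net test: hit the major areas of the legacy
      // application and fail if any of them stops answering with 200 OK.
      [TestFixture]
      public class SafetyNetSmokeTests
      {
          // Hypothetical entry points for the application under test
          private static readonly string[] MajorAreas =
          {
              "http://localhost:8080/orders",
              "http://localhost:8080/customers",
              "http://localhost:8080/reports"
          };

          [Test]
          public void MajorAreasStillRespond()
          {
              foreach (var url in MajorAreas)
              {
                  var request = (HttpWebRequest)WebRequest.Create(url);
                  // GetResponse throws on network failures and most HTTP errors,
                  // which is exactly the loud failure a safety net should give.
                  using (var response = (HttpWebResponse)request.GetResponse())
                  {
                      Assert.AreEqual(HttpStatusCode.OK, response.StatusCode,
                          "Safety net tripped: " + url + " is no longer healthy");
                  }
              }
          }
      }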

    Read the article

  • What are the tools required to build a compiler?

    - by kevin
    What are the various tools required to build a compiler for a particular programming language, say C? I want to know how each part of the compiler works. So I am trying to take all the existing tools, like the loader, linker, etc., and combine them together to build one compiler (or, you could just say, "compiling a compiler"). Can anyone list all the tools required to build a fully functional one?
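    Not an answer from the question itself, but as a concrete illustration of the cooperating tools involved, a C compiler driver exposes each stage separately (these are standard gcc flags):

      # Preprocessor: expands #include directives and macros
      gcc -E hello.c -o hello.i
      # Compiler proper: translates preprocessed C into assembly
      gcc -S hello.i -o hello.s
      # Assembler (as): turns assembly into an object file
      gcc -c hello.s -o hello.o
      # Linker (ld): combines object files and libraries into an executable,
      # which the OS loader then maps into memory at run time
      gcc hello.o -o hello
      ./hello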

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham. The book is available from Manning. First off, let me say that I'm a huge fan of Manning as a publisher. I've found their books to be top-quality, overall. As a Kindle owner, I also appreciate getting an ebook copy along with the dead-tree copy. I find ebooks to be much more convenient to read, but hard copies are easier to reference.
    The book covers, surprisingly enough, working with brownfield applications. Which is well and good, if that term has meaning to you. It didn't for me. Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy. Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline. Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two, and the authors argue, quite effectively, that it is the most likely state for an application to be in. Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with. Although I hadn't realized it, most of the code I've worked on has been brownfield. Sometimes, there's talk of scrapping and starting over. Sometimes, the team dismisses increased discipline as ivory tower nonsense. And, sometimes, I've been the ignorant culprit vexing my future self.
    The book is broken into two major sections, plus an introduction chapter and an appendix. The first section covers what the authors refer to as "The Ecosystem," which consists of version control, build and integration, testing, metrics, and defect management. The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base.
    The ecosystem section is just shy of 140 pages long and brings some real meat to the matter. The focus on "pain points" immediately sets the tone as problem-solution, rather than academic. The authors also approach some of the topics from a different angle than some essays I've read on similar topics. For example, the chapter on automated testing is on just that -- automated testing. It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better. The discussion on testing is more focused on the "right" level of testing for existing projects. Sometimes, an integration test is the best you can do without gutting a section of functional code. Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works. This isn't to say the authors encourage sloppy coding. Far from it. Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf.
    The other sections take a similarly real-world, workable approach to the pain points they address. As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count. While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects. The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged.
    I have to admit that I started getting bored by the end of the ecosystem section. No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head. Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor. It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process. That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development. This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be.
    The code section, though, kept me engaged for its entirety. Many technical books can be used as reference material from day one. The authors were clear, however, that this book is not one of these. The first chapter of the section (chapter seven, overall) addresses object-oriented (OO) practices. I've read any number of definitions, discussions, and treatises on OO. None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind.
    The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object-oriented. That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins. Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas. That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm. This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care. Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP.
    This book goes beyond just theory, or even real-world application. It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild. First, the authors address refactoring application layers and internal dependencies. Then, they take you through those layers from the UI to the data access layer and external dependencies. Finally, they come full circle to tie it all back to the overall process. By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure.
    Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools. The "Our .NET Toolbox" section is something of a neon sign pointing to that latter point. They do not beat the reader over the head with anything resembling a "One True Way" mentality. Even for the most emphatic points, the tone is quite congenial and helpful. With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book. Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server. For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop as well.

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases
    As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract. SRUs primarily deliver bug fixes. They may also deliver low-risk feature enhancements.
    Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized. They also contain a significant number of new bug fixes, for issues found internally in Oracle and for complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release.
    Changes to SRU and Update Naming Conventions
    We're changing the naming convention of Update releases from a date-based format, such as Oracle Solaris 10 8/11, to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with those conventions.
    No Blackout Periods on Bug Fix Releases
    The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries. This made for a very simple update message: that which releases later is always a superset of that which was delivered previously. However, it had a downside. Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update. That wasn't good for customers awaiting critical bug fixes.
    We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*. That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows:
      Updates:  11.0                                      11.1                              11.2    etc.
                    \                                         \                                 \
      SRUs:        11.0.1, 11.0.2, ..., 11.0.12, 11.0.13,    11.1.1, 11.1.2, ..., 11.1.x,    11.2.1, etc.
    For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e. SRU1 after Solaris 11.1) or later, rather than to the Solaris 11.1 release itself. This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example, to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533
    For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU, as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort.
    *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based. Their dependencies will, however, effectively pull in the Update content. Customers maintaining a local Repo (e.g. behind their firewall) need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved. Both will be available from the standard Support Repo and from MOS. This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.
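    As a rough illustration of checking and following this path with the standard Solaris 11 IPS tooling (the commands are standard pkg(1) usage, but treat the exact invocation as a sketch rather than official update instructions):

      # The 'entire' incorporation identifies the installed Update/SRU level
      pkg info entire
      # Update to the newest content available from the configured publisher,
      # creating a named boot environment as a rollback point
      sudo pkg update --be-name solaris-post-sru --accept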

    Read the article
