Search Results

Search found 10538 results on 422 pages for 'push technology'.


  • Mercurial hook fails on Windows

    - by Nick Hodges
    I am trying to use the headcount hook (https://bitbucket.org/dgc/headcount/overview) with my main develop repository. I pulled the code, placed it in C:\Python26\Lib\site-packages, and made the following entries in my hgrc file, all as per the install instructions:

        [hooks]
        pretxnchangegroup.headcount = python:headcount.headcount.hook

        [headcount]
        push_ok = *
        commit_ok = *
        warnmsg = %(headcount)d new heads detected. You may not push new heads to this repository.
        debug = False

    I then cloned the repository, created a branch, committed a change to that branch, and issued hg push -f as a test. However, this fails with:

        C:\junk\htmlwriter>hg push -f
        pushing to c:\code\htmlwriter
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        transaction abort!
        rollback completed
        abort: pretxnchangegroup.headcount hook is invalid (import of "headcount.headcount" failed)

    I then ran this:

        C:\Python26>python c:\Python26\Lib\site-packages\headcount\headcount.py
        Traceback (most recent call last):
          File "c:\Python26\Lib\site-packages\headcount\headcount.py", line 2, in <module>
            import mercurial.node
        ImportError: No module named mercurial.node

    I'm far from a Python expert, so can someone help me figure out how to get the headcount hook to run inside my Mercurial environment? Details: Windows 7, Mercurial 1.7.2, TortoiseHg 1.1.7.
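    For what it's worth, the standalone traceback above suggests the C:\Python26 interpreter cannot see Mercurial's own modules at all, so anything that interpreter runs will fail the same way. A minimal sanity check, assuming Mercurial was installed as a Python package for that interpreter rather than only via the TortoiseHg bundle, is:

        # check_mercurial.py -- run with the exact interpreter that executes the hook
        import sys
        print sys.executable        # confirm which Python this actually is

        import mercurial.node       # the import that headcount.py dies on
        print "mercurial.node found at", mercurial.node.__file__

    If the import fails here too, the fix is to make the mercurial package importable from that Python (or point the hook at the Python that ships with Mercurial), rather than anything in headcount itself.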


  • jQuery "Autocomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of the displayed strings is a little bit messed up.

    Example 1: the array of strings is in alphabetical order, except for "General Knowledge", which has been pushed to the top:

        General Knowledge,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

        General Knowledge,Geography,Art and Design,Business Studies,Citizenship,Design and Technology,English,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Note that "Geography" has been moved up to be the second item, after "General Knowledge". The rest are all fine.

    Example 2: as above, but with "Cross-curricular" instead of "General Knowledge":

        Cross-curricular,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

        Cross-curricular,Citizenship,Art and Design,Business Studies,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Here, "Citizenship" has been pushed to the number 2 position. I've experimented a little, and it looks as if there is a bug along the lines of "put items that start with the same letter as the first item directly after the first item, and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, but everywhere I can see, it's using the correct order; it only seems to go wrong when the list is rendered. Any ideas, anyone? max
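    In case it helps anyone reproduce this, here is the kind of minimal harness I've been using to compare the order going in with the order being rendered (a sketch: it assumes the plugin's default results container class, ac_results, and a text input with id="subject"):

        // Compare input order vs rendered order (sketch, plugin defaults assumed)
        var subjects = "General Knowledge,Art and Design,Business Studies,Citizenship,Geography".split(",");
        $("#subject").autocomplete(subjects);   // Jorn Zaefferer's plugin, local data

        $("#subject").keyup(function () {
            var rendered = $(".ac_results li").map(function () {
                return $(this).text();
            }).get();
            console.log("in : " + subjects.join("|"));
            console.log("out: " + rendered.join("|"));
        });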


  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years, and enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due-diligence processes performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstrations of basic software functionality. I see a lot of attention paid to the cost of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance: the ability of a vendor to grow, expand, develop, and bring its customers along with it.

    So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has run at almost $3 billion in organic product development, and Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at its tangible track record of enhancement. Consider the Oracle PeopleSoft enterprise business platform in the six years since Oracle acquired it:

    - Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 and 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.
    - Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360-degree view of the constituent is now delivered, and the concept of a single-stop Student Services Center is in CRM 9.1, with tight integration to PeopleSoft Campus Solutions.
    - Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing, Time & Labor, and Payroll serves the labor-distribution requirements of Grants / Sponsored Research.
    - Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in Profile Manager, with all-new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered, configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.
    - Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an organization-chart view, business goals can flow down to become tangible objectives per employee.
    - Succession Planning / Workforce Development: New in HCM 9.0, and enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning (identifying career needs, plans, preferences, and interests), which ultimately increases the integrity of the succession planning process.
    - Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance, delivering 9 dashboards and over 200 reports. It gives HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.
    - Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed real-time budget checking provides an updated snapshot of budget and encumbrances at any given time.
    - Position Budgeting with Hyperion: Hyperion Planning now includes delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.
    - Web 2.0 usability features: PeopleSoft 9.1 offers a contemporary internet user experience: partial-page refreshing, drag-and-drop pagelets, a new menu structure, navigation pagelets, modal pop-up message windows, favorites and recently used links, type-ahead, and drag-and-drop grid columns with pop-out grids.
    - Portal Workspaces: Enterprise 2.0 for collaborative web communities, using new content management along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.
    - PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure, including BI (XML) Publisher and Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, and Oracle SOA Suite.
    - Hosting for PeopleSoft applications: A solid new deployment option: the Oracle On Demand remote hosting center, for high scalability, security, and continuity of operations.
    - Business Process Outsourcing (BPO) for HCM / Payroll functions: A partnership with AT&T provides hosting of the HR/Payroll application along with payroll business-process operations and subscription-based service fees (SaaS). The AT&T BPO full service includes pay sheet processing, bank and third-party file transfer, payroll tax handling, and more.
    - Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.
    - Golden person data model across all campus applications: The Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, and CRM.

    Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, with new functionality under development for a new PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned annually in Feature Packs: PeopleSoft just delivered the HCM 2010 Feature Pack, and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases.

    For those of you near New York City: the investment and development story described above is the subject of an Oracle road-show event on February 9, 2011. "Charting Your Course with Oracle Applications" is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry.

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.


  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I'd thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren't for a deranged laptop, and my distraction, the code wouldn't have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again.

    The new code I hastily tapped in was much better: I'd held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing their code and having to rewrite it from scratch. If you've never accidentally lost your code, then it is worth doing it deliberately once for the experience.

    Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo's obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn't quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia lost the entire 250,000-word manuscript of 'The Seven Pillars of Wisdom' by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work, rather than casting it dramatically into the bin, or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I was supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you've probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to rewrite the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant, universally applicable input-form routine wasn't quite so elegant, but it didn't really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. But by dawn on Monday, the application passed its integration tests.

    The developers rose to the occasion after I'd collapsed, and tidied up what I'd done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they'd seen, and the user interface had a new, rather Spartan appearance that we swore was done to conform to the latest user-interface guidelines. (We switched to the Helvetica font to look more 'Bauhaus'.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the same mistake with Arago and Quattro Pro, and Netscape's complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was the result of extreme circumstances where no other course of action seemed possible; the rewrite didn't come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn't do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.

    I'll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite, buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the belief of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if he charges the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I'm referring to quite different circumstances here: where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until the point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge, and the trouble is that source-control systems, and disaster-recovery systems, are just too good nowadays.

    If I were to lose this draft of this very blog post, I know I'd rewrite it much better. However, if you read this, you'll know I didn't have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the 'source-control wet-work', where one hires a malicious hacker in some wild eastern country to break into one's own source-control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source-control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? Alas, I can't see many managers buying into the idea.

    Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn't the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the 'source-control wet-job'? It is very hard nowadays to do a rapid U-turn in a development project, because we are far too prone to cling to our existing source code.


  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green

    If you're starting to think "mobility" is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphone numbers will triple to 5.6 billion globally by 2019. It should be no surprise: smartphone adoption is reaching the farthest corners of the globe, and the impact of enterprise applications enabled by these devices is driving business-performance improvement and will continue to do so.

    Companies using advanced workforce analytics can add significantly to the bottom line, while improving customer satisfaction, quality, and productivity. It's a statement that makes most business leaders sit forward in their chairs: achieving those three standards is like sipping the golden elixir for the business world, and no one would argue their importance. So what are "advanced workforce analytics"? Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as "the consumerisation of IT". As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience, the net result of which is happier customers.

    The obvious benefits, and the lesser-realised impact

    Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is less well known. In actuality, mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source: HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks, every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try to retain as much "perceived control" as possible.

    The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making; Google calls it the Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change are much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention are higher, and real-time feedback is constant. The management team can save time, energy, and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers, and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships.

    IT and HR: partners and stewards of mobility

    One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The reason for the shift is largely the "bring your own device" (BYOD) movement, which is transitioning to a "bring your own application" (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues, and concerns around information security and personal privacy, on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after an initial perusal, enterprise applications are relied upon by functional groups, not least by HR, to enhance people management. It requires a systematic approach across all applications in use within the enterprise to ensure they're used to best effect.

    No turning back, and no desire to

    With real-time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. As a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily: once an arduous task, compliance is simpler with mobile-enabled automation and quality data. Their world has changed for the better. For the CIO, mobility also helps optimise performance. While it doesn't come without challenges, mobile-enabled applications, and the native experience users have with them, mean employees don't need high-level technical expertise. This reduces the training and engagement required from the IT team, so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes.

    The rewards of a mobile enterprise outweigh the risks

    With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been slower to adopt new technology, are also offering support for local businesses to go mobile. Several state government websites have advice on how to create mobile apps and more, and as recently as last week the Victorian Minister for Technology, Gordon Rich-Phillips, unveiled his state government's ICT roadmap for the next two years, which details increased use of the public cloud, as well as mobile communications and improved access to online datasets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops.

    Aaron is the Vice President of HCM Strategy at Oracle Corporation, where he is responsible for researching and identifying emerging trends in the practice of Human Resources, and works to deliver industry-leading technology solutions. His other responsibilities include ownership of Oracle's innovative HCM solutions across JAPAC, and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen


  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series.

    Right-Time Revolution

    Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, a diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? You wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day.

    Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn't appropriate for all integrations. Think about what a completely real-time retailer would look like: a consumer grabs toothpaste off the shelf, and all systems are immediately notified, so the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost-prohibitive, it's also inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That's what "right-time retail" means.

    Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, three major technology advancements have moved us closer to real-time systems.

    Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail, we no longer had to mail the daily business report back to corporate each day, because the dial-up modem could transfer the data. That was soon replaced with trickle-polling, in which sales transactions were occasionally sent from stores to corporate throughout the day, often over VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by huge volumes of data: Nordstrom has 150 million SKU/store combinations when planning (RPAS), The Gap generates 110 million price changes during end-of-season (RPM), and Argos executes 1.78 billion calculations each day for replenishment planning (AIP).

    Those areas are now being alleviated by the second technology: storage. The typical laptop disk drive runs at 5,400 rpm, with PCs stepping up to 7,200 rpm and servers hitting 15,000 rpm. But the platters can only spin so fast, so to squeeze out more performance we've had to rely on things like disk striping. Then solid-state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you'd go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston's Fenway would hold 370,000 fans. The Exa family enables processing more data in less time.

    So, with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won't be able to take advantage of the faster networks and storage. Enter what Harvard calls "the sexiest job of the 21st century": the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I'll save that for Part 3 and instead begin with improved integrations for the assets you already have in Part 2.


  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
    By Mark Bennett, originally featured in Talent Management Excellence

    The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. On the one hand, we read about social media embarrassments happening to organizations; on the other, we see that the social media activities of workers and candidates can enhance a company's brand and provide insight into which individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations get the most value out of the activities and presence of workers and candidates, while also helping to manage the risks that come with the permanence and viral nature of social media.

    What is Missing from Understanding Our Workforce?

    "If only HP knew what HP knows, we would be three times more productive." - Lew Platt, former Chairman, President, and CEO, Hewlett-Packard

    What Lew Platt recognized was that organizations have only a partial understanding of what their workforce is capable of. This lack of understanding hurts the company in several ways:

    1. A particular skill that the company needs in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need goes unfulfilled.
    2. As market conditions change rapidly, the company needs to know its strategic options, but some options are missed entirely because the company doesn't know that sufficient capability already exists to enable them.
    3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value for the company.

    Why don't companies have a more complete picture of their workforce capabilities; that is, why don't they know what they know? One very good explanation is that companies put most of their effort into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals. In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid to whether the candidate has the qualifications, the skills, the experience, and the cultural fit to be successful in the role. This makes a lot of sense. In the performance appraisal process, an employee is measured on how well they performed the functions of their role and, in an effort to help the employee do even better next time, on their proficiency in the competencies deemed key to doing that job. Again, the logic is impeccable. But in both these cases, two adages come to mind:

    1. What gets measured is what gets managed.
    2. You only see what you are looking for.

    In other words, because the roles the workforce currently performs are the basis for measuring its capabilities, those become the only capabilities measured. What was initially meant to be a positive (identify what is needed to perform well, and measure it so it can be managed) comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also carries an employee-engagement price, for the measurement and management of workforce capabilities typically focuses on where the workforce comes up short. Again, it makes sense to do this, since improving a capability that appears to result in improved performance benefits both the individual, through improved performance ratings, and the company, through improved productivity. But this is based on the assumption that the capabilities identified, and their required proficiencies, are the only attributes of the individual that matter. Anything else the individual brings that results in high performance often goes unrecognized, or underappreciated at best.

    As social media occupies a more important part in current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus, and social responsibility; and coordinate and collaborate with partners. These measures should demonstrate the "social capital" the individual has invested in and developed over time. Without this dimension, "shortcut" methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization.

    How Workforce Reputation Management Helps HR Harness Social Media

    With hundreds of petabytes of social media data flowing across Facebook, LinkedIn, and Twitter, businesses are tapping technology solutions to leverage social media effectively for HR. Workforce reputation management technology helps organizations discover, mobilize, and retain talent by providing insight into the social reputation and influence of the workforce, while also helping organizations monitor employee social media policy compliance and mitigate social media risk. There are three major ways workforce reputation management technology can play a strategic role in supporting HR:

    1. Improve awareness and decisions on talent. Many organizations measure the skills and competencies they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. And does your workforce have the reputation and influence to make those skills and competencies more effective? Many organizations have no insight into the social media "reach" their workforce has, which is becoming more critical to business performance. These features help organizations, managers, and employees improve talent processes and decision making, including the following:

    - Hiring and assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with a high reputation who refers a candidate also has high credibility as a source for hires.
    - Training and development. Reputation trend analysis can inform decisions about training offerings by showing how reputation and influence across the workforce change in concert with training. Worker reputation informs development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.
    - Finding hidden talent. Managers can discover hidden talent and skills among employees based on a combination of social profile information and social media reputation, and employees can improve their personal brand and accelerate their career development.

    2. Talent search and discovery. The right technology helps organizations find information on people that might otherwise be hidden. By leveraging access to candidate and worker social profiles, as well as their social relationships, workforce reputation management gives companies a more complete picture of their knowledge, skills, and attributes, and of what they can in turn access. This more complete information helps find the right talent, both outside the organization and the right, perhaps previously hidden, talent within it, to fill roles and staff projects, particularly those required in reaction to fast-changing opportunities and circumstances.

    3. Reputation brings credibility. Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities, across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can inform critical decision making. Knowing an individual's reputation and influence enables the organization to predict how well their capabilities and behaviors will contribute to desired business outcomes. Many of the roles that have the highest impact on overall business performance depend on the individual's influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing insight that helps them develop themselves and their careers, and letting them see the effectiveness of those efforts by tracking changes in their reputation and influence over time.

    The following are examples of the reputation and influence measures that workforce reputation management could gather and analyze:

    - Generosity: how often the user reposts others' posts.
    - Influence: how often the user's material is reposted by others.
    - Engagement: the ratio of recent posts with references (e.g. links to other posts) to the total number of posts.
    - Activity: how frequently the user posts (e.g. number per day).
    - Impact: the size of the user's social networks, which indicates their ability to reach unique followers, friends, or users.
    - Clout: the number of references to, and citations of, the user's material in others' posts.

    The Vital Ingredient of Workforce Reputation Management: Employee Participation

    "Nothing about me, without me." - Valerie Billingham, "Through the Patient's Eyes", Salzburg Seminar Session 356, 1998

    Since the data resides primarily in social media, a question arises: how is that data collected? While much social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), social norms have developed that put some restrictions on what is acceptable behavior, and by whom. Disregarding these norms risks a firestorm of repercussions. One of the more recognized norms is that while individuals can follow and engage with other individuals' public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted, without getting permission from the individual beforehand, the more likely the organization is to achieve the opposite of the outcome it desired. Instead, the organization must ask the individual for permission, which can be met with resistance. That resistance comes from not knowing how the information will be used or shared with others, and from not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes not feeling in control of information about themselves, or uncertainty about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing), and it is applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or very few, will grant it, resulting in a small set of data with little usefulness to the company.

    Connecting Individual Motivation to Organization Needs

    So what makes an individual decide to grant an organization access to the data it wants? It is when the individual's own motivations are in alignment with the organization's objectives. In the case of workforce reputation management, when the individual is motivated, by a desire for increased visibility and career-growth opportunities, to advertise their skills and their level of influence and reputation, they are aligned with the organization's objectives: to fill resource needs, or to strategically build better awareness of what skills, influence, and reputation are present in the workforce. Individuals can come to see the benefit of granting access permission through several means. One is simple social awareness: they begin to discover that the peers who are getting more career opportunities are those who have signed up for workforce reputation management. Another is for companies to take the message directly to the individual: we think you would benefit from signing up with our workforce reputation management solution. A third, more strategic approach is to make reputation management part of a larger career development effort by the company, providing a wide set of tools to help the workforce plan and take action to achieve their career aspirations in the organization. An effective mechanism for connecting the visibility and career-growth motivations of the workforce with the larger context of the organization's business objectives is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation, because the workforce is more apt to see how it fits into achieving their overall career goals, as well as how participation brings additional benefits.

    Once individuals have signed up with reputation management, not only have they made themselves more visible within the organization and increased their career-growth opportunities, they have also enabled a tool they can use to better understand how their actions and behaviors affect their influence and reputation. Since they can see their reputation and influence measurements change over time, they gain better insight into how reputation and influence affect their effectiveness in a role, and how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to improve their ability to perform at a higher level and become more productive. The increased sense of autonomy the individual experiences, in linking the insight they gain to the actions and behavior changes they make, greatly enhances their engagement with their role, as well as their career prospects within the company.

    Workforce reputation management takes the wide range of disparate data about the workforce produced across various social media platforms and transforms it into accessible, relevant, and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand, and business, and workforce reputation management can help unlock them. Imagine: if you could find the hidden secrets of your business, how much more productive and efficient would your organization be?

    Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as helping individuals work effectively to achieve their goals and navigate their own growth. His combination of a deep technical background in software design and development, coupled with broad knowledge of business challenges and thinking in today's globalized, rapidly changing, technology-accelerated economy, has enabled him to identify and incorporate key innovations that are central to Oracle Fusion's unique value proposition. Mark has, over the course of his career, been in charge of the design, development, and strategy of Talent Management products, and of the design and development of cutting-edge software that is better equipped to handle the increasingly complex demands of users while remaining easy to use. Follow him @mpbennett


  • ! Extra }, or forgotten \endgroup. latex

    - by gzou
    Hey, I've run into a LaTeX formatting problem; can anyone offer some help? Here is the .tex file:

        \begin{table}{}
        \renewcommand{\arraystretch}{1.1}
        \caption{Cambridge Flow feature definition and description}
        \label{cambridge-feature}}
        \centering
        \begin{tabular}{|c|c|}
        \hline\bfseries Abbreviation &\bfseries Description\\
        \hline serv-port & Server port\\
        \hline clnt-port & Client port\\
        \hline push-pkts-serv & count of all packets with\\
        & push bit set in TCP header (server to client)\\
        \hline init-win-bytes-clnt & the total number of bytes\\
        & sent in initial window (client to server)\\
        \hline init-win-bytes-serv & the total number of bytes sent\\
        & in initial window (server to client)\\
        \hline avg-seg-size-clnt & average segment size:\\
        & data bytes divided by number of packets\\
        \hline IP-bytes-med-clnt & median of total bytes in IP packet\\
        \hline act-data-pkt-serv & count of packets with at least one byte\\
        & of TCP data payload (server to client)\\
        \hline data-bytes-var-clnt & variance of total\\
        & bytes in packets (client to server)\\
        \hline min-seg-size-serv & minimum segment size\\
        & observed (server to client)\\
        \hline RTT-samples-serv & total number of RTT samples\\
        & found (server to client),\\
        & {\bf see also \cite{Moore05discriminators}}\\
        \hline push-pkts-clnt & count of all packets with push bit set\\
        & in TCP header (server to client)\\
        \hline
        \end{tabular}
        \end{table}

    And the error message:

        ! Extra }, or forgotten \endgroup.
        \@endfloatbox ...pagefalse \outer@nobreak \egroup \color@endbox
        l.892 \end{table}
        I've deleted a group-closing symbol because it seems to be spurious,
        as in `$x}$'.  But perhaps the } is legitimate and you forgot
        something else, as in `\hbox{$x}'.  In such cases the way to recover is to
        insert both the forgotten and the deleted material, e.g., by typing `I$}'.

    There is no $ in my table, the { appear to match the }, and the error remains even after I comment out the citation. Can anyone offer help? I really appreciate all the comments!
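    For what it's worth, the two stray braces near the top of the float look like the likely culprits: \begin{table}{} leaves a spurious empty group after the environment's begin, and \label{cambridge-feature}} has an unmatched } that closes the float box early, which is exactly what "! Extra }, or forgotten \endgroup." at \end{table} complains about. A corrected opening (a sketch of just the changed lines, untested against the full preamble) would be:

        \begin{table}[htbp]   % placement options go in [], not an empty {}
        \renewcommand{\arraystretch}{1.1}
        \caption{Cambridge Flow feature definition and description}
        \label{cambridge-feature}   % no trailing extra brace here
        \centering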

    Read the article

  • How can you do Co-routines using C#?

    - by WeNeedAnswers
    In Python the yield keyword can be used in both push and pull contexts. I know how to do the pull context in C#, but how would I achieve the push? Here is the Python code I am trying to replicate in C#:

        def coroutine(func):
            def start(*args, **kwargs):
                cr = func(*args, **kwargs)
                cr.next()
                return cr
            return start

        @coroutine
        def grep(pattern):
            print "Looking for %s" % pattern
            try:
                while True:
                    line = (yield)
                    if pattern in line:
                        print line,
            except GeneratorExit:
                print "Going away. Goodbye"
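    For what it's worth, a hedged C# sketch of the push side (all names are illustrative, not a standard API): C# iterators are pull-only, but you can fake send() by having the iterator body read from a field that the caller sets before resuming it, and prime the iterator in the constructor the way the @coroutine decorator primes with cr.next():

        using System;
        using System.Collections.Generic;

        class PushCoroutine
        {
            private string _current;                    // "mailbox" written by Send, read inside the body
            private readonly IEnumerator<object> _body; // the suspended coroutine body

            public PushCoroutine(string pattern)
            {
                _body = Grep(pattern);
                _body.MoveNext();   // prime it, like cr.next() in the Python decorator
            }

            public void Send(string line)
            {
                _current = line;
                _body.MoveNext();   // resume the body; it picks up _current
            }

            private IEnumerator<object> Grep(string pattern)
            {
                Console.WriteLine("Looking for " + pattern);
                while (true)
                {
                    yield return null;              // suspend until Send() resumes us
                    if (_current.Contains(pattern))
                        Console.WriteLine(_current);
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var grep = new PushCoroutine("python");
                grep.Send("yeah, but no, but yeah, but no");
                grep.Send("a series of tubes");
                grep.Send("python generators rock!");  // this one prints
            }
        }

    Send plays the role of Python's send(): each call drops a value into the mailbox and resumes the body at the yield.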

    Read the article

  • calling and killing a parent function with onmouseover and onmouseout events

    - by Zoolu
    I want to call the function upon onmouseover="ParentFunction();" and then kill it with onmouseout="killParent();". Note: in my code the parent function is called initiate(); and the killer function is called reset(), which lies outside the parent function at the bottom of the script. I don't know how to kill the initiate() function; my first guess was:

        var reset = function(){ return initiate(); };

    Here's my source code; any suggestions and help appreciated.

        <!doctype html>
        <html>
        <head>
        <title> function/event prototype </title>
        <link rel="stylesheet" type="text/css" href="styling.css" />
        </head>
        <body>
        <h2> <em>Fantastical place<br/>prototype</em> </h2>
        <div id="button-container">
          <div id="button-box">
            <button id="activate" onmouseover="initiate()" onmouseout="reset();" width="50px" height="50px" title="Activate"> </button>
          </div>
          <div id="text-box"> </div>
        </div>
        <div id="container">
          <canvas id="playground" width="200px" height="250px"> </canvas>
          <canvas id="face" width="400px" height="200px"> </canvas>
          <!-- <div id="clear"> </div> -->
        </div>
        <script>
        alert("Welcome, there are x entries as of" +""+new Date().getHours());

        //global scope
        var i=0;
        var c1 = []; //c is short for collect
        var c2 = [];
        var c3 = [];
        var c4 = [];
        var c5 = [];
        var c6 = [];

        var initiate = function(){ //the button that triggers the program
          var timer = setInterval(function(){clock()},90); //copy this block for ref.

          function clock(){
            i+=1;
            var a = Math.round(Math.random()*200);
            var b = Math.round(Math.random()*250);
            var c = Math.round(Math.random()*200);
            var d = Math.round(Math.random()*250);
            var e = Math.round(Math.random()*200);
            var f = Math.round(Math.random()*250);
            c1.push(a);
            c2.push(b);
            c3.push(c);
            c4.push(d);
            c5.push(e);
            c6.push(f);
            // document.write(i);
            var c = document.getElementById("playground");
            var ctx = c.getContext("2d");
            ctx.beginPath();
            ctx.moveTo(c3[i-2], c4[i-2]);
            ctx.bezierCurveTo(c1[i-2],c2[i-2],c5[i-2],c6[i-2],c3[i-1], c4[i-1]);
            // ctx.lineTo(c3[i-1], c4[i-1]);
            if(a<200){
              ctx.strokeStyle="#FF33CC";
            } else if(a<400){
              ctx.strokeStyle="#FF33aa";
            } else {
              ctx.strokeStyle="#FF3388";
            }
            ctx.stroke();
            document.getElementById("text-box").innerHTML=i+"<p>Thoughts.</p>";
            if(i===20){
              //alert("15 reached");
              clearInterval(timer); //to clearInterval must be using a global scoped variable.
              return;
            }
          }; //end of clock
          //setInterval(clock,150);

          var targetFace = document.getElementById("face");
          var face = targetFace.getContext("2d");
          var faceTimer = setInterval(function(){faceAnim()},80); //copy this block for ref. global scoped.

          function faceAnim(){
            face.beginPath();
            face.strokeStyle="#FF33CC";
            face.moveTo(100,104); //eye line
            face.bezierCurveTo(150,125,250,125,300,104);
            face.moveTo(200,1); //centre line
            face.lineTo(200,400);
            face.moveTo(125,111); //left eye lid
            face.bezierCurveTo(160,135,170,130,185,120);
            face.moveTo(150,116); //left eye
            face.bezierCurveTo(155,125,165,125,170,118);
            face.moveTo(275,111); //right eye lid
            face.bezierCurveTo(240,135,230,130,215,120);
            face.moveTo(250,116); //right eye
            face.bezierCurveTo(245,125,235,125,230,118);
            face.moveTo(195, 118); //left nose
            face.lineTo(190, 160);
            face.lineTo(200,170);
            face.moveTo(190,160); //left nostroll
            face.lineTo(180,160);
            face.lineTo(191,154);
            face.moveTo(180,160); //left lower nostrol
            face.lineTo(200,170);
            face.moveTo(205, 118); //right nose
            face.lineTo(210, 160);
            face.lineTo(200,170);
            face.moveTo(210,160); //right nostroll
            face.lineTo(220,160);
            face.lineTo(209,154);
            face.moveTo(220,160); //right lower nostrol
            face.lineTo(200,170);
            face.moveTo(200,140); //outer triad
            face.lineTo(170, 100);
            face.lineTo(230, 100);
            face.lineTo(200, 140);
            face.moveTo(200,145); //outer triad drop shadow
            face.lineTo(170, 100);
            face.lineTo(230, 100);
            face.lineTo(200, 145);
            face.moveTo(200,130); //inner triad
            face.lineTo(180, 105);
            face.lineTo(220, 105);
            face.lineTo(200, 130);
            //face.lineWidth =0.6;
            face.moveTo(280,111); //outer right eye lid
            face.bezierCurveTo(240,140,230,135,210,120);
            face.moveTo(120,111); //outer left eye lid
            face.bezierCurveTo(160,140,170,135,190,120);
            face.moveTo(162,174); //upper mouth line
            face.bezierCurveTo(170,180,230,180,238,174);
            face.moveTo(165,175); //mouth line bottom
            face.bezierCurveTo(190,Math.floor(Math.random()*25+180),210,Math.floor(Math.random()*25+180),235,175);
            face.moveTo(232,204); //head shape
            face.lineTo(340, 20);
            face.moveTo(168,204); //head shape
            face.lineTo(60, 20);
            face.stroke(); //execute all co-ords.
          }; //end of face anim

          var clearFace = function(){
            document.getElementById('face').getContext('2d').clearRect(0, 0, 700, 750);
          };
          setInterval(clearFace,90);
        }; //end of parent function

        var reset = function(){
          document.getElementById('playground').getContext('2d').clearRect(0, 0, 700, 750);
          //clearInterval(faceTimer);
          //delete initiate();
        };
        </script>
        </body>
        </html>
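    A function cannot literally be "killed", but the intervals it started can be. The deeper problem above is that timer and faceTimer are local to initiate(), so reset() can never see them. A hedged sketch of the essential move (function names as in the question; it assumes clock, faceAnim and clearFace are visible in the scope where the handles are kept):

        var timers = [];   // every interval handle initiate() creates lands here

        function initiate() {
            timers.push(setInterval(clock, 90));
            timers.push(setInterval(faceAnim, 80));
            timers.push(setInterval(clearFace, 90));
        }

        function reset() {
            while (timers.length) {
                clearInterval(timers.pop());   // stop each interval we started
            }
            document.getElementById('playground').getContext('2d').clearRect(0, 0, 700, 750);
            document.getElementById('face').getContext('2d').clearRect(0, 0, 700, 750);
        }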

    Read the article

  • Is this good code? Linked List Stack Implementation

    - by Quik Tester
    I have used the following code for a stack implementation. The top keeps track of the topmost node of the stack. Now since top is a data member of the node class, each node created will have a top member, which ideally we wouldn't want. Firstly, is this a good approach to coding? Secondly, will making top static make it better coding practice? Or should I have a global declaration of top?

        #include <iostream>
        using namespace std;

        class node
        {
            int data;
            node *top;
            node *link;
        public:
            node()
            {
                top=NULL;
                link=NULL;
            }
            void push(int x)
            {
                node *n=new node;
                n->data=x;
                n->link=top;
                top=n;
                cout<<"Pushed "<<n->data<<endl;
            }
            void pop()
            {
                node *n=new node;
                n=top;
                top=top->link;
                n->link=NULL;
                cout<<"Popped "<<n->data<<endl;
                delete n;
            }
            void print()
            {
                node *n=new node;
                n=top;
                while(n!=NULL)
                {
                    cout<<n->data<<endl;
                    n=n->link;
                }
                delete n;
            }
        };

        int main()
        {
            node stack;
            stack.push(5);
            stack.push(7);
            stack.push(9);
            stack.pop();
            stack.print();
        }

    Any other suggestions welcome. I have also seen code where there are two classes, where the second one has the top member. What about this? Thanks. :)
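    On the design question: giving every node its own top (and push/pop) is indeed the weak point, and the two-class split mentioned at the end is the usual answer. A hedged sketch of that layout, keeping the question's behaviour (one possible shape, not the only one); note it also removes the stray new node allocations, which leak in the original pop() and print():

        #include <iostream>
        using namespace std;

        class Node {
        public:
            int data;
            Node *link;
            Node(int x, Node *next) : data(x), link(next) {}
        };

        class Stack {
            Node *top;   // one top pointer per stack, not one per node
        public:
            Stack() : top(NULL) {}
            ~Stack() { while (top) pop(); }

            void push(int x) {
                top = new Node(x, top);
                cout << "Pushed " << x << endl;
            }

            void pop() {
                if (!top) return;        // guard against popping an empty stack
                Node *n = top;
                top = top->link;
                cout << "Popped " << n->data << endl;
                delete n;                // no stray allocations, no leaks
            }

            void print() {
                for (Node *n = top; n != NULL; n = n->link)
                    cout << n->data << endl;
            }
        };

        int main() {
            Stack stack;
            stack.push(5);
            stack.push(7);
            stack.push(9);
            stack.pop();
            stack.print();
        }

    Making top static would instead share one stack among every node in the program, which is usually not what you want.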

    Read the article

  • MBR Booting from DOS

    - by eflukx
    For a project I would like to invoke the MBR on the first hard disk directly from DOS. I've written a small assembler program that loads the MBR into memory at 0:7c00h and does a far jump to it. I've put my utility on a bootable floppy. The disk (HD0, 0x80) I'm trying to boot has a TrueCrypt boot loader on it. It shows the TrueCrypt screen, but after typing in the password it crashes the system. When I run my little utility (w00t.com) on a normal WinXP machine it seems to crash immediately. Apparently I'm forgetting some crucial stuff the BIOS normally does; my guess is it's something trivial. Can someone with better bare-metal DOS and BIOS experience help me out? Here's my code:

        .MODEL tiny
        .386

        _TEXT SEGMENT USE16

        INCLUDE BootDefs.i

        ORG 100h

        start:
            ; http://vxheavens.com/lib/vbw05.html
            ; Before DOS has booted the BIOS stores the amount of usable lower memory
            ; in a word located at 0:413h in memory. We are going to erase this value
            ; because we have booted DOS before loading the boot sector, and DOS is fat (and ugly).

            ; fake free memory
            ;push ds
            ;push 0
            ;pop ds
            ;mov ax, TC_BOOT_LOADER_SEGMENT / 1024 * 16 + TC_BOOT_MEMORY_REQUIRED
            ;mov word ptr ds:[413h], ax    ;ax = memory in K
            ;pop ds

            ;lea si, memory_patched_msg
            ;call print

            ;mov ax, cs
            mov ax, 0
            mov es, ax

            ; read first sector to es:7c00h (== cs:7c00)
            mov dl, 80h
            mov cl, 1
            mov al, 1
            mov bx, 7c00h          ;load sector to es:bx
            call read_sectors

            lea si, mbr_loaded_msg
            call print
            lea si, jmp_to_mbr_msg
            call print

            ; Set BIOS default values in environment
            cli
            mov dl, 80h            ;(drive C)
            xor ax, ax
            mov ds, ax
            mov es, ax
            mov ss, ax
            mov sp, 0ffffh
            sti

            push es
            push 7c00h
            retf                   ;Jump to MBR code at 0:7c00h

        ; Print string
        print:
            xor bx, bx
            mov ah, 0eh
            cld
        @@:
            lodsb
            test al, al
            jz print_end
            int 10h
            jmp @B
        print_end:
            ret

        ; Read sectors of the first cylinder
        read_sectors:
            mov ch, 0              ; Cylinder
            mov dh, 0              ; Head
            ; DL = drive number passed from BIOS
            mov ah, 2
            int 13h
            jnc read_ok
            lea si, disk_error_msg
            call print
        read_ok:
            ret

        memory_patched_msg db 'Memory patched', 13, 10, 7, 0
        mbr_loaded_msg     db 'MBR loaded', 13, 10, 7, 0
        jmp_to_mbr_msg     db 'Jumping to MBR code', 13, 10, 7, 0
        disk_error_msg     db 'Disk error', 13, 10, 7, 0

        _TEXT ENDS
        END start
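    Two hedged notes rather than a definitive fix. First, under Windows NT/XP the DOS box (NTVDM) virtualizes int 13h and restricts raw hard-disk access, so the immediate crash there may say nothing about real hardware. Second, before the first read it can help to reset the BIOS disk system so the controller starts from a known state; a tiny untested sketch:

        ; hedged, untested: reset the disk system before the first read
        mov ah, 0          ; int 13h function 00h = reset disk system
        mov dl, 80h        ; first hard disk
        int 13h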

    Read the article

  • github like workflow on private server over ssh

    - by Jesse
    I have a server (available via ssh) on the internet that my friend and I use for working on projects together. We have started using git for source control. Our setup currently is as follows:

        1. Friend created a repository on the server with git init, named project.friend.git
        2. I cloned project.friend.git on the server to project.jesse.git
        3. I then cloned project.jesse.git on the server to my local machine using git clone jesse@server:/git_repos/project.jesse.git
        4. I work on my local machine and commit locally. When I want to push my changes to project.jesse.git on the server I use git push origin master.
        5. My friend is working on project.friend.git. When I want to get his changes I do git pull jesse@server:/git_repos/project.friend.git.

    Everything seems to be working fine; however, I am now getting the following error when I do git push origin master:

        localpc:project.jesse jesse$ git push origin master
        Counting objects: 100, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (76/76), done.
        Writing objects: 100% (76/76), 15.98 KiB, done.
        Total 76 (delta 50), reused 0 (delta 0)
        warning: updating the current branch
        warning: Updating the currently checked out branch may cause confusion,
        warning: as the index and work tree do not reflect changes that are in HEAD.
        warning: As a result, you may see the changes you just pushed into it
        warning: reverted when you run 'git diff' over there, and you may want
        warning: to run 'git reset --hard' before starting to work to recover.
        warning:
        warning: You can set 'receive.denyCurrentBranch' configuration variable to
        warning: 'refuse' in the remote repository to forbid pushing into its
        warning: current branch.
        warning: To allow pushing into the current branch, you can set it to 'ignore';
        warning: but this is not recommended unless you arranged to update its work
        warning: tree to match what you pushed in some other way.
        warning:
        warning: To squelch this message, you can set it to 'warn'.
        warning:
        warning: Note that the default will change in a future version of git
        warning: to refuse updating the current branch unless you have the
        warning: configuration variable set to either 'ignore' or 'warn'.
        To jesse@server:/git_repos/project.jesse.git
           c455cb7..e9ec677  master -> master

    Is this warning anything I need to be worried about? Like I said, everything seems to be working. My friend is able to pull my changes in from my branch. I have the clone on the server so he can access it since he does not have access to my local machine. Is there something that could be done better? Thanks!
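    The warning is git telling you that project.jesse.git on the server has a checked-out work tree, and pushing into its current branch leaves that work tree stale. One hedged way around it (paths illustrative) is to push to a bare repository instead:

        # on the server: make a bare copy that has no work tree to get stale
        $ git clone --bare /git_repos/project.jesse.git /git_repos/project.jesse.bare.git

        # on the local machine: point origin at the bare copy
        $ git remote set-url origin jesse@server:/git_repos/project.jesse.bare.git
        $ git push origin master    # no "updating the current branch" warning now

    This mirrors the GitHub model the title alludes to: everyone pushes to and pulls from bare repositories, and checked-out work trees only live on developer machines.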

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin [email protected]:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following, but none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin [email protected]:turian/osqa.git
        $ git push origin master
        To [email protected]:turian/osqa.git
         ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to '[email protected]:turian/osqa.git'

        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From [email protected]:turian/osqa
         * [new branch]      master     -> origin/master
        From [email protected]:turian/osqa
         * [new tag]         master     -> master
        You asked me to pull without telling me which branch you
        want to merge with, and 'branch.master.merge' in
        your configuration file does not tell me either. Please
        name which branch you want to merge on the command line and
        try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details on the refspec.
        ...

        $ /usr//lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1
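    One hedged reading of the output: the git pull created a tag literally named master (note the "[new tag] master -> master" line), which is why git later complains that refname 'master' is ambiguous. A sketch of a possible cleanup, with the branch name github being illustrative:

        $ git tag -d master                 # drop the stray tag so 'master' is unambiguous again
        $ git fetch origin                  # bring the GitHub history in without merging
        $ git branch github origin/master   # park the GitHub-side commits on a local branch
        $ git svn rebase                    # pull new upstream SVN revisions onto master
        $ git merge github                  # fold your own work back on top
        $ git push origin master

    The non-fast-forward rejection on the first push is expected: the fresh git svn clone rebuilt history without your GitHub commits, so those commits have to be fetched and merged (as above) before the push can succeed.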

    Read the article

  • Unit finalization order for an application compiled with run-time packages?

    - by Alexander
    I need to execute my code after finalization of the SysUtils unit. I've placed my code in a separate unit and included it first in the uses clause of the dpr-file, like this:

        program Project1;

        uses
          MyUnit,   // <- my separate unit
          SysUtils,
          Classes,
          SomeOtherUnits;

        procedure Test;
        begin
          //
        end;

        begin
          SetProc(Test);
        end.

    MyUnit looks like this:

        unit MyUnit;

        interface

        procedure SetProc(AProc: TProcedure);

        implementation

        var
          Test: TProcedure;

        procedure SetProc(AProc: TProcedure);
        begin
          Test := AProc;
        end;

        initialization

        finalization
          Test;
        end.

    Note that MyUnit doesn't have any uses. This is a usual Windows exe: no console, no forms, compiled with default run-time packages. MyUnit is not part of any package (but I've tried to use it from a package too). I expect that the finalization section of MyUnit will be executed after the finalization section of SysUtils. This is what Delphi's help tells me. However, this is not always the case. I have 2 test apps, which differ a bit in the code of the Test routine/dpr-file and the units listed in uses. MyUnit, however, is listed first in all cases. One application runs as expected:

        Halt0 - FinalizeUnits - ...other units... - SysUtils's finalization - MyUnit's finalization - ...other units...

    But the second does not: MyUnit's finalization is invoked before SysUtils's finalization. The actual call chain looks like this:

        Halt0 - FinalizeUnits - ...other units... - SysUtils's finalization (skipped) - MyUnit's finalization - ...other units... - SysUtils's finalization (executed)

    Both projects have very similar settings. I tried a lot to remove/minimize their differences, but I still do not see a reason for this behaviour. I've tried to debug this and found out that: it seems that every unit has some kind of reference counting, and it seems that InitTable contains multiple references to the same unit. When SysUtils' finalization section is called the first time, it changes the reference counter and does nothing. Then MyUnit's finalization is executed. And then SysUtils is called again, but this time the ref-count reaches zero and the finalization section is executed:

        Finalization: // SysUtils' finalization
        5003B3F0 55              push ebp                      // here and below is some form of stub
        5003B3F1 8BEC            mov ebp,esp
        5003B3F3 33C0            xor eax,eax
        5003B3F5 55              push ebp
        5003B3F6 688EB50350      push $5003b58e
        5003B3FB 64FF30          push dword ptr fs:[eax]
        5003B3FE 648920          mov fs:[eax],esp
        5003B401 FF05DCAD1150    inc dword ptr [$5011addc]     // here: some sort of reference counter
        5003B407 0F8573010000    jnz $5003b580                 // <- this jump skips execution of finalization for the first call
        5003B40D B8CC4D0350      mov eax,$50034dcc             // here and below is the actual SysUtils finalization section
        ...

    Can anyone shed light on this issue? Am I missing something?

    Read the article

  • Placing & deleting element(s) from an object (stack)

    - by Chris
    Hello, I am initialising two stack objects:

        Stack s1 = new Stack(), s2 = new Stack();
        // s1 = 0 0 0 0 0 0 0 0 0 0 (array of 10 elements, which is empty to start with) top:0

        const int Rows = 10;
        int[] Table = new int[Rows];

        public void TableStack(int[] Table)
        {
            for (int i = 0; i < Table.Length; i++)
            {
            }
        }

    My question is how exactly do I place an element on the stack (push) or take an element from the stack (pop), as in the following:

        s1.Push(5);             // s1 = 5 0 0 0 0 0 0 0 0 0 (top:1)
        s1.Push(9);             // s1 = 5 9 0 0 0 0 0 0 0 0 (top:2)

        int number = s1.Pop();  // s1 = 5 0 0 0 0 0 0 0 0 0 (top:1) -- 9 got removed

    Do I have to use get & set, and if so how exactly do I implement this with an array? Not really sure what to use for this. Any hints highly appreciated. The program uses the following driver to test the Stack class (which cannot be changed or modified):

        public void ExecuteProgram()
        {
            Console.Title = "StackDemo";
            Stack s1 = new Stack(), s2 = new Stack();
            ShowStack(s1, "s1");
            ShowStack(s2, "s2");
            Console.WriteLine();
            int getal = TryPop(s1);
            ShowStack(s1, "s1");
            TryPush(s2, 17);
            ShowStack(s2, "s2");
            TryPush(s2, -8);
            ShowStack(s2, "s2");
            TryPush(s2, 59);
            ShowStack(s2, "s2");
            Console.WriteLine();
            for (int i = 1; i <= 3; i++)
            {
                TryPush(s1, 2 * i);
                ShowStack(s1, "s1");
            }
            Console.WriteLine();
            for (int i = 1; i <= 3; i++)
            {
                TryPush(s2, i * i);
                ShowStack(s2, "s2");
            }
            Console.WriteLine();
            for (int i = 1; i <= 6; i++)
            {
                getal = TryPop(s2); //use number
                ShowStack(s2, "s2");
            }
        } /*ExecuteProgram*/

    Regards.
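    A hedged sketch of an array-backed Stack that matches the driver's calls; top and Rows follow the question, while TryPush, TryPop and ShowStack are illustrative stand-ins for the helpers the driver assumes:

        using System;

        class Stack
        {
            private const int Rows = 10;
            private readonly int[] table = new int[Rows];
            private int top;                                    // index of the next free slot

            public bool IsFull  { get { return top == Rows; } }
            public bool IsEmpty { get { return top == 0; } }

            public void Push(int value)
            {
                table[top] = value;   // place the element at the top...
                top++;                // ...then raise the top-of-stack marker
            }

            public int Pop()
            {
                top--;                // lower the marker first...
                int value = table[top];
                table[top] = 0;       // ...and zero the slot, matching the example above
                return value;
            }

            public string Contents()
            {
                string s = "";
                for (int i = 0; i < Rows; i++) s += table[i] + " ";
                return s + "(top:" + top + ")";
            }
        }

        class Program
        {
            static void TryPush(Stack s, int value)
            {
                if (!s.IsFull) s.Push(value);
                else Console.WriteLine("stack full, cannot push " + value);
            }

            static int TryPop(Stack s)
            {
                if (!s.IsEmpty) return s.Pop();
                Console.WriteLine("stack empty, cannot pop");
                return 0;
            }

            static void ShowStack(Stack s, string name)
            {
                Console.WriteLine(name + " = " + s.Contents());
            }

            static void Main()
            {
                Stack s1 = new Stack();
                TryPush(s1, 5);
                TryPush(s1, 9);
                ShowStack(s1, "s1");   // s1 = 5 9 0 0 0 0 0 0 0 0 (top:2)
                TryPop(s1);
                ShowStack(s1, "s1");   // s1 = 5 0 0 0 0 0 0 0 0 0 (top:1)
            }
        }

    No get/set properties are needed: the pair of moves in Push and Pop (write then raise the marker; lower the marker then read) is the whole trick.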

    Read the article

  • Heroku deployment: connection refused

    - by Toby Hede
    I have suddenly run into an issue deploying to Heroku. I created a new app, went to push, and now see:

        ssh: connect to host heroku.com port 22: Connection refused

    My other previously working Heroku apps no longer work, receiving the same error. Other Heroku commands work (create, info, db:push). I can SSH to other services, so it doesn't look like it's my machine. Any ideas?
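    A couple of hedged checks to narrow it down (nothing here assumes more than the standard git-over-ssh transport):

        # does anything answer on port 22, and where does the handshake stop?
        $ ssh -vT git@heroku.com

        # if the connection is refused, retry from another network; outbound
        # port 22 is sometimes blocked by routers, firewalls, or ISPs

    If port 22 turns out to be blocked, that would also explain why the other Heroku commands still work: they talk to the API over HTTPS, while git push is the only one that needs SSH.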

    Read the article

  • when/where should properties be released once the object is pushed for display?

    - by Tzur Gazit
    Hi, I've defined a view controller with an array as one of its properties, and set that property to an allocated, autoreleased array. After I push the view controller for display, I release it. Watching the Leaks tool I see that every time I pop the view I suffer a leak. I tried to release the properties explicitly, immediately after the push, but the app crashes. Looking forward to your suggestions.
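    A hedged sketch of the usual pre-ARC ownership pattern (property and ivar names illustrative): let a retain property own the autoreleased array, and balance that ownership in dealloc rather than right after the push, since the controller still needs the array while it is on screen.

        // in the view controller's interface:
        @property (nonatomic, retain) NSArray *items;

        // setting it: the autoreleased array is retained by the property, so no
        // explicit release is needed (or allowed) at this point
        self.items = [NSArray arrayWithObjects:@"a", @"b", nil];

        // the balancing release belongs in dealloc, which runs when the
        // popped controller is finally released:
        - (void)dealloc {
            [items release];   // or: self.items = nil;
            [super dealloc];
        }

    Releasing right after the push crashes because the controller (and its array) are still in use; the leak on pop usually means the balancing release in dealloc is missing, or the controller itself is over-retained and dealloc never runs.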

    Read the article

  • When does post-update get called (GIT)

    - by Michael
    My git setup was working beautifully, then stopped cooperating today. Right now, I can git pull and git commit just fine from my local machine. When I git push, the push goes through, but the server files don't actually get updated. Now I wonder: is my post-update hook even getting called anymore? I've checked the permissions, and they are all group +x, and I'm a part of the group... any other suggestions?
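    A couple of hedged checks, with paths illustrative, to see whether the hook still fires at all:

        # on the server: run the hook by hand -- do the files update?
        $ cd /path/to/repo.git
        $ ./hooks/post-update

        # leave a breadcrumb so the next push proves whether git invokes it
        $ echo 'echo "post-update ran at $(date)" >> /tmp/hook.log' >> hooks/post-update

    Also worth remembering: a push only updates refs, never the work tree, so if the "server files" are a checked-out work tree the hook itself has to run something like git checkout -f to refresh them.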

    Read the article

  • Javascript - dynamically add input fields

    - by Neeraj
    Hi guys, I have code to add input fields dynamically in JS. The problem is that if I add 3 or more fields and then browse for a file (when the input field is a file input), the value of the selected field disappears. Can anyone help? Here's my code. Thanks in advance. :)

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
        <title>Untitled Document</title>
        <script type="text/javascript">
        <!-- Begin
        /* This script and many more are available free online at
           The JavaScript Source!! http://javascript.internet.com
           Created by: Husay :: http://www.communitxt.net */

        var arrInput = new Array(0);
        var arrInputValue = new Array(0);
        fields = 1;
        maxInput = 4;

        function addInput() {
            //arrInput.push(createInput(arrInput.length));
            if(fields <= maxInput){
                fields +=1;
                arrInput.push(arrInput.length);
                //arrInputValue.push(arrInputValue.length);
                arrInputValue.push("");
                display();
            }
        }

        function display() {
            document.getElementById('parah').innerHTML="";
            for (intI=0;intI<arrInput.length;intI++) {
                document.getElementById('parah').innerHTML+=createInput(arrInput[intI], arrInputValue[intI]);
            }
        }

        function saveValue(intId,strValue) {
            arrInputValue[intId]=strValue;
        }

        function createInput(id,value) {
            return "<input type='file' id='test "+ id +"' onChange='javascript:saveValue("+ id +",this.value)' value='"+ value +"'><br>";
        }

        function deleteInput() {
            if (arrInput.length > 0) {
                fields -=1;
                arrInput.pop();
                arrInputValue.pop();
            }
            display();
        }
        // End -->
        </script>
        </head>
        <body>
        <a href="javascript:addInput()">Add more input field(s)</a><br>
        <a href="javascript:deleteInput()">Remove field(s)</a><br>
        <input type="file" /><br>
        <input type="file" /><br>
        <input type="file" /><br>
        <p id="parah"></p>
        </body>
        </html>
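    The value of an <input type='file'> cannot be restored from script (browsers block it for security), so rebuilding the whole list through innerHTML wipes every existing selection. A hedged sketch that appends and removes DOM nodes instead, leaving the inputs already on the page untouched:

        var maxInput = 4;

        function addInput() {
            var p = document.getElementById('parah');
            if (p.getElementsByTagName('input').length >= maxInput) return;
            var input = document.createElement('input');
            input.type = 'file';
            p.appendChild(input);                          // existing inputs are never re-created,
            p.appendChild(document.createElement('br'));   // so their selections survive
        }

        function deleteInput() {
            var p = document.getElementById('parah');
            if (p.lastChild) p.removeChild(p.lastChild);   // the trailing <br>
            if (p.lastChild) p.removeChild(p.lastChild);   // the last file input
        }

    With this approach the saveValue/arrInputValue bookkeeping becomes unnecessary, since the inputs themselves are never destroyed and re-created.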

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >