Search Results

Search found 9353 results on 375 pages for 'implementation phase'.


  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I need to phase out an obsolete method, and I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best-practice guide for doing this? Here is my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder).

    B. Mark the old method obsolete.

    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But because I'm not adding a new assembly, I see no way around this.

    Result:

    /// <summary>
    ///
    /// </summary>
    /// <param name="settingName"></param>
    /// <returns></returns>
    [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
    public static Dictionary<string, Setting> ReadSettings(string settingName)
    {
        return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
    }

    public Dictionary<string, Setting> ReadSettings2(string settingName)
    {
        return ReadSettings(settingName); // IMyDbProvider.ConnectionString private member added to the class. Probably have to make this an instance method.
    }
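
    One common phased approach, not specific to this poster's codebase, is to use the second parameter of ObsoleteAttribute to escalate the deprecation across releases: first a compiler warning that points callers at the replacement, later a compile error, and finally removal. A minimal sketch against the method names from the question, assuming it sits in the poster's existing class next to Setting and SomeGeneralClass:

    // Release N: callers still compile, but get a warning that points at the replacement.
    [Obsolete("Does not implement IMyDbProvider. Use the IMyDbProvider-based method instead.")]
    public static Dictionary<string, Setting> ReadSettings(string settingName)
    {
        return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
    }

    // Release N+1: the same attribute with error: true turns the warning into a
    // compile error, forcing any remaining callers to migrate before the method
    // is deleted entirely in a later release.
    // [Obsolete("Use the IMyDbProvider-based method instead.", true)]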


  • Jagran Prakashan Increases Staff Productivity by 40%

    - by Michael Snow
    Jagran Prakashan Increases Staff Productivity by 40%, Launches New IT Projects up to 4x Faster, Enables Mobile Service, and Improves Business Agility Oracle Customer: JPL Location:  Uttar Pradesh, India Industry: Media and Entertainment Employees:  10,000 Annual Revenue:  $100 to $500 Million Jagran Prakashan Ltd. (JPL) is one of India's premier media and communications groups with interests spanning print, advertising, event management, and mobile services for weather, cricket scores, and educational activities. It is a major media enterprise, with 300 locations across 15 states. Its impressive stable of print publications includes Dainik Jagran, the world’s most widely read daily newspaper––with a readership of over 55 million––the country’s leading afternoon dailies, and a range of popular local, bilingual, and English language newspapers. JPL was using multiple systems to manage its business processes. Users were resistant to using multiple passwords for various applications, preferring to continue their less efficient, legacy work practices. In addition, there was no single repository for sharing documents across the organization, such as company announcements or project documents. The company relied on e-mail to disseminate up-to-date company information, often missing employees. It was also time-consuming and difficult for managers to track the status of ongoing assignments or projects because collaboration and document sharing was inefficient and ineffective.With diverse businesses and many geographic locations, JPL needed to implement a centralized and user-friendly enterprise portal to improve document sharing and collaboration and increase business agility. The company implemented Oracle WebCenter Portal to create a dynamic, secure, and intuitive self-service enterprise portal to improve the user experience and increase operating efficiency. It improved staff productivity by 40%, accelerated new IT projects by up to 4x, boosted staff morale, and increased business agility.   Increases Staff Productivity by 40%, Launches New Products up to 2x Faster A word from JPL "With Oracle WebCenter Portal, we gained a dynamic, secure, and intuitive self-service enterprise portal that provided an exceptional user experience and enabled us to engage employees in a collaborative environment. It increased IT staff productivity by 40%, delivered new projects up to 4x faster, and enabled mobile service to improve our business agility.” Sarbani Bhatia, Vice President IT, Jagran Prakashahn Ltd Before implementing Oracle WebCenter Portal, JPL stored project-critical information, such as page planning of daily newspaper editions and the launch of new editions or supplements on individual laptops or in the e-mail system. Collaboration between colleagues was limited to physical meetings, telephone discussions, and e-mail. It was difficult to trace and recover important project documents when a staff member resigned, which represented a significant risk to business continuity. Employees were also averse to multiple passwords and resisted using the systems, affecting staff productivity. With Oracle WebCenter Portal, JPL created a dynamic, secure, and intuitive self-service enterprise portal with business activity streams. The portal allowed users to navigate, discover, and access information, such as advertising rates, requisition approvals, ad-hoc queries, and employee surveys from a single entry point with a single password. 
Managers can also upload important documents, such as new pricing for advertisers or newspaper distributors, and share them through the information and instruction section in the portal. In addition, managers can now easily track and review timelines for projects online rather than gathering information from meetings and e-mails. The company gained the ability to centrally manage information, ensured business continuity, and improved staff productivity by 40%.“In the media industry, news has a very short shelf life, so speed is crucial. Information delayed is like information lost,” said Sarbani Bhatia, vice president IT, Jagran Prakashahn Ltd. “Thanks to Oracle WebCenter Portal’s contextual collaboration tools, we can provide and share feedback for new project launches, such as career or education supplements, up to 2x faster through discussion forums or knowledge groups. Tasks that previously required four months, we now complete in one month.”In addition, the company can broadcast announcements, flash employee birthdays, and promote important events through the message section on the webpage, instead of using the e-mail system. The company can also conduct opinion polls to gauge employee response to organizational issues and improve management decision-making.“With over 10,000 employees across 300 locations, it is critical for management to hear the voice of employees and develop a cohesive organizational culture. Oracle WebCenter Portal enables employees to engage with business processes and systems in a collaborative environment, providing users with an exceptional experience,” Bhatia said. Enables Mobility Access and Increases Business Agility Newspaper advertisements generate the majority of JPL’s revenue. With most sales staff on the move, the company needed to ensure timely approval of print advertisement discounts for specific clients and meet tight publication deadlines.  By integrating Oracle WebCenter Portal seamlessly with its enterprise resource planning (ERP) system and other applications, such as the organizational mass mailing system, business intelligence, and management information system, JPL embedded its approval workflow processes into the enterprise portal and provided users with an integrated and intuitive interface. About 30% of JPL’s sales staff members now have tablets and receive advertising discount approval from managers while in the field and no longer need to return to the office, which has significantly improved efficiency and increased business agility.“Application mobility was critical for sales representatives in the field to meet stringent auditing requirements for online accountability, particularly for our newspaper advertising business. Staff member satisfaction has improved significantly now that the sales team can use tablets to access the portal––a capability we will extend to smart phones in the second stage of the implementation,” Bhatia said. Accelerates Application Development by up to 4x and Cuts Costs by up to 60% With Oracle WebCenter Portal, users can easily create, modify, and upload information to their personalized webpages without IT assistance. By seamlessly integrating Oracle WebCenter Portal with the payroll database, managers can decide which members of their team can access the page and with whom they will share information, a decision based on role or geographical location. 
A sales representative selling advertising space for a local language daily newspaper, for example, can upload an updated advertising rate relevant only to that particular publication. Users can also easily adapt to the new platform, thanks to its intuitive design and look, reducing the need for training and lowering resistance to using the system.Using Oracle WebCenter Portal’s out-of-the-box reusable components, such as portal pages and templates, provided JPL’s developers with a comprehensive and flexible user experience platform and increased the speed of application development. In less than five months, JPL developed more than 55 workflows. The IT team accelerated deployment of new applications by up to 4x, as they do not need to install them on individual machines now that they have a web-based environment.   “Previously, we would have spent a whole day deploying a new application for each department or location. With a browser-based environment, we have cut costs by up to 60% by reducing deployment time to zero, because our IT team can roll out a new application from a single point, thanks to Oracle WebCenter Portal,” Bhatia said. Challenges Provide a dynamic, secure, and intuitive self-service enterprise portal to improve staff productivity and ensure business continuity Enable seamless integration with multiple enterprise applications to improve workflow efficiency—including approval of print advertisement discounts—and increase business agility Improve engagement with employees and enable collaboration to enhance management decision-making Accelerate time-to-market for new services, such as new advertising programs Solutions Oracle Product and ServicesOracle WebCenter Portal 11g Increased staff productivity by 40% and enhanced user satisfaction by enabling employees to easily navigate, discover, and access information from a single, self-service enterprise portal without IT assistance Launched new products, such as career or education supplements, up to 2x faster by enabling peer collaboration and incorporating feedback generated through discussion forums, thanks to Oracle WebCenter Portal’s out-of-the-box collaboration tools Accelerated application development up to 4x by enabling developers to optimize reusable components for managing and deploying new applications in a browser-based environment rather than spending one day to install applications for each department, cutting costs by up to 60% Ensured business continuity by enabling managers to easily track and review project timelines online rather than storing important documents on individual laptops or relying on the e-mail system Increased business agility and operational efficiency by seamlessly integrating with the in-house, ERP system and embedding business processes into a single portal Boosted company revenue by enabling sales team members to submit print-advertising discount requests through mobile devices instead of waiting to return to office, ensuring timely approval from managers to meet tight publication deadlines Improved management decision-making by enabling employees to easily share and access feedback through opinion polls or forums, boosting staff morale Introduced the single sign-on capability and enhanced security by enabling managers to decide access level for staff members based on role or geographical location Reduced the need for staff training and minimized user resistance to systems by providing a dynamic and intuitive user experience Why Oracle JPL did not consider other products because the 
company was already using Oracle Database Enterprise Edition with Real Application Clusters and had a positive experience with Oracle. JPL chose Oracle WebCenter Portal to ensure no compatibility issues when integrating with its existing Oracle products and to take advantage of the experience and support of a reputable vendor to ensure business continuity. “We chose Oracle because we knew we could rely on its support and experience. In addition, Oracle WebCenter Portal’s speed, agility, and mobile access features were a perfect fit for our business requirements,” Bhatia said. Implementation Process: JPL launched the enterprise portal to 500 users in the first phase of the project and plans to extend this to 2,000 users when the portal is fully launched. Oracle partner PricewaterhouseCoopers used Oracle Application Development Framework for the initial setup and user training, and to design and develop sample workflows. JPL’s internal IT staff then took charge of the implementation, bringing it to completion on budget. Partner: PricewaterhouseCoopers (India)


  • Delivering Oracle DBaaS: Journey to Enterprise Cloud with Oracle Database 12c

    - by B R Clouse
    The release of Oracle Database 12c is accompanied by extensive supporting collateral that details the new features and options of this major release. But with so much to read and investigate, where to start? If you don't have the time to pore through everything, you may wish to organize your reading in terms of the use case you're most interested in. If your interest is Database as a Service in private database clouds, then may I suggest that you start here: Accelerate the Journey to Enterprise Cloud with Oracle Database 12c. This paper describes the phases of the journey to enterprise cloud and enumerates the new features and options in Oracle Database 12c that support each phase. Oracle Multitenant figures prominently, but it's not the only cloud-enabling topic: Oracle Database Quality of Service Management, Application Continuity, Automatic Data Optimization, Global Data Services, and Active Data Guard Far Sync all deliver key benefits for delivering database as a service. Further reading and research are suggested by the references included in the paper. Happy clouding!


  • Star Trek inspired home automation visualisation

    - by Zak McKracken
    I’ve always been a more or less active fan of Star Trek. During the construction phase of my house I started coding a GUI for controlling the house, which has an EIB (European Installation Bus). Just for fun I designed a version inspired by the LCARS design used in Star Trek TNG and showed it to my wife. I had shown her several designs before, but this was the only one she really liked, so I decided to go on with it. I started a C# WinForms application. The software runs on a wall-mounted Shuttle barebone PC. The first plan was an industrial panel PC, but its processor was too slow; the Atom now in use is fine. I started with the LCARS controls found on CodeProject. Since the classic LCARS design divides the screen into two parts, this tended to be impracticable, so I used my own design. For now the software is able to:
    - Switch lights/wall outlets
    - Show current temperatures for all room controllers
    - Show the outside temperature with a 24h trend chart
    - Show the status of the two heat pumps
    - Provide an alarm clock (e.g. for cooking)
    - Play internet radio streams
    - Control absence
    - Mute the door bell
    - Speak status messages via speech synthesis
    For now, I'm working on an integration of my electric meter. The main heat pump and the electric meter are connected to my LAN. I also tried some speech recognition, but I have problems with the microphone. It's working when you are right in front of the PC, but not when you are farther away, say on the other side of the room. So this is the main view. The table displays raw values which are sent over the EIB – completely useless, but it looks great. For each floor I have a different view. Here you can see the temperatures and check the status of the lights (the buttons blink when a light is switched on). This is the view for the heat pump. The next step would be to integrate control of my Squeezebox server (I use different Squeezeboxes throughout the house as a multiroom audio solution).
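
    The post lists speaking status messages via speech synthesis among the features but includes no code. Below is a minimal sketch of how such announcements could be made from a WinForms app with the .NET System.Speech API; this is an assumption about the approach, as the author's actual implementation is not shown:

    using System.Speech.Synthesis; // requires a reference to the System.Speech assembly

    public class StatusAnnouncer
    {
        private readonly SpeechSynthesizer _synth = new SpeechSynthesizer();

        public StatusAnnouncer()
        {
            // Route speech to the default sound device of the wall-mounted PC.
            _synth.SetOutputToDefaultAudioDevice();
        }

        // Speak asynchronously so the UI thread is never blocked,
        // e.g. Announce("Outside temperature is 12 degrees, falling");
        public void Announce(string message)
        {
            _synth.SpeakAsync(message);
        }
    }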


  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification over this action, will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced and potential lessons to be learned from these high profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective, violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls etc. Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professional’s day to day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated /Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. 
Without question, all GRC professionals work in a dynamic environment; as new services, products, and business lines are added, or new marketing campaigns are executed, new data types are needed to capture the resultant information. Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored, and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team has demonstrated real and tangible business value.
Big Challenges - Big Opportunities As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve their business and IT procedures, processes, people, and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with pre-existing company data for analysis. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect, and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint.
However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.
The Right GRC & Big Data Partnership Becomes Key The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping to kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies that have a long history of delivering successful GRC solutions and that are at the very forefront of technology innovation become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again with self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics, and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.
A Blueprint and Roadmap Service for Big Data Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
- Clearly define your drivers, strategies, goals, objectives, and requirements as they relate to big data
- Conduct a big data readiness and Information Architecture maturity assessment
- Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
- Provide initial guidance on big data candidate selection for migrations or implementation
- Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
- Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
- Conduct an executive workshop with recommendations and next steps
There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay, and risk management certainly is not going anywhere; ultimately, the financial services organizations that embrace big data's potential, outline a viable strategy, and understand and build a solid analytical foundation will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.


  • Monitor network connectivity in WP7 apps

    - by Daniel Moth
    Most interesting Windows Phone apps rely on some network service for their functionality, so at some point in your app you may need to know programmatically whether a network connection is available. For example, the Translator by Moth app relies on the Bing Translation service for translations. When a request for translation (text or voice) is made, the network call may fail. The failure reason is not evident from any of the return results, so I check the connection to see if it is present; depending on that, a different message is shown to the user. Before the translation phase is even reached, at app start-up time the Bing service is queried for its list of languages; in that case I don't want to show the user a message and instead want to be notified when the network is available in order to send the query out again. So for those two requirements (which I imagine are common in other apps) I wrote a simple MyNetwork static class that wraps the framework APIs:
    - Call the MyNetwork.MonitorNetworkAvailability() method once so it monitors network changes
    - At any time, check the MyNetwork.IsConnected property for network presence
    - Subscribe to its MyNetwork.ConnectionEstablished event
    - Optionally, during debugging, use its MyNetwork.ChangeStatus method to simulate a change in network status
    As usual, there may be better ways to achieve this, but this class works perfectly for my scenarios. You can download the code here: MyNetworks.cs. Comments about this post welcome at the original blog.
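
    The actual wrapper lives in the linked MyNetworks.cs download rather than in the post itself. The sketch below shows one way such a class could look on Windows Phone 7, built on NetworkInterface.GetIsNetworkAvailable and the NetworkChange.NetworkAddressChanged event; the member names follow the post, but the bodies are an illustration, not the author's code:

    using System;
    using System.Net.NetworkInformation;

    public static class MyNetwork
    {
        public static bool IsConnected { get; private set; }

        public static event EventHandler ConnectionEstablished;

        // Call once at app start-up; afterwards IsConnected stays current and
        // ConnectionEstablished fires whenever connectivity comes back.
        public static void MonitorNetworkAvailability()
        {
            ChangeStatus(NetworkInterface.GetIsNetworkAvailable());
            NetworkChange.NetworkAddressChanged +=
                (s, e) => ChangeStatus(NetworkInterface.GetIsNetworkAvailable());
        }

        // Public so a debug build can simulate losing or regaining the network.
        public static void ChangeStatus(bool connected)
        {
            bool wasConnected = IsConnected;
            IsConnected = connected;
            if (connected && !wasConnected)
            {
                var handler = ConnectionEstablished;
                if (handler != null) handler(null, EventArgs.Empty);
            }
        }
    }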


  • CodePlex Daily Summary for Wednesday, August 15, 2012

    CodePlex Daily Summary for Wednesday, August 15, 2012Popular ReleasesTFS Workbench: TFS Workbench v2.2.0.10: Compiled installers for TFS Workbench 2.2.0.10 Bug Fix Fixed bug that stopped the change workspace action from working.MoltenMercury: MoltenMercury 1.0 beta 1: This is initial implementation of the MoltenMercury with everything running but not throughoutly tested. Release contains a zip file with program binaries and a zip file containing resources. Before using please create a new directory named data in the same directory with program executables and extract DefaultCharacter zip file in there. If you have ???????????, you can simply place executables into the same directory with the program mentioned above. MoltenMercury supports character resour...SharePoint Column & View Permission: SharePoint Column and View Permission v1.0: Version 1.0 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerSPListViewFilter: Version 1.6: Layout selection: http://blog.vitalyzhukov.ru/imagelibrary/120815/Settings_Layouts.pngConsoleHoster: ConsoleHoster Version 1.2: Cleanup some logging UI impreovements: Fixed the bug with _ character in project-name Fixed the bug with tab close button to be X style Fixed the style of search content bar Fixed the bug with disappearing Edit project window Moved the Clear History button to the taskbar and added also a menu item Changed the QC button to CH Applied project colors to tab headersMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.60: Allow for CSS3 grid-column and grid-row repeat syntax. Provide option for -analyze scope-report output to be in XML for easier programmatic processing; also allow for report to be saved to a separate output file.Diablo III Drop Statistics Service: 1.0: Client OnlyClosedXML - The easy way to OpenXML: ClosedXML 0.67.2: v0.67.2 Fix when copying conditional formats with relative formulas v0.67.1 Misc fixes to the conditional formats v0.67.0 Conditional formats now accept formulas. Major performance improvement when opening files with merged ranges. Misc fixes.Umbraco CMS: Umbraco 4.8.1: Whats newBug fixes: Fixed: When upgrading to 4.8.0, the database upgrade didn't run Update: unfortunately, upgrading with SQLCE is problematic, there's a workaround here: http://bit.ly/TEmMJN The changes to the <imaging> section in umbracoSettings.config caused errors when you didn't apply them during the upgrade. Defaults will now be used if any keys are missing Scheduled unpublishes now only unpublishes nodes set to published rather than newest Work item: 30937 - Fixed problem with Fi...MySqlBackup.NET - MySQL Backup Solution for C#, VB.NET, ASP.NET: MySqlBackup.NET 1.4.4 Beta: MySqlBackup.NET 1.4.4 beta Fix bug: If the target database's default character set is not UTF8, UTF8 character will be encoded wrongly during Import. Now, database default character set will be recorded into Dump File at line of "SET NAMES". During import(restore), MySqlBackup will again detect and use the target database default character char set. MySqlBackup.NET 1.4.2 beta Fix bug: MySqlConnection is not closed when AutoCloseConnection set to true after Export or Import completed/halted. M...patterns & practices - Unity: Unity 3.0 for .NET 4.5 and WinRT - Preview: The Unity 3.0.1208.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. This is an updated version of the port after the .NET Framework 4.5 and Windows 8 have RTM'ed. 
Please see the Release Notes Providing feedback Post your feedback on the Unity forum Submit and vote on new features for Unity on our Uservoice site.LiteBlog (MVC): LiteBlog 1.31: Features of this release Windows8 styled UI Namespace and code refactoring Resolved the deployment issues in the previous release Added documentation Help file Help file is HTML based built using SandCastle Help file works in all browsers except IE10Self-Tracking Entity Generator for WPF and Silverlight: Self-Tracking Entity Generator v 2.0.0 for VS11: Self-Tracking Entity Generator for WPF and Silverlight v 2.0.0 for Entity Framework 5.0 and Visual Studio 2012Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.6.0: New Stuff ImageTile Control - think People Tile MicrophoneRecorder - Coding4Fun.Phone.Audio GzipWebClient - Coding4Fun.Phone.Net Serialize - Coding4Fun.Phone.Storage this is code I've written countless times. JSON.net is another alternative ChatBubbleTextBox - Added in Hint TimeSpan languages added: Pl Bug Fixes RoundToggleButton - Enable Visual State not being respected OpacityToggleButton - Enable Visual State not being respected Prompts VS Crash fix for IsPrompt=true More...AssaultCube Reloaded: 2.5.2 Unnamed: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to pack it. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) Added 3rd person Added mario jumps Fixed nextprimary code exploit ...NPOI: NPOI 2.0: New features a. Implement OpenXml4Net (same as System.Packaging from Microsoft). It supports both .NET 2.0 and .NET 4.0 b. Excel 2007 read/write library (NPOI.XSSF) c. Word 2007 read/write library(NPOI.XWPF) d. NPOI.SS namespace becomes the interface shared between XSSF and HSSF e. Load xlsx template and save as new xlsx file (partially supported) f. Diagonal line in cell both in xls and xlsx g. Support isRightToLeft and setRightToLeft on the common spreadsheet Sheet interface, as per existin...BugNET Issue Tracker: BugNET 1.1: This release includes bug fixes from the 1.0 release for email notifications, RSS feeds, and several other issues. Please see the change log for a full list of changes. 
http://support.bugnetproject.com/Projects/ReleaseNotes.aspx?pid=1&m=76 Upgrade Notes The following changes to the web.config in the profile section have occurred: Removed <add name="NotificationTypes" type="String" defaultValue="Email" customProviderData="NotificationTypes;nvarchar;255" />Added <add name="ReceiveEmailNotifi...????: ????2.0.5: 1、?????????????。RiP-Ripper & PG-Ripper: PG-Ripper 1.4.01: changes NEW: Added Support for Clipboard Function in Mono Version NEW: Added Support for "ImgBox.com" links FIXED: "PixHub.eu" links FIXED: "ImgChili.com" links FIXED: Kitty-Kats Forum loginPlayer Framework by Microsoft: Player Framework for Windows 8 (Preview 5): Support for Smooth Streaming SDK beta 2 Support for live playback New bitrate meter and SD/HD indicators Auto smooth streaming track restriction for snapped mode to conserve bandwidth New "Go Live" button and SeekToLive API Support for offset start times Support for Live position unique from end time Support for multiple audio streams (smooth and progressive content) Improved intellisense in JS version Support for Windows 8 RTM ADDITIONAL DOWNLOADSSmooth Streaming Client SD...New ProjectsA Java Open Platform: A Java Open PlatformAutomation Tools: noneAzManPermissions: Allows AzMan (Authorization Manager) operation permission checks declaratively and imperatively. It can connect to AzMan stores directly or through a service.Configuration Manager 2012 Automation: Configuration Manager 2012 Automation is a powershell project to help perform the basic implementation of a CM12 infrastructure...Dynamicweb Blog Engine 2012: A simple blog engine built on Dynamicweb.E Lixo: Projeto E_LIXOEasy Windows Service Manager: Easy Windows Service Manager (ewsm) is a desktop utility that enables the easily management of Windows services. flexwork: A Flex Pluggable Extension FrameworkGit On Codeplex: Just trying git on CodePlexGT.BlogBox: Blogsite-Webtemplate for SharePoint 2010gvPoco: gvPoco is a .NET class library for the Google Visualization Charts API. IAllenSoft: IAllenSoft is a silverlight control library, which originates from some idea. Its target is to improve development productivity for silverlight projects. ListNetRanker: ListNetRanker is a listwise ranker based on machine learning. Given documents and query's feature set, ListNetRanker will rank these documents by ranking score.lvyao: stillMetro Tic-Tac-Toe: Tic-Tac-Toe is a Windows 8 developed using MonoGame for Windows 8. The intent of this code is to help XNA developers with porting their XNA apps to Windows 8myProject: my private code Nintemulator: Nintemulator is a work in progress multi-system emulator. Plans for emulated systems are NES, SNES, GB/GBC, GBA.Picnic Game: Picnic Game is a 2D game written in Small Basic. The objective is to get the pizza to the basket before you get stung or time runs out.Picnic Level Editor: Picnic Game is a 2D 3rd person game where you are an ant, and you have to gather the pizza and bring it to the basket before you get stung by a bee.ResCom: ResCom is a community response system for Windows Phone 8Scalable Object Persistence (SOP) Framework: Scalable object persistence (SOP) framework. 
RDBMS "index" like performance and scale on data Stores but without the unnecessary baggage, fine grained control.self-balancing-avl-tree: Self balancing AVL tree with Concat and Split operations.SEMS(synthetical evaluation management system): that's itsergionadal-mvc: Mvc para sistemas AndroidTFS Test Plan Builder: TFS Test plan builder is a tool to assis users of Microst Test Manager to build new test plans then moving from sprint to sprint or release to release The Excel Library: DExcelLibrary is a project that allows you to load and display "any" excel file given a specificaction of with areas/grids to loadTimePunch Metro Wpf Library: This library contains WPF controls and MVVM Implementation to create touch optimized applications in the new Windows 8 UI style formerly known as "Metro". UniPie: UniPie is a system wide pie menu system for sending key-mouse button combinations to certain applications. It can also convert one combo to another.xref: This is an extension of the classic FizzBuzz problem.ZombieRPG: Basic Zombie RPG game made using Game Maker??????: ZHJP


  • CodePlex Daily Summary for Thursday, September 13, 2012

    CodePlex Daily Summary for Thursday, September 13, 2012Popular ReleasesAustralia Income and Tax Calculator: Australia Income and Tax Calculator: first release, can calculate net income, tax, quarterly/monthly/weekly/daily and hourly taxable/net ratedatajs - JavaScript Library for data-centric web applications: datajs version 1.1.0-beta: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...SharePoint (2010) Farm Backup: PowerShell SharePoint (2010) Farm Backup v2.2: Version 2.2 Changelog - Added the ability to export Solutions (WSP) from solution gallery. - Added the ability to exclude MySites from the sites backup. - Added Is-Foundation method to determine whether SharePoint edition is Foundation, Standard or Enterprise to prevent errors when running script on SharePoint Foundation 2010 as Foundation does not have MySite functionality. - Added method to determine amount of storage required for sites backup. Script will now determine total required fo...Metadata Document Generator for Microsoft Dynamics CRM 2011: Metadata Document Generator (2.0.325.117): Add latest version of McTools.Xrm.Connection library to correct Office 365 authentication supportLakana - WPF Framework: Lakana V2: Lakana V2 contains : - Lakana WPF Forms (with sample project) - Lakana WPF Navigation (with sample project)Microsoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...Arduino for Visual Studio: Arduino 1.x for Visual Studio 2012, 2010 and 2008: Register for the visualmicro.com forum for more news and updates Version 1209.10 includes support for VS2012 and minor fixes for the Arduino debugger beta test team. Version 1208.19 is considered stable for visual studio 2010 and 2008. If you are upgrading from an older release of Visual Micro and encounter a problem then uninstall "Visual Micro for Arduino" using "Control Panel>Add and Remove Programs" and then run the install again. Key Features of 1209.10 Support for Visual Studio 2...Microsoft Script Explorer for Windows PowerShell: Script Explorer Reference Implementation(s): This download contains Source Code and Documentation for Script Explorer DB Reference Implementation. You can create your own provider and use it in Script Explorer. Refer to the documentation for more information. The source code is provided "as is" without any warranty. 
Read the Readme.txt file in the SourceCode.Social Network Importer for NodeXL: SocialNetImporter(v.1.5): This new version includes: - Fixed the "resource limit" bug caused by Facebook - Bug fixes To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns") Open NodeXL template and you can access the new importer from the "Import" menuAcDown????? - AcDown Downloader Framework: AcDown????? v4.1: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??...Move Mouse: Move Mouse 2.5.2: FIXED - Minor fixes and improvements.MVC Controls Toolkit: Mvc Controls Toolkit 2.3: Added The new release is compatible with Mvc4 RTM. Support for handling Time Zones in dates. Specifically added helper methods to convert to UTC or local time all DateTimes contained in a model received by a controller, and helper methods to handle date only fileds. This together with a detailed documentation on how TimeZones are handled in all situations by the Asp.net Mvc framework, will contribute to mitigate the nightmare of dates and timezones. Multiple Templates, and more options to...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.00: Maintenance Release Changes on Metro7 06.02.00 Fixed width and height on the jQuery popup for the Editor. Navigation Provider changed to DDR menu Added menu files and scripts Changed skins to Doctype HTML Changed manifest to dnn6 manifest file Changed License to HTML view Fixed issue on Metro7/PinkTitle.ascx with double registering of the Actions Changed source folder structure and start folder, so the project works with the default DNN structure on developing Added VS 20...Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.9.0: Release Notes Imporved framework architecture Improved the framework security More import/export formats and operations New WebPortal application which includes forum, new, blog, catalog, etc. UIs Improved WebAdmin app. Reports, navigation and search Perfomance optimization Improve Xenta.Catalog domain More plugin interfaces and plugin implementations Refactoring Windows Azure support and much more... Package Guide Source Code - package contains the source code Binaries...Json.NET: Json.NET 4.5 Release 9: New feature - Added JsonValueConverter New feature - Set a property's DefaultValueHandling to Ignore when EmitDefaultValue from DataMemberAttribute is false Fix - Fixed DefaultValueHandling.Ignore not igoring default values of non-nullable properties Fix - Fixed DefaultValueHandling.Populate error with non-nullable properties Fix - Fixed error when writing JSON for a JProperty with no value Fix - Fixed error when calling ToList on empty JObjects and JArrays Fix - Fixed losing deci...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.66: Just going to bite the bullet and rip off the band-aid... SEMI-BREAKING CHANGE! 
Well, it's a BREAKING change to those who already adjusted their projects to use the previous breaking change's ill-conceived renamed DLLs (versions 4.61-4.65). For those who had not adapted and were still stuck in this-doesn't-work-please-fix-me mode, this is more like a fixing change. The previous breaking change just broke too many people, I'm sorry to say. Renaming the DLL from AjaxMin.dll to AjaxMinLibrary.dl...DotNetNuke® Community Edition CMS: 07.00.00 CTP (Not for Production Use): NOTE: New Minimum Requirementshttp://www.dotnetnuke.com/Portals/25/Blog/Files/1/3418/Windows-Live-Writer-1426fd8a58ef_902C-MinimumVersionSupport_2.png Simplified InstallerThe first thing you will notice is that the installer has been updated. Not only have we updated the look and feel, but we also simplified the overall install process. You shouldn’t have to click through a series of screens in order to just get your website running. With the 7.0 installer we have taken an approach that a...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.2: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BIDS Helper: BIDS Helper 1.6.1: In addition to fixing a number of bugs that beta testers reported, this release includes the following new features for Tabular models in SQL 2012: New Features: Tabular Display Folders Tabular Translations Editor Tabular Sync Descriptions Fixed Issues: Biml issues 32849 fixing bug in Tabular Actions Editor Form where you type in an invalid action name which is a reserved word like CON or which is a duplicate name to another action 32695 - fixing bug in SSAS Sync Descriptions whe...Code Snippets for Windows Store Apps: Code Snippets for Windows Store Apps: First release of our snippets! For more information: Installation List of Snippets Minor update 9/13: Updated C# and VB packages -- Converted from VSI installers to ZIP files for easier usage with Visual Studio Express editions. Snippets contained in each package were not altered.New Projectsanother hello world: a very quick test.Atorpat Marquee: This is the advanced marquee pro moduleAustralia Income and Tax Calculator: Calculates australian net income, tax, ratesAuto generate C# DAL, BLL classes and Sql Store Procedures: This program helps to you for auto generate store procedures for Sql and DAL, BLL classes for C# without any extra code.BugSystem: bug systemBuild and Deploy Tool using BTDF: This tool can be used by a build and release manager who can prepare the BizTalk MSI and deploy the application in the corresponding environment. Calculation WebApplication: Calc web appChild&Family Brigade®: This Software, is specially realized for the Family Brigade of Cochabamba Bolivia, this is a nonprofit institution, that helps family's problems.Creative Style System: This is our Sheridan College Capstone project. Bitches.Customizable Process Guidance Content for VS ALM 2012: Customizable process guidance is provided for each of the default process templates that VS ALM TFS 2012 provides. 
DER_Autoit: 2012-9-13-14-10 ?????!Dynamics Xrm Application Speed Builder: The Dynamics Xrm Application Speed Builder will analyze databases, then create the entities in CRM, attributes, and forms for you. Feed Discovery: Want to subscribe to a web page and can't find the newsfeed? Just rightclick on the page and discover! Subscribe directly in IE, Google Reader or any other.Inmeta Tools for Visual Studio 2012 and TFS 2012: Info comingInventory Manager: Inventory Manager is a small demo project that lets you manage your items.Kayvon's Group: projectsLibreta: Something about LibretaMultiple Image choice custom field type: This solution contains "Custom Field Type" which allows the user to choose multiple images as a choice.PHP-Edin: Php kurs PROJETO PET: Um ambiente social para adoção e apreciação de animais.Read the Reader: Read the Reader is a lightweight Google Reader Client. It runs in the background and tells you, when something's happening.SharePoint 2010 File Recovery: A little utility program to allow you to easily recover files from your SharePoint 2010 content database backupsSistema para estudo do mvc: Estudando asp.mvcSports Center Asp.net MVC Demo: Sample Sports Center Asp.net MVC Project. Good start up kit for getting in to various feature of Asp.net MVC offering. testtom08092012git01: bvcT-SQL implementation of Standard Distribution PDF and CDF: Files for blog post at http://formaldev.blogspot.com/2012/09/T-SQL-NORMDIST-1.htmlWunderlist.com Shortcut Google Chrome Extension: Just exactly that, a shortcut to Wunderlist.com


  • Nginx routing script for NodeJS and Wordpress

    - by Nilay Parikh
    We are moving our blogs and site from WordPress to NodeJS and are ready to move into production. However, I'm not able to figure out how to implement routing from the front server (Nginx) to NodeJS (the preferred web instance) so that, if the data has not been synced into the NodeJS site yet (NodeJS will throw a 404), the request falls back (via reverse proxy) to WordPress, which serves the page during the transition period.
    Q1. Is this approach good for the scenario, or can anyone suggest a better approach?
    Q2. Should NodeJS act as the reverse proxy itself (using bouncy: https://github.com/substack/bouncy or a similar package) in the fallback case, or should I stick with Nginx and the FastCGI approach? Both NodeJS and WordPress are on a single server.
    First scenario:
    User -> Nginx -> NodeJS (8080)
      / if the resource is available, serve it directly
      \ if not, reverse-query WordPress and serve its content
    Second scenario:
    User -> Nginx -> NodeJS (8080)
      / if the resource is available, serve it directly
      \ if not, return 404 to Nginx and let the Nginx config fall back to WordPress (FastCGI PHP)
    Later we plan to phase out WordPress and PHP from the server environment completely. I'd like to see any examples of Nginx or Varnish scripts and/or NodeJS scripts you may have for me to refer to. Thanks.
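
    No answer is included in this excerpt. As one possible sketch of the second scenario (the hostname, document root, and PHP-FPM socket path below are assumptions, not taken from the poster's setup), Nginx can proxy to NodeJS first and retry the request against WordPress when the upstream answers 404:

    # Minimal sketch: try NodeJS on 8080 first, fall back to WordPress on a 404.
    upstream nodejs_app {
        server 127.0.0.1:8080;              # port taken from the question
    }

    server {
        listen 80;
        server_name example.com;            # assumed hostname
        root /var/www/wordpress;            # assumed WordPress docroot

        location / {
            proxy_pass http://nodejs_app;
            proxy_set_header Host $host;
            proxy_intercept_errors on;      # let error_page act on upstream 404s
            error_page 404 = @wordpress;
        }

        location @wordpress {
            # Hand the original request to WordPress's front controller via PHP-FPM.
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            fastcgi_param QUERY_STRING $args;
            fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed PHP-FPM socket
        }
    }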


  • Which software to keep track of my project?

    - by Exa
    I'm about to start the first real phase of my game development, which will consist of gathering information and resources and defining where I want to go and what I will need to get there. I just want to make sure that I'm prepared as well as possible before I actually start development. I don't like the thought of using Microsoft Word or Excel for my project management... I have already worked with MS Project, but I don't think it fits my needs. I need software in which I can easily maintain project steps, milestones, important issues, information about the technologies and engines I use, as well as simple notes and thoughts I just want to write down. I usually prefer a whiteboard for stuff like that, but unfortunately it's not a persistent way of storing things. ;) Writing it down the old-school way is also something I can imagine, but only for quick notes... Which software do you use for that? Are there commonly used programs? Is there any free software at all?


  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
    By David Baum - Originally posted on Profit Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae. Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.” In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.” Construction, Workers For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.” During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane. 
“But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.” In the United States alone, Skanska has more than 8,000 employees in four distinct business units: Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects. Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges. Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing. Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients. Skanska's international portfolio includes construction of the new Meadowlands Stadium. Getting the various units to operate collaboratively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane. Workforce Worldwide But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment. Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn. The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market. However, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts. “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well. 
We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.” A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that gives opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential. “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.” The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.” Collaboration to Scale Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says. Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane. Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed. 
They listen to us.” Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.” As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMlib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, since 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components. In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004): Linux async IO wasn't pretty; persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage; and system resource usage in terms of open file descriptors was an issue. So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier - how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components: an IO interface, allowing any IO to the devices to go through ASMLib; and device discovery, implementing an external way of discovering and labeling devices to provide to ASM and the Oracle database instance. This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). 
ODM was specified many years before we introduced ASM and allowed third party vendors to implement their own IO routines so that the database would use this library if installed and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution; Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, you have libasm for asm/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components: Oracle ASMLib, the userspace library with config tools (a shared object and some scripts), and oracleasm.ko, a kernel module that implements the asm device for /dev/oracleasm/*. The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic: we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and this library interacts with the OS through the asm device (/dev/asm). It can be installed on Oracle Linux, on SuSE SLES, on Red Hat RHEL,.. The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can, say, create an ASM disk label foo on what is currently /dev/sdf... So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo and we discover it as /dev/sdg and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/* and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib: a good async IO interface for the database (the entire IO interface is based on an optimal ASYNC model for performance); a single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles; and built-in device scanning and labeling, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone. Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions, SLES (in fact in the early days still even for this thing called United Linux) and RHEL. 
It didn't make sense to push the driver into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module to their infrastructure (it's a 60k source code file). The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, once we introduced Oracle Linux at the end of 2006, for Oracle Linux as well. With Oracle Linux it became easy for us because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated. With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel (note to those that wonder: yes, it's all in mainline Linux and under the GPL). This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum and pass it to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block that then has a checksum and validation all the way down to the disk: application to disk. Because we have ASMLib we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK and us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and provide it to our customers. So today, we (Oracle) provide Oracle ASMLib for Oracle Linux and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity functionality, and we didn't want to end up with multiple versions of ASMLib maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module. 
And finally, to re-iterate a few important things: Oracle ASM does not in any way require ASMLib to function completely; ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Often customers confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib. The Oracle ASMLib userspace library is available on OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds. The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK. ASMLib for Linux is/was a reference implementation for any third party vendor to be able to offer, if they want to, their own version for their own OS or storage. ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel. Hope this helps.

    Read the article

  • Tell Us Once&ndash;Guardian Innovation Award Winner

    - by BizTalk Visionary
    Yesterday the Tell Us Once project received its latest accolade. My partner in crime in the execution of the delivery of software for this project, Mark Usher, reports: It’s always great to receive recognition for the effort you put in when working on a project. It’s no secret that here at Solidsoft we are extremely proud of our association with the Government’s Tell Us Once (TUO) programme. Having already been selected by Microsoft as Worldwide Partner Conference (WPC) 2011 Award Winners for Application Integration, we are very pleased that the TUO programme as a whole has been recognised and has won the Guardian Newspaper’s Innovation Nation Award for Frontline Services (link: http://www.guardian.co.uk/innovation-nation-awards). The TUO entry was judged the winner over three other shortlisted solutions from Dyfed Powys Police, North Yorkshire County Council and Staffordshire County Council. Innovation Nation is a partnership between Virgin Media Business and the Guardian, an initiative to uncover the most innovative businesses, public sector organisations and charities in the UK today. Its aim is to showcase the ideas, the endeavour and the energy that are making things better in the areas of customer service, unique working practices, frontline government services and collaboration. Solidsoft have been involved with the Tell Us Once programme since its inception in 2007 and worked closely with the Department of Work and Pensions (DWP) to produce a business case for the programme. Teaming up with Atos (who host the application), Solidsoft delivered the first national solution in 2011 and a second phase in April 2012. Whilst currently restricted to distributing citizen data to central government organisations and local government authorities, DWP is now actively engaging with the private sector to see if TUO data can be disclosed to private sector organisations such as banks and building societies. Solidsoft welcome this expansion into the private sector, where even more efficiencies will be realised. Mark Usher - Solidsoft Sales and Marketing Director. For my part I’d like to say a big thank you to the Solidsoft team, Atos team and DWP team that made it happen.

    Read the article

  • What is the best way to check if there is overlap between player and static, non-collidable items in bullet physic engine

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet Physics Engine and to know if there is a collision between the player and them. Some info: there are a lot of items (1000), all are box shapes and they don't overlap. Here is what I have tried: btDbvt* bvtItems = new btDbvt(); //btDbvt is a hierarchical AABB tree, used by Bullet foreach(var item ...) { btDbvtVolume volume = ... //compute item AABB; bvtItems->insert(volume, (void*)someExtraData); } Then, to find collisions between items and the player: playerRigidBody->getAabb(min, max); btDbvtVolume playervolume = ... //compute player AABB bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler); This works fairly well (and it's very fast); however, there is a problem: it only checks item AABBs against the player AABB. That loss of precision is acceptable for items but not for the player, which is not a box. It would actually need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like this: playerRigidBody->checkCollisionWithAABB(); After trying that, I discovered that btGhostObject exists and seems to have been made for this. I changed my code like this: foreach(var item...) { btCollisionObject* ghostObject = new btGhostObject(); ghostObject->setCollisionShape(boxShape); ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE); startTransform.setOrigin(...); //item position ghostObject->setWorldTransform(startTransform); dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy:: CharacterFilter); } It also works OK, but there is a huge fps drop (almost ten times slower) which is not acceptable. Maybe there is something missing (forgot to set a flag) and Bullet is doing extra work for nothing, or maybe all those ghostObjects are polluting the broad phase and ghostObject is not the right thing for this. Any help would be appreciated.
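    One possible follow-up, sketched below, is to keep the fast btDbvt broad phase and run an exact narrow-phase test only on the handful of candidates it returns, using Bullet's btCollisionWorld::contactPairTest. This is an assumption on my part, not something from the question: it presumes each item is also available as a btCollisionObject carrying its box shape and world transform (without being added to the dynamics world), and the exact signature of addSingleResult varies between Bullet versions (older releases pass btCollisionObject pointers instead of btCollisionObjectWrapper pointers).

    #include <btBulletDynamicsCommon.h>

    // Returns true if the player's actual collision shape touches the item's shape.
    // Only called for items that already passed the btDbvt AABB broad phase.
    bool playerTouchesItem(btCollisionWorld* world,
                           btCollisionObject* player,
                           btCollisionObject* item)
    {
        struct AnyContact : btCollisionWorld::ContactResultCallback
        {
            bool hit;
            AnyContact() : hit(false) {}
            virtual btScalar addSingleResult(btManifoldPoint&,
                                             const btCollisionObjectWrapper*, int, int,
                                             const btCollisionObjectWrapper*, int, int)
            {
                hit = true; // a single contact point is enough for an item pickup
                return 0;
            }
        } contactCallback;

        world->contactPairTest(player, item, contactCallback);
        return contactCallback.hit;
    }

    Because contactPairTest only tests the two objects handed to it, the narrow-phase cost stays proportional to the number of broad-phase candidates rather than to the total item count, which should avoid the slowdown seen with one ghost object per item.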

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-10

    - by Bob Rhubart
    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca. Series: How to Kill the Architecture Department? Part 1 | Xebia Blog Don't let the title fool you. This is not an anti-architecture post. Rather, this post, part 1 of a now four-part series, offers suggestions for preserving architecture in a form that better supports agile organizations. BPM Suite configure BAM Adapter | Peter Paul van der Beek "To have the BPM server push events to BAM – Business Activity Monitoring – we have to configure the BPM suite to use the BAM Adapter," says Peter Paul van de Beek. "The BAM Adapter is configured (like other SOA Suite and BPM Adapters) in the WebLogic Server Console." Peter Paul shows you how in this brief post. A case for not installing your own software | James Gentsch "I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for," says James Gentsch. "I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make 'well, it works here' a well-forgotten phrase for future software developers." Thought for the Day "I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like." — Anders Hejlsberg Source: SoftwareQuotes.com

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: Circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):
    Vertex 1 normal 0: 0.000000 0.000000 -0.250000
    Vertex 1 normal 1: 0.000000 0.000000 -0.250000
    Vertex 1 normal 2: -0.250000 0.000000 0.000000
    Vertex 1 normal 3: -0.250000 0.000000 0.000000
    Vertex 1 normal 4: 0.250000 0.000000 0.000000
    What I'm wondering is, how can I determine, taken as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!" and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply based on the face winding, angle of the adjacent edges, moon phase, coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.
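    One common answer (not from the post; the names and the crease-angle threshold below are mine) is not to pick a single winner for the vertex at all, but to give the vertex one normal per smoothing region: when shading a particular face, average only the adjacent face normals that lie within a chosen crease angle of that face, so faces across the hard edge simply don't contribute. A minimal sketch:

    #include <cmath>
    #include <vector>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    };

    inline float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    inline Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return {v.x / len, v.y / len, v.z / len};
    }

    // One normal per (vertex, face) pair: average only the adjacent face normals
    // within creaseAngle of the face being shaded. Assumes the shaded face's own
    // normal is included in adjacentFaceNormals, so the sum is never zero.
    Vec3 vertexNormalForFace(const std::vector<Vec3>& adjacentFaceNormals,
                             const Vec3& shadedFaceNormal,
                             float creaseAngleRadians)
    {
        const float cosThreshold = std::cos(creaseAngleRadians);
        const Vec3 reference = normalize(shadedFaceNormal);
        Vec3 sum{0.0f, 0.0f, 0.0f};
        for (const Vec3& n : adjacentFaceNormals) {
            if (dot(normalize(n), reference) >= cosThreshold)
                sum = sum + n; // un-normalized inputs weight larger faces more heavily
        }
        return normalize(sum);
    }

    With a threshold like this, faces 1/2 would share one normal and faces 3/4 another, which in practice means duplicating the vertex per hard edge when building the vertex buffer; that matches what exporters typically do with Maya's hard/soft edge information.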

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car's computer switched between gears intermittently. Imagine building up speed, then when you reach 80 km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain: "These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program, which probably accounts for how you managed to drive here in the first place..." I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said "Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources". Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily and unfortunately often the most overlooked goal of writing unit tests by those new to the art and those who oppose it alike: the goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method will in practice soon prove to be an exhausting and ultimately fruitless exercise given the certain and ever-changing nature of business requirements. Consequently, it is quite possible, whilst conducting proper unit tests, to call any single method several times as you examine and contemplate different scenarios. Let's write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly, you can use whatever you're comfortable with. First we'll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox. Then we'll create a Gearbox class which we'll pass to a constructor when we instantiate an object of type Computer. 
All three are described below.

    ISpeedSensor.cs

    namespace AutomaticVehicle
    {
        public interface ISpeedSensor
        {
            int ReportCurrentSpeed();
        }
    }

    Gearbox.cs

    namespace AutomaticVehicle
    {
        public class Gearbox
        {
            private ISpeedSensor _speedSensor;

            public Gearbox( ISpeedSensor gearboxSpeedSensor )
            {
                _speedSensor = gearboxSpeedSensor;
            }

            /// <summary>
            /// This method obtains its reading from the speed sensor.
            /// </summary>
            /// <returns></returns>
            public int ReportCurrentSpeed()
            {
                return _speedSensor.ReportCurrentSpeed();
            }
        }
    }

    Computer.cs

    namespace AutomaticVehicle
    {
        public class Computer
        {
            private Gearbox _gearbox;

            public Computer( Gearbox gearbox )
            {
                _gearbox = gearbox; // store the gearbox so GetCurrentSpeed has something to ask
            }

            public int GetCurrentSpeed()
            {
                return _gearbox.ReportCurrentSpeed( );
            }
        }
    }

    Since this post is about unit testing, that is exactly what we'll create next. Create a second project in your solution. I called mine AutomaticVehicleTests and I immediately referenced the respective NUnit, Moq and AutomaticVehicle DLLs. We're going to write a test to examine what happens inside the Computer class.

    ComputerTests.cs

    namespace AutomaticVehicleTests
    {
        [TestFixture]
        public class ComputerTests
        {
            [Test]
            public void Computer_Gearbox_SpeedSensor_DoesThrow()
            {
                // Mock ISpeedSensor in gearbox
                Mock< ISpeedSensor > speedSensor = new Mock< ISpeedSensor >( );
                speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();
                Gearbox gearbox = new Gearbox( speedSensor.Object );

                // Create Computer instance to test its behaviour towards an exception in gearbox
                Computer carComputer = new Computer( gearbox );

                // For simplicity let's assume for now the car only travels at 60 km/h.
                Assert.AreEqual( 60, carComputer.GetCurrentSpeed( ) );
            }
        }
    }

    What is happening in this test? We have created a mocked object using the ISpeedSensor interface, which we've passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I'll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time. Next I've gone ahead and created a scenario where I've declared the speed sensor in Gearbox to be faulty by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether it's caused by mechanical failure, some logical error on your part or a fellow developer who didn't consult the documentation (or the lack thereof) - whether you're calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It's a scenario I've created, and this test is about how the code within the Computer instance will behave towards any such error as I've depicted. Now, if you've followed closely, in my final assert method you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I'm testing to see if the value returned is equal to what I expect it to be under perfect conditions - I'm not testing to see if an error has been thrown! 
Why is that? Well, in short, this is TDD. Test Driven Development is about first writing your test to define the result you want, then going back and changing the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop, remember?). So let's go ahead and run our test as is. It fails miserably... Good! Let's go back to our Computer class and make a small change to the GetCurrentSpeed method.

    Computer.cs

    public int GetCurrentSpeed()
    {
        try
        {
            return _gearbox.ReportCurrentSpeed( );
        }
        catch
        {
            // Fall back to the restrictive program, assumed here to return a
            // safe default speed so the method still returns a value.
            return RunRestrictiveProgram( );
        }
    }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You're more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to. It's not within the scope of this post or related to the point I'm trying to make. What is important is to notice how the focus has shifted in our approach from how things can break - to how things behave when broken. Happy coding!

    Read the article

  • Cloned partition not seen correctly by disk utility & gparted

    - by enrico
    Some days ago I cloned my /dev/sda1 partition with Clonezilla in partition-to-partition mode to /dev/sda3. It worked, but now that I've finished the setup of the system in /dev/sda3, I wanna reinstall /dev/sda1 for other stuff. This partition is NOT mounted, but Ubuntu's DISK UTILITY thinks it is, while it doesn't see as mounted the currently active / partition, /dev/sda3. This is the df -TH output:

    Filesystem  Type      Size  Used  Avail  Use%  Mounted on
    /dev/sda3   ext4      32G   6.2G  24G    21%   /
    udev        devtmpfs  2.2G  13k   2.2G   1%    /dev
    tmpfs       tmpfs     845M  906k  844M   1%    /run
    none        tmpfs     5.3M  0     5.3M   0%    /run/lock
    none        tmpfs     2.2G  115k  2.2G   1%    /run/shm
    /dev/sdb1   fuseblk   321G  147G  174G   46%   /Dati

    Gparted instead sees /dev/sda1 as NOT mounted (checked with the Information option), but it displays the BOOT flag on this partition, while the real booted partition, /dev/sda3, doesn't have it. If I try to format the /dev/sda1 partition, it gives me this error:

    GParted 0.8.1 --enable-libparted-dmraid
    Libparted 2.3
    Format /dev/sda1 as ext2  00:00:02  ( ERROR )
    calibrate /dev/sda1  00:00:00  ( SUCCESS )
    path: /dev/sda1
    start: 2048
    end: 62500863
    size: 62498816 (29.80 GiB)
    set partition type on /dev/sda1  00:00:02  ( SUCCESS )
    new partition type: ext2
    create new ext2 file system  00:00:00  ( ERROR )
    mkfs.ext2 -L "" /dev/sda1
    mke2fs 1.41.14 (22-Dec-2010)
    /dev/sda1 is mounted; will not make a filesystem here!

    How is it possible to correct this behaviour? Is this due to some missing option in the Clonezilla phase? TIA Enrico

    Read the article

  • Oracle Tutor: Document Audit and Maintenance

    - by Emily Chorba
    Perhaps the most critical phase in the process of documenting policies and procedures -- and the greatest challenge to owners -- is the maintenance of published documents. Documents must reflect current practice and they must be accurate. The most effective way to ensure this is through the regular audit of documents. In the Tutor environment, a Document Owner must audit each of his/her documents once every 6 to 12 months to verify that the document reflects actual practice. If it does not, the document is updated or employees are retrained (depending on the nature of the discrepancy). If a document update is required, the Tutor system enables the owner to modify and redistribute the document within one work day. This is possible because: documents contain a minimum of detail, thereby reducing the edits; document format and structure are simple, so changes are easy to identify; and the Tutor Author software tool enables the Document Owner or the Document Administrator to update the file quickly. The Document Administrator verifies the document format and integration, publishes the document, and distributes it to all affected employees, thereby freeing the Document Owner of the more tedious tasks. Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Emily Chorba, Principal Product Manager, Oracle Tutor & UPK

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include: support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded); per-application integration with Berkeley DB on Android; and server-side support for the Apache TomEE platform. Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, AKA machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android, and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle Open World presentations (from the comfort of my Oracle office) and happened on some of Larry Ellison's comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plant. Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution. Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry's early electricity adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run. This cuts way down on the environmental surprises that consistently lead to the hated "well, it works here" situation with the support desk. And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere. I hate this because when a problem happens, and let's face it, customers are not wasting their time calling in easy problems, we are seriously disabled when we cannot reproduce the issue that is triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all-too-frequent occurrence. I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make "well, it works here" a well-forgotten phrase for future software developers. It may even impact the stress squeeze toy industry. Well, maybe at least for my group.

    Read the article

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling

    Make an array of broadphase checks, then filter them with narrowphase checks, then resolve them.

    function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
        possibleCollisions = getPossibleCollisions(b, a->get(index));
        for(i=0; i<possibleCollisionsNumber; i++){
            if(narrowphase(possibleCollisions[i], a[index])) {
                collisions->push(possibleCollisions[i]);
            };
        };
        for(i=0; i<collisionsNumber; i++){
            //CODE FOR RESOLUTION
        };
    };

    Make the broadphase call the narrowphase, and the narrowphase call the resolution.

    function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
        broadphase(b, a->get(index));
    };

    function broadphase(thingy * with, thingy * what){
        while(blah){
            //blahcode
            narrowphase(what, collidingThing);
        };
    };

    Events vs. in-the-loop

    Fire an event. This abstracts the check away, but it's trickier to make an equal interaction.

    a[index] -> collisionEvent(eventdata);
    //much later
    int collisionEvent(eventdata){
        //resolution gets here
    }

    Resolve the collision inside the loop. This glues narrowphase and resolution into one layer.

    if(narrowphase(possibleCollisions[i], a[index])) {
        //CODE GOES HERE
    };

    The questions are: which of the first two is better, and how am I supposed to make a zero-sum Newtonian interaction under B1?
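    To make the zero-sum point concrete under style B1 (collect pairs first, resolve afterwards), here is a hypothetical sketch, not taken from the post; the Body and CollisionPair types and the impulse value are placeholders. Because each colliding pair appears exactly once in the collected list, the resolver can apply equal-and-opposite impulses in one place, so the momentum exchanged between the two bodies sums to zero by construction.

    #include <cstddef>
    #include <vector>

    // Placeholder types: a minimal body and a narrow-phase result for one pair.
    struct Body { float vx, vy, invMass; };
    struct CollisionPair { std::size_t a, b; float nx, ny; float impulse; };

    // Resolve every collected pair once, applying +j to one body and -j to the other
    // along the contact normal, so the interaction stays symmetric ("zero-sum").
    void resolvePairs(std::vector<Body>& bodies, const std::vector<CollisionPair>& pairs)
    {
        for (const CollisionPair& p : pairs) {
            Body& bodyA = bodies[p.a];
            Body& bodyB = bodies[p.b];
            bodyA.vx -= p.impulse * p.nx * bodyA.invMass;
            bodyA.vy -= p.impulse * p.ny * bodyA.invMass;
            bodyB.vx += p.impulse * p.nx * bodyB.invMass;
            bodyB.vy += p.impulse * p.ny * bodyB.invMass;
        }
    }

    With the event-driven style, the same symmetry is harder to guarantee because each body's handler may run separately; collecting pairs keeps that decision in one place, which matches the post's remark that events make an equal interaction trickier.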

    Read the article

  • First Experience with Web Services

    When I first started programming with Microsoft .Net (1.0 Framework) I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .Net was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com. After my spider was built I had no real idea what I could/should do with it until I found the MSN Search API. I used this web service to compare its results with my spider's. Additionally, I used the API to feed my .Net web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned in XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query. Since then the entire API has gone through several reconstructions, rebrandings and new search services. Microsoft's new Bing API replaced the older MSN search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.

    Bing API Version 2.0 SourceTypes:
    Web - Searches for web content. Example: Sushi
    Image - Searches for images on the web. Example: Sushi
    News - Searches news stories. Example: Sushi
    InstantAnswer - Searches Encarta online. Examples: what is sushi, convert 5 feet to meters, x*5=7, and 2 plus 2
    Spell - Searches Encarta dictionary for spelling suggestions
    Phonebook - Searches phonebook entries. Example: sushi in Los Angeles
    RelatedSearch - Returns the query strings most similar to yours
    Ad - Returns advertisements to incorporate with results

    I currently plan to start using the web search feature from the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.

    Read the article

  • Functional testing in the verification

    - by user970696
    Yesterday my question How come verification does not include actual testing? created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validation of the requirements of the intended use. For me that is more high level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, and document and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on the V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradictory information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this “touch point” can be found in a variety of other processes including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).
    • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    • Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    • The broader topic of Stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.
    Check it out and let me know your thoughts!

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >