Search Results

Search found 3888 results on 156 pages for 'star schema'.


  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth 1,000 words. And then we have our diagrams – how many words are they worth? (The diagram in the original post shows our friends from the Human Relations sample schema.) So our models describe how the data ‘works’ – whether that be at a logical-business level, or a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options: Comments, Comments in RDBMS, and Notes. So what’s the difference? Comments – You’re describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available in the model, and hence to folks with access to the model. Comments in RDBMS – You’re doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions. So your awesome documentation is going to be viewable to anyone with access to the database. RDBMS is an acronym for Relational Database Management System – of which Oracle is one of the first commercial examples. If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary. Notes – A place for you to add notes, maybe from a design meeting. Or maybe you’re using this as a to-do or requirements list. Basically it’s for anything that doesn’t literally describe the object at hand – that’s what the comments are for. These are free text fields and you can put whatever you want here. Just make sure you put stuff here that’s worth reading. And it will live on…forever.
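    As a quick, hypothetical illustration of the ‘Comments in RDBMS’ option described above (the table and column names below are made up, not taken from the post), this is roughly the kind of DDL that gets generated for Oracle, and how anyone with database access can read the comments back out of the data dictionary:

        -- Comments in RDBMS end up in the data dictionary once the DDL is run against the database
        COMMENT ON TABLE employees IS 'People we pay every month';
        COMMENT ON COLUMN employees.salary IS 'Monthly gross salary, in USD';

        -- Anyone with access to the database can then read the documentation back
        SELECT table_name, comments FROM user_tab_comments WHERE table_name = 'EMPLOYEES';
        SELECT column_name, comments FROM user_col_comments WHERE table_name = 'EMPLOYEES';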

    Read the article

  • What’s new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010) – thanks, Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts: Visual Studio Ultimate 2010 NEW: IntelliTrace® (aka the historical debugger) NEW: Architecture Tools New Project Type: Modeling Project UML Diagrams UML Use Case Diagram UML Class Diagram UML Sequence Diagram (supports reverse engineering) UML Activity Diagram UML Component Diagram Layer Diagram (with Team Build integration for layer validation) Architecture Explorer Dependency visualization DGML Web & Load Tests Visual Studio Premium 2010 NEW: Architecture Tools Read-only model viewer Development Tools Code Analysis New Rules like SQL Injection detection Rule Sets Code Profiler Multi-Tier Profiling JScript Profiling Profiling applications on virtual machines in sampling mode Code Metrics Test Tools Code Coverage NEW: Test Impact Analysis NEW: Coded UI Test Database Tools (DB schema versioning & deployment) Visual Studio Professional 2010 Debugger Mixed Mode Debugging for 64-bit Applications Export/Import of Breakpoints and data tips Visual Studio Test Professional 2010 Microsoft Test Manager (MTM, formerly known as "Camano") Fast Forward Testing Visual Studio Team Foundation Server 2010 Work Item Tracking and Project Management New MSF templates for Agile and CMMI (V 5.0) Hierarchical Work Items Custom Work Item Link Types Ready to use Excel agile project management workbooks for managing your backlogs (including capacity planning) Convert Work Item query to an Excel report MS Excel integration Support for Work Item hierarchies Formatting is preserved after doing a 'Refresh' MS Project integration Hierarchy and successor/predecessor info is now synchronized NEW: Test Case Management Version Control Public Workspaces Branch & Merge Visualization Tracking of Changesets & Work Items Gated Check-In Team Build Build Controllers and Agents Workflow 4-based build process NEW: Lab Management (only a pre-release is available at the moment!) Project Portal & Reporting Dashboards (on SharePoint Portal) Burndown Chart TFS Web Parts (to show data from TFS) Administration & Operations Topology enhancements Application tier network load balancing (NLB) SQL Server scale out Improved SharePoint flexibility Report Server flexibility Zone support Kerberos support Separation of TFS and SQL administration Setup Separate install from configure Improved installation wizards Optional components Simplified account requirements Improved Reporting Services configuration Setup consolidation Upgrading from previous TFS versions Improved IIS flexibility Administration Consolidation of command line tools User rename support Project Collections Archive/restore individual project collections Move Team Project Collections Server consolidation Team Project Collection Split Team Project Collection Isolation Server request cancellation Licensing: TFS server license included in MSDN subscriptions Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier Object Test Bench IntelliSense for C++ / CLI Debugging support for SQL 2000

    Read the article

  • SQL SERVER – Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies

    - by pinaldave
    Very common questions which I often receive are: How do I find all the tables used in a particular stored procedure? How do I know which stored procedures are using a particular table? Both are valid questions, but before we see the answer to them – let us understand two small concepts – Referenced and Referencing. Here is the sample stored procedure. CREATE PROCEDURE mySP AS SELECT * FROM Sales.Customer GO Referenced: The table Sales.Customer is the referenced object as it is being referenced in the stored procedure mySP. Referencing: The stored procedure mySP is the referencing object as it is referencing the Sales.Customer table. Now we know what referencing and referenced objects are. Let us run the following queries. I am using AdventureWorks2012 as a sample database. If you do not have SQL Server 2012, here is the way to get the SQL Server 2012 AdventureWorks database. Find Referencing Objects of a particular object Here we are finding all the objects which are using table Customer in their object definitions (regardless of the schema). USE AdventureWorks GO SELECT referencing_schema_name = SCHEMA_NAME(o.SCHEMA_ID), referencing_object_name = o.name, referencing_object_type_desc = o.type_desc, referenced_schema_name, referenced_object_name = referenced_entity_name, referenced_object_type_desc = o1.type_desc, referenced_server_name, referenced_database_name --,sed.* -- Uncomment for all the columns FROM sys.sql_expression_dependencies sed INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id] LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id] WHERE referenced_entity_name = 'Customer' The above query will return all the objects which are referencing the table Customer. Find Referenced Objects of a particular object Here we are finding all the objects which are used in the view vIndividualCustomer. USE AdventureWorks GO SELECT referencing_schema_name = SCHEMA_NAME(o.SCHEMA_ID), referencing_object_name = o.name, referencing_object_type_desc = o.type_desc, referenced_schema_name, referenced_object_name = referenced_entity_name, referenced_object_type_desc = o1.type_desc, referenced_server_name, referenced_database_name --,sed.* -- Uncomment for all the columns FROM sys.sql_expression_dependencies sed INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id] LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id] WHERE o.name = 'vIndividualCustomer' The above query will return all the objects which are referenced by the view vIndividualCustomer. I am just glad to write the above query. There is more to write on this subject. In a future blog post I will write more in depth about other DMVs which also aid in finding referenced data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
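    For readability, here is the first query from the post again, laid out as a block exactly as quoted above. It lists every object that references the Customer table; run it in the AdventureWorks database as in the post:

        USE AdventureWorks
        GO
        SELECT referencing_schema_name = SCHEMA_NAME(o.SCHEMA_ID),
            referencing_object_name = o.name,
            referencing_object_type_desc = o.type_desc,
            referenced_schema_name,
            referenced_object_name = referenced_entity_name,
            referenced_object_type_desc = o1.type_desc,
            referenced_server_name,
            referenced_database_name
            --,sed.* -- Uncomment for all the columns
        FROM sys.sql_expression_dependencies sed
        INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id]
        LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id]
        WHERE referenced_entity_name = 'Customer'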

    Read the article

  • Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g

    - by Jason Williamson
    I'm excited to say that we've released our next generation of Re-hosting in 11g. In fact I'm doing some hands-on labs now for our Systems Integrators in Italy in a couple of weeks and targeting Latin America next month. If you are an SI, or Rehosting firm and are looking to become an Oracle Partner or get a better understanding of Tuxedo and how to use the workbench for rehosting...drop me a line. Oracle Tuxedo Application Runtime for CICS and Batch 11g provides a CICS API emulation and Batch environment that exploits the full range of Oracle Tuxedo's capabilities. Re-hosted applications run in a multi-node, grid environment with centralized production control. Also, enterprise integration of CICS application services benefits from an open and SOA-enabled framework. Key features include: CICS Application Runtime: Can run IBM CICS applications unchanged in an application grid, which enables the distribution of large workloads across multiple processors and nodes. This simplifies CICS administration and can scale to over 100,000 users and over 50,000 transactions per second. 3270 Terminal Server: Protects business users from change through support for tn3270 terminal emulation. Distributed CICS Resource Management: Simplifies deployment and administration by allowing customers to run CICS regions in a distributed configuration. Batch Application Runtime: Provides robust IBM JES-like job management that enables local or remote job submissions. In addition, distributed batch initiators can enable parallelization of jobs and support fail-over, shortening the batch window and helping to meet stringent SLAs. Batch Execution Environment: Helps to run IBM batch unchanged and also supports JCL functionality and all common batch utilities. Oracle Tuxedo Application Rehosting Workbench 11g provides a set of automated migration tools integrated around a central repository. The tools provide high precision which results in very low error rates and the ability to handle large applications. This enables less expensive, low-risk migration projects. Key capabilities include: Workbench Repository and Cataloguer: Ensures integrity of the migrated application assets through full dependency checking. The Cataloguer generates and maintains all relevant meta-data on source and target components. File Migrator: Supports reliable migration of datasets and flat files to an ISAM or Oracle Database 11g. This is done through the automated migration utilities for data unloading, reloading and validation. It also generates logical access functions to shield developers from data repository changes. DB2 Migrator: Similarly, this tool automates the migration of DB2 schema and data to Oracle Database 11g. COBOL Migrator: Supports migration of IBM mainframe COBOL assets (OLTP and Batch) to open systems. Adapts programs for compiler dialects and data access variations. JCL Migrator: Supports migration of IBM JCL jobs to a Tuxedo ART environment, maintaining the flow and characteristics of batch jobs.

    Read the article

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me, and, of course, the ones I contributed to. Books Chris J. Date: An Introduction to Database Systems – IMO a “must” to understand the relational model correctly. Terry Halpin, Tony Morgan: Information Modeling and Relational Databases – meet the object-role modeling leaders. Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL – all theory needed to manage temporal data. Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation – the best SQL Server focused data modeling book I know, by two of my friends. Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 – SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014. Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying – Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model. Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming – I added three chapters with theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data. Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 – my first two chapters are about data warehouse design and implementation. Courses Data Modeling Essentials – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary, or, of course, me directly on addresses [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. Logical and Physical Modeling for Analytical Applications – an online course I wrote for Pluralsight. Working with Temporal data in SQL Server – my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries. Forthcoming presentations SQL Bits 12, July 17th – 19th, Telford, UK – I have a full-day pre-conference seminar Advanced Data Modeling Topics there.

    Read the article

  • Search and Browse Database Objects with Oracle SQL Developer

    - by thatjeffsmith
    I was tempted to throw in another Dora the Explorer Map reference here, but I came to my senses. Having trouble finding something? Maybe you’re just getting older? I know I am. But still, it’d be nice if my favorite database tool could help me out a bit. Hmmm, what’s this ‘Find Database Object’ thing over here…sounds like a search mechanism of some sort? You can access this panel from the ‘View’ menu. It’s a good bit down the screen, so I don’t blame you if you haven’t seen it before. It makes finding ‘stuff’ in your database so much easier. Let’s say I want to find my ‘beer’ objects. I simply need to type my search string and the context (in this case I want it to search EVERYTHING), and hit enter. The search results are listed below and clicking on an object automatically opens it! I know it seems very simple, but I get asked this question a LOT. It will even search through your PL/SQL code! Finding too much? Be sure to toggle off the ‘%’ wildcard check box before doing a search. Working on a Project? I bet you use common column names, or codes, throughout your tables. You could take advantage of this knowledge and use the Find Database Object panel as a substitute connection tree or schema browser. Working on your HR project and want to look at your employee objects? Do a column search for your column ID/key. Sometimes thinking outside the box actually works! Don’t be afraid to tackle a problem from a weird angle, or re-purpose your tools. I do it all the time. And I drive the developers nuts trying to do things with the tools they were never designed to do. But I digress. Back to your coding!
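    The panel is, in effect, running a data-dictionary search for you. A hypothetical hand-rolled equivalent of the ‘beer’ searches described above (not what the tool executes internally, just the same idea expressed as queries against the Oracle dictionary views):

        -- Rough equivalent of the object-name search
        SELECT owner, object_type, object_name
        FROM all_objects
        WHERE object_name LIKE '%BEER%';

        -- Rough equivalent of the column search suggested for project work
        SELECT owner, table_name, column_name
        FROM all_tab_columns
        WHERE column_name LIKE '%BEER%';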

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • VSDB to SSDT part 4 : Redistributable database deployment package with SqlPackage.exe

    - by Etienne Giust
    The goal here is to use SSDT SqlPackage to deploy the output of a Visual Studio 2012 Database project… a bit in the same fashion that was detailed here: http://geekswithblogs.net/80n/archive/2012/09/12/vsdb-to-ssdt-part-3--command-line-deployment-with-sqlpackage.exe.aspx The difference is we want to do it on an environment where Visual Studio 2012 and SSDT are not installed. This might be the case for your Production server. Package structure So, to get started you need to create a folder named “DeploymentSSDTRedistributable”. This folder will have the following contents. The dacpac and dll files are the outputs of your Visual Studio 2012 Database project. If your database project references another database project, you need to put their dacpac and dll here too, otherwise deployment will not work. The publish.xml file is the publish configuration suitable for your target environment. It holds connection strings, SQLVARS parameters and deployment options. Review it carefully. The SqlDacRuntime folder (an arbitrarily chosen name) will hold the SqlPackage executable and supporting libraries. Contents of the SqlDacRuntime folder Here is what you need to put in the SqlDacRuntime folder. You will be able to find these files in the following locations, on a machine with Visual Studio 2012 Ultimate installed: C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin : SqlPackage.exe Microsoft.Data.Tools.Schema.Sql.dll Microsoft.Data.Tools.Utilities.dll Microsoft.SqlServer.Dac.dll C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.SqlServer.TransactSql.ScriptDom\v4.0_11.0.0.0__89845dcd8080cc91 : Microsoft.SqlServer.TransactSql.ScriptDom.dll Deploying Now take your DeploymentSSDTRedistributable deployment package to your remote machine. In a standard command window, place yourself inside the DeploymentSSDTRedistributable folder. You can first perform a check of what will be updated in the target database. The DeployReport task of SqlPackage.exe will help you do that. The following command will output an xml of the changes: "SqlDacRuntime/SqlPackage.exe" /Action:DeployReport /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml /OutputPath:./ChangesToDeploy.xml You might get some warnings on Log and Data files like I did. You can ignore them. Also, the tool is warning about data loss when removing a column from a table. By default, the publish.xml options will prevent you from deploying when data loss is occurring (see the BlockOnPossibleDataLoss option inside the publish.xml file). Before actual deployment, take time to carefully review the changes to be applied in the ChangesToDeploy.xml file. When you are satisfied, you can deploy your changes with the following command: "SqlDacRuntime/SqlPackage.exe" /Action:Publish /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml Et voilà ! Your dacpac file has been deployed to your database. I’ve been testing this on a SQL 2008 Server (not R2) but it should work on 2005, 2008 R2 and 2012 as well. Many thanks to Anuj Chaudhary for his article on the subject: http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html
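    The two SqlPackage invocations from the walkthrough, repeated here one per line exactly as given above (Our.Database.dacpac and Release.publish.xml are the example file names used in the post; substitute your own):

        REM 1. Produce an XML report of what would change in the target database
        "SqlDacRuntime/SqlPackage.exe" /Action:DeployReport /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml /OutputPath:./ChangesToDeploy.xml

        REM 2. After reviewing ChangesToDeploy.xml, publish the dacpac to the target database
        "SqlDacRuntime/SqlPackage.exe" /Action:Publish /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml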

    Read the article

  • 2 Days to Go before MySQL Connect - Focus on Hands-On Labs

    - by Bertrand Matthelié
    The Oracle MySQL team is very eager to meet all MySQL community members, users, customers and partners gathering this weekend in San Francisco for MySQL Connect! Eight different Hands-On Labs will give you the opportunity to get hands-on experience on the following topics. All taking place in Plaza Room A. Saturday: 11.30 am Developing Applications with MySQL and Java—Mark Matthews, Oracle 1.00 pm (2.5 hours long) Getting Started with MySQL—Gillian Gunson and Alfredo Kojima, Oracle 4.00 pm Getting Started with MySQL Cluster—Santo Leto, Oracle 5.30 pm Improving Performance with the MySQL Performance Schema—Jesper Krogh, Oracle Sunday: 10.15 am (2.5 hours long) Focus on MySQL Replication—Sven Sandberg and Luis Soares, Oracle 1.15 pm MySQL Utilities—Charles Bell, Oracle 2.45 pm Performance Tuning with MySQL Enterprise Monitor—Mark Matthews, Oracle 4.15 pm MySQL Security: Authentication and Audit—Jonathon Coombes, Oracle Not registered yet? You can still save US$ 300 off the on-site fee! Attending Oracle OpenWorld or JavaOne? Add MySQL Connect to your registration for only US$100! Register Now!

    Read the article

  • Cooking with Wessty: HTML 5 and Visual Studio

    - by David Wesst
    The hardest part about using a new technology, such as HTML 5, is getting to know what features are available and the syntax. One way to learn how to use new technologies is to adapt your current development to help you use the technology in the comfort of your own development environment. For .NET Web Developers, that environment is usually Visual Studio 2010. This technique intends to show you how to get HTML 5 Intellisense working in your current version of Visual Studio 2008 or 2010, making it easier for you to start using HTML 5 features in your current .NET web development projects. Quick Note According to the Visual Web Developer team at Microsoft, the Visual Studio 2010 SP1 beta has support for both HTML 5 and CSS 3. If you are willing to try out the bleeding edge update from Microsoft, then you won’t need this technique. --- Ingredients Visual Studio 2008 or 2010 Your favourite HTML 5 compliant browser (e.g. Internet Explorer 9) Administrator privileges, or the ability to install Visual Studio Extensions in your development environment. Directions Download the HTML 5 Intellisense for Visual Studio 2008 and 2010 extension from the Visual Studio Extension Gallery. Install it. Open Visual Studio. Open up a web file, such as an HTML or ASPX file. The HTML Source Editing toolbar should have appeared. (Optional) If it did not appear, you can activate it through the main menu by selecting View, then Toolbars, and then select HTML Source Editing if it does not have a checkbox beside it. (NOTE: If there is a checkbox, then the toolbar is enabled) In the HTML Source Editing toolbar, open up the validation schema drop box, and select HTML 5. Et voila! You now have HTML 5 intellisense enabled to help you get started in adding HTML 5 awesomeness to your web sites and web applications. Optional – Setting HTML 5 Validation Options At this point, you may want to select how Visual Studio shows validation errors. You can do that in the Options Menu. To get to the Options Menu… In the main menu select Tools, then Options. In the Options window, select and expand Text Editor, then HTML, followed by selecting Validation. Resources HTML 5 Intellisense for Visual Studio 2008 and 2010 extension Visual Studio Extension Gallery Visual Studio 2010 SP1 Beta This post also appears at http://david.wes.st

    Read the article

  • Why You Should Attend MySQL Connect, and Register Now

    - by Bertrand Matthelié
    MySQL Connect is taking place on September 29 and 30 in San Francisco. The early bird discount enabling you to save US$ 500 is only running for a few more days, until July 13. Are you still wondering if you should sign up? Here are 10 reasons why you definitely should: Learn from other companies how they tackled similar challenges to the ones you’re facing. Find out what they learned along the way, and how you can save time, money and a lot of troubles by avoiding repeating the same mistakes and applying the best practices they’ve developed. You’ll get the chance to hear from organizations including PayPal, Verizon, Twitter, Facebook, Ticketmaster, Ning, Mozilla, CERN, Yahoo! and more! Don’t miss this unique opportunity to meet the engineers developing and supporting the MySQL products in a single location. You’ll be able to ask them all your questions, which can represent a huge time and money saver. Acquire detailed knowledge about InnoDB, the MySQL Optimizer, High Availability strategies, improving performance and scalability, enhancing security and numerous other topics. You’ll hear it straight "from the horse’s mouth" as well as from other MySQL experts in the ecosystem. Get a better understanding about Oracle’s MySQL strategy and about the MySQL roadmap, so you can better plan where to use the MySQL database and MySQL Cluster for your next web, cloud-based and other applications. Get hands-on experience about improving performance with the MySQL Performance Schema, about using MySQL Utilities, MySQL Cluster and a lot more with eight different Hands-On Labs. Express your ideas, engage into discussions and help influence the MySQL roadmap during Birds-of-a-feather sessions about replication, backup, query optimizations and other topics. Meet partners and learn about third party tools that could be useful in your architecture. Immerse yourself into the MySQL universe and hang out with MySQL experts for two days. The discussions as well as the relationships you will create can be priceless and help you execute on your next projects in a much better and faster way. Register Now to save US$500 by taking advantage of the Early bird discount running until July 13. We’ll have parallel tracks so you should consider sending a few team members to make the most of the event. Are you attending or planning to attend Oracle OpenWorld or JavaOne? You can add MySQL Connect to your registration for only US$100! Finally, it’s always a lot of fun to attend a MySQL conference. The passion and the energy are contagious…and you’ll likely get plenty of new ideas. You will find all information about the program in the MySQL Connect Content Catalog. We look forward to seeing you there! You can also read interviews with Tomas Ulin and Ronald Bradford about MySQL Connect. Sponsorship and exhibit opportunities are still available for the conference. You will find more information here.

    Read the article

  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
    I just uploaded the RC of IdentityServer to Codeplex. This release is feature complete and if I don’t get any bug reports this is also pretty much the final V1. Changes from B1 The configuration data access is now based on EF 4.1 code first. This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios). I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) – so I made them optional. If you want to use them go to web.config and uncomment the new provider. The relying party registration entries now have added fields to add extra data that you want to couple with the RP. One use case could be to give the UI a hint about how the login experience should look per RP. This allows you to have a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string. WS-Federation single sign out is now conforming to the spec. Certificate based endpoint identities for SSL endpoints are optional now. Added an initial configuration “wizard”. This sets up the signing certificate, issuer URI and site title on the first run. Installation This is still a “developer” release – that means it ships with source code that you have to build, etc. But from that point it should be a little more straightforward than it used to be: Make sure SSL is configured correctly for IIS Map the WebSite directory to a vdir in IIS Run the web site. This should bring up the initial configuration Make sure the worker process account has access to the signing certificate private key Make sure all your users are in the “IdentityServerUsers” role in your role store. Administrators need the “IdentityServerAdministrators” role That should be it. Proper documentation will hopefully be available soon (any volunteers?). Please provide feedback! thanks!

    Read the article

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero downtime deployments so I can deploy less during off hours and more during "slower" hours - or anytime, in theory. My current setup, somewhat simplified: Web Server A (.NET App) Web Server B (.NET App) Database Server (SQL Server) My current deployment process: "Stop" the sites on both Web Server A and B Upgrade the database schema for the version of the app being deployed Update Web Server A Update Web Server B Bring everything back online Current Problem This leads to a small amount of downtime each month - about 30 mins. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also - there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts. Leveraging The Load Balancer I'd love to be able to upgrade one Web Server at a time. Take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of "stuck". Possible Solution A current solution I am considering is adopting the following rules: Never delete a database table. Never delete a database column. Never rename a database column. Never reorder a column. Every stored procedure must be versioned. Meaning - 'spFindAllThings' will become 'spFindAllThings_2' when it is edited. Then it becomes 'spFindAllThings_3' when edited again. Same rule applies to views. While this seems a bit extreme - I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures - and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app is deployed for a while, but it just feels dirty. Also - it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake. Finally - My Question Is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
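    A minimal T-SQL sketch of the stored procedure versioning rule above. The procedure, table and column names are hypothetical (not from the question); the point is that the old procedure is left untouched for servers still running the previous application version, while the edited logic ships under a new suffixed name:

        -- v1 stays in place; servers still on the old app version keep calling it during the rolling upgrade
        -- CREATE PROCEDURE dbo.spFindAllThings AS SELECT ThingId, ThingName FROM dbo.Things;

        -- v2 is added alongside it for the new app version; columns are only ever added, never dropped,
        -- renamed or reordered, so both procedures keep working against the same schema
        CREATE PROCEDURE dbo.spFindAllThings_2
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT ThingId, ThingName, NewlyAddedStatus
            FROM dbo.Things;
        END
        GO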

    Read the article

  • When to use HTTP status code 404 in an API

    - by Sybiam
    I am working on a project and, after arguing with people at work for more than an hour, I decided to see what people on stack-exchange might say. We're writing an API for a system; there is a query that should return a tree of Organization or a tree of Goals. The tree of Organization is the organization in which the user is present. In other words, this tree should always exist. In the organization, a tree of Goals should always be present. (That's where the argument started.) In the case where the tree doesn't exist, my co-worker decided that it would be right to answer with status code 200. And then started asking me to fix my code because the application was falling apart when there was no tree. I'll try to spare flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing is found. It sounds totally fair to mark the response as a 404. And then war started and I got the message that I didn't understand the HTTP status code schema. So I'm here and asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200". I believe that it's wrong since the tree should always be present. If we found nothing and we are expecting something, it should be a 404. More info: I forgot to add the URLs that are fetched. Organizations /OrgTree/Get Goals /GoalTree/GetByDate?versionDate=... /GoalTree/GetById?versionId=... My mistake, both parameters are required. If any versionDate that can be parsed to a date is provided, it will return the closest revision. If you enter something in the past, it will return the first revision. If queried by Id with an id that doesn't exist, I suspect it's going to return an empty response with 200. Extra Also, I believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present. Also, I got linked this (there was a similar one but I can't find it): http://viswaug.files.wordpress.com/2008/11/http-headers-status1.png

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you’ve answered “Yes” to any of the above questions then you’ve suffered from ‘drift’. Database drift is where the state of a database (schema, particularly) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly-understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it’s time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update. A big issue with drift is that it can be hard to spot and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you’ll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process? Red Gate’s Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We’ve started to get a really good idea of how big a problem it can be and what database professionals need to know and do, in order to deal with it. It’s fair to say, we’re pretty excited at the prospect of creating a tool that will really help, and we’ve got some great feedback on our initial ideas. We’re now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you’d like help solving and are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list

    Read the article

  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding Database Projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Fortunately, VS2012 does that for you and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even if I had converted them in the previous session and saved the solution. Converting the project(s) When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: “Functional changes required Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1.” If you accept, your project is converted. And it should compile with no errors right away, except if you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion that references to dbschema files could be added to VSDB projects. Cleaning up You will notice that your project file is now a sqlproj file but the old dbproj is still here. In fact at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If like me you plan on using VS2012 exclusively, you can get rid of the following files which are still on your disk and in your source control: the dbproj and dbproj.vspscc files Properties/Database.sqlcmdvars Properties/Database.sqldeployment Properties/Database.sqlpermissions Properties/Database.sqlsettings You might wonder where the information which used to be in the Properties files is now stored. Permissions: a Permissions.sql was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes). SQLVars: they are defined inside the publish.xml files Deployment: they are also in the publish.xml files Settings: I was unable to find where those are now. I suppose they are not defined anymore But Visual Studio still says my database projects should be converted! I had this error upon closing and then re-opening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files). More For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx

    Read the article

  • Filtering option list values based on security in UCM

    - by kyle.hatlestad
    Fellow UCM blog writer John Sim recently posted a comment asking about filtering values based on the user's security. I had never dug into that detail before, but thought I would take a look. It ended up being trickier than I originally thought and required a bit of insider knowledge, so I thought I would share. The first step is to create the option list table in Configuration Manager. You want to define the column for the option list value and any other columns desired. You then want to have a column which will store the security attribute to apply to the option list value. In this example, we'll name the column 'dGroupName'. The next step is to create a View based on the new table. For the Internal and Visible column, you can select the option list column name. Then click on the Security tab, uncheck the 'Publish view data' checkbox and select the 'Use standard document security' radio button. Click on the 'Edit Values...' button and add the values for the option list. In the dGroupName field, enter the Security Group (or Account if you use Accounts for security) to apply to that value. Create the custom metadata field and apply the View just created. The next step requires file system access to the server. Open the file [ucm directory]\data\schema\views\[view name].hda in a text editor. Below the line '@Properties LocalData', add the line: schSecurityImplementorColumnMap=dGroupName:dSecurityGroup The 'dGroupName' value designates the column in the table which stores the security value. 'dSecurityGroup' indicates the type of security to check against. It would be 'dDocAccount' if using Accounts. Save the file and restart UCM. Now when a user goes to the check-in page, they will only see the options for which they have read and write privileges to the associated Security Group. And on the Search page, they will see the options for which they have just read access. One thing to note is if a value that a user normally can't view on Check-in or Search is applied to a document, but the document is viewable by the user, the user will be able to see the value on the Content Information screen.
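    To make the placement concrete, here is the relevant fragment of the [ucm directory]\data\schema\views\[view name].hda file after the edit described above; only the second line is new, the '@Properties LocalData' line already exists in the file:

        @Properties LocalData
        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup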

    Read the article

  • Java Spotlight Episode 111: Bruno Souza @brjavaman and Fabiane Nardon @fabianenardon on StoryTroop @storytroop

    - by Roger Brinkley
    Interview with Bruno Souza and Fabiane Nardon on StoryTroop. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News End of Public Updates for JDK 6 Bean Validation 1.1 public review approved Two key JSRs accepted in time for JavaEE7 Public JCP EC meeting audio and materials posted Devoxx UK and Devoxx France CFP open JPA 2.1 Schema Generation WebSocket, Java EE 7, and GlassFish Events Dec 3-5, jDays, Göteborg, Sweden Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil Dec 14-15, IndicThreads, Pune, India JCP Spec Lead Call December on Developing a TCK JCP EC Face to Face Meeting, January 15-16, West Coast USA Feature Interview Bruno Souza is a Java Developer and Open Source Evangelist at Summa Technologies, and a Cloud Expert at ToolsCloud. Nurturing developer communities is a personal passion, and Bruno worked actively with Java, NetBeans, Open Solaris, OFBiz, and many other open source communities. As founder and coordinator of SouJava (The Java Users Society), one of the world's largest Java User Groups, Bruno led the expansion of the Java movement in Brazil. Founder of the Worldwide Java User Groups Community, Bruno helped the creation and organization of hundreds of JUGs worldwide. A Java Developer since the early days, Bruno participated in some of the largest Java projects in Brazil. Fabiane Nardon is a computer scientist who is passionate about creating software that will positively change the world we live in. She was the architect of the Brazilian Healthcare Information System, considered the largest JavaEE application in the world and winner of the 2005 Duke's Choice Award. She led several communities, including the JavaTools Community at java.net, where 800+ open source projects were born. She is a frequent speaker at conferences in Brazil and abroad, including JavaOne, OSCON, Jfokus, JustJava and more. She’s also the author of several technical articles and a member of the program committee of several conferences such as JavaOne, OSCON, TDC. She was chosen a Java Champion by Sun Microsystems in recognition of her contribution to the Java ecosystem. Currently, she works as a tools expert at ToolsCloud and in companies she co-founded, where she is helping to shape new disruptive Internet based services. StoryTroop is a space where we combine multiple perspectives about a story. This creates an understanding of that story like never seen before. Pieces of a story are organized in time and space and anyone can add a different perspective. What’s Cool Geek Bike Ride at JavaOne LAD Devoxx UK (Mar 26, 27) and FR (Mar 27 - 29) CFP jFokus schedule is firming up Nashorn Blog 1,500 @JavaSpotlight Twitter followers

    Read the article

  • SQL SERVER – Resetting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than the motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption. I almost got a headache after around 20 email exchanges. I am sure you can read it and feel my pain. DBA: "I deleted all of the data from my database and now it contains table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they actually were before I deleted the data." Pinal: "How did you delete the data?" DBA: "Running DELETE in a loop." Pinal: "What was the need of doing that?" DBA: "It was my development server and I needed to repopulate the database." Pinal: "Oh, so why did you not use TRUNCATE, which would have reset the identity of your table to the original value when the data got deleted? This will work only if you want your database to reset to the original value. If you want to set any other value this may not work." DBA: (silence for 2 days) DBA: "I did not realize it. Meanwhile I regenerated every table's schema and dropped the table and re-created it." Pinal: "Oh no, that would be an extremely long and incorrect way. Very bad solution." DBA: "I understand. Should I just take a backup of the database before I insert the data, and when I need to, I can use the original backup to restore the database? This way I will have identity beginning with 1." Pinal: "This is going totally downhill. It is wrong to do so on multiple levels. Did you even read my earlier email about TRUNCATE?" DBA: "Yeah. I found it in the spam folder." Pinal: (I decided to stay silent) DBA: (After 2 days) "Can you provide me a script to reseed identity for all of my tables to value 1 without asking further questions?" Pinal: USE DATABASE; EXEC sp_MSForEachTable ' IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1 DBCC CHECKIDENT (''?'', RESEED, 1)' GO Our conversation ended here. If you have directly jumped to this statement, I encourage you to read the conversation one time. There is a difference between reseeding the identity value to 1 and reseeding it to the original value – I will write another blog post on this subject in the future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
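    The script from the conversation, laid out as a block. 'DATABASE' is a placeholder for your own database name; note that the TRUNCATE alternative mentioned earlier in the exchange both empties a table and resets its identity back to its original seed value:

        -- Formatted version of the script quoted above; replace DATABASE with your database name
        USE DATABASE;
        EXEC sp_MSForEachTable '
        IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
            DBCC CHECKIDENT (''?'', RESEED, 1)'
        GO

        -- The TRUNCATE option from the conversation (dbo.SomeTable is a hypothetical table name):
        -- TRUNCATE TABLE dbo.SomeTable;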

    Read the article

  • ArchBeat Top 10 for November 18-24, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook page for the week of November 18-24, 2012. One-Stop Shop for over 200 On-Demand Oracle Webcasts Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content. Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema An extensive, detailed technical post from Oracle ACE Director Lucas Jellema. Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release. Fault Handling and Prevention - Part 1 | Guido Schmutz and Ronald van Luttikhuizen In this technical article, part one of a four part series, Oracle ACE Directors Guido Schmutz and Ronald van Luttikhuizen guide you through an introduction to fault handling in a service-oriented environment using Oracle SOA Suite and Oracle Service Bus. Oracle BPM Process Accelerators and process excellence | Andrew Richards "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed; engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario." Videos: Getting Started with Java Embedded | The Java Source Interested in Java Embedded? You'll want to check out these videos provided Tori Weildt, including interviews with Oracle's James Allen and Kevin Smith, recorded at ARM TechCon. JPA SQL and Fetching tuning ( EclipseLink ) | Edwin Biemond Oracle ACE Edwin Biemond's post illustrates how to "use the department and employee entity of the HR Oracle demo schema to explain the JPA options you have to control the SQL statements and the JPA relation Fetching." Devoxx 2012 Trip Report - clouds and sunshine | Markus Eisele Oracle ACE Director Markus Eisele shares an extensive and entertaining account of his experience at Devoxx 2012. Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions. Java Specification Requests in Numbers | Markus Eisele Oracle ACE Director Markus Eisele shares some interesting data culled from the Java Community Process site. Thought for the Day "You can't have great software without a great team, and most software teams behave like dysfunctional families." — Jim McCarthy Source: SoftwareQuotes.com

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem. My schema is deeply relational, and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this for example: def summary(self, startTime=None, endTime=None): # ... logic to assign a proper start and end time # if none was provided, probably using datetime.now() objects = self.related_model_set.manager_method.filter(...) return sum(object.key_method(startTime, endTime) for object in objects) How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be: given some mocked behavior of key_method on its arguments, does summary correctly filter/aggregate to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test. The toughest nuts to crack seem to be the functions like this, that sit on a few layers of models and lower-level functions and are very dependent on time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing.

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries that they participate in. This process involves making corporate decisions regarding which markets to provide goods and services in, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value. According to InformationWeek.com, Google has a distinctive technology advantage over its competitors such as Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google’s programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google’s competitors having to spend up to four times as much just to keep up. In addition to Google’s technological advantage, they have also developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility of the technology department to be shared amongst multiple senior level engineers and removes the need for a singular department head to oversee the activities of the department. This is a departure from the standard management style. Typically a department head like a CIO or CTO would oversee the department’s global initiatives and business functionality. This would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators. It goes without saying that an IT professional’s responsibilities would be directed by Google’s technological advantage and management strategy, simply because they work within the department and would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established and driven by new and emerging technology, all other departments would be directly impacted by the technology department. In fact, they would have to cater to the technology department since it is a huge driving force in the success of Google. Reference: http://www.smbtn.com/smallbusinessdictionary/#b http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single programID on the SAP system. Here we address the same requirement within the OSB framework.

    Project Build - Design Time

    The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1

    We first create a placeholder project in OSB with a proper directory structure, so that we can export the WSDL, XSD, and JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. To set up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA inbound event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that schema validation has been turned off. As a result, at runtime the adapter can use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat this step for each IDOC type.

    OSB Design Time - Part 2

    Create two simple XML-based Business Services that write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

        fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element. With the routing function in place, build the routing table with two branches that route the IDOCs to the appropriate Business Service, writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time

    After deployment and activation, the SAP adapter waits to receive multiple types of IDOCs sent from the SAP system over a single channel. Upon receipt of the IDOCs, the OSB project routes them appropriately, saving the corresponding XML payloads for different IDOC types in different directories.
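
    To make the routing decision concrete, here is a small illustrative sketch, written in Python and run outside the OSB runtime, of what the routing expression computes: the local name of the payload's root element, which the routing table then branches on. The MATMAS05 type and the tiny payloads are examples only; the actual IDOC documents sent by SAP are far richer.

        # Illustrative only: mimics, outside OSB, what the XQuery expression
        # fn:local-name-from-QName(fn:node-name($body/*[1])) returns -- the local
        # name of the payload's root element -- and how a routing table branches on it.
        import xml.etree.ElementTree as ET

        ROUTES = {
            "DEBMAS06": "SAP_DEBMAS_File",   # customer master IDOC -> business service
            "MATMAS05": "SAP_MATMAS_File",   # material master IDOC -> business service
        }

        def route_for(payload: str) -> str:
            root = ET.fromstring(payload)
            local_name = root.tag.split("}")[-1]   # drop any namespace prefix, keep the local name
            return ROUTES.get(local_name, "unrouted")

        print(route_for("<DEBMAS06><IDOC/></DEBMAS06>"))   # -> SAP_DEBMAS_File
        print(route_for("<MATMAS05><IDOC/></MATMAS05>"))   # -> SAP_MATMAS_File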

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now, with more than 30 companies playing in the Backend-as-a-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data as well as legacy PC data through established connectors; SAP, for example, provides a connector for transacting with its legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM, or Oracle, but the new breed is extensible, allowing channel partners and consultants to build robust integration applications that can consume whatever data sources the client wants to expose.

    I happen to be close to finishing a cloud-based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a cloud service which the client can configure with a VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc.). I can also provide a framework that allows people to build their own connectors and use some kind of schema to hook those connectors into the proxy. The end result is a public-facing web service that could securely be consumed by applications to show data through HTML5 on any device.

    My gut says this isn't as hard as it sounds. Almost all of the 30+ companies (with more popping up almost weekly) have come into existence in the last 18 months or so, which tells me that either the root technology or the skill set to create it is in abundance right now. Where should I start? Are there open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology.

    The Register: WTF is BaaS
    One Minute Video from Kony on their BaaS
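
    One way to picture the "schema to hook connectors into the proxy" is a small plug-in registry with a fixed connector contract. The sketch below is a hypothetical illustration in Python, not an existing product's API; every class, method, and resource name is a placeholder.

        # Hypothetical sketch of an extensible connector registry for a data proxy.
        from abc import ABC, abstractmethod
        from typing import Any, Dict, List, Type

        class Connector(ABC):
            """Contract every back-end connector (SAP, JDBC, REST, ...) must fulfil."""

            @abstractmethod
            def connect(self, credentials: Dict[str, str]) -> None: ...

            @abstractmethod
            def query(self, resource: str, params: Dict[str, Any]) -> List[Dict[str, Any]]: ...

        class ConnectorRegistry:
            """Lets partners register new connectors without touching the proxy core."""

            def __init__(self) -> None:
                self._connectors: Dict[str, Type[Connector]] = {}

            def register(self, name: str, connector_cls: Type[Connector]) -> None:
                self._connectors[name] = connector_cls

            def create(self, name: str) -> Connector:
                return self._connectors[name]()

        # Example: a trivial in-memory connector standing in for a real back end.
        class InMemoryConnector(Connector):
            def connect(self, credentials: Dict[str, str]) -> None:
                self._data = {"customers": [{"id": 1, "name": "Acme"}]}

            def query(self, resource: str, params: Dict[str, Any]) -> List[Dict[str, Any]]:
                return self._data.get(resource, [])

        registry = ConnectorRegistry()
        registry.register("memory", InMemoryConnector)
        conn = registry.create("memory")
        conn.connect({})
        print(conn.query("customers", {}))   # the proxy layer would serialize this to JSON

    The public-facing web service then becomes a thin layer that authenticates the caller, looks up the requested connector in the registry, and serializes the result to JSON or SOAP.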

    Read the article

  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications has used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular in order to restrict both access to certain pages and functionality present within a given page.

    Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft, but rather because it was simply not designed for implementing both role-based security and any inherent ACL you may define within those roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.

    The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and a SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the ASP.NET membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table. Since I have a separate object to store extended user information, I simply created an ACL object and exposed it as a property of my User object.

    This is all well and good, but it does not help with the SiteMapProvider and restricting access at the page level based on the user's ACL. The straightforward answer would be to write code within the databound event for the menu that checks the page title and has hardcoded logic dictating which permissions a user must have to view the page. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page, you'd need to do a build and go through your normal deployment process... ugh!!!

    An alternative method, albeit not perfect, is to use the resourceKey property on the SiteMapNodes in the SiteMap file to hold the name of the permission required to view the page. Within the databound event for your menu, you iterate the SiteMapNodes in the menu's SiteMapProvider looking for a match at the page level based on title. When a match is detected, you switch on the SiteMapNode's resourceKey (the name of the required ACL permission). The case for that resourceKey ensures the user's ACL permission is turned on, and voila!!!

    This is notably not perfect in that it uses the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.

    Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event. This application uses the Telerik Menu control.
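
    The listings that follow are illustrative reconstructions: the page URLs, titles, permission names, and the CustomUser/Acl types are hypothetical placeholders; only the pattern (a resourceKey naming the required ACL permission, checked in ItemDataBound) reflects the approach described above. First, a sample Web.sitemap:

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
          <siteMapNode url="~/Default.aspx" title="Home">
            <!-- resourceKey doubles as the name of the ACL permission required to view the page -->
            <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
            <siteMapNode url="~/Admin/Users.aspx" title="User Administration" resourceKey="CanManageUsers" />
          </siteMapNode>
        </siteMap>

    And a sketch of the ItemDataBound handler, assuming a Telerik RadMenu bound to the site map and a session-held user object exposing the ACL:

        // Hypothetical sketch -- CustomUser, its Acl property, and the permission
        // names are placeholders for the application-specific types described above.
        protected void RadMenu1_ItemDataBound(object sender, Telerik.Web.UI.RadMenuEventArgs e)
        {
            // Find the site map node whose title matches this menu item.
            SiteMapNode match = FindNodeByTitle(SiteMap.Provider.RootNode, e.Item.Text);
            if (match == null || string.IsNullOrEmpty(match.ResourceKey))
                return; // no permission required for this page

            CustomUser user = (CustomUser)Session["CurrentUser"];

            switch (match.ResourceKey)
            {
                case "CanViewReports":
                    e.Item.Visible = user.Acl.CanViewReports;
                    break;
                case "CanManageUsers":
                    e.Item.Visible = user.Acl.CanManageUsers;
                    break;
                default:
                    e.Item.Visible = false; // unknown permission: hide by default
                    break;
            }
        }

        private SiteMapNode FindNodeByTitle(SiteMapNode node, string title)
        {
            if (string.Equals(node.Title, title, StringComparison.OrdinalIgnoreCase))
                return node;
            foreach (SiteMapNode child in node.ChildNodes)
            {
                SiteMapNode found = FindNodeByTitle(child, title);
                if (found != null)
                    return found;
            }
            return null;
        }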

    Read the article
