Search Results

Search found 14448 results on 578 pages for 'schema org'.


  • How to connect a FreeBSD jail to the network

    - by jgtumusiime
    So recently I successfully installed and configured a FreeBSD jail, and I would like to install software within my jail, but I cannot connect to the network. I'm trying to set up an Apache+PHP+MySQL installation within the jail and have the web server accessible to users. Here is my rc.conf for the jail:

      jail_enable="YES"                           # Set to NO to disable starting of any jails
      jail_list="mambo2"                          # Space separated list of names of jails
      jail_mambo2_rootdir="/usr/jails/j01"        # jail's root directory
      jail_mambo2_hostname="mambo2.ug"            # jail's hostname
      jail_mambo2_ip="192.168.100.174"            # jail's IP address
      jail_mambo2_devfs_enable="YES"              # mount devfs in the jail
      jail_mambo2_devfs_ruleset="mambo2_ruleset"  # devfs ruleset to apply to jail

    Here is my jail's ifconfig output:

      mambo2# ifconfig
      rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=8<VLAN_MTU>
              ether 00:c1:28:00:48:db
              media: Ethernet autoselect (100baseTX <full-duplex>)
              status: active
      plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500
      lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
      mambo2#

    It does not show the IP address I configured within /etc/rc.conf. But when I list the running jails, it shows the right IP address:

      [root@mambo /usr/home/jtumusiime]# jls
         JID  IP Address       Hostname   Path
           5  192.168.100.174  mambo2.ug  /usr/jails/j01

    I also created an /etc/resolv.conf for nameservers. This did not exist before, so I'm not quite sure if it is necessary:

      mambo2# cat /etc/resolv.conf
      nameserver 192.168.100.251
      nameserver 8.8.8.8
      mambo2#

    My host has 4 IP addresses, 3 public and one private: 192.168.100.173. I tried creating a jail using ezjail and this did not work out either:

      [root@mambo /usr/src]# ezjail-admin update -p -i
      Error: Cannot find your copy of the FreeBSD source tree in .
      Consider using 'ezjail-admin install' to create the base jail from an ftp server.
      [root@mambo /usr/src]#

    I have an updated copy of the FreeBSD 7.1 source tree from SVN in /usr/src/:

      [root@mambo /usr/src]# svn info
      Path: .
      URL: http://svn.freebsd.org/base/release/7.1.0
      Repository Root: http://svn.freebsd.org/base
      Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f
      Revision: 243371
      Node Kind: directory
      Schedule: normal
      Last Changed Author: kensmith
      Last Changed Rev: 186660
      Last Changed Date: 2009-01-01 01:57:14 +0300 (Thu, 01 Jan 2009)
      [root@mambo /usr/src]#

    and I did #make buildworld while building the first jail, i.e. mambo2. Here is an excerpt of the output of ezjail-admin install:

      ...
      221 Goodbye.
      Trying 193.162.146.4...
      Connected to ftp.freebsd.org.
      220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready.
      331 Guest login ok, send your email address as password.
      230 Guest login ok, access restrictions apply.
      Remote system type is UNIX.
      Using binary mode to transfer files.
      200 Type set to I.
      550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory.
      221 Goodbye.
      Could not fetch base from ftp.freebsd.org.
      Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org
      does not provide that release build.
      Use the -r option to specify an existing release or the -h option to specify an
      alternative ftp server.
      Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to
      provide the following builds:
      Trying 193.162.146.4...
      total 10
      drwxrwxr-x  13 1006  1006   512 Feb 20  2011 8.2-RELEASE
      drwxrwxr-x  13 1006  1006   512 Apr 10  2012 8.3-RELEASE
      lrwxr-xr-x   1 1006  1006    16 Jan  7  2012 9.0-RELEASE -> i386/9.0-RELEASE
      drwxrwxr-x   7 1006  1006  1024 Feb 19  2012 ISO-IMAGES
      -rw-rw-r--   1 1006  1006   637 Nov 23  2005 README.TXT
      drwxrwxr-x   5 1006  1006   512 Nov  2 02:59 i386

    I do not want to upgrade my FreeBSD installation. I have googled around, but all in vain. Thank you
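
    The usual cause of this symptom (jls shows the jail's IP but ifconfig inside the jail lists nothing) is that the address was never brought up as an alias on the host's interface. A sketch of the two common fixes, assuming the interface is rl0 as in the output above, and assuming the 7.x rc.d/jail script's jail_<name>_interface knob (verify the exact spelling in /etc/rc.d/jail):

      # host /etc/rc.conf, option A: let rc.d/jail create the alias itself
      jail_mambo2_interface="rl0"

      # host /etc/rc.conf, option B: create the alias explicitly
      ifconfig_rl0_alias0="inet 192.168.100.174 netmask 255.255.255.255"

      # apply without a reboot, then restart the jail:
      # ifconfig rl0 alias 192.168.100.174 netmask 255.255.255.255
      # /etc/rc.d/jail restart mambo2

    With the alias in place, ifconfig inside the jail should show 192.168.100.174 on rl0, and the /etc/resolv.conf you created (yes, the jail needs its own) should let package fetches resolve names.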

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of the top things you need to know, then dig into a lot more detail.

    Summary:
    - SimpleMembership has been designed as a replacement for the previous ASP.NET Role and Membership provider system
    - SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs
    - SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership
    - The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders
    - You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController
    - The existing ASP.NET Role and Membership provider system remains supported, as it is part of the ASP.NET core
    - ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership
    - The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership

    The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools.

    SimpleMembership: The future of membership for ASP.NET

    The ASP.NET Membership system was introduced with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you to override some specifics like backing storage) and the ability to store additional profile information (although the additional profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work).

    The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages:

      With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really been focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created? Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles.

    Part of simplifying membership involved fixing some common problems with ASP.NET Membership.

    Problems with ASP.NET Membership

    ASP.NET Membership was very obviously designed around a set of assumptions:
    - Users and user information would most likely be stored in a full SQL Server database or in Active Directory
    - User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider

    Some problems fall out of these assumptions.

    Requires full SQL Server for default cases

    The default, and most fully featured, ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, rely on SQL Server cache dependencies, and depend on agents for cleanup and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc.

    Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem.

    Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those...

    Custom Membership Providers have to work with a SQL-Server-centric API

    If you want to work with another database or other membership storage system, you need to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun, since there are a lot of methods that just don't apply.

    Designed around a specific view of users, roles and profiles

    The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns:
    - In OAuth and OpenID, the user doesn't have a password
    - Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles
    - For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob

    What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted, and the membership system worked with your model - not the other way around.

    Requires specific schema, overflow in blob columns

    I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema.

    SimpleMembership as a better membership system

    As you might have guessed, SimpleMembership was designed to address the above problems.

    Works with your schema

    As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema:

      All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance username doesn’t have to be an alias, it could be an email column; you just have to tell SimpleMembership to treat that as the “username” used to log in.

    Matthew's example shows a very simple user table named Users (it could be named anything) with a UserID and Username column, plus a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner:

      WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true);

    No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username.

    Broaden database support to the whole SQL Server family

    While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications.

    Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point.

    Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts?

    Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system.

    Easy to use with Entity Framework Code First

    The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data.

    Let's look at a practical example based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class.

      [Table("UserProfile")]
      public class UserProfile
      {
          [Key]
          [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
          public int UserId { get; set; }
          public string UserName { get; set; }
          public DateTime Birthday { get; set; }
      }

    Now if I want to access that information, I can just grab the account by username and read the value.

      var context = new UsersContext();
      var username = User.Identity.Name;
      var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username);
      var birthday = user.Birthday;

    So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control.

    How SimpleMembership integrates with ASP.NET Membership

    Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships):

    So SimpleMembershipProvider is an implementation of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider:

    The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.

    However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates.

    Membership in the ASP.NET MVC 4 project templates

    ASP.NET MVC 4 offers six project templates:
    - Empty - Really empty: just the assemblies, folder structure, and a tiny bit of basic configuration.
    - Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling).
    - Internet - This has both a Home and Account controller and associated views. The Account controller supports registration and login via either local accounts or OAuth / OpenID providers.
    - Intranet - Like the Internet template, but it's preconfigured for Windows Authentication.
    - Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites.
    - Web API - This is preconfigured for a service backend built on ASP.NET Web API.

    Out of these templates, only one (the Internet template) uses SimpleMembership.

    ASP.NET MVC 4 Basic template

    The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers. You can see that configuration in the Basic template's web.config:

      <profile defaultProvider="DefaultProfileProvider">
        <providers>
          <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
        </providers>
      </profile>
      <membership defaultProvider="DefaultMembershipProvider">
        <providers>
          <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
        </providers>
      </membership>
      <roleManager defaultProvider="DefaultRoleProvider">
        <providers>
          <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
        </providers>
      </roleManager>
      <sessionState mode="InProc" customProvider="DefaultSessionProvider">
        <providers>
          <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
        </providers>
      </sessionState>

    This means that it's business as usual for the Basic template as far as ASP.NET Membership is concerned.

    ASP.NET MVC 4 Internet template

    The Internet template has a few things set up to bootstrap SimpleMembership:
    - \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such
    - \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime)
    - \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity

    WebSecurity provides account management services for ASP.NET MVC (and Web Pages). WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider), but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider.

    Practical example:
    - Create a new ASP.NET MVC 4 application using the Internet application template
    - Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package
    - Run the application, click on Register, add a username and password, and click submit
    - You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider".

    This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks whether it can be cast to an ExtendedMembershipProvider before doing anything else. So, what do you do? Options:

    If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward.

    If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:
    - Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuth packages), etc.
    - Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes.
    - Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes.

    None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET MVC 4 application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption.

    Membership in the ASP.NET 4.5 project template

    ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a membership adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership.

    Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up.

    How does this fit in with Universal Providers (System.Web.Providers)?

    Just to summarize:
    - Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with a database backend in the SQL Server family other than full SQL Server. They don't require agents to handle expired session cleanup and other background tasks; they piggyback these tasks on other calls.
    - Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family.
    - Universal Providers do not work with SimpleMembership.
    - The Universal Providers packages include some web.config transforms which you would normally want when you're using them.

    What about the Web Site Administration Tool?

    Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership and is not compatible with SimpleMembership. There are two main options there:
    - Use the WebSecurity and OAuthWebSecurity APIs to manage the users and roles
    - Create a web admin using the above APIs

    Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even via direct database edits (in development, of course).
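
    To make the WebSecurity surface above concrete, here's a minimal sketch (mine, not from the template) of initializing SimpleMembership and registering a local account. The connection string, table, and column names are the Internet template's defaults - swap in your own schema:

      // Sketch only: WebMatrix.WebData against the template's UserProfile table.
      using System.Web.Security;
      using WebMatrix.WebData;

      public static class MembershipBootstrap
      {
          public static void EnsureInitialized()
          {
              if (!WebSecurity.Initialized)
              {
                  // connection string name, user table, id column, name column, auto-create
                  WebSecurity.InitializeDatabaseConnection(
                      "DefaultConnection", "UserProfile", "UserId", "UserName",
                      autoCreateTables: true);
              }
          }

          public static bool RegisterAndLogin(string userName, string password)
          {
              // Adds a row to your user table plus a webpages_Membership entry.
              WebSecurity.CreateUserAndAccount(userName, password);

              // Roles still flow through the standard Roles API (SimpleRoleProvider).
              if (!Roles.RoleExists("Member"))
                  Roles.CreateRole("Member");
              Roles.AddUserToRole(userName, "Member");

              return WebSecurity.Login(userName, password, persistCookie: false);
          }
      }

    Note how roles still go through the classic Roles API - that's the "SimpleMembership integrates with the previous membership system" point from the summary in action.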

  • Importing Multiple Schemas to a Model in Oracle SQL Developer Data Modeler

    - by thatjeffsmith
    Your physical data model might stretch across multiple Oracle schemas. Or maybe you just want a single diagram containing tables, views, etc. spanning more than a single user in the database. The process for importing a data dictionary is the same regardless of whether you want to suck in objects from one schema or from many. Let’s take a quick look at how to get started with a data dictionary import. I’m using Oracle SQL Developer in this example. The process is nearly identical in Oracle SQL Developer Data Modeler – the only difference being you’ll use the ‘File’ menu to get started versus the ‘File – Data Modeler’ menu in SQL Developer. Remember, the functionality is exactly the same whether you use SQL Developer or SQL Developer Data Modeler when it comes to the data modeling features – you’ll just have a cleaner user interface in SQL Developer Data Modeler.

    Importing a Data Dictionary to a Model

    You’ll want to open or create your model first. You can import objects to an existing or new model. The easiest way to get started is to simply open the ‘Browser’ under the View menu; the Browser allows you to navigate your open designs/models. You’ll see an ‘Untitled_1’ model by default. I’ve renamed mine to ‘hr_sh_scott_demo.’ Now go back to the File menu, expand the ‘Data Modeler’ section, and select ‘Import – Data Dictionary.’ This is a fancy way of saying, ‘suck objects out of the database into my model.’

    Connect! If you haven’t already defined a connection to the database you want to reverse engineer, you’ll need to do that now. I’m going to assume you already have that connection – so select it, and hit the ‘Next’ button.

    Select the Schema(s) to be Imported

    Select one or more schemas you want to import. The schemas selected on this page of the wizard will dictate the lists of tables, views, synonyms, and everything else you can choose from in the next wizard step. For brevity, I have selected ALL tables, views, and synonyms from 3 different schemas: HR, SCOTT, and SH. Once I hit the ‘Finish’ button in the wizard, SQL Developer will interrogate the database and add the objects to our model.

    The Big Model and the 3 Little Models

    I can now see ALL of the objects I just imported in the ‘hr_sh_scott_demo’ relational model, both in my design tree and in my relational diagram.

    Quick Tip: Oracle SQL Developer calls what most folks think of as a ‘Physical Model’ the ‘Relational Model.’ Same difference, mostly. In SQL Developer, a Physical Model allows you to define partitioning schemes and advanced storage parameters, and to add your PL/SQL code. You can have multiple physical models per relational model. For example, I might have a 4-node RAC in production that uses partitioning, but in test/dev only a single instance with no partitioning. I can have models for both of those physical implementations.

    Looking at the list of tables in my relational model: wouldn’t it be nice if I could segregate the objects based on their schema? Good news – you can! And it’s done by default. Several of you might already know where I’m going with this – SubViews. You can easily create a ‘SubView’ by selecting one or more objects in your model or diagram and adding them to a new SubView. SubViews are just mini-models; they contain a subset of objects from the main model. This is very handy when you want to break your model into smaller, more digestible parts. The model information is identical across the model and its subviews, so you don’t have to worry about making a change in one place and not having it propagate across your design. SubViews can be used as filters when you create reports and exports as well – so instead of generating a PDF for everything, just show me what’s in my ‘ABC’ SubView.

    But, I don’t want to do any work! Remember, I’m really lazy. More good news – it’s already done by default! The schemas are automatically used to create default SubViews.

    Auto-Navigate to the Object in the Diagram

    In the SubView tree node, right-click on the object you want to navigate to. You can ask to be taken to the main model view or to the SubView location. If you haven’t already opened the SubView in the diagram, it will be opened for you automatically. The SubView diagram only contains the objects from that SubView. Your SubView might still be pretty big – many dozens of objects – so don’t forget about the ‘Navigator’ either!

    In summary, use the ‘Import’ feature to add existing database objects to your model. If you import from multiple schemas, take advantage of the default schema-based SubViews to help you manage your models! Sometimes less is more!

  • Database Table Prefixes

    - by DoctorMick
    We're having a few discussions at work around the naming of our database tables. We're working on a large application with approximately 100 database tables (OK, so it isn't that large), most of which can be categorized into different functional areas, and we're trying to work out the best way of naming/organizing these within an Oracle database. The three current options are:

    - Create the different functional areas in separate schemas.
    - Create everything in the same schema but prefix the tables with the functional area.
    - Create everything in the same schema with no prefixes.

    We have various pros and cons around each one, but I'd be interested to hear everyone's opinions on what the best solution is.
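
    For illustration, a hedged sketch of what the first two options look like in Oracle DDL (all names invented):

      -- Option 1: one schema per functional area (in Oracle, a schema is a user)
      CREATE USER billing IDENTIFIED BY changeme;
      GRANT CREATE SESSION, CREATE TABLE TO billing;
      ALTER USER billing QUOTA UNLIMITED ON users;
      CREATE TABLE billing.invoice (
        invoice_id   NUMBER PRIMARY KEY,
        created_date DATE   NOT NULL
      );
      -- cross-area access can be smoothed over with synonyms or grants
      CREATE SYNONYM app.invoice FOR billing.invoice;

      -- Option 2: single shared schema, functional-area prefix on each table
      CREATE TABLE app.bil_invoice (
        invoice_id   NUMBER PRIMARY KEY,
        created_date DATE   NOT NULL
      );

    Option 1 buys schema-level grants and cleaner table names at the cost of cross-schema plumbing; option 2 keeps everything greppable in one namespace.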

  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32 bit) on an old (1998) computer. Everything's working fine until I try to start MongoDB:

      somekittens@DLserver01:~$ mongo
      MongoDB shell version: 2.2.2
      connecting to: test
      Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
      exception: connect failed

    Googling the error led me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get; I have not built from source). Mongo's log shows the following error:

      Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
      Thu Dec 13 18:36:32
      Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01
      Thu Dec 13 18:36:32 [initandlisten]
      Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
      Thu Dec 13 18:36:32 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
      Thu Dec 13 18:36:32 [initandlisten] **       with --journal, the limit is lower
      Thu Dec 13 18:36:32 [initandlisten]
      Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5
      Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
      Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
      Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
      Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"
      **************
      Unclean shutdown detected.
      Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
      *************
      Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
      Thu Dec 13 18:36:32 dbexit:
      Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets...
      Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog...
      Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets...
      Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator...
      Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files...
      Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished
      Thu Dec 13 18:36:32 dbexit: really exiting now

    Running through the recovery instructions led to the following adventure:

      somekittens@DLserver01:/var/log/mongodb$ mongod --repair
      Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
      [same 32-bit warning and version banner as above]
      Sun Dec 16 22:42:54 [initandlisten] options: { repair: true }
      Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296
      *********************************************************************
      ERROR: dbpath (/data/db/) does not exist.
      Create this directory or give existing directory in --dbpath.
      See http://dochub.mongodb.org/core/startingandstoppingmongo
      *********************************************************************
      , terminating
      Sun Dec 16 22:42:54 dbexit:
      [shutdown sequence as above]
      Sun Dec 16 22:42:54 dbexit: really exiting now

      somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data
      somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db
      somekittens@DLserver01:/var/log/mongodb$ mongod --repair
      Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
      [same 32-bit warning and version banner as above]
      Sun Dec 16 22:43:51 [initandlisten] options: { repair: true }
      Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
      Sun Dec 16 22:43:51 dbexit:
      [shutdown sequence as above, then]
      Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock...
      Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
      Sun Dec 16 22:43:51 dbexit: really exiting now

      somekittens@DLserver01:/var/log/mongodb$ service mongodb stop
      stop: Unknown instance:
      somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair
      Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
      [same 32-bit warning and version banner as above]
      Sun Dec 16 22:45:04 [initandlisten] options: { repair: true }
      Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal"
      Sun Dec 16 22:45:04 [initandlisten] finished checking dbs
      Sun Dec 16 22:45:04 dbexit:
      [shutdown sequence as above, then]
      Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock...
      Sun Dec 16 22:45:04 dbexit: really exiting now

    Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?
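
    For what it's worth, the repair runs above all targeted mongod's compiled-in default dbpath (/data/db) rather than the Ubuntu package's dbpath (/var/lib/mongodb, per the options line in the first log). A sketch of the sequence I'd try instead, assuming the stock mongodb package layout on 12.04:

      # stop the service if it's running (may print "Unknown instance")
      sudo service mongodb stop
      # clear the stale lock left by the unclean shutdown
      sudo rm /var/lib/mongodb/mongod.lock
      # repair the *service's* database as the mongodb user
      sudo -u mongodb mongod --repair --dbpath /var/lib/mongodb
      # fix up any root-owned files earlier sudo runs may have created
      sudo chown -R mongodb:mongodb /var/lib/mongodb
      sudo service mongodb start
      # confirm it is up before retrying the shell
      tail /var/log/mongodb/mongodb.log

    The hardware is probably not the culprit for a failure to start; 640MB is tight but workable within the 32-bit build's ~2GB data limit.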

  • Visual Studio IntelliSense for URL Rewrite

    - by OWScott
    Visual Studio doesn't have IntelliSense support for URL Rewrite by default. This isn't a showstopper, since it doesn't result in stop errors; however, it's nice to have full IntelliSense support and to get rid of the warnings for URL Rewrite rules. RuslanY has released a Visual Studio schema update for URL Rewrite 2.0 which is available as a free, quick download. The installation instructions (they are quick and easy) can be found here, which also includes the schema for URL Rewrite 1.1. The install takes effect immediately, without restarting Visual Studio.

    A side question commonly comes up: can you get URL Rewrite support for the Visual Studio Web Server (aka Cassini)? The answer is no. To get URL Rewrite support in your development environment, use IIS7. You can set your Visual Studio projects to use IIS7, so you can have full debug, F5, or Ctrl-F5 support for IIS.
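
    For example, once the schema update is installed, a rule like this hedged sample (host name invented) gets full element and attribute completion in web.config rather than warnings:

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Canonical host name" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^www\.example\.com$" negate="true" />
                </conditions>
                <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>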

  • JBoss AS 5: starts but can't connect (Windows, remote)

    - by Nuwan
    Hello. I installed JBoss 5.0.0.GA and it works fine on localhost, but I want to access it from a remote machine, so I bound my IP address to the server and started it. This is the command I used:

      run.bat -b 10.17.62.63

    The server then starts fine. This is the console log from starting the server:

      ===============================================================================

        JBoss Bootstrap Environment

        JBOSS_HOME: C:\jboss-5.0.0.GA

        JAVA: C:\Program Files\Java\jdk1.6.0_34\bin\java

        JAVA_OPTS: -Dprogram.name=run.bat -server -Xms128m -Xmx512m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000

        CLASSPATH: C:\jboss-5.0.0.GA\bin\run.jar

      ===============================================================================

      run.bat: unused non-option argument: ûb
      run.bat: unused non-option argument: 0.0.0.0
      13:43:38,179 INFO  [ServerImpl] Starting JBoss (Microcontainer)...
      13:43:38,179 INFO  [ServerImpl] Release ID: JBoss [Morpheus] 5.0.0.GA (build: SVNTag=JBoss_5_0_0_GA date=200812041714)
      13:43:38,179 INFO  [ServerImpl] Bootstrap URL: null
      13:43:38,179 INFO  [ServerImpl] Home Dir: C:\jboss-5.0.0.GA
      13:43:38,179 INFO  [ServerImpl] Home URL: file:/C:/jboss-5.0.0.GA/
      13:43:38,195 INFO  [ServerImpl] Library URL: file:/C:/jboss-5.0.0.GA/lib/
      13:43:38,195 INFO  [ServerImpl] Patch URL: null
      13:43:38,195 INFO  [ServerImpl] Common Base URL: file:/C:/jboss-5.0.0.GA/common/
      13:43:38,195 INFO  [ServerImpl] Common Library URL: file:/C:/jboss-5.0.0.GA/common/lib/
      13:43:38,195 INFO  [ServerImpl] Server Name: default
      13:43:38,195 INFO  [ServerImpl] Server Base Dir: C:\jboss-5.0.0.GA\server
      13:43:38,195 INFO  [ServerImpl] Server Base URL: file:/C:/jboss-5.0.0.GA/server/
      13:43:38,210 INFO  [ServerImpl] Server Config URL: file:/C:/jboss-5.0.0.GA/server/default/conf/
      13:43:38,210 INFO  [ServerImpl] Server Home Dir: C:\jboss-5.0.0.GA\server\default
      13:43:38,210 INFO  [ServerImpl] Server Home URL: file:/C:/jboss-5.0.0.GA/server/default/
      13:43:38,210 INFO  [ServerImpl] Server Data Dir: C:\jboss-5.0.0.GA\server\default\data
      13:43:38,210 INFO  [ServerImpl] Server Library URL: file:/C:/jboss-5.0.0.GA/server/default/lib/
      13:43:38,210 INFO  [ServerImpl] Server Log Dir: C:\jboss-5.0.0.GA\server\default\log
      13:43:38,210 INFO  [ServerImpl] Server Native Dir: C:\jboss-5.0.0.GA\server\default\tmp\native
      13:43:38,210 INFO  [ServerImpl] Server Temp Dir: C:\jboss-5.0.0.GA\server\default\tmp
      13:43:38,210 INFO  [ServerImpl] Server Temp Deploy Dir: C:\jboss-5.0.0.GA\server\default\tmp\deploy
      13:43:39,710 INFO  [ServerImpl] Starting Microcontainer, bootstrapURL=file:/C:/jboss-5.0.0.GA/server/default/conf/bootstrap.xml
      13:43:40,851 INFO  [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.plugins.cache.IterableTimedVFSCache]
      13:43:40,866 INFO  [VFSCacheFactory] Using VFSCache [IterableTimedVFSCache{lifetime=1800, resolution=60}]
      13:43:41,616 INFO  [CopyMechanism] VFS temp dir: C:\jboss-5.0.0.GA\server\default\tmp
      13:43:41,648 INFO  [ZipEntryContext] VFS force nested jars copy-mode is enabled.
      13:43:44,288 INFO  [ServerInfo] Java version: 1.6.0_34,Sun Microsystems Inc.
      13:43:44,288 INFO  [ServerInfo] Java VM: Java HotSpot(TM) Server VM 20.9-b04,Sun Microsystems Inc.
      13:43:44,288 INFO  [ServerInfo] OS-System: Windows XP 5.1,x86
      13:43:44,569 INFO  [JMXKernel] Legacy JMX core initialized
      13:43:50,148 INFO  [ProfileServiceImpl] Loading profile: default from: org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e72f0c(root=C:\jboss-5.0.0.GA\server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
      13:43:50,148 INFO  [ProfileImpl] Using repository:org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e72f0c(root=C:\jboss-5.0.0.GA\server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
      13:43:50,148 INFO  [ProfileServiceImpl] Loaded profile: ProfileImpl@8b3bb3{key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]}
      13:43:54,804 INFO  [WebService] Using RMI server codebase: http://127.0.0.1:8083/
      13:44:12,147 INFO  [CXFServerConfig] JBoss Web Services - Stack CXF Runtime Server
      13:44:12,147 INFO  [CXFServerConfig] 3.1.2.GA
      13:44:29,788 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
      13:44:29,819 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
      13:44:29,819 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
      13:44:29,819 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
      13:44:37,116 INFO  [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:1090/jmxconnector
      13:44:38,022 INFO  [MailService] Mail Service bound to java:/Mail
      13:44:43,162 WARN  [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RISK. It has been detected that the MessageSucker component which sucks messages from one node to another has not had its password changed from the installation default. Please see the JBoss Messaging user guide for instructions on how to do this.
      13:44:43,209 WARN  [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
      13:44:43,600 INFO  [TransactionManagerService] JBossTS Transaction Service (JTA version) - JBoss Inc.
      13:44:43,600 INFO  [TransactionManagerService] Setting up property manager MBean and JMX layer
      13:44:44,366 INFO  [TransactionManagerService] Initializing recovery manager
      13:44:44,678 INFO  [TransactionManagerService] Recovery manager configured
      13:44:44,678 INFO  [TransactionManagerService] Binding TransactionManager JNDI Reference
      13:44:44,787 INFO  [TransactionManagerService] Starting transaction recovery manager
      13:44:46,428 INFO  [Http11Protocol] Initializing Coyote HTTP/1.1 on http-127.0.0.1-8080
      13:44:46,459 INFO  [AjpProtocol] Initializing Coyote AJP/1.3 on ajp-127.0.0.1-8009
      13:44:46,459 INFO  [StandardService] Starting service jboss.web
      13:44:46,475 INFO  [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.1.GA
      13:44:46,616 INFO  [Catalina] Server startup in 350 ms
      13:44:46,709 INFO  [TomcatDeployment] deploy, ctxPath=/web-console, vfsUrl=management/console-mgr.sar/web-console.war
      13:44:48,553 INFO  [TomcatDeployment] deploy, ctxPath=/juddi, vfsUrl=juddi-service.sar/juddi.war
      13:44:48,678 INFO  [RegistryServlet] Loading jUDDI configuration.
      13:44:48,694 INFO  [RegistryServlet] Resources loaded from: /WEB-INF/juddi.properties
      13:44:48,709 INFO  [RegistryServlet] Initializing jUDDI components.
      13:44:48,991 INFO  [TomcatDeployment] deploy, ctxPath=/invoker, vfsUrl=http-invoker.sar/invoker.war
      13:44:49,162 INFO  [TomcatDeployment] deploy, ctxPath=/jbossws, vfsUrl=jbossws.sar/jbossws-management.war
      13:44:49,475 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
      13:44:49,569 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
      13:44:49,741 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml
      13:44:49,819 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml
      13:44:49,912 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml
      13:44:50,069 INFO  [SimpleThreadPool] Job execution threads will use class loader of thread: main
      13:44:50,115 INFO  [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
      13:44:50,131 INFO  [RAMJobStore] RAMJobStore initialized.
      13:44:50,131 INFO  [StdSchedulerFactory] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
      13:44:50,131 INFO  [StdSchedulerFactory] Quartz scheduler version: 1.5.2
      13:44:50,131 INFO  [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
      13:44:51,194 INFO  [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
      13:44:51,819 WARN  [QuartzTimerServiceFactory] sql failed: CREATE TABLE QRTZ_JOB_DETAILS(JOB_NAME VARCHAR(80) NOT NULL, JOB_GROUP VARCHAR(80) NOT NULL, DESCRIPTION VARCHAR(120) NULL, JOB_CLASS_NAME VARCHAR(128) NOT NULL, IS_DURABLE VARCHAR(1) NOT NULL, IS_VOLATILE VARCHAR(1) NOT NULL, IS_STATEFUL VARCHAR(1) NOT NULL, REQUESTS_RECOVERY VARCHAR(1) NOT NULL, JOB_DATA BINARY NULL, PRIMARY KEY (JOB_NAME,JOB_GROUP))
      13:44:51,912 INFO  [SimpleThreadPool] Job execution threads will use class loader of thread: main
      13:44:51,928 INFO  [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
      13:44:51,928 INFO  [JobStoreCMT] Using db table-based data access locking (synchronization).
      13:44:51,944 INFO  [JobStoreCMT] Removed 0 Volatile Trigger(s).
      13:44:51,944 INFO  [JobStoreCMT] Removed 0 Volatile Job(s).
      13:44:51,944 INFO  [JobStoreCMT] JobStoreCMT initialized.
      13:44:51,944 INFO  [StdSchedulerFactory] Quartz scheduler 'JBossEJB3QuartzScheduler' initialized from an externally provided properties instance.
      13:44:51,959 INFO  [StdSchedulerFactory] Quartz scheduler version: 1.5.2
      13:44:51,959 INFO  [JobStoreCMT] Freed 0 triggers from 'acquired' / 'blocked' state.
      13:44:51,975 INFO  [JobStoreCMT] Recovering 0 jobs that were in-progress at the time of the last shut-down.
      13:44:51,975 INFO  [JobStoreCMT] Recovery complete.
      13:44:51,975 INFO  [JobStoreCMT] Removed 0 'complete' triggers.
      13:44:51,975 INFO  [JobStoreCMT] Removed 0 stale fired job entries.
      13:44:51,990 INFO  [QuartzScheduler] Scheduler JBossEJB3QuartzScheduler_$_NON_CLUSTERED started.
      13:44:52,381 INFO  [ServerPeer] JBoss Messaging 1.4.1.GA server [0] started
      13:44:52,569 INFO  [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pageSize=2000, downCacheSize=2000
      13:44:52,584 INFO  [QueueService] Queue[/queue/ExpiryQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
      13:44:52,709 INFO  [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
      13:44:52,709 INFO  [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@1a8ac5e started
      13:44:52,725 WARN  [ConnectionFactoryJNDIMapper] supportsFailover attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support failover
      13:44:52,725 WARN  [ConnectionFactoryJNDIMapper] supportsLoadBalancing attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support load balancing
      13:44:52,740 INFO  [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
      13:44:52,740 INFO  [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@1d43178 started
      13:44:52,740 INFO  [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
      13:44:52,756 INFO  [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@52728a started
      13:44:53,084 INFO  [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
      13:44:53,225 INFO  [TomcatDeployment] deploy, ctxPath=/, vfsUrl=ROOT.war
      13:44:53,553 INFO  [TomcatDeployment] deploy, ctxPath=/jmx-console, vfsUrl=jmx-console.war
      13:44:53,975 INFO  [TomcatDeployment] deploy, ctxPath=/TestService, vfsUrl=TestServiceEAR.ear/TestService.war
      13:44:55,662 INFO  [JBossASKernel] Created KernelDeployment for: myE-ejb.jar
      13:44:55,709 INFO  [JBossASKernel] installing bean: jboss.j2ee:jar=myE-ejb.jar,name=RPSService,service=EJB3
      13:44:55,725 INFO  [JBossASKernel]   with dependencies:
      13:44:55,725 INFO  [JBossASKernel]   and demands:
      13:44:55,725 INFO  [JBossASKernel]     jboss.ejb:service=EJBTimerService
      13:44:55,725 INFO  [JBossASKernel]   and supplies:
      13:44:55,725 INFO  [JBossASKernel]     jndi:RPSService/remote
      13:44:55,725 INFO  [JBossASKernel] Added bean(jboss.j2ee:jar=myE-ejb.jar,name=RPSService,service=EJB3) to KernelDeployment of: myE-ejb.jar
      13:44:56,772 INFO  [SessionSpecContainer] Starting jboss.j2ee:jar=myE-ejb.jar,name=RPSService,service=EJB3
      13:44:56,803 INFO  [EJBContainer] STARTED EJB: com.monz.rpz.RPSService ejbName: RPSService
      13:44:56,819 INFO  [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
      13:44:57,381 INFO  [DefaultEndpointRegistry] register: jboss.ws:context=myE-ejb,endpoint=RPSService
      13:44:57,428 INFO  [DescriptorDeploymentAspect] Add Service id=RPSService address=http://127.0.0.1:8080/myE-ejb/RPSService implementor=com.monz.rpz.RPSService invoker=org.jboss.wsf.stack.cxf.InvokerEJB3 mtomEnabled=false
      13:44:57,459 INFO  [DescriptorDeploymentAspect] JBossWS-CXF configuration generated: file:/C:/jboss-5.0.0.GA/server/default/tmp/jbossws/jbossws-cxf1864137209199110130.xml
      13:44:57,569 INFO  [TomcatDeployment] deploy, ctxPath=/myE-ejb, vfsUrl=myE-ejb.jar
      13:44:57,709 WARN  [config] Unable to process deployment descriptor for context '/myE-ejb'
      13:44:59,334 INFO  [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080
      13:44:59,397 INFO  [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
      13:44:59,459 INFO  [ServerImpl] JBoss (Microcontainer) [5.0.0.GA (build: SVNTag=JBoss_5_0_0_GA date=200812041714)] Started in 1m:21s:233ms

    But I still can't connect to it when I type my IP address in my browser. Thanks.
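
    One detail in the log deserves attention: run.bat reports "unused non-option argument: ûb", and every listener afterwards (Coyote HTTP on http-127.0.0.1-8080, RMI, JMX) binds to 127.0.0.1. That suggests the -b switch never took effect - the dash was apparently pasted as a non-ASCII character - so JBoss fell back to binding loopback only, which would explain why localhost works while remote machines can't connect. A sketch of what I would retry, typing the option by hand with a plain ASCII hyphen (and allowing inbound TCP 8080 through the Windows firewall):

      cd C:\jboss-5.0.0.GA\bin
      rem listen on every interface:
      run.bat -b 0.0.0.0
      rem or bind just the one address:
      run.bat -b 10.17.62.63
      rem success check: the log should now show
      rem   Starting Coyote HTTP/1.1 on http-0.0.0.0-8080
      rem instead of http-127.0.0.1-8080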

  • Syntax Recognition for XML-Based Languages in Oracle JDeveloper

    - by Ramkumar Menon
    Thanks to Jeffrey Stephenson. If you are looking at using any one of the new XML-based languages - let's say DocBook XML, or XProc, or what not - you can make use of JDeveloper's syntax highlighting and completion insight features to save those extra keystrokes. All you need is a URL or a local copy of the XML Schema for the language. Once you have it, you can register it via Tools --> Preferences --> XML Schemas. Remember to provide a new extension name (using the default .xml extension did not work for me); I provided my own extension, .dbk, for my DocBook files. Once you save these settings, you can create new files that conform to the schema, and you get validation, completion insight, and prompting for free.
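
    For instance, with a DocBook schema registered against the .dbk extension, a skeleton like this hypothetical sample picks up completion and validation immediately (the namespace is DocBook 5's):

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- sample.dbk: element and attribute names complete once the schema is registered -->
      <article xmlns="http://docbook.org/ns/docbook" version="5.0">
        <title>Hello DocBook</title>
        <para>Completion insight suggests valid child elements here.</para>
      </article>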

  • Unable to set background image using python (2.7.3), bash and gnome3

    - by malon
      #!/usr/bin/env python
      import os
      bashCommand = "gsettings set org.gnome.desktop.background picture-uri file:///home/malon/autowallpaperchanger/" + pic_name
      print bashCommand
      os.system(bashCommand)

    Print result:

      gsettings set org.gnome.desktop.background picture-uri file:///home/malon/autowallpaperchanger/wallpaper-1252048.jpg

    Copying and pasting the print result into a terminal makes the change successfully, so the command is correct, but os.system isn't processing the request correctly for some reason. In the full script (posted below), I use os.system for a different reason immediately before (wget) and that works fine. Thank you!

    Full script: http://pastebin.com/R90GTmBZ
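
    One hedged guess, with a workaround sketch: building the command by string concatenation hands the whole line to a shell, so any space or quote in pic_name breaks it silently, and os.system's return code goes unchecked. Using subprocess with an argument list sidesteps the shell entirely and surfaces the exit status; note too that gsettings writes through dconf over the session bus, so a script running outside the desktop session (cron, ssh) needs DBUS_SESSION_BUS_ADDRESS in its environment. The filename below is the one from the print output:

      #!/usr/bin/env python
      import subprocess

      pic_name = "wallpaper-1252048.jpg"
      uri = "file:///home/malon/autowallpaperchanger/" + pic_name

      # argument list: no shell, no quoting problems
      ret = subprocess.call(
          ["gsettings", "set", "org.gnome.desktop.background",
           "picture-uri", uri])
      if ret != 0:
          print "gsettings exited with status", ret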

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail and I will take a look on the BI capabilities that Replication Services could offer in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformations, they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformations” Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this: SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60 There are three types of replication: Transactional Replication This type replicates data on a transactional level. The Log Reader Agent reads directly on the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor), this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously which offers near real-time replication of data! So for example when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time), when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same. Snapshot Replication This type of replication makes an initial copy of database objects that need to be replicated, this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication need an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that will be stored in the Snapshot Folder which is a normal folder on the file system. When all the schemas are ready, the data itself will be copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber, if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature, when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s). Merge Replication Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, like with mobile devices that need to synchronize one in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further. 
    Replication Services in a BI environment
    Transactional Replication can be very useful in BI environments. In my opinion you never want users running custom (SSRS) reports or PowerPivot solutions directly on your production database; it can slow down the system and cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don't need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be, or whether it really works as well for this purpose as I expect, but I want to try this out soon!

    Read the article

  • Software center cannot install or remove software

    - by user963386
    I have Ubuntu 12.10, and when I try to install new software using the Software Center, it fails with the following error message: "Authentication Error: Software cannot be installed or removed because the authentication service is not available. (org.freedesktop.PolicyKit.Error.Failed:("system-bus-name",{name:1.475}).org.debian.apt.install-or-remove-packages)" This is a new problem that I did not have before! Any suggestions?

    Read the article

  • Cannot install gnome shell extensions

    - by gnome
    I upgraded from 10.04 to 12.04, installed GNOME 3 and removed Unity. My GNOME version is 3.4.1. The GNOME Shell Extensions package is installed and enabled. When I use Firefox to visit the individual extension pages on https://extensions.gnome.org/, for example https://extensions.gnome.org/extension/5/alternative-status-menu/, there is nowhere to install the extension. I was told that one can install extensions by visiting these extension pages. This is the guide I followed.

    Read the article

  • Oracle Utilities Application Framework future feature deprecation

    - by Paula Speranza-Hadley
    From time to time, existing functionality is replaced with alternative features to offer greater flexibility and standardization. In Oracle Utilities Application Framework V4.2.0.0.0 the following features are being announced for deprecation in the next release, or have been previously announced and are not being delivered with this version of the Oracle Utilities Application Framework:
    - No SQL Server support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with any support for SQL Server.
    - No MPL support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with the Multi-Purpose Listener (MPL) component of the XML Application Integration (XAI) component. Customers using the MPL should migrate to Oracle Service Bus.
    - No provided Crystal Reports/Business Objects interface – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with a supported Crystal Reports/Business Objects interface. This facility is now available as a downloadable customization for existing or new customers. Maintenance and new features are now each customer's own responsibility.
    - XAI Servlet deprecation – The XAI Servlet (xaiserver and classicxai) will be removed in the next release of the Oracle Utilities Application Framework. Customers are encouraged to migrate to the native Web Services support as outlined in the XAI Best Practices whitepaper available from My Oracle Support (Doc Id: 942074.1).
    - ConfigLab deprecation – The ConfigLab facility will be removed in the next release of the Oracle Utilities Application Framework for the products it ships with. Customers should migrate to the Configuration Migration Assistant, which provides the same functionality and more.
    - Archiving deprecation – The inbuilt Archiving has been removed from Oracle Utilities Application Framework V4.2.0.0.0 or above, for the products it ships with. Customers considering an archiving solution should migrate to the Information Lifecycle Management based solution provided for their product.
    - DISTRIBUTED batch execution mode deprecation – The DISTRIBUTED execution mode used by the batch component of the Oracle Utilities Application Framework will be deprecated in the next release of the Oracle Utilities Application Framework. Customers using DISTRIBUTED mode should migrate to CLUSTERED mode as outlined in the Batch Best Practices for Oracle Utilities Application Framework Based Products whitepaper available from My Oracle Support (Doc Id: 836362.1).
    - XAI Schema Editor deprecation – The XAI Schema Editor, a component of the Oracle Utilities Software Development Kit, will be removed in the next release of the Oracle Utilities Application Framework. Customers should migrate their existing schemas to Business Object based schemas and use the browser based Schema Editor instead.

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

    Caveat
    Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

    Assumptions
    The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts (I don't mention them here).

    Preparation
    It goes without saying that you should always do a proper backup before you change anything on your production environment. By proper backup, I also mean a tested and verified restore process. If you haven't dared to test it before, do it now. It pays off.

    Requirements
    For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

    Installation vs Configuration
    With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and Configuration of the instances (the runtime). This is really great, as you can install all the software and then create new instances when needed. In the following we adhere to this scheme: install the software first, then configure the instances later.

    Installation Steps
    The Oracle documentation contains all the necessary steps for the installation of all pieces of software, but some hints help to avoid traps and pitfalls.

    Step 1 The Database
    Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, install at least an Oracle 10.2.0.4 version. This database can be on a different host.

    Step 2 The Repository Creation Utility
    The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

    Step 3 The Foundation
    With OFM 11g, Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it might be that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

    Step 3a The JDK
    To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK.
    Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in its own directory (and not in one of your ORACLE_HOMEs). You can then use whichever you like.

    Step 3b Install WLS for OID and OAM
    With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later, and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and that the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

    Step 4 Install OID
    Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the install-only step. This installs the software only and does not configure the instance. Use the IAM_HOME as the target directory.

    Step 5 Install SOA Suite
    The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries; configuration will be done later.

    Step 6 Install OAM
    Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

    Step 7 Backup the Installation
    At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good for when you need to go back to this point.

    Step 8 Configure OID
    The software is installed, and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly; if you encounter issues, check the Oracle Support site for help. This configuration will also start the OID instance.

    Step 9 Install the Oracle AS SSO Schema
    Before we install the Forms software, we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, you should make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will succeed with a working solution.

    Step 10 Configure OAM
    Reached this step? Great. You are ready to create an OAM instance. Use $IAM_HOME/iam/common/bin/config.sh for this. This will open the WLS Domain Creation Wizard and ask for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.

    Step 11 Install WLS for Forms 11g
    It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply. We will call this one FORMS_HOME.
    Step 12 Install Forms
    In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step; configuration starts with the next step.

    Step 13 Configure Forms
    To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances: Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically.

    Step 14 Set up your Forms SSO Environment
    Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.

    Step 15 Migrate the SSO Repository
    Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua, ua.bat or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/. Note: At the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.

    Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY
    In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Then we change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms using the same Forms links. OAM should now kick in and show the login dialog asking for your user credentials. Done. Your Forms environment is now successfully integrated with OAM 11g. Enjoy.

    What's Next?
    This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.

    References
    Nearly everything is documented. Use the documentation!
    - Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
    - Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapters 11-14
    - Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
    - Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10
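    As a small, hedged illustration of the JDK sanity check described in step 3b (the JDK path below is an example only; adjust it to your installation):

        #!/bin/sh
        # Example path; use wherever you installed the Sun or JRockit JDK.
        JAVA_BIN=/opt/java/jdk1.6.0_23/bin/java
        # 'java -version' prints to stderr, hence the redirect.
        if "$JAVA_BIN" -version 2>&1 | grep -qi gcj; then
            echo "GCJ detected - install a Sun or JRockit JDK first" >&2
            exit 1
        fi
        "$JAVA_BIN" -version 2>&1 | head -n 1   # should report 1.6.0_18 or later
        "$JAVA_BIN" -jar wls_generic.jar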

    Read the article

  • Data Warehouse Best Practices

    - by jean-pierre.dijcks
    In our quest to share our endless wisdom (ahem…) one of the things we figured might be handy is recording some of the best practices for data warehousing. And so we did. And, we did some more… We have now recreated our websites on Oracle Technology Network and have a separate page for best practices, parallelism and other cool topics related to data warehousing. But the main topic of this post is the set of recorded best practices. Here is what is available (it is a series that ties together but can be read independently), applicable to almost any database version:
    - Partitioning
    - 3NF schema design for a data warehouse
    - Star schema design
    - Data Loading
    - Parallel Execution
    - Optimizer and Stats management
    The best practices page has a lot of other useful information, so have a look here.
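    As a quick, hedged illustration of the star schema topic in that list, a minimal fact table with two dimensions might look like the following sketch (all table and column names are illustrative, not taken from the recorded material):

        -- One row per sale; the dimensions carry the descriptive attributes.
        CREATE TABLE DimDate (
            DateKey      INT           NOT NULL PRIMARY KEY,  -- e.g. 20120407
            CalendarDate DATE          NOT NULL,
            CalendarYear SMALLINT      NOT NULL
        );

        CREATE TABLE DimProduct (
            ProductKey   INT IDENTITY  NOT NULL PRIMARY KEY,  -- surrogate key
            ProductName  NVARCHAR(100) NOT NULL
        );

        CREATE TABLE FactSales (
            DateKey      INT NOT NULL REFERENCES DimDate (DateKey),
            ProductKey   INT NOT NULL REFERENCES DimProduct (ProductKey),
            Quantity     INT NOT NULL,
            SalesAmount  DECIMAL(18,2) NOT NULL
        );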

    Read the article

  • Using VS12 to create and manage an Azure-SQL DB (simple tasks)

    - by Konrad Viltersten
    On occasion, I'm in a project where I need to store some information in an external DB. Usually I create one in Azure and run some scripts that I adapt (the usual create table, create login, etc.). It just struck me that there might (and definitely should) be a tool in VS that allows me to create a project for my DB, pull out some boxes to create a model of a DB schema, execute a script or two on it (possibly virtual or temporary) and then somehow push it up to the cloud. I haven't found such a tool. Is there one, and how do I get to it? NB: I'm not looking for an optimized or well-structured schema (that's what the DB pros are for at a later stage). I'm not a DB guy, nor do I aspire to become one (too old, hehe). I'll probably be satisfied with a Q&D approach.
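    For what it's worth, SQL Server Data Tools (SSDT) database projects in VS 2012 cover most of this: you model the schema as .sql files, build to a .dacpac, and publish to Azure. A hedged sketch of the command-line publish step (paths, server name and credentials below are placeholders):

        REM SqlPackage.exe ships with SSDT / the DAC framework.
        SqlPackage.exe /Action:Publish ^
            /SourceFile:bin\Debug\MyDb.dacpac ^
            /TargetConnectionString:"Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=myuser;Password=mypassword;"

    The same publish action is also available from the project's right-click menu inside Visual Studio.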

    Read the article

  • How to stream H264 Video from camera over FTP?

    - by Jay
    I bought an H.264 security camera system last year and set it up to FTP video to my computer. I was able to get the video to play (even though it played a little fast) on Ubuntu 11.04 using mplayer. A few months ago I did a fresh install of 12.04, and I cannot seem to get the video to play with mplayer, smplayer or VLC. I have the restricted-formats video packages installed, and when playing with any of the players all I get is gray video. When calling mplayer from the command line to play the video with no options, I get a lot of these errors:

        [h264 @ 0x7f278c61f280]concealing 1320 DC, 1320 AC, 1320 MV errors
        No pts value from demuxer to use for frame!
        pts after filters MISSING

    I'm not a video expert and have been coming up with a lot of dead ends when Googling for this. Could someone offer some advice about how to play these videos? Here is the output of mediainfo for a sample file:

        mediainfo -f sec-cam01-m-20120921-212454.h264

        General
        Count : 278
        Count of stream of this kind : 1
        Kind of stream : General
        Kind of stream : General
        Stream identifier : 0
        Count of video streams : 1
        Video_Format_List : AVC
        Video_Format_WithHint_List : AVC
        Codecs Video : AVC
        Complete name : sec-cam01-m-20120921-212454.h264
        File name : sec-cam01-m-20120921-212454
        File extension : h264
        Format : AVC
        Format : AVC
        Format/Info : Advanced Video Codec
        Format/Url : http://developers.videolan.org/x264.html
        Format/Extensions usually used : avc h264
        Commercial name : AVC
        Internet media type : video/H264
        Codec : AVC
        Codec : AVC
        Codec/Info : Advanced Video Codec
        Codec/Url : http://developers.videolan.org/x264.html
        Codec/Extensions usually used : avc h264
        File size : 1097315
        File size : 1.05 MiB
        File size : 1 MiB
        File size : 1.0 MiB
        File size : 1.05 MiB
        File size : 1.046 MiB
        File last modification date : UTC 2012-09-22 01:27:12
        File last modification date (local) : 2012-09-21 21:27:12

        Video
        Count : 205
        Count of stream of this kind : 1
        Kind of stream : Video
        Kind of stream : Video
        Stream identifier : 0
        Format : AVC
        Format/Info : Advanced Video Codec
        Format/Url : http://developers.videolan.org/x264.html
        Commercial name : AVC
        Format profile : [email protected]
        Format settings : 1 Ref Frames
        Format settings, CABAC : No
        Format settings, CABAC : No
        Format settings, ReFrames : 1
        Format settings, ReFrames : 1 frame
        Format settings, GOP : M=1, N=3
        Internet media type : video/H264
        Codec : AVC
        Codec : AVC
        Codec/Family : AVC
        Codec/Info : Advanced Video Codec
        Codec/Url : http://developers.videolan.org/x264.html
        Codec profile : [email protected]
        Codec settings : 1 Ref Frames
        Codec settings, CABAC : No
        Codec_Settings_RefFrames : 1
        Width : 704
        Width : 704 pixels
        Height : 480
        Height : 480 pixels
        Pixel aspect ratio : 1.000
        Display aspect ratio : 1.467
        Display aspect ratio : 3:2
        Standard : NTSC
        Resolution : 8
        Resolution : 8 bits
        Colorimetry : 4:2:0
        Color space : YUV
        Chroma subsampling : 4:2:0
        Bit depth : 8
        Bit depth : 8 bits
        Scan type : Progressive
        Scan type : Progressive
        Interlacement : PPF
        Interlacement : Progressive

    Edit: Here is a sample video using the same encoding (not the same video as the mediainfo output): https://www.dropbox.com/s/l5acwzy8rtqn9xe/sec-cam08-m-20121118-105815.h264
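    One hedged observation: a raw .h264 file is an elementary stream with no container timestamps, which is consistent with the "No pts value from demuxer" errors above. Two things worth trying (the 25 fps value is a guess; use whatever frame rate your cameras actually record at):

        # Force mplayer's raw-H.264 demuxer and supply a frame rate:
        mplayer -demuxer h264es -fps 25 sec-cam01-m-20120921-212454.h264

        # Or remux once into MP4 without re-encoding, so any player copes:
        ffmpeg -r 25 -i sec-cam01-m-20120921-212454.h264 -c:v copy sec-cam01.mp4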

    Read the article

  • Installing Glassfish 3.1 on Ubuntu 10.10 Server

    - by andand
    I've used the directions here to successfully install GlassFish 3.0.1 on a virtualized (VirtualBox and VMware) Ubuntu 10.10 Server instance, without any real difficulty that wasn't resolved by following the directions more closely. However, when I try applying them to GlassFish 3.1, I keep getting stuck at section 6, "Security configuration before first startup". In particular, there are some differences I noted:

    1) There are two keys in the default keystore. The 's1as' key is still there, but another named 'glassfish-instance' is also there. When I saw this, I deleted and recreated them both, along with a 'myAlias' key which I was going to use where needed.

    2) When turning security on, it seems like part of the server thinks it's on, but other parts don't. For instance:

        $ /home/glassfish/bin/asadmin set server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        Command set executed successfully.
        $ /home/glassfish/bin/asadmin get server-config.network-config.protocols.protocol.admin-listener.security-enabled
        server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        Command get executed successfully.
        $ /home/glassfish/bin/asadmin --secure list-jvm-options
        It appears that server [localhost:4848] does not accept secure connections. Retry with --secure=false.
        javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Command list-jvm-options failed.
        $ /home/glassfish/bin/asadmin --secure=false list-jvm-options
        -XX:MaxPermSize=192m
        -client
        -Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder
        -XX:+UnlockDiagnosticVMOptions
        -Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed
        -Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
        -Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf
        -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as
        -Xmx512m
        -Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks
        -Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks
        -Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext
        -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
        -DANTLR_USE_DIRECT_CLASS_LOADING=true
        -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory
        -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command
        -Dosgi.shell.telnet.port=6666
        -Dosgi.shell.telnet.maxconn=1
        -Dosgi.shell.telnet.ip=127.0.0.1
        -Dgosh.args=--nointeractive
        -Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/
        -Dfelix.fileinstall.poll=5000
        -Dfelix.fileinstall.log.level=2
        -Dfelix.fileinstall.bundles.new.start=true
        -Dfelix.fileinstall.bundles.startTransient=true
        -Dfelix.fileinstall.disableConfigSave=false
        -XX:NewRatio=2
        Command list-jvm-options executed successfully.

    Also, the admin console responds only to HTTP (not HTTPS) requests. Thoughts?
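    One hedged suggestion for the symptoms above: in GlassFish 3.1 the DAS refuses secure connections until secure admin is explicitly enabled, which would explain the --secure failure. Something like the following (run against the default admin port) may be the missing step:

        # enable-secure-admin is a GlassFish 3.1 asadmin subcommand; a domain
        # restart is required for it to take effect.
        /home/glassfish/bin/asadmin --host localhost --port 4848 enable-secure-admin
        /home/glassfish/bin/asadmin restart-domain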

    Read the article

  • New licensing for SQL Server 2012 and #BISM #Tabular usage

    - by Marco Russo (SQLBI)
    Last week Microsoft announced a new licensing scheme for SQL Server 2012. If you are interested in an extensive discussion of the new licensing scheme, Denny Cherry wrote a great blog post about it. I'd like to comment on the new BI Edition license. Teo Lachev has already commented on the numbers, and I agree with him. I generally like the new licensing model of SQL 2012. It maintains a very low entry barrier for SSRS/SSAS/SSIS (Standard Edition). It has a reasonable licensing scheme for 20-50...(read more)

    Read the article

  • Performance problems loading XML with SSIS, an alternative way!

    - by AtulThakor
    I recently needed to load several thousand XML files into a SQL database, so I created an SSIS package as follows: using a Foreach container to loop through a directory and load each file path into a variable, the "Import XML" data flow would then load each XML file into a SQL table. Running this, it took approximately 1 second to load each file, which seemed a massive amount of time to parse the XML and load the data. Speaking to my colleague Martin Croft, he suggested the use of T-SQL BULK INSERT and OPENROWSET, so we adjusted the package as follows: the same Foreach container was used, but instead the following SQL command was executed (this is an expression):

        "INSERT INTO MyTable(FileDate) SELECT CAST(bulkcolumn AS XML) FROM OPENROWSET( BULK '" + @[User::CurrentFile] + "', SINGLE_BLOB ) AS x"

    Using this method we managed to load approximately 20 records per second, much faster for data loading! For what we wanted to achieve this was perfect, but I'll leave you with the following points to weigh when choosing your own solution:

    OPENROWSET method:
    - Much faster to get the data into SQL
    - You'll need to parse or create a view over the XML data to make it more usable (another post on this!)
    - Not able to apply validation/transformation against the data when loading it
    - The SQL Server service account will need permission to the file
    - No schema validation when loading files

    SSIS:
    - Slower (in our case)
    - Schema validation
    - Allows you to apply transformations/joins to the data
    - Permissions should be less of a problem
    - Data can be loaded into its final form by the package
    - When using schema validation, errors can fail the package (I'll do another post on this)
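    On the "create a view over the XML data" point deferred above, here is a hedged sketch of what such a view might look like, using nodes()/value() to shred the loaded XML. The /Root/Record element names and columns are hypothetical, since the real layout depends on your files:

        -- FileDate is the XML column loaded by the OPENROWSET statement above.
        CREATE VIEW dbo.MyTableShredded
        AS
        SELECT
            r.value('(ID)[1]',   'int')            AS ID,
            r.value('(Name)[1]', 'nvarchar(100)')  AS Name
        FROM dbo.MyTable
        CROSS APPLY FileDate.nodes('/Root/Record') AS x(r);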

    Read the article

  • CodePlex Daily Summary for Tuesday, July 29, 2014

    CodePlex Daily Summary for Tuesday, July 29, 2014

    Popular Releases

    Artezio SharePoint 2013 Workflow Activities: SharePoint Designer 2013 custom workflow actions 1.0: SharePoint Designer 2013 custom workflow actions that work with permissions: Add Role Assignment, Add Role Assignments, Delete Role Assignments, Get Role Definition Id, Get Role Definition Id By Role Type Id, Reset Role Inheritance. Artezio SharePoint Consulting & Development

    VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: NEW: Added support for 'ImgMega.com', 'ImgCandy.net', 'ImgPit.com' and 'Img.yt' links. FIXED: 'Radikal.ru', 'ImageTeam.org', 'ImgSee.com' and 'Img.yt' links.

    Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: a demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and jqGrid for displaying the data.

    Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: added support for Unicode 7.0; experimental support for WebCL. New features in Firefox 31.0: added the search field to the new tab page; support of the Prefer:Safe HTTP header for parental control; mozilla::pkix as default certificate verifier; block malware from downloaded files; audio/video .ogg and .pdf files handled by Firefox if no application specified. Changed: removal of the CAPS infrastructure for specifying site-sp...

    SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it has been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the missing log4net issue for a QuickStart project; fixed a warning in a QuickStart project.

    Ynote Classic: Ynote Classic 2.8.5 Beta: Several changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved Stability

    Database Schema Reader: v1.3.4.0: Contents: DatabaseSchemaReader.dll - class library (.net3.5); DatabaseSchemaViewer.exe - UI to read and view database schemas, with options to generate SQL and code, and an option to compare another schema; CopyToSQLite/CopyToSQLite.exe - UI to copy any database schema and data to SQLite or, if installed, SQL Server CE 4.0; net4/DatabaseSchemaReader.dll - .Net 4.0 class library. This release: Schema Reading (MySQL) - reads UNSIGNED integers (available in the schema model's DatabaseColumn.DbDataType prope...

    TEBookConverter: 1.2: Fixed: Could not start conversion in some cases. Fixed: Progress shown during conversion was truncated. Fixed: Stopping conversion didn't reset the program title.

    Q2Cue \\: Q2Cue v. 0.85: Initial release.

    CS-Script Source: Release v3.8.4: CSScript.Evaluator is migrated to Mono v3.3.0; added aggregating //css_ignore_ns from the imported scripts. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation.

    DotSpatial: DotSpatial 1.7: DotSpatial.Full - includes all DotSpatial libraries, extensions and the DemoMap application; DotSpatial.Core - includes only the DotSpatial core libraries. The entire list of changes is in the issue tracker. Main changes: improved common stability, optimized memory and speed when loading and rendering shapefiles, fixed some memory leaks in rasters and shape layers. Simplified plugin infrastructure: there are now predefined implementations for all required components (IStatusControl, IDockManager, IHead...

    NRepository: NRepository 2.2: NuGet: NRepository packages are available from NuGet.org: NRepository.Core: Install-Package NRepository.Core; NRepository.EntityFramework: Install-Package NRepository.EntityFramework; NRepository.TestKit: Install-Package NRepository.TestKit. Contoso University Sample Projects: 3 versions of the Contoso University sample code added: ContosoUniversityOriginal - the original Microsoft sample project for Entity Framework; ContosoUniversityUsingNRepository - this version simply repl...

    InstagramSaver: 1.5: Added: an option to wait between downloads to prevent a ban from the server. Improved: file downloading performance. Improved: file checking performance.

    PVReportDeployer: PVReportDeployer 1.0: PVReportDeployer 1.0

    AD4 Application Designer for flow based .NET applications: AD4.AppDesigner.23.20: AD4.Iteration.23.20 (Advanced Rendering Features): option to render the name of flow pins within the flow chart (AlarmClockSample.10.ad4 used to test this extension): MainWindow extended by adding FlowChartShowFlowPinCaptionsComboBox; ConfigureUIControls customized; ManageFlowChartShowFlowPinCaptionsOptionFlow customized; parameter FlowChartFlowPins replaced by FlowChartFlowPinCaptions in AD4.AppDesigner.cfg; RenderFlowPinsCaptions customized to use the parameter. ToDo: some tutorials are unfinished b...

    CRM 2013 One Click Navigation: CRM2013OneClickNavigation_1_0_0_3_managed.zip: Fixes on security privileges.

    AutomatedLab: AutomatedLab 2.1.0.0: 2.1.0: support for external virtual switches; CaRoot is a new role for installing Root Certificate Authorities; root domain controllers are now installed in parallel. 2.0.0: now also supports Windows Server 2008 R2 and 7; no longer limited to one client and one server operating system; rewrite of the code that handles VHDs; lots of bug fixing; changed the parameter definition of some cmdlets in AutomatedLabDefinition and adapted the sample scripts accordingly; now the Forest...

    BizTalk Deployment Utilities: 1.0.0.0: Draft version.

    GroupMe Software Development Kit: GroupMe .NET SDK v1.0.1: Small update concerning the /users/me endpoint to get the currently authenticated user's info. Documentation is available here: http://dotnetcorner.ch/Projects/GroupMe-NET-SDK/Documentation

    OneNote Tagging Kit: OneNoteTaggingKit Add-In 2.4.5316.33833: New features in this release: the Find Pages dialog preserves the search string and tag filter selection on search scope changes. Other changes: dialog windows are restored when the action button is pressed while the window is minimized; code cleanup and simplification; code documentation; created some reusable UI controls; tags styling made more pleasant (I hope); tag tooltips improved on the Find Pages dialog: added dedicated tooltips to the page count and selection indicator of tags; closing open dial...

    New Projects

    Cognitum ASP.NET Providers for Cassandra: Cassandra Providers is an ASP.NET solution that uses the Cassandra database as the data source for custom Membership, Role, Profile, and Session-State providers.
    DG Mobile: A simple application for entering and viewing activities related to DG and issuing reports on them.
    Dynamics AX Development tools: Several AX development related tools combined into one configurable package: TFS Workspace, DEV_Tools, and X++ Editor components.
    E Drawing Library for WinRT: E Drawing Library for WinRT lets you create graphics-rich Windows Store apps easily, giving you all the power of Direct2D with simple classes and methods.
    EFramework.DataAccess
    freetalkserver: A PHP server where users can register.
    gicon: gicon is a project to help you generate an avatar/icon from a hash (MD5, SHA1) or a random hex string (such as a GUID).
    hOOt - Full text search: Full text search engine built from scratch using bitmap indexes.
    MelodyUI - Library for HTML, CSS & JS Development: MelodyUI is a set of classes and helpers that lets you create web applications. MelodyUI is lightweight and has no dependencies.
    nRF24L01Plus .NETMF driver: nRF24L01, .NET MF.
    Nyan: Beginning...
    Prince Game Xna: Prince of Persia .NET game in XNA Game Studio.
    Smartsby: Combining several small-business web apps' information into a single dashboard.
    Super snippets: Super snippets for Visual Studio.
    TCP/IP Adapter BizTalk 2013: BizTalk 2013 TCP/IP community adapter. This is an updated version of the TCP/IP adapter for BizTalk, for use with BizTalk 2013.
    testMercurial: summary
    Unit OF Work With Sql Helper: This project is intended to provide a simple data access object for SQL Server with the repository and Unit of Work patterns.
    VisualRegEx: Prototype. Experimenting with regex parsing, graph layout, etc.
    WDK Hardware Development Boards Add-On: Scripts for working with Windows-compatible hardware development boards like the Sharks Cove.

    Read the article

  • Standard way of allowing general XML data

    - by Greg Jackson
    I'm writing a data gathering and reporting application that takes XML files as input, which will then be read, processed, and stored in a strongly-typed database. For example, an XML file for a "Job" might look like this: <Data type="Job"> <ID>12345</ID> <JobName>MyJob</JobName> <StartDate>04/07/2012 10:45:00 AM</StartDate> <Files> <File name="a.jpeg" path="images\" /> <File name="b.mp3" path="music\mp3\" /> </Files> </Data> I'd like to use a schema to have a standard format for these input files (depending on what type of data is being used, for example "Job", "User", "View"), but I'd also like to not fail validation if there is extra data provided. For example, perhaps a Job has additional properties such as "IsAutomated", "Requester", "EndDate", and so on. I don't particularly care about these extra properties. If they are included in the XML, I'll simply ignore them when I'm processing the XML file, and I'd like validation to do the same, without having to include in the schema every single possible property that a customer might provide me with. Is there a standard way of providing such a schema, or of allowing such a general XML file that can still be validated without resorting to something as naïve (and potentially difficult to deal with) as the below? <Data type="Job"> <Data name="ID">12345</Data> . . . <Data name="Files"> <Data name="File"> <Data name="Filename">a.jpeg</Data> <Data name="path">images</Data> . . . </Data> </Data>
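    For reference, XML Schema has a wildcard mechanism aimed at exactly this: xs:any with processContents="skip" (or "lax") accepts elements the schema does not declare. A hedged sketch for the Job example above (the declared element types are guesses):

        <?xml version="1.0"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="Data">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="ID" type="xs:int"/>
                <xs:element name="JobName" type="xs:string"/>
                <xs:element name="StartDate" type="xs:string"/>
                <xs:element name="Files" type="xs:anyType"/>
                <!-- accept, without validating, any extra elements
                     such as IsAutomated, Requester, EndDate -->
                <xs:any minOccurs="0" maxOccurs="unbounded"
                        processContents="skip"/>
              </xs:sequence>
              <xs:attribute name="type" type="xs:string"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    One caveat: validators enforce the Unique Particle Attribution rule, so wildcard placement can need care when the elements preceding the wildcard are optional.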

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices. STAGE 4: AUTOMATED DEPLOYMENT.

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment: constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, then basic and advanced CI – you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested and it being easily releasable.

    Just a quick note on terminology: there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas where it's worth spending a little time, and where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team
    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group." Like most clichés, there's truth in there: if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge. For HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing: that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside: to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up to speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments
    Where are you going to deploy to? And really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

    - What we want: Each developer with their own dedicated database environment. What we've got: A single shared "development" environment, used by everyone at once.
    - What we want: An integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: In fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: A proper pre-production (or "staging") box that matches production as closely as possible. What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we've got: A production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production – the environment you use to test the production release process,
    - Production.

    * A note on the use of the word "automatic": when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and production. This generates a script (a command-line sketch of scripting steps like these follows below),
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
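    Before looking at the other extreme, here is the command-line sketch promised in step 2 above: the schema compare and script-generation steps can themselves be scripted. This is a hedged example using SQL Compare's command line; the server and database names are placeholders, and the exact switches depend on your version:

        REM Generate an upgrade script from the source (latest build) database
        REM to production, without running it.
        sqlcompare /Server1:BUILDSRV /Database1:LatestBuildDB ^
                   /Server2:PRODSRV  /Database2:ProductionDB ^
                   /ScriptFile:upgrade.sql

    A human can then still review upgrade.sql before anything touches production, which fits the semi-automated approach described next.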
    At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course, and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?" NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments: "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don't, and when things go wrong you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before: no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons: some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business work, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and the linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to "best practice". It's a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We've covered some of the checklist of areas to consider, mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development", though there will be more detail, depending on where you're coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
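    As a purely illustrative footnote to the "branch by abstraction" pattern discussed above, one backward-compatible increment for splitting address columns out of a customer table might look like this in T-SQL (all table and column names are hypothetical):

        -- Increment 1: add the new structure alongside the old columns.
        CREATE TABLE dbo.CustomerAddress (
            CustomerID INT NOT NULL PRIMARY KEY
                REFERENCES dbo.Customer (CustomerID),
            Street NVARCHAR(200) NULL,
            City   NVARCHAR(100) NULL
        );

        -- Increment 2: backfill; existing readers of dbo.Customer still work.
        INSERT INTO dbo.CustomerAddress (CustomerID, Street, City)
        SELECT CustomerID, Street, City
        FROM dbo.Customer;

        -- Increment 3 (a later, separately deployable step): repoint readers
        -- to dbo.CustomerAddress, then drop the old columns once nothing
        -- references them. Each increment can be rolled back without data loss.

    Each step ships on its own, which is what makes the rollback story tractable.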

    Read the article

< Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >