Search Results


  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal, but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"): a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on.

    It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DAC Pack for deployment and management. DBAs get an "application level view" of how their instances are being used, and the ability to manage the objects collectively rather than individually. The DBA managing a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all of them.

    The reason that DAC Packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases.

    The first problem is that DAC Packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects and, perhaps most critically, security (permissions, certificates etc.) are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea.

    More worrying still is the process for altering a database with a DAC Pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping.

    As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC Packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC Packs. Like it or not, they're both coming. Cheers, Tony.
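
    To make the space and log concerns concrete, the upgrade amounts to something like the following T-SQL. This is an illustrative sketch of the side-by-side process, not the actual script the engine generates, and the database and table names are invented:

        -- Build a new database from the DAC Pack's definition...
        CREATE DATABASE MyApp_New;
        -- ...create every object in it, then copy each table across;
        -- this copy is what demands the extra data and log space:
        INSERT INTO MyApp_New.dbo.Orders SELECT * FROM MyApp.dbo.Orders;
        -- Finally, drop the original and let the new copy take its name:
        DROP DATABASE MyApp;
        ALTER DATABASE MyApp_New MODIFY NAME = MyApp;

    Everything here happens while the application is offline, which is why a one-line procedure change can turn into a lengthy outage on a large database.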

  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use source control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place, with almost 28% of the vote, was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use source control".

    At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed when the system worked before but is now broken; work out what happened to their changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team have a functioning build; or find instantly whether a suspect change has been deployed to production. Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database carries far more of its own metadata than a traditional compiled application does. However, what is meat for a small development is poison for a team-based development. Here, we need a form of source control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges.

    The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but as a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion, with its support for merges and forks and its tolerance of different work practices, can be made to work well if used carefully. It has a standards-based architecture that allows it to be used on all platforms, such as Windows, Mac, and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference the column), you'll find out about it right away, and you'll be forced to fix it.

    Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question then becomes more subtle. Would developers be less reluctant to use a fully-featured source (revision) control system for a team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think. Cheers, Tony.
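
    The "automated approach to regular generation of build scripts" mentioned above is straightforward to set up. Here is a minimal sketch using SQL Server Management Objects (SMO); it assumes the SMO assemblies are installed, and the server, database and output directory names are placeholders to adjust:

        // Scripts each table of a database to its own .sql file, ready to commit to Subversion.
        using System.IO;
        using System.Linq;
        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Management.Sdk.Sfc;

        class ScriptSchema
        {
            static void Main()
            {
                var server = new Server("localhost");     // placeholder instance name
                var db = server.Databases["DevDB"];       // placeholder database name
                var scripter = new Scripter(server);
                scripter.Options.DriAll = true;           // include keys and constraints

                foreach (Table table in db.Tables)
                {
                    var script = scripter.Script(new Urn[] { table.Urn });
                    File.WriteAllLines(Path.Combine(@"C:\db-scripts", table.Name + ".sql"),
                                       script.Cast<string>().ToArray());
                }
            }
        }

    Run on a schedule, this gives even the lone developer a diffable history of the schema without changing how they work in SSMS.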

  • Customisation / overriding of the Envelope ecs files

    - by Dheeraj Kumar M
    There are a few use cases where the requirement is to customise the envelope information (the Interchange/Group ecs files). Such customisation might be needed for only a few of the customers, so in addition to the default seeded envelope definitions, the customised definitions also need to be uploaded. Here are the steps for achieving this:

    1. Create only the Interchange ecs and save.
    2. Create only the Group ecs and save.
    3. Use them in B2B.

    1. Create only the Interchange ecs and save: Open the document editor and select the required version and doctype. When creating the new ecs, ensure the "insert envelope" checkbox is selected. Once created, delete the Group and Transaction Set nodes and retain only the Interchange nodes, including both header and trailer. Save this file.

    2. Create only the Group ecs and save: After creating the ecs file as described in the Interchange steps, delete the Interchange and Transaction Set nodes and retain only the Group nodes, including both header and trailer. Save this file.

    3. Use them in B2B: The newly created ecs files can be used in B2B in two ways.

    a. By overriding at the trading partner level: This is most useful when the configuration is already complete and the customisation then needs to be incorporated. In this case, just select Trading Partner - Document - and pick the document that needs to be customised. Upload the newly created Interchange and Group ecs files under the Interchange and Group tabs respectively, and re-deploy the associated agreement. The advantages of this approach are the flexibility to add customised envelope definitions per partner, and saving the re-work of design-time effort.

    b. By adding another document definition in the Administration - Document screen: This can be used if no configuration has been done at the trading partner level. Create the required document revision and override the Interchange and Group ecs files under the Interchange and Group tabs respectively. Add the document under Trading Partner - Document, then create and deploy the agreements.

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (Restricted) patch (see Overview of GFv2) available as part of Oracle's Commercial Support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; package-based patches were released on Sep 13, 2011.

    Description: SJS AS 9.1 U2 (GFv2 U2) - Patch 25, GlassFish 2.1 - Patch 19, and GlassFish 2.1.1 - Patch 13. File- and package-based patches for Solaris SPARC, Solaris x86, Linux, Windows and AIX.

    Patch Ids - this release comes in 3 different variants:

    Package-based patches with HADB:
    • Solaris SPARC - [128640-27]
    • Solaris i586 - [128641-27]
    • Linux RPM - [128642-27]

    File-based patches with HADB:
    • Solaris SPARC - [128643-27]
    • Solaris i586 - [128644-27]
    • Linux - [128645-27]
    • Windows - [128646-27]

    File-based patches without HADB:
    • Solaris SPARC - [128647-27]
    • Solaris i586 - [128648-27]
    • Linux - [128649-27]
    • Windows - [128650-27]
    • AIX - [137916-27]

    Update Date: Nov 23, 2011

    Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

    Status: CURRENT

    Bugs fixed in this patch:
    • [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
    • [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
    • [12807660]: BUILD, STAGE AND INTEGRATING HADB
    • [12807643]: INTEGRATE MQ 4.4 U2 P4
    • [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
    • [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
    • [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
    • [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
    • [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
    • [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
    • [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
    • [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
    • [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
    • [12547651]: GLASSFISH DISPLAY BUG
    • [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
    • [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
    • [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
    • [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional notes: more details about these bugs can be found at My Oracle Support.

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years working on the Oracle GoldenGate product, so most of my posts will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. (I will delve into topology and deployment in a later blog.) The two most common resolution routines are host-based resolution and timestamp-based resolution.

    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.

    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice-versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table. Most homegrown applications can be customized to include these requirements, but it's more difficult with 3rd party applications, and might even be impossible for large ERP-type applications.

    If your database has these features - whether it's primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user whose data was overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring the exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. I'll cover these in my next blog.
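
    For a flavor of what timestamp-based resolution looks like in practice, here is a rough Replicat MAP clause using GoldenGate's conflict detection and resolution (CDR) options. This is an illustrative sketch only - the schema, table and column names are invented, and the exact options should be checked against the CDR documentation for your OGG release:

        MAP app.orders, TARGET app.orders,
            COMPARECOLS (ON UPDATE ALL),
            RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_ts)));

    Here the row with the newer last_mod_ts wins an update/update conflict; host-based resolution can be approximated by resolving with OVERWRITE on the trusted system and IGNORE on the other.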

  • How can I hit my database with an AJAX call using javascript?

    - by tmedge
    I am pretty new at this stuff, so bear with me. I am using ASP.NET MVC. I have created an overlay to cover the page when someone clicks a button corresponding to a certain database entry. Because of this, ALL of my code for this functionality is in a .js file contained within my project. What I need to do is pull the info corresponding to my entry from the database itself using an AJAX call, and place that into my textboxes. Then, after the end-user has made the desired changes, I need to update that entry's values to match the input. I've been surfing the web for a while, and have failed to find an example that fits my needs effectively. Here is my code in my javascript file thus far:

        function editOverlay(picId) {
            //pull up an overlay
            $('body').append('<div class="overlay" />');
            var $overlayClass = $('.overlay');
            $overlayClass.append('<div class="dataModal" />');
            var $data = $('.dataModal');
            overlaySetup($overlayClass, $data);

            //set up form
            $data.append('<h1>Edit Picture</h1><br /><br />');
            $data.append('Picture name: &nbsp;');
            $data.append('<input class="picName" /> <br /><br /><br />');
            $data.append('Relative url: &nbsp;');
            $data.append('<input class="picRelURL" /> <br /><br /><br />');
            $data.append('Description: &nbsp;');
            $data.append('<textarea class="picDescription" /> <br /><br /><br />');

            var $nameBox = $('.picName');
            var $urlBox = $('.picRelURL');
            var $descBox = $('.picDescription');
            var pic = null;

            //this is where I need to pull the actual object from the db
            //var imgList =
            for (var temp in imgList) {
                if (temp.Id == picId) {
                    pic = temp;
                }
            }

            /*
            $nameBox.attr('value', pic.Name);
            $urlBox.attr('value', pic.RelativeURL);
            $descBox.attr('value', pic.Description);
            */

            //close buttons
            $data.append('<input type="button" value="Save Changes" class="saveButton" />');
            $data.append('<input type="button" value="Cancel" class="cancelButton" />');

            $('.saveButton').click(function() {
                /*
                pic.Name = $nameBox.attr('value');
                pic.RelativeURL = $urlBox.attr('value');
                pic.Description = $descBox.attr('value');
                */
                //make a call to my Save() method in my repository
                CloseOverlay();
            });

            $('.cancelButton').click(function() {
                CloseOverlay();
            });
        }

    The stuff I have commented out is what I need to accomplish and/or is not available until prior issues are resolved. Any and all advice is appreciated! Remember, I am VERY new to this stuff (two weeks, to be exact) and will probably need highly explicit instructions. BTW: overlaySetup() and CloseOverlay() are functions I have living someplace else. Thanks!
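
    One way to wire this up - a sketch rather than a drop-in, since the controller, repository and action names here are invented and ASP.NET MVC 2 conventions are assumed - is a pair of controller actions that speak JSON, called from the overlay with jQuery:

        // PicturesController.cs -- hypothetical names; adjust to your project.
        public class PicturesController : Controller
        {
            private readonly PictureRepository _repository = new PictureRepository();

            // GET /Pictures/GetPicture/5 -- returns one entry as JSON for the overlay to display.
            public JsonResult GetPicture(int id)
            {
                var pic = _repository.GetById(id);
                return Json(new { pic.Id, pic.Name, pic.RelativeURL, pic.Description },
                            JsonRequestBehavior.AllowGet); // JSON over GET must opt in (MVC 2)
            }

            // POST /Pictures/SavePicture -- applies the user's edits and saves.
            [HttpPost]
            public JsonResult SavePicture(int id, string name, string relativeUrl, string description)
            {
                var pic = _repository.GetById(id);
                pic.Name = name;
                pic.RelativeURL = relativeUrl;
                pic.Description = description;
                _repository.Save();
                return Json(new { success = true });
            }
        }

    In editOverlay, the first commented-out block then becomes something like $.getJSON('/Pictures/GetPicture/' + picId, function (pic) { $nameBox.val(pic.Name); $urlBox.val(pic.RelativeURL); $descBox.val(pic.Description); });, and the save button handler posts the box values to /Pictures/SavePicture with $.post before calling CloseOverlay().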

  • Where do I find scripts generated by SharePoint MCMS Migration Profiles

    - by HipCzeck
    I am attempting to migrate data from a Microsoft Content Management Server (MCMS) 2002 instance into a new Microsoft Office SharePoint Server (MOSS) 2007 installation, using the Manage Microsoft Content Management Server Migration Profiles tool in the Operations space of MOSS Central Administration. When analyzing the profile, I receive 4 warnings, all of which may be safely ignored, but when I actually execute the migration profile, I get the same warnings and an additional error with a description of:

        Line 6: Incorrect syntax near ';'.

    I have seen this error numerous times when mucking about in SQL Server and recognize it as a Transact-SQL error message, but I can't find the actual SQL statement that is being executed so that I may determine the source of the error.

    EDIT: After enabling verbose logging on the MCMS 2002 Migration category, and poring through the Unified Logging Service (ULS) logs, I received a more complete stack trace at the point of the error, and a couple more anomalies, listed below.

    Anomalies: The following is an abbreviated listing from the ULS logs around the time of the pre-migration analysis.

        01 MCMS 2002 Migration Verbose Start ConnectionCheck
        02 MCMS 2002 Migration Verbose End ConnectionCheck
        03 MCMS 2002 Migration Verbose Start DatabaseCheck
        04 MCMS 2002 Migration High Extra table SiteDeployLock will not be migrated
        05 MCMS 2002 Migration High Analysis: Extra index PK__SiteDeployLock__05D8E0BE
        06 MCMS 2002 Migration Verbose End DatabaseCheck
        07 MCMS 2002 Migration Medium Pre-migration analysis: RootCheckTask is skipped because database check is blocked.
        08 MCMS 2002 Migration Medium Pre-migration analysis: RightsGroupNameCheckTask is skipped because database check is blocked.
        09 MCMS 2002 Migration Medium Pre-migration analysis: InvalidNameCheckTask is skipped because database check is blocked.
        10 MCMS 2002 Migration Medium Pre-migration analysis: LeafNameCheckTask is skipped because database check is blocked.
        11 MCMS 2002 Migration Medium Pre-migration analysis: LeafLengthCheckTask is skipped because database check is blocked.
        12 MCMS 2002 Migration Medium Pre-migration analysis: TemplateNameCheckTask is skipped because database check is blocked.
        13 MCMS 2002 Migration Medium Pre-migration analysis: TemplateCollisionCheckTask is skipped because database check is blocked.
        14 MCMS 2002 Migration Medium Pre-migration analysis: PlaceholderCheckTask is skipped because database check is blocked.
        15 MCMS 2002 Migration Medium Pre-migration analysis: CheckedOutItemsCheckTask is skipped because database check is blocked.
        16 MCMS 2002 Migration Medium Pre-migration analysis: SubmittedItemsCheckTask is skipped because database check is blocked.
        17 MCMS 2002 Migration Medium Pre-migration analysis: DeletedItemsCheckTask is skipped because database check is blocked.
        18 MCMS 2002 Migration Medium Pre-migration analysis: UserCheckTask is skipped because database check is blocked.
        19 MCMS 2002 Migration Medium Pre-migration analysis: FileSizeCheckTask is skipped because database check is blocked.
        20 MCMS 2002 Migration Medium Pre-migration analysis: HostHeaderMapCheckTask is skipped because database check is blocked.
        21 MCMS 2002 Migration Verbose Start Server check
        22 MCMS 2002 Migration Verbose End Server check
        23 MCMS 2002 Migration Verbose Start Server emptyness check
        24 MCMS 2002 Migration Verbose End Server emptyness check
        25 MCMS 2002 Migration Medium PreMigrationAnalyzer: Dry run starts
        26 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        27 MCMS 2002 Migration High CleanLockProcedure: connection system lock is null
        28 MCMS 2002 Migration Verbose Finished all tasks
        29 MCMS 2002 Migration High PreMigrationAnalyzer ends with True
        30 MCMS 2002 Migration Verbose Migration profile status is changed to AnalysisPassed

    Specifically, the two High level alerts on lines 4 and 5 are reflected in the migration report as warnings when running pre-migration analysis or running the migration profile. In addition, two other warnings appear in the migration report indicating two tables containing data (LayoutProperty and NodeLayout) that should be empty. According to the documentation, warnings are not sufficient cause to stop migration from occurring. Other anomalies are on lines 7-20, indicating a series of tests that are skipped "because database check is blocked"; the ULS doesn't give any additional warnings to indicate that the database check was blocked or exited in exceptional circumstances. After switching the profile from pre-migration analysis to exporting, there is one Medium level warning that LastChangeTime is not set or incorrect (null).

    As with all the skipped test names and SQL table names from the warnings, the major search engines are unable (with the exception of LayoutProperty) to find any reference to these objects or tests. Finally, the section of the log covering the actual live migration attempt is appended below:

        01 MCMS 2002 Migration Medium LastChangeTime is not set or incorrect. (null)
        02 MCMS 2002 Migration Verbose Set export lock
        03 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        04 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        05 MCMS 2002 Migration Verbose Prepare for export
        06 MCMS 2002 Migration Verbose Open connection...
        07 MCMS 2002 Migration Verbose Create temporary stored procedures
        08 MCMS 2002 Migration Verbose Create temporary tables...
        09 MCMS 2002 Migration Verbose Initialize temporary tables...
        10 MCMS 2002 Migration Verbose InitializeTemporaryTables: start
        11 MCMS 2002 Migration Verbose Initialize export table...
        12 MCMS 2002 Migration Verbose InitializeExportTable: start
        13 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        14 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        15-17 MCMS 2002 Migration High Migration throws exception: Line 6: Incorrect syntax near ';'.. Stacktrace: (split across these three log entries; the full trace is reproduced below)
        18 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. Start.
        19 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. End.
        20 MCMS 2002 Migration Verbose Migration profile status is changed to Failed

    The stack trace of the failed parsing of the SQL command appears in entries 15-17. A cleaner version:

        Migration throws exception: Line 6: Incorrect syntax near ';'..
          at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
          at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
          at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
          at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
          at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
          at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteImmediate(String command)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands()
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis)
          at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess)
          at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal()

    None of this log information indicates the SQL command that is failing the parser check.
I've checked the SQL servers hosting the source and destination databases for a trace of the query, but neither seems to have triggered the parse failure condition. That appears to have happened on the SharePoint server. Are there any other locations I should investigate that might tell me where to find the source of the error?

  • Lessons learned from Word 2007 automation with C# 2008

    - by robertphyatt
    My organization has an ongoing project to take documents produced for internal regulations and such, change some of the formatting, and then export them as PDF. Our requirements were that only one person would be doing this, but it has been painfully tedious and sometimes error-prone to do by hand. Enter the fearless developer to automate the situation! Since I am one of those guys that just plain does not like VB, I wanted to do the automation in the ever-so-much-more-familiar C#. While Microsoft has made a dll that makes such a task easier, documentation on MSDN is pretty lame, and most of the forums and posts on the internet had little to do with my task. So I feel like I can give back to the community and make a post here of the things I have learned so far. I hope this is helpful to whoever stumbles upon it. Steps to do this:

    1) First of all, make some sort of a project and use some sort of a means to get the filename of the word document you are trying to open. I got the filename the user wanted with an openFileDialog tied to a button that I labeled 'Browse':

        private void btnBrowse_Click(object sender, EventArgs e)
        {
            try
            {
                DialogResult myResult = openFileDialog1.ShowDialog();
                if (myResult.Equals(DialogResult.OK))
                {
                    if (openFileDialog1.SafeFileName.EndsWith(".doc"))
                    {
                        txtFileName.Text = openFileDialog1.SafeFileName;
                        paramSourceDocPath = openFileDialog1.FileName;
                        paramExportFilePath = openFileDialog1.FileName.Replace(".doc", ".pdf");
                    }
                    else
                    {
                        txtFileName.Text = "only something that ends with .doc, please";
                    }
                }
            }
            catch (Exception err)
            {
                lblError.Text = err.Message;
            }
        }

    2) Add in "using Microsoft.Office.Interop.Word;" after setting your project to reference Microsoft.Office.Core and Microsoft.Office.Interop.Word, so that you don't have to add "Microsoft.Office.Interop.Word" to the front of everything.

    3) Now you are ready to play. You will need to have a copy of Word open, and a copy of the word document that you want to modify, to be able to make the changes that are needed. The Word interop dll likes using ref on all the parameters passed in, and likes to have them as objects. If you don't want to specify a parameter, you have to give it Type.Missing. I suggest creating some objects that you reuse all over the place to maintain sanity:

        object paramMissing = Type.Missing;
        ApplicationClass wordApplication = new ApplicationClass();
        Document wordDocument = wordApplication.Documents.Open(
            ref paramSourceDocPath, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing);

    4) There are many ways to modify the text inside the word document. One of the ways that was most effective for me was to break it down by paragraph, and then do things on each paragraph depending on what style the particular paragraph had:

        foreach (Paragraph thisParagraph in wordDocument.Content.Paragraphs)
        {
            string strStyleName = ((Style)thisParagraph.get_Style()).NameLocal;
            string strText = thisParagraph.Range.Text;
            // Do whatever you need to do
        }

    5) Sometimes you want to insert a new line character somewhere in the text, or insert text into the document, etc. There are a few ways you can do this. You can either modify the text of a paragraph by doing something like this ('\r' makes a new paragraph, '\v' will make a newline without making a new paragraph; if you remove a '\r' from the text, it will eliminate the paragraph you removed it from):

        thisParagraph.Range.Text = "A\vNew Paragraph!\r" + thisParagraph.Range.Text;

    Or you could select where you want to insert it and have it act like you were typing in Word like any normal user (note: if you do not collapse the range first, you will overwrite the thing you got the range from):

        object oCollapseDirectionEnd = WdCollapseDirection.wdCollapseEnd;
        object oCollapseDirectionStart = WdCollapseDirection.wdCollapseStart;
        Range rangeInsertAtBeginning = thisParagraph.Range;
        Range rangeInsertAtEnd = thisParagraph.Range;
        rangeInsertAtBeginning.Collapse(ref oCollapseDirectionStart);
        rangeInsertAtEnd.Collapse(ref oCollapseDirectionEnd);
        rangeInsertAtBeginning.Select();
        wordApplication.Selection.TypeText("Blah Blah Blah");
        rangeInsertAtEnd.Select();
        wordApplication.Selection.TypeParagraph();

    6) If you want to make text columns, like a newspaper or newsletter, you have to modify the page layout of the document or of a section of the document. In my case, I only wanted a particular section to have that, and I wanted a black line before and after the newspaper-like text columns. First you need to do a section break on either side of what you want, then you take the section and modify the page layout. Then you can modify the borders of the section (or another object in the word document). I also show here how to modify the alignment of a paragraph:

        object oSectionBreak = WdBreakType.wdSectionBreakContinuous;
        // These ranges were set while I was going through the paragraphs of my document, like I was showing earlier
        rangeHeaderStart.InsertBreak(ref oSectionBreak);
        rangeHeaderEnd.InsertBreak(ref oSectionBreak);

        // change the alignment to justify
        object oRangeHeaderStart = rangeHeaderStart.Start;
        object oRangeHeaderEnd = rangeHeaderEnd.End;
        Range rangeHeader = wordDocument.Range(ref oRangeHeaderStart, ref oRangeHeaderEnd);
        rangeHeader.Paragraphs.Alignment = WdParagraphAlignment.wdAlignParagraphJustify;

        // find the section break and make it into triple text columns
        foreach (Section mySection in wordDocument.Sections)
        {
            if (mySection.Range.Start == rangeHeaderStart.Start)
            {
                mySection.PageSetup.TextColumns.Add(ref paramMissing, ref paramMissing, ref paramMissing);
                mySection.PageSetup.TextColumns.Add(ref paramMissing, ref paramMissing, ref paramMissing);

                // I didn't like the default spacing and column widths. This is how I adjusted them.
                foreach (TextColumn txtc in mySection.PageSetup.TextColumns)
                {
                    try
                    {
                        txtc.SpaceAfter = 151.6f;
                        txtc.Width = 7;
                    }
                    catch (Exception)
                    {
                        txtc.Width = 151.6f;
                    }
                }
            }
        }

    That is all I have time for today! I hope this was helpful to someone!
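
    P.S. The paramExportFilePath set up in step 1 never got used above, and the whole point was PDF export. The actual save would look something like this (a sketch: it assumes the Office 2007 "Save as PDF/XPS" add-in is installed and that paramExportFilePath was declared as an object like paramSourceDocPath; check the SaveAs parameter list against your interop version):

        // Save the modified document out as PDF using the path computed in step 1
        object paramExportFormat = WdSaveFormat.wdFormatPDF;
        wordDocument.SaveAs(ref paramExportFilePath, ref paramExportFormat,
            ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing);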

  • SOA, Governance, and Drugs

    Why is IT governance important in service oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied across systems, the following benefits are achieved, according to Anne Thomas Manes (2010): governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system; governance defines development standards based on proven design principles and patterns that promote reuse and composition; and governance provides developers a set of proven design principles, standards and practices that reduce dependencies between system components. By following these guidelines, individual components will be easier to maintain.

    For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect its value. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent; too much time and money is wasted on redeveloping aspects of an application that already exist within the system as a whole.

    I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach. The top layer will be defined by enterprise architects who focus on abstract concepts pertaining to high level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer will be defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects; this layer will contain further definitions as to when various design patterns, coding standards, and best practices are to be applied, based on the context of the solutions being developed by the department. The final layer will be defined by the system designer or a solutions architect assigned to a project: they will define what design patterns will be used in a solution and its naming conventions, as well as outline how the system will function based on the best practices defined by the previous layers.

    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real world example of this can be seen in the United States as it pertains to governance of the people: the US government defines rules and regulations in the abstract, the state governments take these guidelines and apply them based on the will of the people in each individual state, and the county or city governments are the ones that actually enforce the rules based on how they are interpreted by the local community. To further define my example: the United States government defines that marijuana is illegal. Each individual state has the option to interpret this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces the rules: a police officer will arrest anyone walking down the street in the state of Florida with this drug on them, but in California a person with a medical prescription for the drug will not get arrested.

    References: Thomas Manes, Anne (2010). Understanding SOA Governance. http://www.soamag.com/I40/0610-2.php

  • How to update a model in the database from ASP.NET MVC2 using Entity Framework?

    - by Eedoh
    Hello. I'm building an ASP.NET MVC2 application, and using Entity Framework as the ORM. I am having trouble updating objects in the database. Every time I try entity.SaveChanges(), EF inserts a new line in the table, regardless of whether I want an update or an insert to be done. I tried attaching the object to the entity (as in this next example), but then I got {"An object with a null EntityKey value cannot be attached to an object context."} Here's my simple function for inserts and updates (it's not really about vehicles, but it's simpler to explain like this, although I don't think that this affects answers at all):

        public static void InsertOrUpdateCar(this Vehicles entity, Cars car)
        {
            if (car.Id == 0 || car.Id == null)
            {
                entity.Cars.AddObject(car);
            }
            else
            {
                entity.Attach(car);
            }
            entity.SaveChanges();
        }

    I even tried using AttachTo("Cars", car), but I got the same exception. Does anyone have experience with this?
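
    For what it's worth, a common way out of this with EF4's ObjectContext API (which the AddObject call suggests) is to attach a stub with the right key and copy the detached values over it, so the context marks the entity Modified instead of Added. A sketch, assuming Id is the entity key:

        public static void InsertOrUpdateCar(this Vehicles entity, Cars car)
        {
            if (car.Id == 0)
            {
                entity.Cars.AddObject(car);          // no key yet: INSERT
            }
            else
            {
                // ObjectContext.Attach needs a populated EntityKey, which a detached
                // object lacks (hence the exception). ObjectSet.Attach infers the key
                // from Id, and ApplyCurrentValues turns the change into an UPDATE.
                var stub = new Cars { Id = car.Id };
                entity.Cars.Attach(stub);
                entity.Cars.ApplyCurrentValues(car);
            }
            entity.SaveChanges();
        }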

  • Is there a C pre-processor which eliminates #ifdef blocks based on values defined/undefined?

    - by Jonathan Leffler
    Original Question

    What I'd like is not a standard C pre-processor, but a variation on it which would accept from somewhere - probably the command line via -DNAME1 and -UNAME2 options - a specification of which macros are defined, and would then eliminate dead code. It may be easier to understand what I'm after with some examples:

        #ifdef NAME1
        #define ALBUQUERQUE "ambidextrous"
        #else
        #define PHANTASMAGORIA "ghostly"
        #endif

    If the command were run with '-DNAME1', the output would be:

        #define ALBUQUERQUE "ambidextrous"

    If the command were run with '-UNAME1', the output would be:

        #define PHANTASMAGORIA "ghostly"

    If the command were run with neither option, the output would be the same as the input. This is a simple case - I'd be hoping that the code could handle more complex cases too. To illustrate with a real-world but still simple example:

        #ifdef USE_VOID
        #ifdef PLATFORM1
        #define VOID void
        #else
        #undef VOID
        typedef void VOID;
        #endif /* PLATFORM1 */
        typedef void * VOIDPTR;
        #else
        typedef mint VOID;
        typedef char * VOIDPTR;
        #endif /* USE_VOID */

    I'd like to run the command with -DUSE_VOID -UPLATFORM1 and get the output:

        #undef VOID
        typedef void VOID;
        typedef void * VOIDPTR;

    Another example:

        #ifndef DOUBLEPAD
        #if (defined NT) || (defined OLDUNIX)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    Ideally, I'd like to run with -UOLDUNIX and get the output:

        #ifndef DOUBLEPAD
        #if (defined NT)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    This may be pushing my luck!

    Motivation: large, ancient code base with lots of conditional code. Many of the conditions no longer apply - the OLDUNIX platform, for example, is no longer made and no longer supported, so there is no need to have references to it in the code. Other conditions are always true. For example, features are added with conditional compilation so that a single version of the code can be used both for older versions of the software where the feature is not available and for newer versions where it is available (more or less). Eventually, the old versions without the feature are no longer supported - everything uses the feature - so the condition on whether the feature is present or not should be removed, and the 'when feature is absent' code should be removed too. I'd like to have a tool to do the job automatically because it will be faster and more reliable than doing it manually (which is rather critical when the code base includes 21,500 source files).

    (A really clever version of the tool might read #include'd files to determine whether the control macros - those specified by -D or -U on the command line - are defined in those files. I'm not sure whether that's truly helpful except as a backup diagnostic. Whatever else it does, though, the pseudo-pre-processor must not expand macros or include files verbatim. The output must be source similar to, but usually simpler than, the input code.)

    Status Report (one year later)

    After a year of use, I am very happy with 'sunifdef', recommended by the selected answer. It hasn't made a mistake yet, and I don't expect it to. The only quibble I have with it is stylistic. Given an input such as:

        #if (defined(A) && defined(B)) || defined(C) || (defined(D) && defined(E))

    and run with '-UC' (C is never defined), the output is:

        #if defined(A) && defined(B) || defined(D) && defined(E)

    This is technically correct because '&&' binds tighter than '||', but it is an open invitation to confusion. I would much prefer it to include parentheses around the sets of '&&' conditions, as in the original:

        #if (defined(A) && defined(B)) || (defined(D) && defined(E))

    However, given the obscurity of some of the code I have to work with, for that to be the biggest nit-pick is a strong compliment; it is a valuable tool to me.

    The New Kid on the Block

    Having checked the URL for inclusion in the information above, I see that (as predicted) there is a new program called Coan that is the successor to 'sunifdef'. It is available on SourceForge and has been since January 2010. I'll be checking it out... further reports later this year, or maybe next year, or sometime, or never.
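
    As a syntax reference, sunifdef (and Coan after it) is driven much like the classic unifdef, so a run over the USE_VOID example above would look roughly like this (illustrative; check the man page of whichever tool and version you use):

        unifdef -DUSE_VOID -UPLATFORM1 olddefs.h > newdefs.h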

  • How to change the data in Telerik's RadGrid based on Calendar's selected dates?

    - by Jronny
    I was creating another usercontrol with Telerik's RadGrid and Calendar:

        <%@ Register Assembly="Telerik.Web.UI" Namespace="Telerik.Web.UI" TagPrefix="telerik" %>
        <table class="style1">
            <tr>
                <td>From</td>
                <td>To</td>
            </tr>
            <tr>
                <td><asp:Calendar ID="Calendar1" runat="server" SelectionMode="Day"></asp:Calendar></td>
                <td><asp:Calendar ID="Calendar2" runat="server" SelectionMode="Day"></asp:Calendar></td>
            </tr>
            <tr>
                <td><asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" /></td>
                <td><asp:Button ID="btnClear" runat="server" Text="Clear" OnClick="btnClear_Click" /></td>
            </tr>
        </table>
        <telerik:RadGrid ID="RadGrid1" runat="server">
            <MasterTableView CommandItemDisplay="Top"></MasterTableView>
        </telerik:RadGrid>

    and I am using Linq in the code-behind:

        Entities1 entities = new Entities1();
        public static object DataSource = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            if (DataSource == null)
            {
                DataSource = (from entity in entities.nsc_moneytransaction
                              select new
                              {
                                  date = entity.transaction_date.Value,
                                  username = entity.username,
                                  cashbalance = entity.cash_balance
                              }).OrderByDescending(a => a.date);
            }
            BindData();
        }

        public void BindData()
        {
            RadGrid1.DataSource = DataSource;
        }

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            DateTime startdate = new DateTime();
            DateTime enddatedate = new DateTime();
            if (Calendar1.SelectedDate != null && Calendar2.SelectedDate != null)
            {
                startdate = Calendar1.SelectedDate;
                enddatedate = Calendar2.SelectedDate;
                var queryDateRange = from entity in entities.nsc_moneytransaction
                                     where DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) >= DateTime.Parse(startdate.ToShortDateString())
                                        && DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) <= DateTime.Parse(enddatedate.ToShortDateString())
                                     select new
                                     {
                                         date = entity.transaction_date.Value,
                                         username = entity.username,
                                         cashbalance = entity.cash_balance
                                     };
                DataSource = queryDateRange.OrderByDescending(a => a.date);
            }
            else if (Calendar1.SelectedDate != null)
            {
                startdate = Calendar1.SelectedDate;
                var querySetDate = from entity in entities.nsc_moneytransaction
                                   where entity.transaction_date.Value == startdate
                                   select new
                                   {
                                       date = entity.transaction_date.Value,
                                       username = entity.username,
                                       cashbalance = entity.cash_balance
                                   };
                DataSource = querySetDate.OrderByDescending(a => a.date);
            }
            BindData();
        }

        protected void btnClear_Click(object sender, EventArgs e)
        {
            Calendar1.SelectedDates.Clear();
            Calendar2.SelectedDates.Clear();
        }

    The problems are: (1) when I click the submit button, the data in the RadGrid is not changed; and (2) how can we check whether anything is selected in the Calendar controls? A date (01/01/0001) is set even if we do not select anything from the calendar, so Calendar1.SelectedDate != null is not enough. =( Thanks.
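
    Two small changes would address both problems; sketched here against the code above (untested, and note that the static DataSource field is shared across all users of the page, which is worth changing too):

        public void BindData()
        {
            RadGrid1.DataSource = DataSource;
            RadGrid1.DataBind();   // (1) without an explicit rebind the grid keeps showing the old data
        }

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // (2) an untouched Calendar reports DateTime.MinValue (01/01/0001), never null
            bool hasStart = Calendar1.SelectedDate != DateTime.MinValue;
            bool hasEnd = Calendar2.SelectedDate != DateTime.MinValue;
            if (hasStart && hasEnd)
            {
                // ... date-range query as before ...
            }
            else if (hasStart)
            {
                // ... single-date query as before ...
            }
            BindData();
        }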

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code with java -Djavax.net.debug=ssl LDAPConnector, I get the following exception trace (java version 1.6.0_17):

        trigger seeding of SecureRandom
        done seeding SecureRandom
        %% No cached client session
        *** ClientHello, TLSv1
        RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
        Session ID: {}
        Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
        Compression Methods: { 0 }
        ***
        Thread-0, WRITE: TLSv1 Handshake, length = 73
        Thread-0, WRITE: SSLv2 client hello message, length = 98
        Thread-0, received EOFException: error
        Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
        Thread-0, WRITE: TLSv1 Alert, length = 2
        Thread-0, called closeSocket()
        main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
            at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
            at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
            at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
            at javax.naming.InitialContext.init(Unknown Source)
            at javax.naming.InitialContext.<init>(Unknown Source)
            at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
            at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
            at LDAPConnector.main(LDAPConnector.java:237)
        Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at com.sun.jndi.ldap.Connection.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.io.EOFException: SSL peer shut down incorrectly
            at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
            ... 9 more

    I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as mentioned in this guide -- OpenLDAP with SSL. When I run ldapsearch -x on the server I get:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=localdomain> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # localdomain
        dn: dc=localdomain
        objectClass: top
        objectClass: dcObject
        objectClass: organization
        o: localdomain
        dc: localdomain

        # admin, localdomain
        dn: cn=admin,dc=localdomain
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator

        # search result
        search: 2
        result: 0 Success

        # numResponses: 3
        # numEntries: 2

    On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate. My slapd.conf file is as follows:

        #######################################################################
        # Global Directives:

        # Features to permit
        #allow bind_v2

        # Schema and objectClass definitions
        include         /etc/ldap/schema/core.schema
        include         /etc/ldap/schema/cosine.schema
        include         /etc/ldap/schema/nis.schema
        include         /etc/ldap/schema/inetorgperson.schema

        # Where the pid file is put. The init.d script
        # will not stop the server if you change this.
        pidfile         /var/run/slapd/slapd.pid

        # List of arguments that were passed to the server
        argsfile        /var/run/slapd/slapd.args

        # Read slapd.conf(5) for possible values
        loglevel        none

        # Where the dynamically loaded modules are stored
        modulepath      /usr/lib/ldap
        moduleload      back_hdb

        # The maximum number of entries that is returned for a search operation
        sizelimit       500

        # The tool-threads parameter sets the actual amount of cpu's that is used
        # for indexing.
        tool-threads    1

        #######################################################################
        # Specific Backend Directives for hdb:
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        backend         hdb

        #######################################################################
        # Specific Backend Directives for 'other':
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        #backend        <other>

        #######################################################################
        # Specific Directives for database #1, of type hdb:
        # Database specific directives apply to this databasse until another
        # 'database' directive occurs
        database        hdb

        # The base of your directory in database #1
        suffix          "dc=localdomain"

        # rootdn directive for specifying a superuser on the database. This is needed
        # for syncrepl.
        rootdn          "cn=admin,dc=localdomain"

        # Where the database file are physically stored for database #1
        directory       "/var/lib/ldap"

        # The dbconfig settings are used to generate a DB_CONFIG file the first
        # time slapd starts. They do NOT override existing an existing DB_CONFIG
        # file. You should therefore change these settings in DB_CONFIG directly
        # or remove DB_CONFIG and restart slapd for changes to take effect.

        # For the Debian package we use 2MB as default but be sure to update this
        # value if you have plenty of RAM
        dbconfig set_cachesize 0 2097152 0

        # Sven Hartge reported that he had to set this value incredibly high
        # to get slapd running at all. See http://bugs.debian.org/303057 for more
        # information.

        # Number of objects that can be locked at the same time.
        dbconfig set_lk_max_objects 1500
        # Number of locks (both requested and granted)
        dbconfig set_lk_max_locks 1500
        # Number of lockers
        dbconfig set_lk_max_lockers 1500

        # Indexing options for database #1
        index           objectClass eq

        # Save the time that the entry gets modified, for database #1
        lastmod         on

        # Checkpoint the BerkeleyDB database periodically in case of system
        # failure and to speed slapd shutdown.
        checkpoint      512 30

        # Where to store the replica logs for database #1
        # replogfile    /var/lib/ldap/replog

        # The userPassword by default can be changed
        # by the entry owning it if they are authenticated.
        # Others should not be able to see it, except the
        # admin entry below
        # These access lines apply to database #1 only
        access to attrs=userPassword,shadowLastChange
                by dn="cn=admin,dc=localdomain" write
                by anonymous auth
                by self write
                by * none

        # Ensure read access to the base for things like
        # supportedSASLMechanisms. Without this you may
        # have problems with SASL not knowing what
        # mechanisms are available and the like.
        # Note that this is covered by the 'access to *'
        # ACL below too but if you change that as people
        # are wont to do you'll still need this if you
        # want SASL (and possible other things) to work
        # happily.
        access to dn.base="" by * read

        # The admin dn has full write access, everyone else
        # can read everything.
        access to *
                by dn="cn=admin,dc=localdomain" write
                by * read

        # For Netscape Roaming support, each user gets a roaming
        # profile for which they have write access to
        #access to dn=".*,ou=Roaming,o=morsnet"
        #        by dn="cn=admin,dc=localdomain" write
        #        by dnattr=owner write

        #######################################################################
        # Specific Directives for database #2, of type 'other' (can be hdb too):
        # Database specific directives apply to this databasse until another
        # 'database' directive occurs
        #database       <other>

        # The base of your directory for database #2
        #suffix         "dc=debian,dc=org"

        #######################################################################
        # SSL:
        # Uncomment the following lines to enable SSL and use the default
        # snakeoil certificates.
        #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
        TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
        TLSCACertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateKeyFile /etc/ldap/ssl/server.pem

    My ldap.conf file is:

        #
        # LDAP Defaults
        #
        # See ldap.conf(5) for details
        # This file should be world readable but not world writable.

        HOST    ldap.natraj.com
        PORT    636
        BASE    dc=localdomain
        URI     ldaps://ldap.natraj.com

        TLS_CACERT      /etc/ldap/ssl/server.pem
        TLS_REQCERT     allow

        #SIZELIMIT      12
        #TIMELIMIT      15
        #DEREF          never

    Read the article

  • How do I convert some ugly inline javascript into a function?

    - by Taylor
    I've got a form with various inputs that by default have no value. When a user changes one or more of the inputs all values including the blank ones are used in the URL GET string when submitted. So to clean it up I've got some javascript that removes the inputs before submission. It works well enough but I was wondering how to put this in a js function or tidy it up. Seems a bit messy to have it all clumped in to an onclick. Plus i'm going to be adding more so there will be quite a few. Here's the relevant code. There are 3 seperate lines for 3 seperate inputs. The first part of the line has a value that refers to the inputs ID ("mf","cf","bf","pf") and the second part of the line refers to the parent div ("dmf","dcf", etc). The first part is an example of the input structure... echo "<div id='dmf'><select id='mf' name='mFilter'>"; This part is the submit and js... echo "<input type='submit' value='Apply' onclick='javascript: if (document.getElementById(\"mf\").value==\"\") { document.getElementById(\"dmf\").innerHTML=\"\"; } if (document.getElementById(\"cf\").value==\"\") { document.getElementById(\"dcf\").innerHTML=\"\"; } if (document.getElementById(\"bf\").value==\"\") { document.getElementById(\"dbf\").innerHTML=\"\"; } if (document.getElementById(\"pf\").value==\"\") { document.getElementById(\"dpf\").innerHTML=\"\"; } ' />"; I have pretty much zero javascript knowledge so help turning this in to a neater function or similar would be much appreciated.

    Read the article

  • Spring 3 simple extensionless url mappings with annotation-based mapping - impossible?

    - by caerphilly
    Hi, I'm using Spring 3, and trying to set up a simple web-app using annotations to define controller mappings. This seems to be incredibly difficult without peppering all the urls with *.form or *.do Because part of the site needs to be password protected, these urls are all under /secure. There is a <security-constraint> in the web.xml protecting everything under that root. I want to map all the Spring controllers to /secure/app/. Example URLs would be: /secure/app/landingpage /secure/app/edit/customer/{id} each of which I would handle with an appropriate jsp/xml/whatever. So, in web.xml I have this: <servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>/secure/app/*</url-pattern> </servlet-mapping> And in despatcher-servlet.xml I have this: <context:component-scan base-package="controller" /> In the Controller package I have a controller class: package controller; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.servlet.ModelAndView; import javax.servlet.http.HttpServletRequest; @Controller @RequestMapping("/secure/app/main") public class HomePageController { public HomePageController() { } @RequestMapping(method = RequestMethod.GET) public ModelAndView getPage(HttpServletRequest request) { ModelAndView mav = new ModelAndView(); mav.setViewName("main"); return mav; } } Under /WEB-INF/jsp I have a "main.jsp", and a suitable view resolver set up to point to this. I had things working when mapping the despatcher using *.form, but can't get anything working using the above code. When Spring starts up it appears to map everything correctly: 13:22:36,762 INFO main annotation.DefaultAnnotationHandlerMapping:399 - Mapped URL path [/secure/app/main] onto handler [controller.HomePageController@2a8ab08f] I also noticed this line, which looked suspicious: 13:25:49,578 DEBUG main servlet.DispatcherServlet:443 - No HandlerMappings found in servlet 'dispatcher': using default And at run time any attempt to view /secure/app/main just returns a 404 error in Tomcat, with this log output: 13:25:53,382 DEBUG http-8080-1 servlet.DispatcherServlet:842 - DispatcherServlet with name 'dispatcher' determining Last-Modified value for [/secure/app/main] 13:25:53,383 DEBUG http-8080-1 servlet.DispatcherServlet:850 - No handler found in getLastModified 13:25:53,390 DEBUG http-8080-1 servlet.DispatcherServlet:690 - DispatcherServlet with name 'dispatcher' processing GET request for [/secure/app/main] 13:25:53,393 WARN http-8080-1 servlet.PageNotFound:962 - No mapping found for HTTP request with URI [/secure/app/main] in DispatcherServlet with name 'dispatcher' 13:25:53,393 DEBUG http-8080-1 servlet.DispatcherServlet:677 - Successfully completed request So... Spring maps a URL, and then "forgets" about that mapping a second later? What is going on? Thanks.

    Read the article

  • Extending MySQLi

    - by FRKT
    Hello, I've run into problems extending the MySQLi class. It won't let me add any properties. class MySQLii extends MySQLi { public $database; public function MySQLii($host, $username, $password, $database){ // Initialize MySQLi parent::MySQLi($host, $username, $password, $database); // Save database name $this->database = $database; } } $mysqlii = new MySQLii('localhost', 'root', 'password', 'database'); var_dump($mysqlii); object(MySQLii)#1 (17) { ["affected_rows"]= int(0) ["client_info"]= string(48) "mysqlnd 5.0.5-dev - 081106 - $Revision: 289630 $" ["client_version"]= int(50005) ["connect_errno"]= int(0) ["connect_error"]= NULL ["errno"]= int(0) ["error"]= string(0) "" ["field_count"]= int(0) ["host_info"]= string(42) "MySQL host info: Localhost via UNIX socket" ["info"]= NULL ["insert_id"]= int(0) ["server_info"]= string(6) "5.1.44" ["server_version"]= int(50144) ["sqlstate"]= string(5) "00000" ["protocol_version"]= int(10) ["thread_id"]= int(4019) ["warning_count"]= int(0) } Note the absence of the database property I added in the MySQLii constructor. Am I missing something?

    Read the article

  • How can I copy a SQL record which has related records in other tables to the same database?

    - by DerekVS
    Hi. I created a function in C# which allows me to copy a record and its related children to a new record and new related children in the same database. (This is for an application that allows the use of previous work as a template for new work.) Anyway, it works great... Here's a description of how it accomplishes the copy: It populates a two-column memory-based look-up table with the current primary key of each record. Next, as it individually creates each new copy record, it updates the look-up table with the Identity PK of the new record [retrieved from SCOPE_IDENTITY()]. Now, when it copies over any related children, it can look up the new parent PK to set the FK on the new record. In testing, it only took a minute to copy a relational structure on a local instance of SQL Server 2005 Express Edition. Unfortunately it is proving to be horribly slow in production! My users are dealing with 60,000+ records per parent record over the LAN to our SQL Server! While my copy function still works, each of those records represents an individual SQL UPDATE command and it loads the SQL Server at about 17% CPU from its normal 2% idle. I just finished testing a 50,000 record copy and it took almost 20 minutes! Is there a way to duplicate this functionality in SQL queries or stored procecures to make the SQL server do all of the copy work instead of blasting it over the LAN from each client? (We're running Microsoft SQL Server 2005 Standard Edition.) Thanks! -Derek

    Read the article

  • mongodb: insert if not exists

    - by LeMiz
    Hello, Every day, I receive a stock of documents (an update). What I want to do is inserting each of them if it does not exists. I also want to keep track of the first time I inserted them, and the last time I saw them in an update. I don't want to have duplicate documents. I don't want to remove a document which has previously been saved, but is not in my update. 95% (estimated) of the records are unmodified from day to day. I am using the python driver (pymongo), for that matter. What I currently do is (pseudo-code): for each document in update: existing_document = collection.find_one(document) if not existing_document: document['insertion_date'] = now else: document = existing_document document['last_update_date'] = now my_collection.save(document) My problem is that it is very slow (40 mins for less than 100 000 records, and I have millions of them in the update). I am pretty sure there is something builtin for doing this, but the document for update() is mmmhhh.... a bit terse.... ( http://www.mongodb.org/display/DOCS/Updating ) Can someone give an advice on doing it faster ?

    Read the article

  • Accessing native iPhone addressbook database and performing add and delete contacts?

    - by chaitanya
    Hi, In my application I need to implement the address book which should contains the native addressbook details, and the user should be able to add and delete from the address book and it should be updated in the native iphone addressbook. I read somewhere that the iphone native address book database is accesible. In documentation also I saw that addContact and Delete API's are exposed to addressbook. Can anyone please tell me how can I access the native AddressBook of the iphone, and.. how to add and delete contacts from the address book? Can anyone post the sample code for this?

    Read the article

  • How can I line up WPF items in a Horizontal WrapPanel so they line up based on an arbitrary vertical

    - by Scott Whitlock
    I'm trying to create a View in WPF and having a hard time figuring out how to set it up. Here's what I'm trying to build: My ViewModel exposes an IEnumerable property called Items Each item is an event on a timeline, and each one implements ITimelineItem The ViewModel for each item has it's own DataTemplate to to display it I want to display all the items on the timeline connected by a line. I'm thinking a WrapPanel inside of a ListView would work well for this. However, the height of each item will vary depending on the information it displays. Some items will have graphic objects right on the line (like a circle or a diamond, or whatever), and some have annotations above or below the line. So it seems complicated to me. It seems that each item on the timeline has to render its own segment of the line. That's fine. But the distance between the top of the item to the line (and the bottom of the item to the line) could vary. If one item has the line 50 px down from the top and the next item has the line 100 px down from the top, then the first item needs 50 px of padding so that the line segments add up. I think I could solve that problem, however, we only need to add padding if these two items are on the same line in the WrapPanel! Let's say there are 5 items and only room on the screen for 3 across... the WrapPanel will put the other two on the next line. That's ok, but that means only the first 3 need to pad together, and the last 2 need to pad together. This is what's giving me a headache. Is there another approach I could look at?

    Read the article

  • Height of an HTML window's content (not just the viewport height)

    - by gatapia
    Hi All, I'm trying to get the height of a html window's content. This is the full height of the content not the visible height. I have had some (very limited) success using: document.getElementsByTagName('html')[0].offsetHeight in FireFox. This however fails in IEs and it fails in Chrome when using absolute positioned elements (http://code.google.com/p/chromium/issues/detail?id=38999). A sample html file that can be used to reproduce this is: <html> <head> <style> div { border:solid 1px red; height:2000px; width:400px; } .broken { position:absolute; top:0; left:0; } .fixed { position:relative; top:0; left:0; } </style> <script language='javascript'> window.onload = function () { document.getElementById('window.height').innerHTML = window.innerHeight; document.getElementById('window.screen.height').innerHTML = window.screen.height; document.getElementById('document.html.height').innerHTML = document.getElementsByTagName('html')[0].offsetHeight; } </script> </head> <body> <div class='fixed'> window.height: <span id='window.height'>&nbsp;</span> <br/> window.screen.height: <span id='window.screen.height'></span> <br/> document.html.height: <span id='document.html.height'></span> <br/> </div> </body> </html> Thanks All Guido Tapia

    Read the article

  • In Reporting Services, how do I filter a drop-down parameter list based on another selected parameter?

    - by Lee Englestone
    Question In a Reporting Services Report, How do I filter a second drop down list of cars to only show cars whose ManufacturerId is equal the selected Manufacturer (from the first drop down list)? Report Datasets I have 2 datasets. Dataset 1. A list of Manufacturers. From a stored procedure Report_Manufacturers_P Dataset 2. A list of Cars, including a column called Manufacturers id. From a stored procedure Report_Cars_P Report Parameters On the Report I have 2 Parameters. Parameter 1. ManufacturerId. Set from A drop down list of Manufacturers (DataSet 1). Parameter 2. CarId. Set from A drop down list of Cars (DataSet 2). I've tried.. Creating another sproc called Report_Manufacturer_Cars_P that takes the ManufacturerId as an integer and returns a list of cars made by that manufacturer. Any Ideas. As selecting a Manufacturer doesn't seem to want to kick off anything that filters the Car list? Thanks in advance, -- Lee

    Read the article

  • Is it possible to mix MEF and Unity within a MEF-based plugin?

    - by Dave
    I'm finally diving into Unity head first, and have run into my first real problem. I've been gradually changing some things in my app from being MEF-resolved to Unity-resolved. Everything went fine on the application side, but then I realized that my plugins were not being loaded. I started to look into this issue, and I believe it's a case where MEF and Unity don't mix. Plugins are loaded by MEF, but each plugin needs to get access to the shared libraries in my application, like app preferences, logging, etc. Initially, my plugin constructor had the ImportingConstructor attribute. I then replaced it with InjectionConstructor so that Unity could resolve its shared library dependencies. But because I did that, MEF no longer loaded it! Then I used both attributes, which compiled, but then I got a composition error (MEF). I figured that this was because the constructor takes a parameter that was once resolved by a MEF Import, so I removed all parameters. As expected, now MEF was able to load my plugin, but because the constructor needs to call into the interface that was once passed in, construction fails. So now I'm at a point where I can get MEF to start to load my plugin, but can't do anything with it because the plugin relies on shared libraries that are registered with Unity. For those of you that have successfully mixed MEF and Unity, how do you go about resolving the references to the shared libraries with Unity?

    Read the article

  • opengl paint program based on Apple's 'glPaint' on a white background - how to blend?

    - by Adam
    Trying to write a simple paint program for iPhone, and I'm using Apple's glPaint sample as a guide. The only problem is, painting doesn't work on a white background, since white + colour = white. I've tried different blending functions, but haven't been able to hit on the right combination of settings and/or brushes to make this work. I've seen similar posts about this problem but no answers. Does anyone know how this might work?

    Read the article

< Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >