Search Results

Search found 64482 results on 2580 pages for 'sql source control'.


  • How do I create a Linked Server in SQL Server 2005 to a password protected Access 95 database?

    - by Brad Knowles
    I need to create a linked server with SQL Server Management Studio 2005 to an Access 95 database, which happens to be password protected at the database level. User level security has not been implemented. I cannot convert the Access database to a newer version. It is being used by a 3rd party application; so modifying it, in any way, is not allowed. I've tried using the Jet 4.0 OLE DB Provider and the ODBC OLE DB Provider. The 3rd party application creates a System DSN (with the proper database password), but I've not had any luck in using either method. If I were using a standard connection string, I think it would look something like this:

        Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\Test.mdb';Jet OLEDB:Database Password=####;

    I'm fairly certain I need to somehow incorporate Jet OLEDB:Database Password into the linked server setup, but haven't figured out how. I've posted the scripts I'm using along with the associated error messages below. Any help is greatly appreciated. I'll provide more details if needed, just ask. Thanks!

    Method #1 - Using the Jet 4.0 Provider. When I try to run these statements to create the linked server:

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc = N'C:\Test.mdb'
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO

    I get this error when testing the connection:

        TITLE: Microsoft SQL Server Management Studio
        "The test connection to the linked server failed."
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" reported an error. Authentication failed.
        Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test".
        OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" returned message "Cannot start your application. The workgroup information file is missing or opened exclusively by another user.". (Microsoft SQL Server, Error: 7399)

    Method #2 - Using the ODBC Provider:

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'MSDASQL',
            @srvproduct = N'ODBC',
            @datasrc = N'Test:DSN'
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO

    I get this error:

        TITLE: Microsoft SQL Server Management Studio
        "The test connection to the linked server failed."
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Test".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.". (Microsoft SQL Server, Error: 7303)
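
    A hedged sketch of one avenue worth trying (not a confirmed fix for Access 95): sp_addlinkedserver accepts a @provstr argument for provider-specific options, which is the usual place to put a Jet OLEDB:Database Password setting. Something like:

        -- Sketch only: passes the Access database password through the provider
        -- string rather than relying solely on the login mapping.
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc = N'C:\Test.mdb',
            @provstr = N'Jet OLEDB:Database Password=####;'  -- #### = placeholder from the question
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO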

    Read the article

  • cfengine3 file_copy only on source side change

    - by megamic
    I am using the 'digest' copy method for all file copy promises because, given the way we package and deploy software, I can't rely on mtime as the criterion for updating files. For various reasons, I am not employing the client-server approach with a central configuration server: rather, we package and deploy our entire configuration module to each server, so from cfengine's perspective the source and target are both local on the server it is running on. The problem I am having with this approach is that the source will always update the target when they differ - which is what I want most of the time, usually because the source has been updated. However, like many other cfengine users, we are running an operational environment where occasionally emergency fixes have to be applied immediately - meaning we don't have time to rebuild and redeploy a configuration module, and the fix will often be applied by deploying a tarball with specific changes. Of course this is problematic if cfengine comes along 5 minutes later and reverts the changes. What we would like is to be able to make small, incremental changes to our servers, without them being reverted, until the next deployment cycle, at which time the new source files would be copied. We do not consider random file corruption or mistaken changes to involve enough risk to warrant having cfengine constantly revert deployments to their source copy - the ability to deploy emergency fixes and have them stay that way until the next deployment would be of much greater value and utility. So, after all that, my question is this: is cfengine capable of detecting whether it was the source or the target that changed when the files differ, and if so, is there a way to use the 'digest' copy method but only if the source side changed? I am very open to other ideas and approaches as well, as I am still quite new to this whole configuration management thing.

    Read the article

  • How do I set up a Windows Server 2008 R2 + SQL Server 2008 VPS?

    - by Spencer Lim
    I wish to deploy a trusted app in a secure way. I have one empty VPS (no operating system), but I don't know how to install Windows Server 2008 R2 and SQL Server 2008 on it. The versions I have are genuine: Enterprise/Datacenter for Windows and Enterprise for SQL Server. The main purpose is to deploy ASP.NET 4 MVC 2 and XBAP apps (plus LINQ), and also to use SQL Server for my Windows application, with some way to make it accessible remotely. Could anyone here teach me, or point me to some resources, on how to set up the domain, IP, OS, features and everything else, please? I feel confused here @.@

    Read the article

  • Under which circumstances can a *local* user account access a remote SQL Server with a trusted connection?

    - by Heinzi
    One of our customers has the following configuration: On the domain controller, there's an SQL Server. On his PC (WinXP), he logs on with LocalPC\LocalUser. In Windows Explorer, he opens \\DomainController\SomeShare and authenticates as Domain\Administrator. He starts our application, which opens a trusted connection (Windows authentication) to the SQL Server. It works. In SSMS, the connection shows up with the user Domain\Administrator. Firstly, I was surprised that this even works. (My first suspicion was that there is a user with the same name and password in the domain, but there is no user LocalUser in the domain.) Then we tried to reproduce the same behaviour on his new PC, but failed: On his new PC (Win7), he logs on with OtherLocalPC\OtherLocalUser. In Windows Explorer, he opens \\DomainController\SomeShare and authenticates as Domain\Administrator. He starts our application, which opens a trusted connection (Windows authentication) to the SQL Server. It fails with the error message Login failed for user ''. The user is not associated with a trusted SQL Server connection. Hence my question: Under which conditions can a non-domain user access a remote SQL Server using Windows Authentication with different credentials? Apparently, it's possible (it works on his old PC), but why? And how can I reproduce it?
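
    As a diagnostic sketch (standard DMVs, nothing specific to this setup), it may help to ask SQL Server, from the application's connection on each PC, which login and authentication scheme it actually negotiated:

        -- Run from the application's own connection (or any session you want to inspect).
        SELECT
            SUSER_SNAME()  AS login_name,   -- the login SQL Server resolved for this session
            c.auth_scheme  AS auth_scheme   -- SQL, NTLM or KERBEROS
        FROM sys.dm_exec_connections AS c
        WHERE c.session_id = @@SPID;

    Comparing the output between the old and new PC should at least show under which login the working connection arrives and whether it comes in as NTLM or Kerberos.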

    Read the article

  • Is SQL Server 2008 R2 unsupported by Operations Manager (SCOM) 2007 R2?

    - by bwerks
    Hey all, I'm performing a test configuration of System Center Operations Manager 2007 R2, on a system prepared with SQL Server 2008 R2. Unfortunately, the SCOM 2007 R2 prerequisites verification program seems to be detecting exact versions of SQL Server, and not simply a minimum version, like it claims: "System Center Operations Manager 2007 R2 requires SQL Server 2005 Standard or Enterprise Edition with SP1 and above or SQL Server 2008 Standard or Enterprise edition with SP1 and above. Note: Operations Manager 2007 R2 does not support a 32-bit Operations Manager Operations database, Reporting Server data warehouse or Audit Collection database on a 64-bit operating system." I had hoped that this was just a helper tool assisting in getting me off the ground, but unfortunately it seems as if it's actually used as a gate for the installation to proceed. Has anyone encountered this? If so, is there a way to fool the installer into thinking that it has a proper version, or otherwise alert it to my valid configuration?
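
    For reference, a quick way to see the exact version and edition strings an installer check is likely reading (that the checker uses SERVERPROPERTY is an assumption, but these are the standard values to compare against):

        -- SQL Server 2008 R2 reports build 10.50.x; SQL Server 2008 SP1 reports 10.0.2531 or later.
        SELECT
            SERVERPROPERTY('ProductVersion') AS product_version,
            SERVERPROPERTY('ProductLevel')   AS product_level,   -- e.g. RTM, SP1
            SERVERPROPERTY('Edition')        AS edition;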

    Read the article

  • How do you apply development practices like version control, testing and continuous integration/deployment to system administration?

    - by arex1337
    Imagine you're going to manage a number of servers running a number of different services that are used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to work on servers that are in production. If this were a code change, as a developer, I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary. Generally, or specifically, how do you achieve this in system administration? (The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and clever solutions I'm not presently aware of.)

    Read the article

  • Need help with setting up MS-SQL on EC2.

    - by Hareem Haque
    I have a large MS-SQL database that I need to send to the AWS cloud. The issue is how to persist my SQL data and how to set up an MS-SQL cluster using a Windows AMI. The real issue is that for replication I need to use the private IPs of the instances. However, these IPs are always dynamic and will change on server launch. Any ideas on how I can get rid of this problem? I really appreciate your help. Best Regards, Hareem Haque

    Read the article

  • How do I set the UNC permissions on a SQL Server 2008 Filestream UNC?

    - by Justin Dearing
    I have a SQL Server 2008 instance. I have configured filestream access properly, and use it from one column on one table in one of my databases. However, I cannot access the UNC share for the filestream data. I have tried browsing to it as well as trying to open specific files, and I get errors both ways. I am running SQL Server 2008 Enterprise on a Windows 7 workstation joined to the domain. I've tried running the SQL Server service as a local user, then as a network admin. The user I am logged in as is a local admin and a sysadmin in SQL Server.
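
    One thing worth noting (a hedged sketch of the documented access pattern, not a diagnosis of this particular failure): the FILESTREAM share is not a general-purpose file share, and browsing it directly in Explorer fails by design. Clients are expected to obtain a path and a transaction token via T-SQL and then stream the file through the Win32 API (e.g. SqlFileStream). The table and column names below are hypothetical:

        BEGIN TRANSACTION;

        SELECT
            FileData.PathName()                  AS filestream_path,  -- logical UNC path to this value
            GET_FILESTREAM_TRANSACTION_CONTEXT() AS txn_token         -- token a streaming client must present
        FROM dbo.Documents
        WHERE DocumentId = 1;

        -- A client such as System.Data.SqlTypes.SqlFileStream opens filestream_path
        -- using txn_token, streams the data, and then the transaction is committed.
        COMMIT TRANSACTION;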

    Read the article

  • What permissions are granted to a SQL Server database owner?

    - by Charles Hepner
    I have been trying to find out what permissions are granted to the owner of a database in SQL Server 2005 or higher. I have seen best practices questions like this one: What is the best practice for the database owner in SQL Server 2005? but I haven't been able to find anything specifically addressing what the purpose of having a database owner in SQL Server is and what permissions are granted as a result of making a given login a database owner. If the owner of the database is disabled, what would stop working?
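
    A small sketch for poking at this yourself (standard catalog views, valid on 2005 and higher): the owning login is the one whose SID is recorded in sys.databases, and inside the database that SID maps to the dbo user, which implicitly holds CONTROL over the database:

        -- Which login owns each database?
        SELECT d.name AS database_name,
               sp.name AS owner_login
        FROM sys.databases AS d
        LEFT JOIN sys.server_principals AS sp
               ON d.owner_sid = sp.sid;

        -- Inside a given database, list the effective database-level permissions of dbo:
        EXECUTE AS USER = 'dbo';
        SELECT * FROM fn_my_permissions(NULL, 'DATABASE');
        REVERT;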

    Read the article

  • Moving SQL Server databases from one drive to another?

    - by Michael Stum
    I have a SQL Server 2008 R2 on my machine which stores everything on the C: drive (C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA). I got an additional hard drive now and would like to move all the databases over. It's 26 databases, so I'd like to avoid manually disconnecting/reconnecting them. Ideally I would just like to move them from C: to D: and tell SQL Server to look there. Downtime is not an issue, I just don't want to do dozens of mouse clicks :)
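
    A hedged sketch of the script-only route (assumes SQL Server 2008 R2 semantics; D:\SQLData is an illustrative target folder): for each database, ALTER DATABASE ... MODIFY FILE records the new path in the catalog, the database is taken OFFLINE, the physical files are moved, and the database is brought back ONLINE. Generating the statements from sys.master_files avoids typing 26 sets by hand:

        -- Generate the relocation statements; review the output before running it.
        SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + '] MODIFY FILE (NAME = [' + name
             + '], FILENAME = ''D:\SQLData\'
             + RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name)) - 1) + ''');'
        FROM sys.master_files
        WHERE database_id > 4;  -- skip master/tempdb/model/msdb

        SELECT 'ALTER DATABASE [' + name + '] SET OFFLINE;' FROM sys.databases WHERE database_id > 4;
        -- ...move the .mdf/.ldf files in the filesystem while each database is offline...
        SELECT 'ALTER DATABASE [' + name + '] SET ONLINE;'  FROM sys.databases WHERE database_id > 4;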

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups.

    As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways - we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c).

    There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback, capacity planning, and complete visibility of the physical and virtual environment from applications to disk.

    Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds.

    DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment.

    The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video we recommend you take the time to view it. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you.

    The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Workarounds for supporting MVVM in the Silverlight TreeView Control

    - by cibrax
    MVVM (Model-View-ViewModel) is the pattern that you will typically choose for building testable user interfaces either in WPF or Silverlight. This pattern basically relies on the data binding support in those two technologies for mapping an existing model class (the view model) to the different parts of the UI or view. Unfortunately, MVVM was not treated as a first-class citizen for some of the controls released out of the box in the Silverlight runtime or the Silverlight Toolkit. That means that using data binding for implementing MVVM is not always trivial and usually requires some customization of the existing controls. I ran into different problems myself trying to fully support data binding in controls like the tree view or the context menu, or things like drag & drop. For that reason, I decided to write this post to show how the tree view control or the tree view items can be customized to support data binding in many of its properties.

    In the first place, you will typically use a tree view for showing hierarchical data, so the view model somehow must reflect that hierarchy. An easy way to implement hierarchy in a model is to use a base item element like this one:

        public abstract class TreeItemModel
        {
            // An abstract member must be a property rather than a field.
            public abstract IEnumerable<TreeItemModel> Children { get; }
        }

    You can later derive your concrete model classes from that base class. For example:

        public class CustomerModel
        {
            public string FullName { get; set; }
            public string Address { get; set; }
            public IEnumerable<OrderModel> Orders { get; set; }
        }

        public class CustomerTreeItemModel : TreeItemModel
        {
            public CustomerTreeItemModel(CustomerModel customer)
            {
            }

            public override IEnumerable<TreeItemModel> Children
            {
                get
                {
                    // Return orders
                }
            }
        }

    The Children property in the CustomerTreeItem model implementation can return, for instance, an ObservableCollection<TreeItemModel> with the orders, so the tree view will automatically subscribe to all the changes in the collection. You can bind this model to the tree view control in the UI by using a hierarchical data template:

        <e:TreeView x:Name="TreeView" ItemsSource="{Binding Customers}">
            <e:TreeView.ItemTemplate>
                <sdk:HierarchicalDataTemplate ItemsSource="{Binding Children}">
                    <!-- TEMPLATE -->
                </sdk:HierarchicalDataTemplate>
            </e:TreeView.ItemTemplate>
        </e:TreeView>

    An interesting behavior with the Children property and the hierarchical data template is that the Children property is only invoked before the expansion, so you can use lazy loading at this point (the tree view control will not expand the whole tree in the first expansion). The problem with using MVVM in this control is that you cannot bind properties in the model to specific properties of the TreeView item such as IsSelected or IsExpanded. Here is where you need to customize the existing tree view control to support data binding in tree items.

        public class CustomTreeView : TreeView
        {
            public CustomTreeView()
            {
            }

            protected override DependencyObject GetContainerForItemOverride()
            {
                CustomTreeViewItem tvi = new CustomTreeViewItem();

                // Bind the item's IsExpanded and IsSelected dependency properties
                // to same-named properties on the view model.
                Binding expandedBinding = new Binding("IsExpanded");
                expandedBinding.Mode = BindingMode.TwoWay;
                tvi.SetBinding(CustomTreeViewItem.IsExpandedProperty, expandedBinding);

                Binding selectedBinding = new Binding("IsSelected");
                selectedBinding.Mode = BindingMode.TwoWay;
                tvi.SetBinding(CustomTreeViewItem.IsSelectedProperty, selectedBinding);

                return tvi;
            }
        }

        public class CustomTreeViewItem : TreeViewItem
        {
            public CustomTreeViewItem()
            {
            }

            // Nested items must also produce CustomTreeViewItem containers,
            // so the same bindings apply at every level of the tree.
            protected override DependencyObject GetContainerForItemOverride()
            {
                CustomTreeViewItem tvi = new CustomTreeViewItem();

                Binding expandedBinding = new Binding("IsExpanded");
                expandedBinding.Mode = BindingMode.TwoWay;
                tvi.SetBinding(CustomTreeViewItem.IsExpandedProperty, expandedBinding);

                Binding selectedBinding = new Binding("IsSelected");
                selectedBinding.Mode = BindingMode.TwoWay;
                tvi.SetBinding(CustomTreeViewItem.IsSelectedProperty, selectedBinding);

                return tvi;
            }
        }

    You basically need to derive the TreeView and TreeViewItem controls to manually add a binding for the properties you need. In the example above, I am adding a binding for the "IsExpanded" and "IsSelected" properties in the items. The model for the tree items now needs to be extended to support those properties as well:

        public abstract class TreeItemModel : INotifyPropertyChanged
        {
            bool isExpanded = false;
            bool isSelected = false;

            public abstract IEnumerable<TreeItemModel> Children { get; }

            public bool IsExpanded
            {
                get { return isExpanded; }
                set
                {
                    isExpanded = value;
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("IsExpanded"));
                }
            }

            public bool IsSelected
            {
                get { return isSelected; }
                set
                {
                    isSelected = value;
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("IsSelected"));
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

    However, as soon as you use this custom tree view control, you lose all the automatic styles from the built-in Toolkit themes because they are tied to the control type (TreeView in this case). The only ugly workaround I found so far for this problem is to copy the styles from the Toolkit source code and reuse them in the application.

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article

  • SQL error - Cannot convert nvarchar to decimal

    - by jakesankey
    I have a C# application that simply parses all of the txt documents within a given network directory and imports the data to a SQL Server db. Everything was cruising along just fine until about the 1800th file, which happened to have a few blanks in columns that are called out as DbType.Decimal (and the value is usually zero in the files, not blank). So I got this error: "cannot convert nvarchar to decimal". I am wondering how I could tell the app to simply skip the lines that have this issue? Perhaps I could even just change the column type to varchar even though the values are numbers (what problems could this create?). Thanks for any help!

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Threading;
        using System.Collections.Generic;
        using System.Linq;
        using System.Data.SqlClient;

        namespace JohnDeereCMMDataParser
        {
            internal class Program
            {
                public static List<string> GetImportedFileList()
                {
                    List<string> ImportedFiles = new List<string>();
                    using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
                    {
                        connect.Open();
                        using (SqlCommand fmd = connect.CreateCommand())
                        {
                            fmd.CommandText = @"SELECT FileName FROM CMMData;";
                            fmd.CommandType = CommandType.Text;
                            SqlDataReader r = fmd.ExecuteReader();
                            while (r.Read())
                            {
                                ImportedFiles.Add(Convert.ToString(r["FileName"]));
                            }
                        }
                    }
                    return ImportedFiles;
                }

                private static void Main(string[] args)
                {
                    Console.Title = "John Deere CMM Data Parser";
                    Console.WriteLine("Preparing CMM Data Parser... done");
                    Console.WriteLine("Scanning for new CMM data...");
                    Console.ForegroundColor = ConsoleColor.Gray;
                    using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
                    {
                        con.Open();
                        using (SqlCommand insertCommand = con.CreateCommand())
                        {
                            Console.WriteLine("Connecting to SQL server...");
                            SqlCommand cmdd = con.CreateCommand();
                            string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories);
                            List<string> ImportedFiles = GetImportedFileList();

                            insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String));
                            insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String));
                            insertCommand.Parameters.Add(new SqlParameter("@Axis", DbType.String));
                            insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal));
                            insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal));
                            insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal));
                            insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal));
                            insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal));
                            insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal));

                            foreach (string file in files.Except(ImportedFiles))
                            {
                                var FileNameExt1 = Path.GetFileName(file);
                                cmdd.Parameters.Clear();
                                cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1));
                                cmdd.CommandText = @"
                                    IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                                                WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData'))
                                    BEGIN
                                        SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt;
                                    END";
                                int count = Convert.ToInt32(cmdd.ExecuteScalar());
                                con.Close();
                                con.Open();
                                if (count == 0)
                                {
                                    Console.WriteLine("Preparing to parse CMM data for SQL import...");
                                    if (file.Count(c => c == '_') > 5)
                                        continue;
                                    insertCommand.CommandText = @"
                                        INSERT INTO CMMData (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName)
                                        VALUES (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);";

                                    string FileNameExt = Path.GetFullPath(file);
                                    string RNumber = Path.GetFileNameWithoutExtension(file);
                                    int index2 = RNumber.IndexOf("~");
                                    Match RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase);
                                    Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase);
                                    Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase);
                                    string RNumE = Convert.ToString(RNumberE);
                                    string RNumD = Convert.ToString(RNumberD);
                                    if (RNumberD.Value == @"") continue;
                                    if (RNumberE.Value == @"") continue;
                                    if (RNumberDate.Value == @"") continue;
                                    if (index2 != -1) continue;
                                    DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture);
                                    string cmmDate = dateTime.ToString("dd-MMM-yyyy");

                                    string[] lines = File.ReadAllLines(file);
                                    bool parse = false;
                                    foreach (string tmpLine in lines)
                                    {
                                        string line = tmpLine.Trim();
                                        if (!parse && line.StartsWith("Feat. Type,"))
                                        {
                                            parse = true;
                                            continue;
                                        }
                                        if (!parse || string.IsNullOrEmpty(line))
                                        {
                                            continue;
                                        }
                                        Console.WriteLine(tmpLine);
                                        foreach (SqlParameter parameter in insertCommand.Parameters)
                                        {
                                            parameter.Value = null;
                                        }
                                        string[] values = line.Split(new[] { ',' });
                                        for (int i = 0; i < values.Length - 1; i++)
                                        {
                                            // This condition was garbled in the original post ("if (i = "" || i = null)");
                                            // the apparent intent is to skip blank fields.
                                            if (string.IsNullOrEmpty(values[i])) continue;
                                            SqlParameter param = insertCommand.Parameters[i];
                                            if (param.DbType == DbType.Decimal)
                                            {
                                                decimal value;
                                                param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                            }
                                            else
                                            {
                                                param.Value = values[i];
                                            }
                                        }
                                        insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumE));
                                        insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumD));
                                        insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
                                        insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));
                                        insertCommand.ExecuteNonQuery();
                                        insertCommand.Parameters.RemoveAt("@PartNumber");
                                        insertCommand.Parameters.RemoveAt("@CMMNumber");
                                        insertCommand.Parameters.RemoveAt("@Date");
                                        insertCommand.Parameters.RemoveAt("@FileName");
                                    }
                                }
                            }
                            Console.WriteLine("CMM data successfully imported to SQL database...");
                        }
                        con.Close();
                    }
                }
            }
        }

    Read the article

  • ASP.NET, Web API and ASP.NET Web Pages Open Sourced

    - by The Official Microsoft IIS Site
    More Open Source goodness from Microsoft today, with the announcement that we are open sourcing ASP.NET MVC 4, ASP.NET Web API, ASP.NET Web Pages v2 (Razor) - all with contributions - under the Apache 2.0 license. You can find the source on CodePlex, and all the details on Scott Guthrie's blog. "We will also for the first time allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products...(read more)

    Read the article

  • Does using GCC specific builtins qualify as incorporation within a project?

    - by DavidJFelix
    I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this. #include "gpl_program.h" means you'd have to license GPL, because you're linking to GPL licensed code. And #include "lgpl_program.h" means you're free to license however you want, so that it doesn't explicitly prohibit linking to LGPL source. Now, my question about what isn't clear is: [begin question] GCC is GPL licensed, compiling with GCC, does not constitute "integration" into your program, as the GPL puts it; does using builtin functions (which are specific to GCC) constitute "incorporation" even though you haven't explicitly linked to this GPL licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried, but I'm curious if this could be considered the case. [end question] [begin aside] The reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are an API technically and could be implemented in another way. For example, many builtins were replicated by LLVM and the argument could be made that it's not implementation specific to GCC. However, many builtins have no parallel and when compiled will link GPL licensed code in GCC and will not compile on other compilers. If you make the argument here that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that if builtins don't constitute linking, then open source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface that is identical to the GPL one, preserving their BSD license until it's actually compiled? [end aside] Sorry for the aside, I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there, I'm just extrapolating what either answer to my actual question could imply.

    Read the article

  • VB-like 'with' keyword in C#

    - by Tanzim Saqib
    AspectF is an open source utility which offers separation of concerns in a fluent way. I am personally a big fan as well as a contributor of this project. It is very simple, easy to implement, and an excellent way to incorporate regular everyday logic into your business code from one single class, AspectF. I have added a couple of new features to it, which are yet to be committed to the source control. However, one feature that I have introduced today is the ability to write a VB-like 'with' keyword...(read more)

    Read the article

  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement

    With many Oracle WebCenter Sites environments - how do you know if the various configuration assets and settings are in sync across all of those environments?

    Background

    At Oracle we typically have a "W" shaped set of environments. For the "Production" environments we typically have a disaster recovery clone as well, and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc.) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc. is impossible to do efficiently without automation.

    Solution Summary

    The solution comprises two components:

    1. A JSON data feed from each environment.
    2. A simple HTML page that consumes these JSON data feeds.

    Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings, returning this data in a JSON feed.

    Report: Create a simple HTML page that uses jQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc.) are published between environments, they will have the same last modified date. If the last modified date of an asset is different in the JSON feed, or the asset is missing from an environment entirely, then highlight that in the report table.

    Example Solution Details

    Step 1: Create a SiteEntry + CSElement that outputs JSON

    The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement, set the contenttype accordingly.

    Step 2: Write the CSElement logic

    The basic logic, which we repeat for each asset or setting that we are interested in, is to query the DB using <ics:sql> and then loop over the resultset with <ics:listloop>. For example:

        <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />
        "templates": [
        <ics:listloop listname="TemplateList">
            {"name":"<ics:listget listname="TemplateList" fieldname="name"/>",
             "modified":"<ics:listget listname="TemplateList" fieldname="updateddate"/>"},
        </ics:listloop>
        ],

    A comprehensive list of SQL queries to fetch each configuration asset/setting can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product), native Java 7 capabilities, or (as the above example demonstrates) you could roll your own JSON output, but that is not advised.

    Step 3: Create an HTML report

    The JavaScript logic looks something like this:

    1) Create a list of JSON feeds to fetch:

        ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';
        ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';
        ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';
        ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';

    2) Create a function to get the JSON feeds:

        function getDataForEnvironment(url){
            return $.ajax({
                type: 'GET',
                url: url,
                dataType: 'jsonp',
                beforeSend: function (jqXHR, settings){
                    jqXHR.originalEnv = env;
                    jqXHR.originalUrl = url;
                },
                success: function(json, status, jqXHR) {
                    console.log('....success fetching: ' + jqXHR.originalUrl);
                    // store the returned data in ALLDATA
                    ALLDATA[jqXHR.originalEnv] = json;
                },
                error: function(jqXHR, status, e) {
                    console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e);
                }
            });
        }

    3) Fetch each JSON feed:

        for (var env in ENVS) {
            console.log('Fetching data for env [' + env +'].');
            var promisedData = getDataForEnvironment(ENVS[env]);
            promisedData.success(function (data) {});
        }

    4) For each configuration asset or setting, create a table in the report. For example, CSElements:

        1. Get a list of unique CSElement names from all of the returned JSON data.
        2. For each unique CSElement name, create a row in the table.
        3. Select one environment to represent the master or ideal state (e.g. "everything should be like Production Delivery").
        4. For each environment, compare the last modified date of this environment's CSElement to the master. Highlight any differences in last modified date or missing CSElements.
        5. Repeat...

    Appendix

    This section contains various SQL statements that can be used to retrieve configuration settings from the DB.

    Templates
        <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />

    CSElements
        <ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" />

    Start Menus
        <ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>

    Publishing Configurations
        <ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" />

    Tree Tabs
        <ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />

    Filters
        <ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" />

    Attribute Types
        <ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" />

    WebReference Patterns
        <ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" />

    Device Groups
        <ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" />

    Site Entries
        <ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" />

    Webroots
        <ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" />

    Page Definitions
        <ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" />

    FW_Application
        <ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" />

    Custom Elements
        <ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />

    Read the article

  • What is Mozilla's new release management strategy?

    - by RonK
    I saw today that Firefox released a new version (5). I tried reading about what was added and ran into this link: http://arstechnica.com/open-source/news/2011/06/firefox-5-released-arrives-only-three-months-after-firefox-4.ars It states that: "Mozilla has launched Firefox 5, a new version of the popular open source Web browser. This is the first update that Mozilla has issued since adopting a new release management strategy that has drastically shortened the Firefox development cycle." I find this very intriguing - any idea what this new strategy is?

    Read the article

  • Understanding the Microsoft Public License (MS-PL)

    - by J.r. Hounddog
    I'm looking at using a few open source products in a commercial software application I'm working on. One of them is licensed under MIT, which I understand as allowing commercial software linking. However, the other open source product is licensed under MS-PL but I don't understand if that license is fully compatible with commercial software. So the question is, can I use MS-PL licensed OSS in a commercial/proprietary/for-sale application? Thanks.

    Read the article
