Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now.

    First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (which have all been converted first to their correct underlying types, then to sql_variant) into a temp table. A stored procedure is then run which expects the temp table to exist, and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes.

    Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element, and an ID of 0 signifies "create", a.k.a. an insert of a new item.

    ```xml
    <Elements>
      <Element ID="1234">
        <Attr ID="221">Value</Attr>
        <Attr ID="225">287</Attr>
        <Attr ID="234">
          <Element ID="99825">
            <Attr ID="7">Value1</Attr>
            <Attr ID="8">Value2</Attr>
            <Attr ID="9" Action="delete" />
          </Element>
          <Element ID="99826" Action="delete" />
          <Element ID="0" Type="24">
            <Attr ID="7">Value4</Attr>
            <Attr ID="8">Value5</Attr>
            <Attr ID="9">Value6</Attr>
          </Element>
          <Element ID="0" Type="24">
            <Attr ID="7">Value7</Attr>
            <Attr ID="8">Value8</Attr>
            <Attr ID="9">Value9</Attr>
          </Element>
        </Attr>
        <Rel ID="3827" Action="delete" />
        <Rel ID="2284" Role="parent">
          <Element ID="3827" />
          <Element ID="3829" />
          <Attr ID="665">1</Attr>
        </Rel>
        <Rel ID="0" Type="23" Role="child">
          <Element ID="3830" />
          <Attr ID="67" />
        </Rel>
      </Element>
      <Element ID="0" Type="87">
        <Attr ID="221">Value</Attr>
        <Attr ID="225">569</Attr>
        <Attr ID="234">
          <Element ID="0" Type="24">
            <Attr ID="7">Value10</Attr>
            <Attr ID="8">Value11</Attr>
            <Attr ID="9">Value12</Attr>
          </Element>
        </Attr>
      </Element>
      <Element ID="1235" Action="delete" />
    </Elements>
    ```

    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by JavaScript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet.

    There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

    ElementID  2284
    ElementID1 1234
    ElementID2 3827

    Having multiple children in one relationship isn't currently supported, but it would be nice later.
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.NET at some point. A persistence engine like Linq or NHibernate is probably not acceptable right now--I just want to get this already-working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle Unicode characters as well as very long strings (10k+).

    UPDATE: I have had this working for some time, and I used the ADO Recordset save-to-stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code handles any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements: JavaScript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

    ```javascript
    var newid = 0;

    function metadataAdd(reference, nameid, value) {
        var t = document.createElement('input');
        t.setAttribute('name', nameid);
        t.setAttribute('id', nameid);
        t.setAttribute('type', 'hidden');
        t.setAttribute('value', value);
        reference.appendChild(t);
    }

    function multiAdd(target, parentelementid, attrid, elementtypeid) {
        var proto = document.getElementById('a' + attrid + '_proto');
        var instance = document.createElement('p');
        target.parentNode.parentNode.insertBefore(instance, target.parentNode);
        var thisid = ++newid;
        instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
        instance.id = 'n' + thisid;
        instance.className += ' new';
        metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
        metadataAdd(instance, 'n' + thisid + '_c', attrid);
        metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
        return false;
    }
    ```

    For example, template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, whose value is the element type of the element to be created; n1_p, whose value is the parent id of the element (if it is a relationship); and n1_c, whose value is the child id of the element (if it is a relationship).

    Deleting elements: a hidden input is created in the form e12345_t with the value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete.

    With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
    When the form is posted, here's a sample of building one of the two recordsets used (Classic ASP code):

    ```vbscript
    Set Data = Server.CreateObject("ADODB.Recordset")
    Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
    Data.CursorLocation = adUseClient
    Data.CursorType = adOpenDynamic
    Data.Open
    ```

    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added with negative IDs to distinguish them from regular elements (rather than requiring a separate column to indicate whether a row is new or addresses an existing element). I use a regular expression to tear apart the form keys:

    "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

    ```vbscript
    Data.AddNew
    ElementID.Value = DataID
    AttrID.Value = Integerize(Matches(0).SubMatches(3))
    AttrValue.Value = Request.Form(Key)
    Data.Update
    ```

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

    ```vbscript
    Set Cmd = Server.CreateObject("ADODB.Command")
    With Cmd
        Set .ActiveConnection = MyDBConn
        .CommandType = adCmdStoredProc
        .CommandText = "DataPost"
        .Prepared = False
        .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
        .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
    End With
    Result.Open Cmd ' previously created recordset object with options set
    ```

    Here's the function that does the XML conversion:

    ```vbscript
    Private Function XMLFromRecordset(Recordset)
        Dim Stream
        Set Stream = Server.CreateObject("ADODB.Stream")
        Stream.Open
        Recordset.Save Stream, adPersistXML
        Stream.Position = 0
        XMLFromRecordset = Stream.ReadText
    End Function
    ```

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

    ```sql
    CREATE PROCEDURE [dbo].[DataPost]
        @ElementMetaData ntext,
        @ElementData ntext
    AS

    DECLARE @hdoc int

    --- snip ---

    EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
        '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

    INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
    SELECT *
    FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
    ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

    EXEC sp_xml_removedocument @hdoc

    --- snip ---

    UPDATE E
    SET E.ElementTypeID = M.ElementTypeID
    FROM Element E
    INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
    WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1
    ```

    The following query correlates the negative new element IDs with the newly inserted ones:

    ```sql
    UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
    SET NewElementID = Scope_Identity() - @@RowCount + DataID
    WHERE ElementID < 0
    ```

    Other set-based queries do all the other work of validating that the attributes are allowed and are the correct data type, and of inserting, updating, and deleting elements and attributes.

    I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with two inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
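
    A note for anyone reusing the ID-correlation trick above: it relies on the new rows being inserted in a known order and on the insert receiving a contiguous block of identity values. Below is a stripped-down sketch of the pattern; the names echo the snippets above, and it is illustrative rather than the exact production code.

    ```sql
    -- Sketch of the SCOPE_IDENTITY()/@@ROWCOUNT correlation pattern (illustrative only).
    -- Assumes Element.ElementID is an IDENTITY column and that this insert's identity
    -- values form one contiguous block.
    DECLARE @LastID int, @NewRows int

    INSERT INTO Element (ElementTypeID)
    SELECT ElementTypeID
    FROM #ElementMetadata
    WHERE ElementID < 0
    ORDER BY ElementID                   -- same ordering used to assign DataID = 1..n

    SELECT @LastID  = SCOPE_IDENTITY(),  -- last identity generated by the INSERT above
           @NewRows = @@ROWCOUNT         -- still holds the INSERT's row count at this point

    UPDATE #ElementMetadata
    SET NewElementID = @LastID - @NewRows + DataID
    WHERE ElementID < 0
    ```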

    Read the article

  • sp_addlinkedserver procedure not linking Excel source

    - by brohjoe
    I am attempting to link an Excel source to a SQL Server DB on the Go Daddy website. When I execute the stored procedure in SQL Server, it shows it executed successfully, but no data is linked. This is the main part of my procedure:

    ```sql
    EXEC sp_addlinkedserver
        @server = 'XLHybrid',
        @provider = 'Microsoft.ACE.OLEDB.12.0',
        @srvproduct = 'Excel',
        @provstr = 'Excel 12.0 Macro',
        @datasrc = 'C:\Database\XLHybrid.xlsm'
    ```

    What's the problem here?
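
    Two things are worth checking here: sp_addlinkedserver only registers the link, so no data appears until the linked server is actually queried, and @datasrc is resolved on the machine running SQL Server, so a C:\ path on your own PC will not exist on a remote, hosted instance. A rough sketch of querying the link once the workbook really is reachable from the server follows; the worksheet name Sheet1 is an assumption.

    ```sql
    -- Map logins to the linked server (no remote credentials needed for an Excel file).
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname  = 'XLHybrid',
        @useself     = 'FALSE',
        @rmtuser     = NULL,
        @rmtpassword = NULL;

    -- Pull rows from the workbook; 'Sheet1' is a placeholder worksheet name.
    SELECT * FROM XLHybrid...[Sheet1$];
    SELECT * FROM OPENQUERY(XLHybrid, 'SELECT * FROM [Sheet1$]');
    ```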

    Read the article

  • How do I keep connection in object explorer

    - by dotnet-practitioner
    This is regarding SQL Server 2008 Management Studio. I connect to databases in several different environments, and every time I launch Management Studio I have to sign in again to get those connections back in Object Explorer. Is there a way to persist the connections so I don't have to log in to each environment every time?

    Read the article

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there, Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/ColdFusion) and our database server (SQL Server 2008). We lock down the SQL Server by allowing only specific IPs to connect to it. Not ideal, but it's worked thus far.

    We're moving up to two distinct servers, and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I:

    1. Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or,

    2. Set up a VPN server on the database server as the only public-facing connection to that server, so that there aren't any routing issues to deal with?

    I know this is Network 101 stuff, but I thought I'd ask before just blundering through it, since it could affect the company a bit. Thanks very much!

    Read the article

  • Windows Small Business System 2003. SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily. It is set up with a wizard in the Monitoring and Performance part of the "Server Management" console. It often fails with a "The page cannot be displayed" error. The event log shows:

    ```
    Event Type:     Error
    Event Source:   ServerStatusReports
    Event Category: None
    Event ID:       1
    Date:           1/16/2011
    Time:           6:03:14 AM
    User:           N/A
    Computer:       ALPHA
    Description:    Server Status Report:
    URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
    Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    Stack Trace:
       at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
       at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
       at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
       at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
       [plus lots more stack trace]
    ```

    This has been happening for years :) I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run. If I haven't run it for a while, it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly. The time that the Server Cleanup Wizard takes seems ridiculous. Thanks
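
    If the bottleneck really is the WSUS database, a lighter maintenance pass between full runs of WsusDBMaintenance.sql might look like the sketch below. This is an assumption-heavy example rather than a supported WSUS procedure: it assumes the WSUS database is named SUSDB and that you can open a query window against the instance that hosts it.

    ```sql
    -- Rebuild every table's indexes and refresh statistics in the WSUS database.
    -- DBCC DBREINDEX is used because it also works on the older SQL 2000/MSDE engines.
    USE SUSDB;

    EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')';
    EXEC sp_updatestats;
    ```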

    Read the article

  • Replace with wildcard, in SQL

    - by Jay
    I know MS T-SQL does not support regular expressions, but I need similar functionality. Here's what I'm trying to do: I have a varchar table field which stores a breadcrumb, like this:

    /ID1:Category1/ID2:Category2/ID3:Category3/

    Each category name is preceded by its category ID, separated by a colon. I'd like to select and display these breadcrumbs, but I want to remove the category IDs and colons, like this:

    /Category1/Category2/Category3/

    Everything between the leading slash (/) of each segment up to and including the colon (:) should be stripped out. I don't have the option of extracting the data, manipulating it externally, and re-inserting it back into the table, so I'm trying to accomplish this in a SELECT statement. I also can't resort to using a cursor to loop through each row and clean each field with a nested loop, due to the number of rows returned in the SELECT. Can this be done? Thanks all - Jay
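
    One pre-2005 way to do this without regular expressions or an explicit cursor is a scalar UDF that walks each /.../ segment and keeps only the text after the colon; the SELECT then simply calls the function for each row. This is a hedged sketch with made-up function and column names, and it assumes the breadcrumb always starts and ends with a slash; a scalar UDF still does per-row work internally, so it should be tested against realistic row counts.

    ```sql
    CREATE FUNCTION dbo.StripBreadcrumbIDs (@crumb varchar(4000))
    RETURNS varchar(4000)
    AS
    BEGIN
        DECLARE @result varchar(4000), @slash int, @colon int
        SET @result = ''

        -- Walk each /.../ segment, keeping only the text after the colon.
        WHILE LEN(@crumb) > 1
        BEGIN
            SET @slash = CHARINDEX('/', @crumb, 2)   -- end of the current segment
            IF @slash = 0 BREAK
            SET @colon = CHARINDEX(':', @crumb, 2)

            IF @colon > 0 AND @colon < @slash
                SET @result = @result + '/' + SUBSTRING(@crumb, @colon + 1, @slash - @colon - 1)
            ELSE
                SET @result = @result + '/' + SUBSTRING(@crumb, 2, @slash - 2)

            SET @crumb = SUBSTRING(@crumb, @slash, LEN(@crumb))
        END

        RETURN @result + '/'
    END
    ```

    Usage would then look like SELECT dbo.StripBreadcrumbIDs(Breadcrumb) FROM dbo.MyTable, where Breadcrumb and MyTable stand in for the real column and table names.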

    Read the article

  • SSRS - Unable to determine if the owner of job has server access [SQLSTATE 42000] (Error 15404))

    - by John DaCosta
    SQL Server Reporting Services: in SSRS it seems like schedules never fire; however, a look at SQL Agent reveals a permission issue related to not being able to resolve a user account. It seems SQL Agent does not rely on caching or whatever voodoo Windows magically performs. link text Fix is listed here...

    edit -- The above is the fix I used to work around this issue. Has anyone found any other workarounds or resolutions to this issue? It seems that by default the SSRS-generated schedules run as this phantom user account. How do I change this default? Is SSRS creating the jobs as the user the service runs as? Thanks, Remus
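
    For reference, one way to see which login owns the SSRS schedule jobs, and to hand them to a login the server can actually resolve, is sketched below. SSRS names its schedule jobs with a GUID, so the job name shown is a placeholder, and reassigning owners like this is a workaround rather than an SSRS-exposed setting.

    ```sql
    -- List SQL Agent jobs and their owners (SSRS schedule jobs have GUID names).
    SELECT j.name, SUSER_SNAME(j.owner_sid) AS owner_login
    FROM msdb.dbo.sysjobs AS j
    ORDER BY j.name;

    -- Reassign one job to a resolvable login ('sa' is used purely as an example).
    EXEC msdb.dbo.sp_update_job
        @job_name = N'00000000-0000-0000-0000-000000000000',  -- placeholder GUID job name
        @owner_login_name = N'sa';
    ```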

    Read the article

  • Import Excel 2007 into SQL 2000 using Classic ASP and ADO

    - by jeff
    I have the following code from a legacy app which currently reads from an Excel 2003 spreadsheet on a server, but I need this to run from my machine, which uses Excel 2007. When I debug on my machine, ADO does not seem to be reading the spreadsheet. I have checked all file paths etc. and the location of the spreadsheet; that is all fine. I've heard that you cannot use the Jet DB engine for Excel 2007 anymore? Can someone confirm this? What do I need to do to get this to work? Please help!

    ```vbscript
    set obj_conn = Server.CreateObject("ADODB.Connection")
    obj_conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Data Source=" & Application("str_folder") & "CNS43.xls;" & _
        "Extended Properties=""Excel 8.0;"""

    set obj_rs_cns43 = Server.CreateObject("ADODB.RecordSet")
    obj_rs_cns43.ActiveConnection = obj_conn
    obj_rs_cns43.CursorType = 3
    obj_rs_cns43.LockType = 2
    obj_rs_cns43.Source = "SELECT * FROM [CNS43$]"
    obj_rs_cns43.Open
    ```
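
    The Jet 4.0 provider can keep reading genuine .xls workbooks regardless of which Excel version is installed (it does not use Excel itself), but it cannot open the newer .xlsx/.xlsm formats; those need the ACE provider that ships with the Access Database Engine redistributable. A hedged sketch of the changed connection string, assuming ACE is installed on the machine running the page and the workbook has been re-saved as CNS43.xlsx:

    ```vbscript
    ' Assumption: the Access Database Engine (ACE) provider is installed and the workbook
    ' was saved in the Excel 2007 .xlsx format (use "Excel 12.0 Macro" for .xlsm files).
    set obj_conn = Server.CreateObject("ADODB.Connection")
    obj_conn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
        "Data Source=" & Application("str_folder") & "CNS43.xlsx;" & _
        "Extended Properties=""Excel 12.0 Xml;HDR=YES;"""
    ```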

    Read the article

  • SQL Server Trouble with Rails

    - by Earlz
    Ok, I've been trying to get this to work like all day now and I'm barely any further from when I started. I'm trying to get Ruby on Rails to connect to SQL Server. I've installed unixODBC and configured it and FreeTDS, and installed just about every Ruby gem relating to ODBC that exists.

    ```
    [earlz@earlzarch myproject]$ tsql -S AVP1 -U sa -P pass
    locale is "en_US.UTF-8"
    locale charset is "UTF-8"
    1> quit
    [earlz@earlzarch myproject]$ isql AVP1 sa pass
    [ISQL]ERROR: Could not SQLConnect
    [earlz@earlzarch myproject]$ rake db:version
    (in /home/earlz/myproject)
    rake aborted!
    IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
    (See full trace by running task with --trace)
    ```

    So, as you can see, tsql works, but not isql. What is the difference in the two that breaks it?

    /etc/odbc.ini:

    ```
    [AVP1]
    Description = ODBC connection via FreeTDS
    Driver = TDS
    Servername = my.server
    UID = sa
    PWD = pass
    port = 1232
    Database = mydatabase
    ```

    /etc/odbcinst.ini:

    ```
    [TDS]
    Description = v0.6 with protocol v7.0
    Driver = /usr/lib/libtdsodbc.so
    Setup = /usr/lib/libtdsS.so
    CPTimeout =
    CPReuse =
    FileUsage = 1
    ```

    (and yes, I've made sure that the .so files exist)

    The relevant part in freetds.conf:

    ```
    [AVP1]
    host = my.server
    port = 1232
    tds version = 8.0
    ```

    and finally, my database.yml:

    ```yaml
    development:
      adapter: sqlserver
      mode: odbc
      dsn: AVP1
      username: sa
      password: pass
    ```

    Can anyone please help me before I pull all my hair out? I am using a 64-bit Arch Linux that is completely up to date.

    Read the article

  • How to create SSIS package to update from one database to another database within same server

    - by Pavan Kumar
    My query is related to the answers I got for questions I had posted earlier in the same forum. I have a copy of a client database which is attached to the SQL Server where the central database exists. The copy already contains the updated data. I just want to update from that copy to the central database; both hold the same schema and are present on the same server, using C# .NET. One of the solutions I got was to create an SSIS package and run it from the UI. I want to know how I can create an SSIS package to achieve this. I am new to SSIS. I am using SQL Server 2005 Standard Edition and have Visual Studio 2008 SP1 installed. I learnt that BIDS 2005, which comes by default with SQL Server 2005, is used to create packages. Can someone please give me an example, as I am new to this.
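
    An SSIS package built in BIDS (a Data Flow task per table, with the attached client copy as the source and the central database as the destination) will do this, but since both databases sit on the same instance, a plain T-SQL script executed from C# can achieve the same result with less machinery. A rough sketch for one hypothetical table follows; CentralDB, ClientCopyDB, dbo.Customer, and its columns are placeholder names, and it assumes the key column is not an IDENTITY column (otherwise IDENTITY_INSERT has to be handled).

    ```sql
    -- Push changes from the attached client copy into the central database (one table shown).
    BEGIN TRAN;

    -- Update rows that already exist centrally.
    UPDATE c
    SET    c.Name  = s.Name,
           c.Phone = s.Phone
    FROM   CentralDB.dbo.Customer AS c
    JOIN   ClientCopyDB.dbo.Customer AS s
           ON s.CustomerID = c.CustomerID;

    -- Insert rows the central database does not have yet.
    INSERT INTO CentralDB.dbo.Customer (CustomerID, Name, Phone)
    SELECT s.CustomerID, s.Name, s.Phone
    FROM   ClientCopyDB.dbo.Customer AS s
    WHERE  NOT EXISTS (SELECT 1
                       FROM CentralDB.dbo.Customer AS c
                       WHERE c.CustomerID = s.CustomerID);

    COMMIT;
    ```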

    Read the article

  • Subdomains for different applications on Windows Server 2008 R2 with Apache and IIS 7 installed

    - by Yusuf
    Hi, I have a home server on which I have installed Apache and several other applications that have a web GUI (JDownloader, Free Download Manager). In order to access each of these apps (whether from the local network or the Internet), I have to enter a different port, e.g.:

    http://server:8085 or http://xxxx.dyndns.org:8085 for Apache,
    http://server:90 or http://xxxx.dyndns.org:90 for FDM,
    http://server:8081 or http://xxxx.dyndns.org:8081 for JDownloader.

    I would like to be able to access them using subdomains, e.g.:

    http://apache.server or http://apache.xxxx.dyndns.org for Apache,
    http://fdm.server or http://fdm.xxxx.dyndns.org for FDM,
    http://jdownloader.server or http://jdownloader.xxxx.dyndns.org for JDownloader.

    First of all, would it be possible like I want it, i.e., both from the LAN and the Internet, and if yes, how? And even if it's possible only from the Internet, I would like to know how to do it, if there's a way. Thanks in advance, Yusuf

    Read the article

  • How to refactor this database to allow sync. with smart clients?

    - by Anton Zvgny
    Howdy, I have inherited the following database: http://i46.tinypic.com/einvjr.png (EDIT: I cannot post images, so here goes the link). This currently runs 100% online through an ASP.NET web app, but some employees will need to have offline access to this database, either to get documents or to insert new documents for later synchronization when back at the office. What changes need to be made to this database schema to support such a scenario (syncing smart clients with a central web database)? Thanks!

    ps. I'm not a developer/DBA; I'm just a mechanical engineer tasked with changing this app, so take it easy on me :D
    ps1. We're using MS SQL Server for the online central database and MS SQL Compact for the smart client.
    ps2. The ImageGuid field of the images table is used to save the images to the file system. The web app takes the GUID and creates folders according to the first 3 initial letters to persist the image to the file system.
    ps3. The users of the smart client don't always get server data first and then modify it for later synchronization. Most of the time they start working completely offline (with a blank local database) and later sync the data to the server.
    ps4. All the users of the smart client app have an account in the web app.

    Read the article

  • What does MSSQL execution plan show?

    - by tim
    There is the following code:

    ```sql
    declare @XmlData xml = '<Locations>
        <Location rid="1"/>
    </Locations>'

    declare @LocationList table (RID char(32));

    insert into @LocationList(RID)
    select Location.RID.value('@rid','CHAR(32)')
    from @XmlData.nodes('/Locations/Location') Location(RID)

    insert into @LocationList(RID)
    select A2RID from tblCdbA2
    ```

    Table tblCdbA2 has 172,810 rows. I executed the batch in SSMS with "Include Actual Execution Plan" enabled and with Profiler running. The plan shows that the cost of the first query is 88% relative to the batch and the second is 12%, but Profiler says that the durations of the first and second queries are 17 ms and 210 ms respectively, and the overall time is 229 ms, which does not match the 88/12 split. What is going on? Is there a way I can determine from the execution plan which is the slowest part of the query?
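
    The cost percentages in an execution plan, even an "actual" plan, come from the optimizer's cost estimates rather than from measured elapsed time, so they can disagree with Profiler durations exactly as seen here. To get measured per-statement timings directly in SSMS, the session options below can be switched on before running the batch (the figures appear on the Messages tab):

    ```sql
    -- Report per-statement CPU time, elapsed time, and I/O for everything that follows.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- ... run the two INSERT statements here ...

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    ```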

    Read the article

  • Potential for SQL injection here?

    - by Matt Greer
    This may be a really dumb question but I figure why not... I am using RIA Services with Entity Framework as the back end. I have some places in my app where I accept user input and directly ask RIA Services (and in turn EF, and in turn my database) questions using their data. Do any of these layers help prevent security issues, or should I scrub my data myself? For example, whenever a new user registers with the app, I call this method:

    ```csharp
    [Query]
    public IEnumerable<EmailVerificationResult> VerifyUserWithEmailToken(string token)
    {
        using (UserService userService = new UserService())
        {
            // token came straight from the user, am I in trouble here passing it directly into
            // my DomainService, should I verify the data here (or in UserService)?
            User user = userService.GetUserByEmailVerificationToken(token);
            ...
        }
    }
    ```

    (and whether I should be rolling my own user verification system is another issue altogether; we are in the process of adopting MS's membership framework. I'm more interested in SQL injection and RIA Services in general)

    Read the article

  • MDB2 With SQL Express 2008

    - by Narcissus
    Hi all, So basically here's my problem. I'm looking for a solution that allows us to connect to SQL Express 2008 while still using MDB2 as our database abstraction layer. I need something like this mainly because we still need to be able to use MySQL and Postgres (and ORMs seem not to be an option at this point in time). Preferably, there would be a solution that works for both PHP 5.2 and PHP 5.3. At first I went down the php_mssql extension road... it seems, though, as though that is not available in PHP 5.3. php_pdo_mssql doesn't seem to be usable with MDB2, so that seems to be out. Finally, there's the MS-developed 'SQLSRV' extension, and while it seems as though there was work on an MDB2 'extension' for it at one point, it doesn't seem to have ever made it into the main branch. Please... does anyone have any solutions for me?

    Read the article

  • Daily/Weekly/Monthly Record Count Search via StoredProcedure

    - by user270960
    Using MS SQL Server. I have made a stored procedure named "SP_Get_CallsLogged". I have a table named "TRN_Call", and it has one column named "CallTime", which is a DateTime. I have a web page in my application where the user enters:

    1. StartDate (DateTime)
    2. EndDate (DateTime)
    3. Period (Daily/Weekly/Monthly) (varchar) (from a DropDownList)

    I want to get the record count of the calls in my table TRN_Call on the basis of the specified period (Daily/Weekly/Monthly) selected by the user in the DropDownList, e.g.:

    StartDate = '1/18/2010 11:10:46 AM'
    EndDate   = '1/25/2010 01:10:46 AM'
    Period    = Daily

    So the record counts between these dates (StartDate and EndDate) should come back in a manner such that I can refer to each count separately, i.e. the following:

    Date 1/18/2010  Records Found 5
    Date 1/19/2010  Records Found 50
    Date 1/20/2010  Records Found 15
    Date 1/21/2010  Records Found 32
    Date 1/22/2010  Records Found 12
    Date 1/23/2010  Records Found 15
    Date 1/24/2010  Records Found 17
    Date 1/25/2010  Records Found 32

    and then send those counts to the web application so that they can be listed in the Crystal Reports.
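
    Inside SP_Get_CallsLogged, one way to produce those per-period counts is to group on a truncated form of CallTime. A sketch of the daily case is below; weekly and monthly just swap the grouping expression, e.g. DATEADD(week, DATEDIFF(week, 0, CallTime), 0) or DATEADD(month, DATEDIFF(month, 0, CallTime), 0). The parameter names are assumptions.

    ```sql
    -- Daily counts between @StartDate and @EndDate (parameter names are assumed).
    SELECT DATEADD(day, DATEDIFF(day, 0, CallTime), 0) AS PeriodStart,
           COUNT(*)                                    AS RecordsFound
    FROM   TRN_Call
    WHERE  CallTime >= @StartDate
      AND  CallTime <  @EndDate
    GROUP BY DATEADD(day, DATEDIFF(day, 0, CallTime), 0)
    ORDER BY PeriodStart;
    ```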

    Read the article

  • AVG time spent on multiple rows SQL-server?

    - by seo20
    I have a table tblSequence with 3 columns in MS SQL: ID, IP, [Timestamp]. Content could look like this:

    ```
    ID      IP              [Timestamp]
    --------------------------------------------------
    4347    62.107.95.103   2010-05-24 09:27:50.470
    4346    62.107.95.103   2010-05-24 09:27:45.547
    4345    62.107.95.103   2010-05-24 09:27:36.940
    4344    62.107.95.103   2010-05-24 09:27:29.347
    4343    62.107.95.103   2010-05-24 09:27:12.080
    ```

    ID is unique; there can be any number of IPs. I would like to calculate the average time spent per IP, in a single row. I know you can do something like this:

    ```sql
    SELECT CAST(AVG(CAST(MyTable.MyDateTimeFinish - MyTable.MyDateTimeStart AS float)) AS datetime)
    ```

    But how on earth do I find the first and last entry for each IP so I can have a start and finish time? I'm stuck.
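
    If "time spent" means the span from an IP's first hit to its last, MIN and MAX with GROUP BY give the start and finish times per IP, and a derived table then collapses that to a single overall average. A sketch, assuming seconds are an acceptable unit:

    ```sql
    -- Time spent per IP, measured from first hit to last hit.
    SELECT IP,
           MIN([Timestamp])                                      AS FirstHit,
           MAX([Timestamp])                                      AS LastHit,
           DATEDIFF(second, MIN([Timestamp]), MAX([Timestamp]))  AS SecondsOnSite
    FROM   tblSequence
    GROUP BY IP;

    -- Average time spent across all IPs, returned as a single row.
    SELECT AVG(CAST(SecondsOnSite AS float)) AS AvgSecondsPerIP
    FROM  (SELECT DATEDIFF(second, MIN([Timestamp]), MAX([Timestamp])) AS SecondsOnSite
           FROM   tblSequence
           GROUP BY IP) AS PerIP;
    ```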

    Read the article

  • How to prevent Spell checking code in MS Office?

    - by Aaron
    We use MS Office: Outlook for email, Word for some documentation, and I use OneNote a lot for my own note taking. What bugs me is that when I drop some code, keywords, or camel-cased identifiers into these apps, the spell checker picks them up and I have red squiggles everywhere. Ignore is pretty much useless, so I either have to turn off spell checking altogether or start adding these to the custom dictionary. What would be good is if I could use the Set Language function to mark a whole block of text to just not be spell checked. Has anyone found a nice solution to this, or do you know if a blank dictionary is best to use? I found using "Mohawk" kind of does that... might just use that for now. Maybe create a macro to switch between them.

    Read the article

  • Do you use Scimore SQL database ?

    - by Darian Miller
    There's a database engine that looks amazing for a free tool, and that is Scimore. Have you had much experience with it? If so, how does it rate, particularly against Firebird? How resilient/self-reliant is it? (Meaning, how much downtime/maintenance is expected?) The scale-out capabilities also look very interesting. I just downloaded it and have been playing around, and so far it looks good. I had been looking for an easy-to-deploy, single-user type embedded database (which Scimore has as an option) and was toying with MS SQL Compact Edition and SQLite, and remembered this database from a trial a few years ago. (Windows platform) I was about ready to settle on SQLite but started thinking about other projects which are multi-user and wanted to stick with a single solution... which is why I started looking at Firebird as well.

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.

    ```csharp
    using (var context = new CenasDataContext())
    {
        context.Log = Console.Out;
        context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
        context.SubmitChanges();
    }
    ```

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.

    ```
    INSERT INTO [dbo].Cenas VALUES (@p0)
    -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
    -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1
    ```

    This is the log from the execution (I printed the context log to the console). The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer - New Query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • Lightweight PHP server to bundle with application?

    - by munchybunch
    Does there exist some sort of PHP server that can be bundled with a locally-deployed application? It sounds wonky, but the end result is I can't use a remote web server to do anything. Clients will be downloading a package, and the plan is to use a Java backend that reads from a flat file. The flat file contains settings and is modified through a GUI written in HTML/JS, and this is where the server would come in. The forms in HTML should be able to submit to the server, which does a simple file write to the flat file. Is there any simple, lightweight server that has that simple feature? When running the executable for the application, it would start the installation process for the server before moving the web GUI files to the appropriate locations. Note that I'm doing this for a client, so I can't quite change the reqs and would rather not discuss their effectiveness. I would be ever thankful if people had suggestions for the server though!

    Read the article

  • Upsert - Efficient Update or Insert in VB.Net, SQL Server

    - by HK1
    I'm trying to understand how to streamline the process of inserting a record if none exists or updating a record if it already exists. I'm not using stored procedures, although maybe that would be the most efficient way of doing this. The actual scenario in which this is necessary is saving a user preference/setting to my SettingsUser table. In MS Access I would typically pull a DAO recordset looking for the specified setting. If the recordset comes back empty then I know I need to add a new record which I can do with the same recordset object. On the other hand, if it isn't empty, I can just update the setting's value right away. In theory, this is only two database operations. What is the recommended way of doing this in .NET?
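
    Without a stored procedure, the usual two-statement pattern issued from ADO.NET is an UPDATE followed by an INSERT that only runs when the UPDATE touched nothing; on SQL Server 2008 and later, a single MERGE statement can do the same job. A sketch against an assumed SettingsUser(UserID, SettingName, SettingValue) layout:

    ```sql
    -- Hedged sketch: the table and column names are assumptions about SettingsUser.
    UPDATE SettingsUser
    SET    SettingValue = @Value
    WHERE  UserID = @UserID
      AND  SettingName = @SettingName;

    IF @@ROWCOUNT = 0
        INSERT INTO SettingsUser (UserID, SettingName, SettingValue)
        VALUES (@UserID, @SettingName, @Value);
    ```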

    Read the article

  • SQL Insert multilingual characters

    - by Usman Akram
    I am trying to create a table in my MS SQL database for languages. I want to store the English name of each language and the local name of the language in the database, i.e.:

    Language, Language (local)
    English, English
    German, Deutsch
    Italian, Italiano
    Japanese, ???
    ...

    I have 279 languages that I want to import, but when I import, it shows '?????' for some, like Japanese, Russian and Arabic etc. The database collation is Latin1_General_CI_AS.

    I would also like advice on multilingual websites: if I have a database of product descriptions and I want to have translations in multiple languages, should I go for separate databases, or can I have the translations in one database? (I prefer not to duplicate data!) Anything else to make sure users are able to write comments in different languages (character encoding on the web?) and that these can be stored in the database.
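
    The usual culprit for '?????' is that the values are landing in varchar columns (or being sent as non-Unicode literals), so they get forced through the Latin1 code page on the way in. Storing the local names as nvarchar and prefixing literals with N keeps the Unicode intact regardless of collation. A sketch, with made-up table and column names:

    ```sql
    -- Hypothetical table: local names stored as nvarchar so any script survives,
    -- independent of the database's Latin1_General_CI_AS collation.
    CREATE TABLE dbo.Language (
        LanguageID   int IDENTITY(1,1) PRIMARY KEY,
        NameEnglish  nvarchar(100) NOT NULL,
        NameLocal    nvarchar(100) NOT NULL
    );

    -- The N prefix marks the literal as Unicode; without it the value is converted
    -- to the database code page first, and characters outside it become '?'.
    INSERT INTO dbo.Language (NameEnglish, NameLocal) VALUES (N'German',   N'Deutsch');
    INSERT INTO dbo.Language (NameEnglish, NameLocal) VALUES (N'Japanese', N'日本語');
    ```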

    Read the article
