Search Results

Search found 27339 results on 1094 pages for 'sql dmv'.


  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore

    Sorry for my delayed post on day 3 - I had to travel from Bangalore to Chennai, so I couldn't write for the past two days. On day 3, as usual, there were lots of simultaneous tracks. I chose the "Your Data, Our Platform" track, which had sessions on the following five topics:

    Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
    SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
    SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
    Data Recovery / Consistency with CheckDB - by Vinod Kumar M
    Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

    Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
    This was one of the best sessions I attended. He explained all the concepts in detail with a demo. The highlight is the new Data-tier Application project type introduced in VS2010, which lets us manage the database alongside the application inside Visual Studio itself. We can create the database, tables, procedures, views and so on there, and on deployment it produces a compressed file called a .dacpac that captures the table schema changes, created procedures and so on in a single file, which reduces the developer's effort in preparing deployment scripts and handing them over to the DBA. It also supports policy configurations that can be managed by checking rules, much like rules in Outlook. For example: "if the SQL Server version > 10 then deploy, else don't" - this rule means that even if we try to deploy to a SQL Server database with a version less than 10, the deployment will not go ahead. When deploying a .dacpac to a SQL Server production database with the "upgrade database with this dacpac" option, it reports success only once everything completes successfully; otherwise it rolls back to the prior version. Even after a successful deployment, if at some point you wish to revert to the prior version, you can delete the existing dacpac version so that the database reverts to the older set of changes. T-shirts were given out for the good questions asked in the session.

    SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
    This too was an excellent session, and the speaker explained everything very clearly. As far as I could tell, across all three days of TechEd, apart from the keynote this was the only session that was house full - people were even standing outside to attend it. The speaker did a deep dive into the query plan and showed what actually causes the problems. It all comes down to understanding how SQL Server executes queries: we think in one way, and SQL Server never executes in that way, and we need to understand that first. He also mentioned that two plans may be generated for a single query at a point in time because of parallel processors in the system. The key is in every query plan: there is an Estimated Row Count and an Actual Row Count, and if SQL Server's estimate tallies with the actual row count, performance will be excellent. He showed some tweaks to achieve exactly that. After this, as usual, we had lunch.

    SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
    This was more of a DBA's session. I'm sorry to say I was totally blank and not interested in it, so I walked out to attend "Migrating to the Cloud" by Harish Ranganathan (my favourite speaker) - but unfortunately that turned out to be someone else's session. The speaker there explained how to configure connection strings so that we can connect to the SQL Azure platform from Visual Studio, and also showed how to deploy the same application to Windows Azure. There were a lot of technical problems in between - a laptop hang, a locked user account, switching between systems - and since I came in halfway I wasn't able to follow it fully.

    In between, since I have an MCTS certification, I was given a T-shirt with the line "I am Certified. Are you?" and asked to wear it - if you wore it you might get spotted and be given some goodies. So on the third day I wore that T-shirt, got spotted by Tarun, who was coordinating the certification activities and was accompanied by a cameraman, and was interviewed about the certification. The interview was shown live at TechEd and seen by its 60,000 live viewers, which made me really happy.

    Data Recovery / Consistency with CheckDB - by Vinod Kumar M
    This was also one of the best sessions of TechEd. This guy is really amazing. In front of us he crashed a database and showed how to recover it in six different ways for different kinds of failure, covering the different types of error messages (823, 824, 825), msdb..suspect_pages, and DBCC CHECKDB with its different parameters. I am really waiting for his session recording to be uploaded to the TechEd website. Here is his contact info if you wish to connect with him - Twitter: @vinodk_sql, Website: www.ExtremeExperts.com, Blog: http://blogs.sqlxml.org/vinodkumar

    Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave
    Pinal Dave is a king of SQL, a SQL MVP, and the owner of SQLAuthority.com. He took the session on spatial databases from the start, covering the two types of spatial data: geometric and geographic. Geometric uses x and y axes on a planar surface; geographic is a spherical surface, with 360 degrees as the maximum, used to represent geographic points on the earth and to easily draw maps of different kinds. He had a lot of obstacles during his session - rain coming inside the hall, mic wires damaged by the rain, the display screens going off - but in spite of that he asked the audience to come to the front rows and managed a good session without slides, and once the displays came back he showed demos of what he had explained orally. It was a really fun-filled, informative session. He gave books to the people who asked good questions and answered his questions well, and I got one too (a book on data mining from Wrox).

    And finally, after all of this, there was the closing keynote of TechEd. We all assembled in a big hall, where Mr. Ashok Soota, co-founder of MindTree and around 70 years of age, was invited to give a talk on his successes. He spoke about his past, the companies he moved between and his reasons for doing so, his successes and failures, what he learned from past failures, and his successes and failures in partnerships with other concerns. There were questions for him such as: What is your suggestion for young entrepreneurs? How did you learn from past failures? How do you keep repeating your success? What is your suggestion on partnerships, and how do you choose partners?

    They said there would be a party night at 7.30 pm, but unfortunately I was not able to attend it because I had to catch my train and pack before that, so I left at 7. That's it about TechEd! Stay tuned for further technology updates.

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and installing TeamCity

    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly.
    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.
    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, for example because you've had to install the server console on a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.
    4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control

    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.
    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.
    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.
    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.

    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.
    10. Click on Create Build Configuration, and name it something like "Integration build".
    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.
    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.
    13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment
    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.
    15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or PowerShell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.
    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe
    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.
    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2, so the parameters are: /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose  Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.
    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.
    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being well, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.
    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below.
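
    As a footnote to step 18: if you want to run the same synchronisation outside TeamCity - for example from a quick test harness while you experiment with the command-line switches - the sketch below shows one way to invoke sqlcompare.exe from C#. It is only an illustration: the install path, working directory, server name and database name are the assumptions used in the steps above, so adjust them for your own environment.

    // Minimal sketch: invoking the SQL Compare command line from C#.
    // The paths, server name and database name are assumptions carried over from the
    // walkthrough (default install path, a local checkout folder, .\sql2008r2, RedGateCI).
    using System;
    using System.Diagnostics;

    class RunSqlCompare
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
                // Folder containing the scripts checked out from source control (hypothetical path).
                WorkingDirectory = @"C:\build\checkout",
                Arguments = @"/scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose",
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(psi))
            {
                // Echo the tool's console output, then surface the exit code;
                // a non-zero exit code means the comparison or sync failed.
                Console.WriteLine(process.StandardOutput.ReadToEnd());
                process.WaitForExit();
                Console.WriteLine("sqlcompare exited with code " + process.ExitCode);
            }
        }
    }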

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation and as a part of this needed to describe some of the basic architecture options available for consideration.  While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options while trying to explain some of the strengths and weaknesses of those options was a good place to start.  These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.   The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment.         Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.   Some of the options presented above make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.   (A) Single Servers   This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is however the cheapest and easiest to implement option of those available.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (B) Single BizTalk Server with Clustered SQL Servers   This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience to the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition. While this option offers improved resilience over option (A) it does still present a potential single point of failure at the BizTalk server.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. 
You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (C) Clustered BizTalk Servers with Clustered SQL Servers   This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires.   This might be seen as the middle cost option, providing a good level of protection in the case of failure, a decent level of future proofing, but at a higher cost than the single BizTalk server implementations.   (D) Clustered BizTalk Servers with Clustered SQL Servers – with disaster recovery/service continuity   This option is similar to that offered by (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    As with (C) the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.   In this scenario however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation; making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved.   This is obviously an expensive and complex option, but does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • Getting a null reference exception exporting a slightly complex rdlc report to excel

    - by George Handlin
    Have an issue in exporting a slightly complex report to excel from an rdlc. Exporting a simple tabular report works fine, but getting a null reference exception with a more complex one. As is usually the case, it works on the server in the development environment, but not in the test environment. The server in the development environment has iis, sql and sql reporting services on it and test only has iis (I didn't set them up). I'm guessing there is something that gets installed with SSRS that is not installed with just the report viewer. The report has several parameters going into it and a few generic lists going in for datasources. Some tables for display as well as a list. Working with the 2005 version. Exception follows: [NullReferenceException: Object reference not set to an instance of an object.] Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderPageLayout(PageLayout pageLayout, Int32& currentPageNumber, Stack& stack) +91 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderPageCollection(PageCollection pageCollection, Int32& currentPageNumber, Stack& stack, PageLayout& lastPageLayout) +607 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.GenerateWorkSheets() +150 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.GenerateMainSheet() +358 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderExcelWorkBook(CreateAndRegisterStream createAndRegisterStream) +158 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.ProcessReport(CreateAndRegisterStream createAndRegisterStream) +385 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions, CreateAndRegisterStream createAndRegisterStream) +230 [ReportRenderingException: An error occurred during rendering of the report.] Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions, CreateAndRegisterStream createAndRegisterStream) +296 Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderSnapshot(IRenderingExtension renderer, CreateReportChunk createChunkCallback, RenderingContext rc, GetResource getResourceCallback) +1124 [WrapperReportRenderingException: An error occurred during rendering of the report.] 
Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderSnapshot(IRenderingExtension renderer, CreateReportChunk createChunkCallback, RenderingContext rc, GetResource getResourceCallback) +1681 Microsoft.Reporting.LocalService.RenderWithDataCache(PreviewItemContext itemContext, ParameterInfoCollection reportParameters, IEnumerable dataSources, DatasourceCredentialsCollection credentials, IRenderingExtension renderer, ReportProcessing repProc, CreateAndRegisterStream createStreamCallback, ReportRuntimeSetup runtimeSetup) +1693 Microsoft.Reporting.LocalService.Render(PreviewItemContext itemContext, Boolean allowInternalRenderers, ParameterInfoCollection reportParameters, IEnumerable dataSources, DatasourceCredentialsCollection credentials, CreateAndRegisterStream createStreamCallback, ReportRuntimeSetup runtimeSetup, ProcessingMessageList& warnings) +227 Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings) +235 [LocalProcessingException: An error occurred during local report processing.] Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings) +276 Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings) +189 Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings) +28 Microsoft.Reporting.WebForms.LocalReportControlSource.RenderReport(String format, String deviceInfo, NameValueCollection additionalParams, String& mimeType, String& fileNameExtension) +52 Microsoft.Reporting.WebForms.ExportOperation.PerformOperation(NameValueCollection urlQuery, HttpResponse response) +153 Microsoft.Reporting.WebForms.HttpHandler.ProcessRequest(HttpContext context) +202 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +303 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +64

    Read the article

  • VB.Net: exception on ExecuteReader() using OleDbCommand and Access

    - by Shane Fagan
    Hi again all, im getting the error below for this SQL statement in VB.Net 'Fill in the datagrid with the info needed from the accdb file 'to make it simple to access the db connstring = "Provider=Microsoft.ACE.OLEDB.12.0;Data " connstring += "Source=" & Application.StartupPath & "\AuctioneerSystem.accdb" 'make the new connection conn = New System.Data.OleDb.OleDbConnection(connstring) 'the sql command SQLString = "SELECT AllPropertyDetails.PropertyID, Street, Town, County, Acres, Quotas, ResidenceDetails, Status, HighestBid, AskingPrice FROM AllPropertyDetails " SQLString += "INNER JOIN Land ON AllPropertyDetails.PropertyID = Land.PropertyID " SQLString += "WHERE Deleted = False " If PriceRadioButton.Checked = True Then SQLString += "ORDER BY AskingPrice ASC" ElseIf AcresRadioButton.Checked = True Then SQLString += "ORDER BY Acres ASC" End If 'try to open the connection conn.Open() 'if the connection is open If ConnectionState.Open.ToString = "Open" Then 'use the sqlstring and conn to create the command cmd = New System.Data.OleDb.OleDbCommand(SQLString, conn) 'read the db and put it into dr dr = cmd.ExecuteReader If dr.HasRows Then 'if there is rows in the db then make sure the list box is clear 'clear the rows and columns if there is rows in the data grid LandDataGridView.Rows.Clear() LandDataGridView.Columns.Clear() 'add the columns LandDataGridView.Columns.Add("PropertyNumber", "Property Number") LandDataGridView.Columns.Add("Address", "Address") LandDataGridView.Columns.Add("Acres", "No. of Acres") LandDataGridView.Columns.Add("Quotas", "Quotas") LandDataGridView.Columns.Add("Details", "Residence Details") LandDataGridView.Columns.Add("Status", "Status") LandDataGridView.Columns.Add("HighestBid", "Highest Bid") LandDataGridView.Columns.Add("Price", "Asking Price") While dr.Read 'output the fields into the data grid LandDataGridView.Rows.Add( _ dr.Item("PropertyID").ToString _ , dr.Item("Street").ToString & " " & dr.Item("Town").ToString & ", " & dr.Item("County").ToString _ , dr.Item("Acres").ToString _ , dr.Item("Quota").ToString _ , dr.Item("ResidenceDetails").ToString _ , dr.Item("Status").ToString _ , dr.Item("HighestBid").ToString _ , dr.Item("AskingPrice").ToString) End While End If 'close the data reader dr.Close() End If 'close the connection conn.Close() Any ideas why its not working? The fields in the DB and the table names seem ok but its not working :/ The tables are AllPropertyDetails ProperyID:Number Street: text Town: text County: text Status: text HighestBid: Currency AskingPrice: Currency Land PropertyID: Number Acres: Number Quotas: Text ResidenceDetails: text System.InvalidOperationException was unhandled Message="An error occurred creating the form. See Exception.InnerException for details. The error is: No value given for one or more required parameters." 
Source="AuctioneerProject" StackTrace: at AuctioneerProject.My.MyProject.MyForms.Create__Instance__[T](T Instance) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 190 at AuctioneerProject.My.MyProject.MyForms.get_LandReport() at AuctioneerProject.ReportsMenu.LandButton_Click(Object sender, EventArgs e) in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\ReportsMenu.vb:line 4 at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(ApplicationContext context) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at AuctioneerProject.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 81 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.Data.OleDb.OleDbException ErrorCode=-2147217904 Message="No value given for one or more required parameters." 
Source="Microsoft Office Access Database Engine" StackTrace: at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr) at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.OleDb.OleDbCommand.ExecuteReader() at AuctioneerProject.LandReport.load_Land() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 37 at AuctioneerProject.LandReport.PriceRadioButton_CheckedChanged(Object sender, EventArgs e) in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 79 at System.Windows.Forms.RadioButton.OnCheckedChanged(EventArgs e) at System.Windows.Forms.RadioButton.set_Checked(Boolean value) at AuctioneerProject.LandReport.InitializeComponent() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.designer.vb:line 40 at AuctioneerProject.LandReport..ctor() in C:\Users\admin\Desktop\Auctioneers\AuctioneerProject\AuctioneerProject\LandReport.vb:line 5 InnerException:

    Read the article

  • If record exists in database, UPDATE a single column

    - by Doug
    I have a bulk uploading object in place that is being used to bulk upload roughly 25-40 image files at a time. Each image is about 100-150 kb in size. During the upload, I've created a for each loop that takes the file name of the image (minus the file extension) to write it into a column named "sku". Also, for each file being uploaded, the date is recorded to a column named DateUpdated, as well as some image path data. Here is my c# code: protected void graphicMultiFileButton_Click(object sender, EventArgs e) { //graphicMultiFile is the ID of the bulk uploading object ( provided by Dean Brettle: http://www.brettle.com/neatupload ) if (graphicMultiFile.Files.Length > 0) { foreach (UploadedFile file in graphicMultiFile.Files) { //strip ".jpg" from file name (will be assigned as SKU) string sku = file.FileName.Substring(0, file.FileName.Length - 4); //assign the directory where the images will be stored on the server string directoryPath = Server.MapPath("~/images/graphicsLib/" + file.FileName); //ensure that if image existes on server that it will get overwritten next time it's uploaded: file.MoveTo(directoryPath, MoveToOptions.Overwrite); //current sql that inserts a record to the db SqlCommand comm; SqlConnection conn; string connectionString = ConfigurationManager.ConnectionStrings["DataConnect"].ConnectionString; conn = new SqlConnection(connectionString); comm = new SqlCommand("INSERT INTO GraphicsLibrary (sku, imagePath, DateUpdated) VALUES (@sku, @imagePath, @DateUpdated)", conn); comm.Parameters.Add("@sku", System.Data.SqlDbType.VarChar, 50); comm.Parameters["@sku"].Value = sku; comm.Parameters.Add("@imagePath", System.Data.SqlDbType.VarChar, 300); comm.Parameters["@imagePath"].Value = "images/graphicsLib/" + file.FileName; comm.Parameters.Add("@DateUpdated", System.Data.SqlDbType.DateTime); comm.Parameters["@DateUpdated"].Value = DateTime.Now; conn.Open(); comm.ExecuteNonQuery(); conn.Close(); } } } After images are uploaded, managers will go back and re-upload images that have previously been uploaded. This is because these product images are always being revised and improved. For each new/improved image, the file name and extension will remain the same - so that when image 321-54321.jpg was first uploaded to the server, the new/improved version of that image will still have the image file name as 321-54321.jpg. I can't say for sure if the file sizes will remain in the 100-150KB range. I'll assume that the image file size will grow eventually. When images get uploaded (again), there of course will be an existing record in the database for that image. What is the best way to: Check the database for the existing record (stored procedure or SqlDataReader or create a DataSet ...?) Then if record exists, simply UPDATE that record so that the DateUpdated column gets today's date. If no record exists, do the INSERT of the record as normal. Things to consider: If the record exists, we'll let the actual image be uploaded. It will simply overwrite the existing image so that the new version gets displayed on the web. We're using SQL Server 2000 on hosted environment (DiscountAsp). I'm programming in C#. The uploading process will be used by about 2 managers a few times a month (each) - which to me is not a allot of usage. Although I'm a jr. developer, I'm guessing that a stored procedure would be the way to go. Just seems more efficient - to do this record check away from the for each loop... but not sure. 
I'd need extra help writing a sproc, since I don't have too much experience with them. Thanks, everyone...
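
    One common approach on SQL Server 2000 - whether you put it in a stored procedure or in the command text - is to attempt the UPDATE first and only INSERT when no row was affected, so no separate existence check is needed inside the loop. The following is a minimal sketch of that pattern, not the poster's code: the table, column and parameter names are taken from the question, and everything else is illustrative.

    // Minimal sketch of an "update, else insert" pattern for SQL Server 2000.
    // Table and column names come from the question; the rest is illustrative.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class GraphicsLibraryUpsert
    {
        // Updates the row for the given sku if it exists; otherwise inserts a new one.
        public static void Upsert(string connectionString, string sku, string imagePath)
        {
            const string sql = @"
                UPDATE GraphicsLibrary
                   SET imagePath = @imagePath, DateUpdated = @DateUpdated
                 WHERE sku = @sku;
                IF @@ROWCOUNT = 0
                    INSERT INTO GraphicsLibrary (sku, imagePath, DateUpdated)
                    VALUES (@sku, @imagePath, @DateUpdated);";

            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand comm = new SqlCommand(sql, conn))
            {
                comm.Parameters.Add("@sku", SqlDbType.VarChar, 50).Value = sku;
                comm.Parameters.Add("@imagePath", SqlDbType.VarChar, 300).Value = imagePath;
                comm.Parameters.Add("@DateUpdated", SqlDbType.DateTime).Value = DateTime.Now;

                conn.Open();
                comm.ExecuteNonQuery();  // Updates the existing row, or inserts a new one.
            }
        }
    }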

    Read the article

  • java.sql.SQLException: Unsupported feature

    - by Raja Chandra Rangineni
    Hello All, I am using the JPA(hibernate) for the ORM and c3po for connection pooling. While I am able to do all the CRUD operations it gives me the below error while accessing the the data: Here are the tools: Hibernate 3.2.1, Oracle 10g, ojdbc14, connection pool: c3p0-0.9, stack trace: java.sql.SQLException: Unsupported feature at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134) at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179) at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269) at oracle.jdbc.dbaccess.DBError.throwUnsupportedFeatureSqlException(DBError.java:689) at oracle.jdbc.OracleDatabaseMetaData.supportsGetGeneratedKeys(OracleDatabaseMetaData.java:4180) at com.mchange.v2.c3p0.impl.NewProxyDatabaseMetaData.supportsGetGeneratedKeys(NewProxyDatabaseMetaData.java:3578) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:91) at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2006) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1289) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:915) at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:730) at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:121) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:51) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:33) at com.bbn.dbservices.test.BillabilityPeriodsTest.getBillPeriods(BillabilityPeriodsTest.java:33) at com.bbn.dbservices.controller.ServiceController.generateReportsTest(ServiceController.java:355) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java:654) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:160) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:378) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:366) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:781) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:726) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:636) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:545) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454) at java.lang.Thread.run(Thread.java:619) java.sql.SQLException: Unsupported feature at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134) at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179) at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269) at oracle.jdbc.dbaccess.DBError.throwUnsupportedFeatureSqlException(DBError.java:689) at oracle.jdbc.OracleDatabaseMetaData.supportsGetGeneratedKeys(OracleDatabaseMetaData.java:4180) at com.mchange.v2.c3p0.impl.NewProxyDatabaseMetaData.supportsGetGeneratedKeys(NewProxyDatabaseMetaData.java:3578) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:91) at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2006) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1289) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:915) at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:730) at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:121) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:51) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:33) at com.bbn.dbservices.test.BillabilityPeriodsTest.getBillPeriods(BillabilityPeriodsTest.java:33) at com.bbn.dbservices.controller.ServiceController.generateReportsTest(ServiceController.java:355) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java:654) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:160) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:378) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:366) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:781) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:726) at 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:636) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:545) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454) at java.lang.Thread.run(Thread.java:619) Any help with this is greatly appreciated. Thanks, Raja Chandra Rangineni. enter code here

    Read the article

  • MySQL "ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)"

    - by Ankur Banerjee
    Hi, I was working on creating some tables in database foo, but every time I end up with errno 150 regarding the foreign key. Firstly, here's my code for creating tables: CREATE TABLE Clients ( client_id CHAR(10) NOT NULL , client_name CHAR(50) NOT NULL , provisional_license_num CHAR(50) NOT NULL , client_address CHAR(50) NULL , client_city CHAR(50) NULL , client_county CHAR(50) NULL , client_zip CHAR(10) NULL , client_phone INT NULL , client_email CHAR(255) NULL , client_dob DATETIME NULL , test_attempts INT NULL ); CREATE TABLE Applications ( application_id CHAR(10) NOT NULL , office_id INT NOT NULL , client_id CHAR(10) NOT NULL , instructor_id CHAR(10) NOT NULL , car_id CHAR(10) NOT NULL , application_date DATETIME NULL ); CREATE TABLE Instructors ( instructor_id CHAR(10) NOT NULL , office_id INT NOT NULL , instructor_name CHAR(50) NOT NULL , instructor_address CHAR(50) NULL , instructor_city CHAR(50) NULL , instructor_county CHAR(50) NULL , instructor_zip CHAR(10) NULL , instructor_phone INT NULL , instructor_email CHAR(255) NULL , instructor_dob DATETIME NULL , lessons_given INT NULL ); CREATE TABLE Cars ( car_id CHAR(10) NOT NULL , office_id INT NOT NULL , engine_serial_num CHAR(10) NULL , registration_num CHAR(10) NULL , car_make CHAR(50) NULL , car_model CHAR(50) NULL ); CREATE TABLE Offices ( office_id INT NOT NULL , office_address CHAR(50) NULL , office_city CHAR(50) NULL , office_County CHAR(50) NULL , office_zip CHAR(10) NULL , office_phone INT NULL , office_email CHAR(255) NULL ); CREATE TABLE Lessons ( lesson_num INT NOT NULL , client_id CHAR(10) NOT NULL , date DATETIME NOT NULL , time DATETIME NOT NULL , milegage_used DECIMAL(5, 2) NULL , progress CHAR(50) NULL ); CREATE TABLE DrivingTests ( test_num INT NOT NULL , client_id CHAR(10) NOT NULL , test_date DATETIME NOT NULL , seat_num INT NOT NULL , score INT NULL , test_notes CHAR(255) NULL ); ALTER TABLE Clients ADD PRIMARY KEY (client_id); ALTER TABLE Applications ADD PRIMARY KEY (application_id); ALTER TABLE Instructors ADD PRIMARY KEY (instructor_id); ALTER TABLE Offices ADD PRIMARY KEY (office_id); ALTER TABLE Lessons ADD PRIMARY KEY (lesson_num); ALTER TABLE DrivingTests ADD PRIMARY KEY (test_num); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Instructors FOREIGN KEY (instructor_id) REFERENCES Instructors (instructor_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ALTER TABLE Lessons ADD CONSTRAINT FK_Lessons_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Clients ADD CONSTRAINT FK_DrivingTests_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); These are the errors that I get: mysql> ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150) I ran SHOW ENGINE INNODB STATUS which gives a more detailed error description: ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 100509 20:59:49 Error in foreign key constraint of table practice9/#sql-12c_4: FOREIGN KEY (car_id) REFERENCES Cars (car_id): Cannot find an index in the 
referenced table where the referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables. See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html for correct foreign key definition. ------------ I searched around on StackOverflow and elsewhere online - came across a helpful blog post here with pointers on how to resolve this error - but I can't figure out what's going wrong. Any help would be appreciated!

    Read the article

  • Problem trying to achieve a join using the `comments` contrib in Django

    - by NiKo
    Hi, Django rookie here. I have this model, comments are managed with the django_comments contrib: class Fortune(models.Model): author = models.CharField(max_length=45, blank=False) title = models.CharField(max_length=200, blank=False) slug = models.SlugField(_('slug'), db_index=True, max_length=255, unique_for_date='pub_date') content = models.TextField(blank=False) pub_date = models.DateTimeField(_('published date'), db_index=True, default=datetime.now()) votes = models.IntegerField(default=0) comments = generic.GenericRelation( Comment, content_type_field='content_type', object_id_field='object_pk' ) I want to retrieve Fortune objects with a supplementary nb_comments value for each, counting their respectve number of comments ; I try this query: >>> Fortune.objects.annotate(nb_comments=models.Count('comments')) From the shell: >>> from django_fortunes.models import Fortune >>> from django.db.models import Count >>> Fortune.objects.annotate(nb_comments=Count('comments')) [<Fortune: My first fortune, from NiKo>, <Fortune: Another One, from Dude>, <Fortune: A funny one, from NiKo>] >>> from django.db import connection >>> connection.queries.pop() {'time': '0.000', 'sql': u'SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21'} Below is the properly formatted sql query: SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21 Can you spot the problem? Django won't LEFT JOIN the django_comments table with the content_type data (which contains a reference to the fortune one). 
This is the kind of query I'd like to be able to generate using the ORM: SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") LEFT OUTER JOIN "django_content_type" ON ("django_comments"."content_type_id" = "django_content_type"."id") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21 But I don't manage to do it, so help from Django veterans would be much appreciated :) Hint: I'm using Django 1.2-DEV Thanks in advance for your help.

    Read the article

  • how to insert data if it contain apostrophe ?

    - by angel ansari
    Actally my task is load csv file into sql server using c# so i have split it by comma my problem is that some field's data contain apostrop and i m firing insert query to load data into sql so its give error my coding like that using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Data.SqlClient; namespace tool { public partial class Form1 : Form { StreamReader reader; SqlConnection con; SqlCommand cmd; int count = 0; //int id=0; FileStream fs; string file = null; string file_path = null; SqlCommand sql_del = null; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { OpenFileDialog file1 = new OpenFileDialog(); file1.ShowDialog(); textBox1.Text = file1.FileName.ToString(); file = Path.GetFileName(textBox1.Text); file_path = textBox1.Text; fs = new FileStream(file_path, FileMode.Open, FileAccess.Read); } private void button2_Click(object sender, EventArgs e) { if (file != null ) { sql_del = new SqlCommand("Delete From credit_debit1", con); sql_del.ExecuteNonQuery(); reader = new StreamReader(file_path); string line_content = null; string[] items = new string[] { }; while ((line_content = reader.ReadLine()) != null) { if (count >=4680) { items = line_content.Split(','); string region = items[0].Trim('"'); string station = items[1].Trim('"'); string ponumber = items[2].Trim('"'); string invoicenumber = items[3].Trim('"'); string invoicetype = items[4].Trim('"'); string filern = items[5].Trim('"'); string client = items[6].Trim('"'); string origin = items[7].Trim('"'); string destination = items[8].Trim('"'); string agingdate = items[9].Trim('"'); string activitydate = items[10].Trim('"'); if ((invoicenumber == "-") || (string.IsNullOrEmpty(invoicenumber))) { invoicenumber = "null"; } else { invoicenumber = "'" + invoicenumber + "'"; } if ((destination == "-") || (string.IsNullOrEmpty(destination))) { destination = "null"; } else { destination = "'" + destination + "'"; } string vendornumber = items[11].Trim('"'); string vendorname = items[12].Trim('"'); string vendorsite = items[13].Trim('"'); string vendorref = items[14].Trim('"'); string subaccount = items[15].Trim('"'); string osdaye = items[16].Trim('"'); string osaa = items[17].Trim('"'); string osda = items[18].Trim('"'); string our = items[19].Trim('"'); string squery = "INSERT INTO credit_debit1" + "([id],[Region],[Station],[PONumber],[InvoiceNumber],[InvoiceType],[FileRefNumber],[Client],[Origin],[Destination], " + "[AgingDate],[ActivityDate],[VendorNumber],[VendorName],[VendorSite],[VendorRef],[SubAccount],[OSDay],[OSAdvAmt],[OSDisbAmt], " + "[OverUnderRecovery] ) " + "VALUES " + "('" + count + "','" + region + "','" + station + "','" + ponumber + "'," + invoicenumber + ",'" + invoicetype + "','" + filern + "','" + client + "','" + origin + "'," + destination + "," + "'" + (string)agingdate.ToString() + "','" + (string)activitydate.ToString() + "','" + vendornumber + "',' " + vendorname + "',' " + vendorsite + "',' " + vendorref + "'," + "'" + subaccount + "','" + osdaye + "','" + osaa + "','" + osda + "','" + our + "') "; cmd = new SqlCommand(squery, con); cmd.CommandTimeout = 1500; cmd.ExecuteNonQuery(); } label2.Text = count.ToString(); Application.DoEvents(); count++; } MessageBox.Show("Process completed"); } else { MessageBox.Show("path select"); } } private void button3_Click(object sender, EventArgs e) { this.Close(); } 
private void Form1_Load(object sender, EventArgs e) { con = new SqlConnection("Data Source=192.168.50.200;User ID=EGL_TEST;Password=TEST;Initial Catalog=EGL_TEST;"); con.Open(); } } } vendername field contain data (MCCOLLISTER'S TRANSPORTATION) so how to pass this data
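
    The usual fix for values that contain an apostrophe, such as MCCOLLISTER'S TRANSPORTATION, is to stop concatenating the values into the SQL string and pass them as parameters instead, so no manual escaping is needed. Below is a minimal sketch of the idea; the table name and a few column names are taken from the question, the column list is shortened for illustration, and the column sizes are assumptions.

    // Minimal sketch: a parameterised INSERT handles apostrophes (and other special
    // characters) in the data without manual escaping. The column list is shortened
    // for illustration; extend it to match the real credit_debit1 table.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class CsvRowInserter
    {
        public static void InsertRow(SqlConnection con, int id, string region,
                                     string station, string vendorName)
        {
            const string sql =
                "INSERT INTO credit_debit1 ([id], [Region], [Station], [VendorName]) " +
                "VALUES (@id, @region, @station, @vendorName)";

            using (SqlCommand cmd = new SqlCommand(sql, con))
            {
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
                cmd.Parameters.Add("@region", SqlDbType.VarChar, 50).Value = region;
                cmd.Parameters.Add("@station", SqlDbType.VarChar, 50).Value = station;
                // A value like "MCCOLLISTER'S TRANSPORTATION" is passed through untouched.
                cmd.Parameters.Add("@vendorName", SqlDbType.VarChar, 100).Value = vendorName;
                cmd.ExecuteNonQuery();
            }
        }
    }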

    Read the article

  • how to insert data if it contain apostrop ?

    - by angel ansari
    Actally my task is load csv file into sql server using c# so i have split it by comma my problem is that some field's data contain apostrop and i m firing insert query to load data into sql so its give error my coding like that using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Data.SqlClient; namespace tool { public partial class Form1 : Form { StreamReader reader; SqlConnection con; SqlCommand cmd; int count = 0; //int id=0; FileStream fs; string file = null; string file_path = null; SqlCommand sql_del = null; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { OpenFileDialog file1 = new OpenFileDialog(); file1.ShowDialog(); textBox1.Text = file1.FileName.ToString(); file = Path.GetFileName(textBox1.Text); file_path = textBox1.Text; fs = new FileStream(file_path, FileMode.Open, FileAccess.Read); } private void button2_Click(object sender, EventArgs e) { if (file != null ) { sql_del = new SqlCommand("Delete From credit_debit1", con); sql_del.ExecuteNonQuery(); reader = new StreamReader(file_path); string line_content = null; string[] items = new string[] { }; while ((line_content = reader.ReadLine()) != null) { if (count >=4680) { items = line_content.Split(','); string region = items[0].Trim('"'); string station = items[1].Trim('"'); string ponumber = items[2].Trim('"'); string invoicenumber = items[3].Trim('"'); string invoicetype = items[4].Trim('"'); string filern = items[5].Trim('"'); string client = items[6].Trim('"'); string origin = items[7].Trim('"'); string destination = items[8].Trim('"'); string agingdate = items[9].Trim('"'); string activitydate = items[10].Trim('"'); if ((invoicenumber == "-") || (string.IsNullOrEmpty(invoicenumber))) { invoicenumber = "null"; } else { invoicenumber = "'" + invoicenumber + "'"; } if ((destination == "-") || (string.IsNullOrEmpty(destination))) { destination = "null"; } else { destination = "'" + destination + "'"; } string vendornumber = items[11].Trim('"'); string vendorname = items[12].Trim('"'); string vendorsite = items[13].Trim('"'); string vendorref = items[14].Trim('"'); string subaccount = items[15].Trim('"'); string osdaye = items[16].Trim('"'); string osaa = items[17].Trim('"'); string osda = items[18].Trim('"'); string our = items[19].Trim('"'); string squery = "INSERT INTO credit_debit1" + "([id],[Region],[Station],[PONumber],[InvoiceNumber],[InvoiceType],[FileRefNumber],[Client],[Origin],[Destination], " + "[AgingDate],[ActivityDate],[VendorNumber],[VendorName],[VendorSite],[VendorRef],[SubAccount],[OSDay],[OSAdvAmt],[OSDisbAmt], " + "[OverUnderRecovery] ) " + "VALUES " + "('" + count + "','" + region + "','" + station + "','" + ponumber + "'," + invoicenumber + ",'" + invoicetype + "','" + filern + "','" + client + "','" + origin + "'," + destination + "," + "'" + (string)agingdate.ToString() + "','" + (string)activitydate.ToString() + "','" + vendornumber + "',' " + vendorname + "',' " + vendorsite + "',' " + vendorref + "'," + "'" + subaccount + "','" + osdaye + "','" + osaa + "','" + osda + "','" + our + "') "; cmd = new SqlCommand(squery, con); cmd.CommandTimeout = 1500; cmd.ExecuteNonQuery(); } label2.Text = count.ToString(); Application.DoEvents(); count++; } MessageBox.Show("Process completed"); } else { MessageBox.Show("path select"); } } private void button3_Click(object sender, EventArgs e) { this.Close(); } 
private void Form1_Load(object sender, EventArgs e) { con = new SqlConnection("Data Source=192.168.50.200;User ID=EGL_TEST;Password=TEST;Initial Catalog=EGL_TEST;"); con.Open(); } } } The vendorname field contains data such as (MCCOLLISTER'S TRANSPORTATION), so how do I pass this data?
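
    The error comes from the unescaped apostrophe breaking the concatenated INSERT string. A minimal sketch of the two usual fixes, assuming the credit_debit1 table from the question (the robust option is to bind the values from the C# side as parameters, e.g. with SqlParameter, rather than concatenating them into the SQL text):

    -- Sketch 1: at the T-SQL level, a single quote inside a string literal is escaped by doubling it.
    INSERT INTO credit_debit1 ([VendorName]) VALUES ('MCCOLLISTER''S TRANSPORTATION');

    -- Sketch 2: parameterized form; the client driver supplies @VendorName, so no manual escaping
    -- (and no SQL injection risk) is needed, regardless of apostrophes in the data.
    INSERT INTO credit_debit1 ([VendorName]) VALUES (@VendorName);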

    Read the article

  • syntax error in sql query?

    - by user1550588
    I want to create some tables in database so for that i wright some queries to create table. but i got an error "there was a syntax error in your sql statement". I check all my queries but i am not able to find what syntax i wright wrong. here i attached my sql queries please told me what is wrong with queries. Thanks in advance. CREATE TABLE "WATER&SEWAGE" { "CUSTOMER_ID" VARCHAR, "POTABLE_WATER" VARCHAR, "HOT_WATER" VARCHAR, "SAFETY" VARCHAR, "T/P_VALVE+EXT" VARCHAR, "HEATER_TYPE" VARCHAR, "GAS_VENT_CONDITION" VARCHAR, "FIRE_BOX_CONDITION" VARCHAR, "PRESSURE" VARCHAR, "BACK_FLOW" VARCHAR, "PLUMBING_CONDITION" VARCHAR, "CABINET_CONDITION" VARCHAR, "3_COMP_SINK" VARCHAR, "WATER_HEATER" VARCHAR, "CLEAN_OUT_CONDITION" VARCHAR, "FLOOR_DRAIN" VARCHAR, "FLOOR_SINK" VARCHAR, "MAINTANANCE" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "TIME_AND_TETMPERATURE_REL" ("CUSTOMER_ID" INTEGER, "THERMOMTE_AVAILABLE" VARCHAR, "THERMO_TYPE" VARCHAR, "COLD_TEMP" VARCHAR, "FOOD_TYPE1" VARCHAR, "PROPER_COOLING" VARCHAR, "FOOD_TYPE2" VARCHAR, "COMPLIANCE" VARCHAR, "FOOD_TYPE3" VARCHAR, "TPHC" VARCHAR, "HACCP" VARCHAR, "HOT_HOLDING_TEMP" VARCHAR, "FOOD_TYPE4" VARCHAR, "HOT_TEMP" VARCHAR, "FOOD_TYPE5" VARCHAR, "IMPROPER_REHEATING" VARCHAR, "FOOD_TYPE6" VARCHAR, "FOOD_OUT_OF_TEMP" VARCHAR, "WRITTEN_PLAN_FILES" VARCHAR, "EMPLOYEE_KNOWLEDGE" VARCHAR, "TIME_KEEPING_METHODS" VARCHAR, "HACCP_METHODS" VARCHAR, "CALIBRATION" VARCHAR, "COOKING_TEMP" VARCHAR, "EQUIP_CONDITION" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "PROTECTION_FROM_CONTAMINATION" ("FOOD_ADULTERATION" VARCHAR, "FOOD_SPOILAGE" VARCHAR, "CONTAMINATION" VARCHAR, "FOOD_STORAGE" VARCHAR, "REFRIGRATORS" VARCHAR, "FREEZER" VARCHAR, "FIFO" VARCHAR, "DRY" VARCHAR, "DISHWASHING_CHEMICALS" VARCHAR, "CLEANING" VARCHAR, "SANITIZER" VARCHAR, "SUPPLY_EQUIPMENT" VARCHAR, "PESTICIDES" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR, "CUSTOMER_ID" INTEGER) CREATE TABLE "PHYSICAL_FACILITIES_2" ("CUSTOMER_ID" VARCHAR, "CEILING" VARCHAR, "WINDOWS" VARCHAR, "SPOILS_STORAGE" VARCHAR, "DOORS" VARCHAR, "PERSONAL_LOCKERS" VARCHAR, "FLOORS" VARCHAR, "CONDITION" VARCHAR, "HANDICAP" VARCHAR, "SELF_CLOSING_ROOM" VARCHAR, "TOILETS" VARCHAR, "TOILET_PAPER" VARCHAR, "TOILET_CONDITION" VARCHAR, "VENTILATION_CONDITION" VARCHAR, "MENS" VARCHAR, "MEN'S_VENTILATION" VARCHAR, "MEN'S_FLOOR_DRAWN" VARCHAR, "MEN'S_TOILET_STALLS" VARCHAR, "MEN'S_HAND_SINK" VARCHAR, "MEN'S_TOWELS" VARCHAR, "WOMEN'S" VARCHAR, "WOMEN'S_VENTILATION" VARCHAR, "WOMEN'S_FLOOR_DRAWN" VARCHAR, "WOMEN'S_TOILET_STALLS" VARCHAR, "WOMEN'S_HAND_SINK" VARCHAR, "WOMEN'S_TOWELS" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "PHYSICAL_FACILITIES_1" ("CUSTOMER_ID" VARCHAR, "LIGHTNING" VARCHAR, "KITCHEN" VARCHAR, "STORAGE" VARCHAR, "REFRIGRATION" VARCHAR, "JANITORIAL" VARCHAR, "INTERIOR" VARCHAR, "FIXTURE" VARCHAR, "FOOTCANDLES_1" VARCHAR, "FOOTCANDLES_2" VARCHAR, "FOOTCANDLES_3" VARCHAR, "FOOTCANDLES_4" VARCHAR, "FOOTCANDLES_5" VARCHAR, "EXTERIOR" VARCHAR, "WINDOWS" VARCHAR, "GLASS" VARCHAR, "SCREEN" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "PEST_CONTROL" ("CUSTOMER_ID" VARCHAR, "WALL/CEILING_PIPES" VARCHAR, "SANITATION" VARCHAR, "DOORS" VARCHAR, "SELF_CLOSING" VARCHAR, "RODENT_PROOF" VARCHAR, "VERMIN_PROOFING" VARCHAR, "FOUNDATION" VARCHAR, "ATTIC_VENTS" VARCHAR, "WINDOWS" VARCHAR, "TYPE_SCREENS" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "GENERAL_FOOD_SAFETY_REQ" ("CUSTOMER_ID" VARCHAR, 
"APPROVED_METHODS" VARCHAR, "DEFROST" VARCHAR, "FROZEN_FOOD" VARCHAR, "FOOD_WASHING" VARCHAR, "PRODUCE_1" VARCHAR, "FRUITS" VARCHAR, "GRAINS" VARCHAR, "VEGETABLES" VARCHAR, "SEPERATE_FROM_CONT" VARCHAR, "RAW" VARCHAR, "PRODUCE_2" VARCHAR, "STORED_BULK" VARCHAR, "STORAGE" VARCHAR, "FIFO" VARCHAR, "REFRIGRATAION" VARCHAR, "STORAGE_TEMP" VARCHAR, "HUMIDITY" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "FOOD_SAFETY_CERTIFICATION" ("CUSTOMER_ID" INTEGER,"OWNER" VARCHAR,"MANAGER" VARCHAR,"EMPLOYEE" VARCHAR,"NAME1" VARCHAR,"NAT'L_REGISTRY1" VARCHAR,"PROMETRIC1" VARCHAR,"SERVESAFE1" VARCHAR,"EXPIRATION1" VARCHAR,"NAME2" VARCHAR,"NAT'L_REGISTRY2" VARCHAR,"PROMETRIC2" VARCHAR,"SERVESAFE2" VARCHAR,"EXPIRATION2" VARCHAR,"COMMENTS" VARCHAR,"FOOD_TEMP" VARCHAR,"DISHWASHER" VARCHAR,"DANGER_ZONE" VARCHAR,"COOLING_FOODS" VARCHAR,"THAWING" VARCHAR,"REHEATING" VARCHAR,"THERMOMETERS" VARCHAR,"FIFO" VARCHAR,"HANDWASH" VARCHAR,"COOKING_TEMP" VARCHAR,"FOOD_POISONING_TYPES" VARCHAR,"EVEDENCE_PHOTOS" VARCHAR) CREATE TABLE "FOOD_FROM_APPROVED_SOURCES" ("CUSTOMER_ID" VARCHAR, "RECEIPTS" VARCHAR, "DELIVERY_DOOR" VARCHAR, "AIR_CURTAIN" VARCHAR, "VELOCITY" VARCHAR, "OFF_LOAD" VARCHAR, "INSPECTS" VARCHAR, "RODENTS" VARCHAR, "WARNING" VARCHAR, "SHELL_FISH" VARCHAR, "GULF_OYSTER" VARCHAR, "FOOD_OUT_OF_TEMP" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTO" VARCHAR) CREATE TABLE "FOOD_FACILITY_SITE_FACTS" ("CUSTOMER_ID" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL UNIQUE,"DBA" VARCHAR,"DISTRICT" VARCHAR,"CITY" VARCHAR,"ZIPCODE" VARCHAR,"ADDRESS" VARCHAR,"SEAT_NO" VARCHAR,"PHONE_NO" VARCHAR,"OWNER" VARCHAR,"PERSON_IN_CHARGE" VARCHAR,"WEBSITE" VARCHAR,"EMAIL" VARCHAR,"FACILITY_TYPE" VARCHAR,"SQ_FOOTAGE" VARCHAR,"NO_OF_SEATS" VARCHAR,"ALCOHOL_SALES" VARCHAR,"LTD_PREP" VARCHAR,"BUSINESS_LICENCE_NO" VARCHAR,"EXPIRATION" VARCHAR,"COMMENTS" VARCHAR,"EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "FOOD_DISPLAY_SELF_SERVICE" ("CUSTOMER_ID" VARCHAR, "SELF_SERVICE" VARCHAR, "LIDS" VARCHAR, "UTENSILS" VARCHAR, "HOW_OFTEN_CHANGED" VARCHAR, "SCOOP" VARCHAR, "SNEEZ_GUARD" VARCHAR, "DISHES" VARCHAR, "CROSS" VARCHAR, "SENITATION" VARCHAR, "MAINTANANCE" VARCHAR, "MECHANICAL_CONDITION" VARCHAR, "SERVICE_OF_UTENSIL" VARCHAR, "TIME" VARCHAR, "PRPER_FOOD_LABELING" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "EQUIPMENT/UTENSILS/LINES" ("CUSTOMER_ID" VARCHAR, "STORED_DISHES" VARCHAR, "CUPS" VARCHAR, "DAMAGED_DISHES" VARCHAR, "UTENSILS" VARCHAR, "LINES" VARCHAR, "COOLING_EQUIPMENT" VARCHAR, "COOK_WARE_STORAGE" VARCHAR, "CONDITION" VARCHAR, "STORAGE_LOCATION" VARCHAR, "NAPKINS" VARCHAR, "SANITATION" VARCHAR, "ADEQUATE_CAPACITY" VARCHAR, "HAZARDS" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "EMPLOYEE_HEALTH_AND_HYGENE" ("CSTOMER_ID" INTEGER, "COMUNICABLE_DIESEASE" VARCHAR, "WIPING_BAGS" VARCHAR, "DISCHARGE" VARCHAR, "LOCKERS" VARCHAR, "PROPER_HAND_WASH" VARCHAR, "RESTROOM_CONDITION" VARCHAR, "HANDWASH_STATION" VARCHAR, "HAIR_RESTRAINT" VARCHAR, "GLOVES_USED" VARCHAR, "WOUND_CARE" VARCHAR, "FOOD_SAMPLE_TESTING" VARCHAR, "GARMENT_CONDITION" VARCHAR, "HEALTH" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "DISH&WARE_WASHING" ("CUSTOMER_ID" INTEGER, "TYPE_OF_COMPARTMENT_SINK" VARCHAR, "PLUMBING_ISSUE1" VARCHAR, "QUATERNARY_AMMONIA" VARCHAR, "IODINE" VARCHAR, "HOT_WTR_SANIT_TEMP" VARCHAR, "PLUMBING_ISSUE2" VARCHAR, "HOT_WATER" VARCHAR, "TEMP" VARCHAR, "DISHWASHER_TYPE" VARCHAR, "WAQLL_CONDITION" VARCHAR, "SANITIZER_LEVELS" VARCHAR, "CHLORINE" VARCHAR, "SINK_OR_DISHWASHER" 
VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "CORRECTIVE_ACTIONS" ("CUSTOMER_ID" VARCHAR, "CORRECTIVE_ACTIONS" VARCHAR) CREATE TABLE "CONSUMER_ADVISORY" ("CUSTOMER_ID" VARCHAR, "TRUTH_IN_MENU" VARCHAR, "MAJOR_CHAIN" VARCHAR, "TRANS_FAT" VARCHAR, "SCHOOL" VARCHAR, "PROHIBITED" VARCHAR, "CALORIES" VARCHAR, "COMMENTS" VARCHAR, "EVEDINCE_PHOTOS" VARCHAR) CREATE TABLE "COMPLAINS&ENFORCEMENTS" ("CUSTOMER_ID" VARCHAR, "PLAN_SUBMITTED" VARCHAR, "APPROVEL" VARCHAR, "CERTIFICATE_OF_OCCU" VARCHAR, "PRIOR_INSPEK_REPORT" VARCHAR, "LAST_INSPECTION_DATE" VARCHAR, "PERMIT_STATUS" VARCHAR, "POSTED" VARCHAR, "EXPIRATION" VARCHAR, "PERMIT_DISPLAYED" VARCHAR, "INSPECTION_REPORT_POSTING" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR)

    Read the article

  • Jboss 5.1.0: java.lang.LinkageError: loader constraint violation for class javax.sql.DataSource

    - by lawrencexu
    have read quite a lot about this error, the reason is 1) more than one jar containing this class have been include into the classpath, 2)include the jar twice or more, but in my case, this class is jdk class, and I have search no this class have been found under my application. any clue would be very helpful. exception stack: 2012-10-31 14:09:58,319 WARN [org.jboss.detailed.classloader.ClassLoaderManager] (http-0.0.0.0-8080-1:) Unexpected error during load of:javax.sql.DataSource java.lang.LinkageError: loader constraint violation: loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) previously initiated loading for a different type with name "javax/sql/DataSource" at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631) at java.lang.ClassLoader.defineClass(ClassLoader.java:615) at org.jboss.classloader.spi.base.BaseClassLoader.access$200(BaseClassLoader.java:67) at org.jboss.classloader.spi.base.BaseClassLoader$2.run(BaseClassLoader.java:633) at org.jboss.classloader.spi.base.BaseClassLoader$2.run(BaseClassLoader.java:592) at java.security.AccessController.doPrivileged(Native Method) at org.jboss.classloader.spi.base.BaseClassLoader.loadClassLocally(BaseClassLoader.java:591) at org.jboss.classloader.spi.base.BaseClassLoader.loadClassLocally(BaseClassLoader.java:568) at org.jboss.classloader.spi.base.BaseDelegateLoader.loadClass(BaseDelegateLoader.java:135) at org.jboss.classloader.spi.filter.FilteredDelegateLoader.loadClass(FilteredDelegateLoader.java:131) at org.jboss.classloader.spi.base.ClassLoadingTask$ThreadTask.run(ClassLoadingTask.java:455) at org.jboss.classloader.spi.base.ClassLoaderManager.nextTask(ClassLoaderManager.java:267) at org.jboss.classloader.spi.base.ClassLoaderManager.process(ClassLoaderManager.java:166) at org.jboss.classloader.spi.base.BaseClassLoaderDomain.loadClass(BaseClassLoaderDomain.java:287) at org.jboss.classloader.spi.base.BaseClassLoaderDomain.loadClass(BaseClassLoaderDomain.java:1163) at org.jboss.classloader.spi.base.BaseClassLoader.loadClassFromDomain(BaseClassLoader.java:862) at org.jboss.classloader.spi.base.BaseClassLoader.doLoadClass(BaseClassLoader.java:502) at org.jboss.classloader.spi.base.BaseClassLoader.loadClass(BaseClassLoader.java:447) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at com.lombardrisk.webgui.dwr.ajax_search.UmbrellaNettingFundsSearch.getBranchesByAgreementId(UmbrellaNettingFundsSearch.java:199) at com.lombardrisk.webgui.dwr.ajax_search.UmbrellaNettingFundsSearch.buildConditionOfPrinAndCpty(UmbrellaNettingFundsSearch.java:177) at com.lombardrisk.webgui.dwr.ajax_search.UmbrellaNettingFundsSearch.getFunds(UmbrellaNettingFundsSearch.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.directwebremoting.impl.ExecuteAjaxFilter.doFilter(ExecuteAjaxFilter.java:34) at org.directwebremoting.impl.DefaultRemoter$1.doFilter(DefaultRemoter.java:428) at org.directwebremoting.impl.DefaultRemoter.execute(DefaultRemoter.java:431) at org.directwebremoting.impl.DefaultRemoter.execute(DefaultRemoter.java:283) at org.directwebremoting.servlet.PlainCallHandler.handle(PlainCallHandler.java:52) at org.directwebremoting.servlet.UrlProcessor.handle(UrlProcessor.java:101) at 
org.directwebremoting.servlet.DwrServlet.doPost(DwrServlet.java:146) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.lombardrisk.webgui.filter.ValidRequestFilter.doFilter(ValidRequestFilter.java:41) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.internalProcess(ActiveRequestResponseCacheValve.java:74) at org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:47) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:599) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451) at java.lang.Thread.run(Thread.java:662)

    Read the article

  • javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp] error with java scheduler

    - by Morgan Azhari
    What I'm trying to do is to update my database after a period of time. So I'm using java scheduler and connection pooling. I don't know why but my code only working once. It will print: init success success javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp]. at org.apache.naming.NamingContext.lookup(NamingContext.java:820) at org.apache.naming.NamingContext.lookup(NamingContext.java:168) at org.apache.naming.SelectorContext.lookup(SelectorContext.java:158) at javax.naming.InitialContext.lookup(InitialContext.java:411) at test.Pool.main(Pool.java:25) ---> line 25 is Context envContext = (Context)initialContext.lookup("java:/comp/env"); I don't know why it only works once. I already test it if I didn't running it without java scheduler and it works fine. No error whatsoerver. Don't know why i get this error if I running it using scheduler. Hope someone can help me. My connection pooling code: public class Pool { public DataSource main() { try { InitialContext initialContext = new InitialContext(); Context envContext = (Context)initialContext.lookup("java:/comp/env"); DataSource datasource = new DataSource(); datasource = (DataSource)envContext.lookup("jdbc/test"); return datasource; } catch (Exception ex) { ex.printStackTrace(); } return null; } } my web.xml: <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <listener> <listener-class> package.test.Pool</listener-class> </listener> <resource-ref> <description>DB Connection Pooling</description> <res-ref-name>jdbc/test</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> Context.xml: <?xml version="1.0" encoding="UTF-8"?> <Context path="/project" reloadable="true"> <Resource auth="Container" defaultReadOnly="false" driverClassName="com.mysql.jdbc.Driver" factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" initialSize="0" jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" jmxEnabled="true" logAbandoned="true" maxActive="300" maxIdle="50" maxWait="10000" minEvictableIdleTimeMillis="300000" minIdle="30" name="jdbc/test" password="test" removeAbandoned="true" removeAbandonedTimeout="60" testOnBorrow="true" testOnReturn="false" testWhileIdle="true" timeBetweenEvictionRunsMillis="30000" type="javax.sql.DataSource" url="jdbc:mysql://localhost:3306/database?noAccessToProcedureBodies=true" username="root" validationInterval="30000" validationQuery="SELECT 1"/> </Context> my java scheduler public class Scheduler extends HttpServlet{ public void init() throws ServletException { System.out.println("init success"); try{ Scheduling_test test = new Scheduling_test(); ScheduledExecutorService executor = Executors.newScheduledThreadPool(100); ScheduledFuture future = executor.scheduleWithFixedDelay(test, 1, 60 ,TimeUnit.SECONDS); }catch(Exception e){ e.printStackTrace(); } } } Schedule_test public class Scheduling_test extends Thread implements Runnable{ public void run(){ Updating updating = new Updating(); updating.run(); } } updating public class Updating{ public void run(){ ResultSet rs = null; PreparedStatement p = null; StringBuilder sb = new StringBuilder(); Pool pool = new Pool(); Connection con = null; DataSource datasource = null; try{ datasource = pool.main(); 
con=datasource.getConnection(); sb.append("SELECT * FROM database"); p = con.prepareStatement(sb.toString()); rs = p.executeQuery(); rs.close(); con.close(); p.close(); datasource.close(); System.out.println("success"); }catch (Exception e){ e.printStackTrace(); } }

    Read the article

  • MySQL query - if not exists - insert into - else - update

    - by user3180931
    I made a simple document generator by the form, this form saves everything to mysql database, It works great, but when someone type a the same 'nrumowy' it creates a new row in mysql, 'nrumowy' is unique, so when someone adds a form with the same 'nrumowy' I want to just update existing data in mysql, I have that code: $con=mysqli_connect("localhost","login","pass","database"); // Check connection if (mysqli_connect_errno()) { echo "Failed to connect to MySQL: " . mysqli_connect_error(); } // escape variables for security $numerklienta = mysqli_real_escape_string($con, $_POST['numerklienta']); $name = mysqli_real_escape_string($con, $_POST['name']); $hours = mysqli_real_escape_string($con, $_POST['hours']); $date = mysqli_real_escape_string($con, $_POST['date']); $beginDate = mysqli_real_escape_string($con, $_POST['beginDate']); $nrdomu = mysqli_real_escape_string($con, $_POST['nrdomu']); $telefon = mysqli_real_escape_string($con, $_POST['telefon']); $fax = mysqli_real_escape_string($con, $_POST['fax']); $nip = mysqli_real_escape_string($con, $_POST['nip']); $email = mysqli_real_escape_string($con, $_POST['email']); $stronawww = mysqli_real_escape_string($con, $_POST['stronawww']); $branza = mysqli_real_escape_string($con, $_POST['branza']); $vatkodpocztowy = mysqli_real_escape_string($con, $_POST['vatkodpocztowy']); $vatmiejscowosc = mysqli_real_escape_string($con, $_POST['vatmiejscowosc']); $vatulica = mysqli_real_escape_string($con, $_POST['vatulica']); $vatnrdomu = mysqli_real_escape_string($con, $_POST['vatnrdomu']); $vatemail = mysqli_real_escape_string($con, $_POST['vatemail']); $vatosoba = mysqli_real_escape_string($con, $_POST['vatosoba']); $datapublikacji = mysqli_real_escape_string($con, $_POST['datapublikacji']); $rabat = mysqli_real_escape_string($con, $_POST['rabat']); $wartoscnetto = mysqli_real_escape_string($con, $_POST['wartoscnetto']); $typreklamy = mysqli_real_escape_string($con, $_POST['typreklamy']); $inne = mysqli_real_escape_string($con, $_POST['inne']); $inne2 = mysqli_real_escape_string($con, $_POST['inne2']); $inne3 = mysqli_real_escape_string($con, $_POST['inne3']); $zaliczka = mysqli_real_escape_string($con, $_POST['zaliczka']); $liczbarat1 = mysqli_real_escape_string($con, $_POST['liczbarat1']); $zaakceptowaneprzez = mysqli_real_escape_string($con, $_POST['zaakceptowaneprzez']); $telzam = mysqli_real_escape_string($con, $_POST['telzam']); $datapodpis = mysqli_real_escape_string($con, $_POST['datapodpis']); $nrumowy = mysqli_real_escape_string($con, $_POST['nrumowy']); $sql="IF NOT EXISTS ( SELECT * FROM zam WHERE nrumowy = '$nrumowy' ) THEN INSERT INTO zam (numerklienta, name, hours, date, beginDate, nrdomu, telefon, fax, nip, email, stronawww, branza, vatkodpocztowy, vatmiejscowosc, vatulica, vatnrdomu, vatemail, vatosoba, datapublikacji, rabat, wartoscnetto, typreklamy, inne, inne2, inne3, zaliczka, liczbarat1, zaakceptowaneprzez, telzam, datapodpis, nrumowy) VALUES ('$numerklienta', '$name', '$hours', '$date', '$beginDate', '$nrdomu', '$telefon', '$fax', '$nip', '$email', '$stronawww', '$branza', '$vatkodpocztowy', '$vatmiejscowosc', '$vatulica', '$vatnrdomu', '$vatemail', '$vatosoba', '$datapublikacji', '$rabat', '$wartoscnetto', '$typreklamy', '$inne', '$inne2', '$inne3', '$zaliczka', '$liczbarat1', '$zaakceptowaneprzez', '$telzam', '$datapodpis', '$nrumowy' ) ELSE UPDATE zam SET name = '$name', numerklienta = '$numerklienta', hours = '$hours', date = '$date', beginDate = '$beginDate', nrdomu = '$nrdomu', telefon = '$telefon', fax = '$fax', nip = 
'$nip', email = '$email', stronawww = '$stronawww', branza = '$branza', vatkodpocztowy = '$vatkodpocztowy', vatmiejscowosc = '$vatmiejscowosc', vatulica = '$vatulica', vatnrdomu = '$vatnrdomu', vatemail = '$vatemail', vatosoba = '$vatosoba', datapublikacji = '$datapublikacji', rabat = '$rabat', wartoscnetto = '$wartoscnetto', typreklamy = '$typreklamy', inne = '$inne', inne2 = '$inne2', inne3 = '$inne3', zaliczka = '$zaliczka', liczbarat1 = '$liczbarat1', zaakceptowaneprzez = '$zaakceptowaneprzez', telzam = '$telzam', datapodpis = '$datapodpis' WHERE nrumowy ='$nrumowy' END IF"; if (!mysqli_query($con,$sql)) { die('Error: ' . mysqli_error($con)); } mysqli_close($con); This query without " select..... " and "else update" just a 'insert into' works great, also when I change this 'insert into' to 'update' but I don't know how to make this variable if not exists - insert into - else update
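
    MySQL does not accept a bare IF ... THEN ... ELSE ... END IF outside a stored program, which is why the combined statement fails. Since nrumowy is unique, the idiomatic single-statement form is INSERT ... ON DUPLICATE KEY UPDATE; a sketch with only a few of the columns shown (the rest follow the same pattern, and ideally the values would be bound as prepared-statement parameters rather than interpolated):

    -- Sketch: assumes zam.nrumowy has a UNIQUE (or PRIMARY KEY) index.
    INSERT INTO zam (nrumowy, numerklienta, name, hours, date)
    VALUES ('$nrumowy', '$numerklienta', '$name', '$hours', '$date')
    ON DUPLICATE KEY UPDATE
        numerklienta = VALUES(numerklienta),
        name         = VALUES(name),
        hours        = VALUES(hours),
        date         = VALUES(date);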

    Read the article

  • Hibernate - strange order of native SQL parameters

    - by Xorty
    Hello, I am trying to use native MySQL's MD5 crypto func, so I defined custom insert in my mapping file. <hibernate-mapping package="tutorial"> <class name="com.xorty.mailclient.client.domain.User" table="user"> <id name="login" type="string" column="login"></id> <property name="password"> <column name="password" /> </property> <sql-insert>INSERT INTO user (login,password) VALUES ( ?, MD5(?) )</sql-insert> </class> </hibernate-mapping> Then I create User (pretty simple POJO with just 2 Strings - login and password) and try to persist it. session.beginTransaction(); // we have no such user in here yet User junitUser = (User) session.load(User.class, "junit_user"); assert (null == junitUser); // insert new user junitUser = new User(); junitUser.setLogin("junit_user"); junitUser.setPassword("junitpass"); session.save(junitUser); session.getTransaction().commit(); What actually happens? User is created, but with reversed parameters order. He has login "junitpass" and "junit_user" is MD5 encrypted and stored as password. What did I wrong? Thanks EDIT: ADDING POJO class package com.xorty.mailclient.client.domain; import java.io.Serializable; /** * POJO class representing user. * @author MisoV * @version 0.1 */ public class User implements Serializable { /** * Generated UID */ private static final long serialVersionUID = -969127095912324468L; private String login; private String password; /** * @return login */ public String getLogin() { return login; } /** * @return password */ public String getPassword() { return password; } /** * @param login the login to set */ public void setLogin(String login) { this.login = login; } /** * @param password the password to set */ public void setPassword(String password) { this.password = password; } /** * @see java.lang.Object#toString() * @return login */ @Override public String toString() { return login; } /** * Creates new User. * @param login User's login. * @param password User's password. */ public User(String login, String password) { setLogin(login); setPassword(password); } /** * Default constructor */ public User() { } /** * @return hashCode * @see java.lang.Object#hashCode() */ @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((null == login) ? 0 : login.hashCode()); result = prime * result + ((null == password) ? 0 : password.hashCode()); return result; } /** * @param obj Compared object * @return True, if objects are same. Else false. * @see java.lang.Object#equals(java.lang.Object) */ @Override public boolean equals(Object obj) { if (this == obj) { return true; } if (obj == null) { return false; } if (!(obj instanceof User)) { return false; } User other = (User) obj; if (login == null) { if (other.login != null) { return false; } } else if (!login.equals(other.login)) { return false; } if (password == null) { if (other.password != null) { return false; } } else if (!password.equals(other.password)) { return false; } return true; } }
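
    With a custom <sql-insert>, Hibernate binds the positional parameters in its own fixed order and, per its documentation on custom SQL, the identifier is bound last; it does not inspect the statement to see that the first ? was meant for login. A hedged sketch of the statement with the placeholders arranged to match that order (worth confirming by enabling DEBUG logging for org.hibernate.persister.entity, which prints the expected static SQL):

    -- Property (password) first, identifier (login) last:
    INSERT INTO user (password, login) VALUES ( MD5(?), ? )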

    Read the article

  • mysql_close doesn't kill locked sql requests

    - by Nikita
    I use mysqld Ver 5.1.37-2-log for debian-linux-gnu I perform mysql calls from c++ code with functions mysql_query. The problem occurs when mysql_query execute procedure, procedure locked on locked table, so mysql_query hangs. If send kill signal to application then we can see lock until table is locked. Create the following SQL table and procedure CREATE TABLE IF NOT EXISTS `tabletolock` ( `id` INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`) )ENGINE = InnoDB; DELIMITER $$ DROP PROCEDURE IF EXISTS `LOCK_PROCEDURE` $$ CREATE PROCEDURE `LOCK_PROCEDURE`() BEGIN SELECT id INTO @id FROM tabletolock; END $$ DELOMITER; There are sql commands to reproduce the problem: 1. in one terminal execute lock tables tabletolock write; 2. in another terminal execute call LOCK_PROCEDURE(); 3. In first terminal exeute show processlist and see | 2492 | root | localhost | syn_db | Query | 12 | Locked | SELECT id INTO @id FROM tabletolock | Then perfrom Ctrl-C in second terminal to interrupt our procudere and see processlist again. It is not changed, we already see locked select request and can teminate it by unlock tables or kill commands. Problem described is occured with mysql command line client. Also such problem exists when we use functions mysql_query and mysql_close. Example of c code: #include <iostream> #include <mysql/mysql.h> #include <mysql/errmsg.h> #include <signal.h> // g++ -Wall -g -fPIC -lmysqlclient dbtest.cpp using namespace std; MYSQL * connection = NULL; void closeconnection() { if(connection != NULL) { cout << "close connection !\n"; mysql_close(connection); mysql_thread_end(); delete connection; mysql_library_end(); } } void sigkill(int s) { closeconnection(); signal(SIGINT, NULL); raise(s); } int main(int argc, char ** argv) { signal(SIGINT, sigkill); connection = new MYSQL; mysql_init(connection); mysql_options(connection, MYSQL_READ_DEFAULT_GROUP, "nnfc"); if (!mysql_real_connect(connection, "127.0.0.1", "user", "password", "db", 3306, NULL, CLIENT_MULTI_RESULTS)) { delete connection; cout << "cannot connect\n"; return -1; } cout << "before procedure call\n"; mysql_query(connection, "CALL LOCK_PROCEDURE();"); cout << "after procedure call\n"; closeconnection(); return 0; } Compile it, and perform the folloing actions: 1. in first terminal local tables tabletolock write; 2. run program ./a.out 3. interrupt program Ctrl-C. on the screen we see that closeconnection function is called, so connection is closed. 4. in first terminal execute show processlist and see that procedure was not intrrupted. My question is how to terminate such locked calls from c code? Thank you in advance!
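
    Closing the client connection does not by itself abort a statement that is already queued behind a table lock on the server; the wait has to be ended server-side, either by releasing the lock or by killing the waiting statement. A sketch using the process Id visible in the question's SHOW PROCESSLIST output:

    SHOW PROCESSLIST;      -- note the Id of the query in state 'Locked' (2492 above)
    KILL QUERY 2492;       -- terminates just that statement, keeping the connection open
    -- or, from the session holding the lock:
    UNLOCK TABLES;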

    Read the article

  • Using the login Details via Application

    - by ramin ss
    I have a CURL(in C++) to send my user and pass to remauth.php file so i think i do something wrong on remuth.php ( because i am basic in php and my program can not run because the auth not passed.) I use login via Application. my CURL: bool Auth_PerformSessionLogin(const char* username, const char* password) { curl_global_init(CURL_GLOBAL_ALL); CURL* curl = curl_easy_init(); if (curl) { char url[255]; _snprintf(url, sizeof(url), "http://%s/remauth.php", "SITEADDRESS.com"); char buf[8192] = {0}; char postBuf[8192]; _snprintf(postBuf, sizeof(postBuf), "%s&&%s", username, password); curl_easy_setopt(curl, CURLOPT_URL, url); curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, AuthDataReceived); curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void*)&buf); curl_easy_setopt(curl, CURLOPT_USERAGENT, "IW4M"); curl_easy_setopt(curl, CURLOPT_FAILONERROR, true); curl_easy_setopt(curl, CURLOPT_POST, 1); curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postBuf); curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, -1); curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, false); CURLcode code = curl_easy_perform(curl); curl_easy_cleanup(curl); curl_global_cleanup(); if (code == CURLE_OK) { return Auth_ParseResultBuffer(buf); } else { Auth_Error(va("Could not reach the SITEADDRESS.comt server. Error code from CURL: %x.", code)); } return false; } curl_global_cleanup(); return false; } and my remauth.php: <?php ob_start(); $host=""; // Host name $dbusername=""; // Mysql username $dbpassword=""; // Mysql password $db_name=""; // Database name $tbl_name=""; // Table name // Connect to server and select databse. mysql_connect("$host", "$dbusername", "$dbpassword") or die(mysql_error()); mysql_select_db("$db_name") or die(mysql_error()); // Define $username and $password //$username=$username; //$password=md5($_POST['password']); //$password=$password; $username=$_POST['username']; $password=$_POST['password']; //$post_item[]='action='.$_POST['submit']; // To protect MySQL injection (more detail about MySQL injection) $username = stripslashes($username); $password = stripslashes($password); $username = mysql_real_escape_string($username); $password = mysql_real_escape_string($password); $sql="SELECT * FROM $tbl_name WHERE username='$username'"; $result=mysql_query($sql); // Mysql_num_row is counting table row $count=mysql_num_rows($result); // If result matched $username and $password, table row must be 1 row if($count==1){ $row = mysql_fetch_assoc($result); if (md5(md5($row['salt']).md5($password)) == $row['password']){ session_register("username"); session_register("password"); echo "#"; return true; } else { echo "o"; return false; } } else{ echo "o"; return false; } ob_end_flush(); ?> ///////////////////////////////////
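
    One thing worth checking first: the C++ side posts the credentials as "user&&pass" rather than as named form fields, so $_POST['username'] and $_POST['password'] are likely empty on the PHP side (CURLOPT_POSTFIELDS would normally be given something like "username=...&password=..."). Independently of that, the salted-hash comparison can be sanity-checked directly in MySQL; a sketch with a placeholder table name and sample values, since $tbl_name is blank in the question:

    -- Reproduces the PHP check md5(md5(salt).md5(password)) for one account.
    SELECT username
    FROM   some_user_table            -- placeholder: $tbl_name is not shown in the question
    WHERE  username = 'someuser'
      AND  password = MD5(CONCAT(MD5(salt), MD5('secret')));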

    Read the article

  • "SQL Server has long been able to handle the most critical workloads": an interview with the DBMS product manager

    SQL Server: "We have long been able to handle the most critical workloads." A conversation with the French product manager about the 2012 version of the DBMS. A true cornerstone, together with Windows Server 2012, of the renewal of Microsoft's professional line-up, SQL Server 2012 arrives at a turning point, at a time when IT and its databases are being transformed. Big Data, cloud computing, BI, unstructured data: so many challenges (among many others) for "traditional" databases, which must also keep meeting the equally traditional needs of their installed user base...

    Read the article

  • What would be the correct answer to "why is NoSQL faster than SQL" in an interview?

    - by Cynede
    It just sounds like nonsense to me personally: I can't see any inherent performance boost from using NoSQL instead of SQL. Maybe SQL can outperform NoSQL in some cases, but not the other way around as the question implies. I think that answering "I have no idea" or something like that would be a bad answer, because this doesn't look like a trick question; it is asked seriously in interviews. So what would a correct answer to this question look like? Or am I missing something about NoSQL?

    Read the article

  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a Database Administrator, or even a developer, is required to wear a spy's hat. This necessity is oftentimes dictated by the need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources or, as in my most recent case, a closed-source custom application abandoned by a departed contractor who left no source code behind. It may not be news to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This "gem" is always quietly sitting in the Start – Programs – SQL Server <product version> – Performance Tools folder (yes, it is mostly for performance analysis, but not limited to that), ready to help you! So, to the action; let's start it up. Once ready, click the File – New Trace button, or press Ctrl-N on your keyboard. The standard connection dialog you have seen in SSMS comes up, where you connect the standard way: One side note here: you will be able to connect only if your account is a member of the sysadmin fixed server role or has the ALTER TRACE permission. Upon a successful connection you should see this initial dialog: At this stage I will give a hint: you will have a wide variety of predefined templates: But to shorten your time to results you should opt for the TSQL_Grouped template. Now you need to set it up. In some cases you will know in advance the principal's login name (account) that needs to be monitored, and in some (like mine) you will not. But it is VERY helpful to monitor just a particular account, to minimize the amount of results returned. So if you know it, you can already go to the Events Selection tab, then click the Column Filters button, which brings up a dialog where you key in the account being monitored, without any mask (or wildcard):  If you do not know the principal name, you will need to poke around and look for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with an unencrypted connection string in it: This made my endeavor very easy. So after I entered the account to monitor, I clicked the Run button and started my black-box application. Voilà, in under a minute I had the SQL statement captured:
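
    For reference, the same capture can be scripted without the Profiler GUI. On SQL Server 2012 and later, a hedged sketch of an Extended Events session filtered to one login (the login name and file path are placeholders):

    CREATE EVENT SESSION [capture_app_queries] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
        (ACTION (sqlserver.sql_text, sqlserver.client_app_name)
         WHERE sqlserver.server_principal_name = N'AppAccount')
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\capture_app_queries.xel');
    GO
    ALTER EVENT SESSION [capture_app_queries] ON SERVER STATE = START;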

    Read the article

  • Microsoft increases interoperability between its ERP and CRM products and releases an assistant for migrating data from Oracle to SQL Server

    Microsoft increases interoperability between its ERP and CRM products, and releases an assistant for migrating data from Oracle to SQL Server. Last Wednesday, Microsoft Dynamics announced the worldwide availability of new resources for companies running its Microsoft Dynamics enterprise resource planning (ERP) suite. These new resources include, on the one hand, a connector between Microsoft Dynamics CRM and Microsoft Dynamics AX and, on the other, a data migration assistant for Microsoft Dynamics AX customers who want to migrate their Oracle databases to Microsoft SQL Server. This new connector facilitates the...

    Read the article

  • Struggling with errors on POS_SUPPLIER_SEARCH_INDEX.sql when applying a Procurement Patch?

    - by LindaJ-Oracle
    Check out this new note (Doc ID 1677737.1) - How to troubleshoot issues on script POS_SUPPLIER_SEARCH_INDEX.sql - which helps you troubleshoot and resolve this common error on patch application: ORA-20000: possearchindex.sql(500): ORA-20000: Exception at POS_SUPPLIER_SEARCH_INDEX_PKG.create_index(1800): ORA-20000: Exception at POS_SUPPLIER_SEARCH_INDEX_PKG.create_index(1800): ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine ORA-20000: Oracle Text error: DRG-10502: index POS_SUPPLIER_SEARCH_INDEX does not exist DRG-11135: feature not generally available
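
    As a quick first check before re-running the script, it can help to confirm from the database whether the Oracle Text index actually exists and what state it is in; a generic sketch (this query is not taken from the note itself):

    SELECT owner, index_name, index_type, status
    FROM   dba_indexes
    WHERE  index_name = 'POS_SUPPLIER_SEARCH_INDEX';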

    Read the article

  • The Google search engine used for SQL injections: a security expert presents an attack scenario

    The Google search engine used for SQL injections: a security expert presents an attack scenario. Attackers never run out of ideas for achieving their goals. Google's indexing robots have reportedly been exploited by some of them to carry out SQL injection attacks. What would you do if a legitimate Google crawler had been used to attack your site? Should you block the bot (which would also affect the indexing of your site), or allow...

    Read the article

  • WCF "DataContext accessed after Dispose"

    - by David Ward
    I have an application with numerous WCF services that use LINQ to SQL as the data access model. I am having lots of problems with the "DataContext accessed after Dispose" exception. I understand what this exception is and that it occurs because I have not "initialised" the data that is being accessed. I've read many articles that suggest calling ToList() on any collections before the parent object is returned by the service. My issue is that I am getting this exception and I don't know where it originates, and therefore I don't know what hasn't been initialised. Can anyone advise how best to identify the root cause? (I have used the MS Service Trace Viewer and it doesn't seem to give me any further information.)

    Read the article

< Previous Page | 522 523 524 525 526 527 528 529 530 531 532 533  | Next Page >