Search Results

Search found 68200 results on 2728 pages for 'web database'.


  • Looking for a central image database and tagging system for a group of users

    - by jstarek
    I'm doing IT support for a small volunteer organisation that needs to centrally store and organize around 2,500 photos. Can anyone here recommend a database or similar system that matches the following criteria: intuitive to use for users with little computer experience; multi-user support, ideally with integration into our existing LDAP user directory; a web interface; not a hosted solution like Picasa (because we have a rather slow internet connection with very slow upload); and support for tagging images, sorting by various criteria, and storing copyright information. If there are native GNOME and/or Windows clients for the tool, that would be a great benefit. Many thanks in advance!

    Read the article

  • Expression web ftp: Stuck at "Listing subsites"

    - by FrankPython
    When I try to use the Expression Web 4 built-in FTP I see the message "Listing subsites in..." and soon afterward "passive ftp not available". If I switch to active, I get "active ftp not available". There are no subsites; it is a simple directory with one HTML page. The backend is a normal IIS 6 server. FTP to the same IP with other FTP clients works fine! Any idea whether Expression Web has some specific requirements? It is our own dedicated server. (Please no tips to use another tool; for this specific project Expression Web is a requirement.)

    Read the article

  • Using mongodump with an auth enabled mongodb server

    - by bb-generation
    I'm trying to do a daily backup of my mongodb server (auth enabled) using the mongodump tool. mongodump provides two parameters to set the credentials:

        -u [ --username ] arg    username
        -p [ --password ] arg    password

    Unfortunately it doesn't provide any parameter to read the password from stdin. Therefore every time I run this command, everyone on the server can read the password (e.g. by using ps aux). The only workaround I have found is stopping the database and directly accessing the database files using the --dbpath parameter. Is there any other solution which allows me to back up the mongodb database without stopping the server and without "publishing" my password? I am using Debian squeeze 6.0.5 amd64 with mongodb 1.4.4-3.

    Read the article

  • Installing Oracle11gr2 on redhat linux

    - by KItis
    I have a basic question about installing applications on the Linux operating system. I am going to express my issue using an Oracle DB installation as an example. When installing the Oracle database, I created a user group called dba and a user in this group called ora112, so this user is allowed to install the database. My question is: if the ora112 user's umask is set to 077, then no other users will be able to configure the Oracle database. Why do we need to follow this practice? Is it an accepted procedure for application installation on Linux? Say I install a Java application this way; then no other application belonging to a different user account will be able to use the Java running on this computer, because of this access restriction. Please share your experience with me. Thanks in advance for looking into this issue.

    Read the article

  • Web service using Data Dynamics ActiveReports occasionally slows down

    - by Swoop
    My company is running into a problem with a web service that is written in C#/ASP.NET. The service receives an identity key for data in SQL Server and a path where it should generate and save a PDF report for that data. In most cases, this web service returns results to the calling web pages very quickly, usually within a few seconds at most. However, it seems to occasionally hit a significant slowdown, and the web application calling the web service generates a timeout error when this happens. We have checked, and the PDF does get created and saved to the server, so it looks like the web service eventually finishes executing; it seems to take about 1 to 2 minutes to complete. The PDF is generated using ActiveReports from Data Dynamics. When this problem occurs, making a small change to the web service's config file (i.e., adding a blank space to a connection string line) seems to restart the web service, and everything is perfectly OK for a period of time afterwards. Other web applications that are running on the same web server do not seem to experience this type of behavior, only this particular web service. I have added the code for the web service below; it is basic calls to third-party libraries. We are not able to recreate this problem in test. I am wondering what might be causing this issue?

        [WebMethod]
        public string Publish(int identity, string transactionType, string directory, string filename)
        {
            try
            {
                AdpConnection Conn = new AdpConnection(ConfigurationManager.AppSettings["myDBConnString"]);
                AdpCommand Cmd = new AdpCommand("storedproc_GetData", Conn);
                AdpParameter Param;

                Cmd.CommandType = CommandType.StoredProcedure;
                Param = Cmd.CreateParameter("@Identity", DbType.Int32);
                Param.Value = identity;
                Cmd.Parameters.Add(Param);

                Conn.Open();
                string aResponse = Cmd.ExecuteScalar().ToString();
                Conn.Close();

                if (transactionType == "typeA")
                {
                    //Parse response
                    DataSet dsResponse = ParseDataResponse(aResponse);
                    //dsResponse.WriteXml(@ConfigurationManager.AppSettings["DocsDir"] + identity.ToString() + ".xml");

                    DataDynamics.ActiveReports.ActiveReport3 rpt = new DataDynamics.ActiveReports.ActiveReport3();
                    rpt.LoadLayout(@ConfigurationManager.AppSettings["myReportPath"] + "TypeA.rpx");
                    rpt.AddNamedItem("ReportPath", @ConfigurationManager.AppSettings["myReportPath"]);
                    rpt.AddNamedItem("XMLSTRING", FormatXML(dsResponse.GetXml()));

                    DataDynamics.ActiveReports.DataSources.XMLDataSource xmlds = new DataDynamics.ActiveReports.DataSources.XMLDataSource();
                    xmlds.FileURL = null;
                    xmlds.RecordsetPattern = "//DataPatternA";
                    xmlds.LoadXML(FormatXML(dsResponse.GetXml()));

                    if (!System.IO.Directory.Exists(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\"))
                    {
                        System.IO.Directory.CreateDirectory(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\");
                    }

                    string sXML = FormatXML(dsResponse.GetXml());
                    StreamWriter sw = new StreamWriter(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".xml", false);
                    sw.Write(sXML);
                    sw.Close();

                    rpt.DataSource = xmlds;
                    rpt.Run(true);

                    DataDynamics.ActiveReports.Export.Pdf.PdfExport xPdf = new DataDynamics.ActiveReports.Export.Pdf.PdfExport();
                    xPdf.Export(rpt.Document, @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf");
                }
            }
            catch (Exception ex)
            {
                return "Error: " + ex.ToString();
            }
            return @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf";
        }

    Read the article

  • First Experience with Web Services

    When I first started programming with Microsoft .NET (1.0 Framework) I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .NET was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com. After my spider was built I had no real idea what I could or should do with it until I found the MSN Search API. I used this web service to compare its results with my spider's. Additionally, I used the API to feed my .NET web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned in XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query.

    Since then the entire API has gone through several reconstructions, rebranding and new search services. Microsoft's new Bing API replaced the older MSN Search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.

    Bing API Version 2.0 SourceTypes:

        Web            Searches for web content                                  e.g. sushi
        Image          Searches for images on the web                            e.g. sushi
        News           Searches news stories                                     e.g. sushi
        InstantAnswer  Searches Encarta online                                   e.g. what is sushi; convert 5 feet to meters; x*5=7; 2 plus 2
        Spell          Searches the Encarta dictionary for spelling suggestions
        Phonebook      Searches phonebook entries                                e.g. sushi in Los Angeles
        RelatedSearch  Returns the query strings most similar to yours
        Ad             Returns advertisements to incorporate with results

    I currently plan to start using the web search feature from the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.
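    To make the mechanics concrete, here is a minimal C# sketch of that style of call: an HTTP GET with query-string parameters and an XML response parsed on the client. The endpoint URL, the AppId parameter and the Result/Title element names are hypothetical placeholders, not the actual Bing 2.0 contract; the real request format is in the official API documentation.

        // Minimal sketch of querying an XML-over-HTTP search API via GET.
        // The URL, query parameters and element names are hypothetical placeholders.
        using System;
        using System.Net;
        using System.Xml;

        class SearchApiDemo
        {
            static void Main()
            {
                string appId = "YOUR_APP_ID";                        // assumption: API key issued by the service
                string query = Uri.EscapeDataString("sushi");
                string url = "http://api.example.com/search?AppId=" + appId +
                             "&Sources=Web&Query=" + query;          // hypothetical endpoint and parameters

                using (WebClient client = new WebClient())
                {
                    string xml = client.DownloadString(url);         // issue the GET, read the XML body

                    XmlDocument doc = new XmlDocument();
                    doc.LoadXml(xml);
                    // Element names depend on the service's schema; "Result/Title" is illustrative only.
                    foreach (XmlNode node in doc.SelectNodes("//Result/Title"))
                        Console.WriteLine(node.InnerText);
                }
            }
        }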

    Read the article

  • Custom Database integration with MOSS 2007

    - by Bob
    Hopefully someone has been down this road before and can offer some sound advice as to which direction I should take. I am currently involved in a project in which we will be utilizing a custom database to store data extracted from Excel files based on pre-established templates (to maintain consistency). We currently have a process (written in C#.NET 2008) that can extract the necessary data from the spreadsheets and import it into our custom database. What I am primarily interested in is figuring out the best method for integrating that process with our portal. What I would like to do is let SharePoint keep track of the metadata about the spreadsheet itself and let the custom database keep track of the data contained within the spreadsheet. So, one thing I need is a way to link spreadsheets from SharePoint to the custom database and vice versa. As these spreadsheets will be updated periodically, I need a tried-and-true way of ensuring that the data remains synchronized between SharePoint and the custom database. I am also interested in finding out how to use the data from the custom database to create reports within the SharePoint portal. Any and all information will be greatly appreciated.

    Read the article

  • IList<Item> Collection Class accessing database

    - by Mike
    Hi, I have a database with Users, and Users have Items. These Items can change actively. How do you access the items in a collection-type format? For the user, I fill all the user properties at the time of instantiation. If I load the user's items at instantiation time and the items then change, they will hold old data. I was thinking maybe I need an ItemCollection class and have that as a field/property that is part of the user class, so that to traverse all the user's items I could use a foreach loop. So, my question is: what is the best practice/best way of accessing the items from a database using some sort of collection? On accessing a particular Item, it needs to get the latest database information, and when the user does a foreach loop, the latest item information must be available. I.e., what I'm trying to do:

        Console.WriteLine(User.Items[3].ID); // returns 5

        // This updates the item information and saves it to the database.
        User.Items[3].ID = 13;

        // Add a new item to the database.
        User.Items.Add(new Item { id = 17 });

        foreach (Item item in User.Items)
        {
            // This would traverse all items in the database,
            // not some cached copy made at the time the user was instantiated.
        }
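    One possible shape for such a collection is sketched below: the indexer, Add and enumerator all read and write through to the database on each access, so nothing is cached at instantiation time. IItemStore and its methods are hypothetical stand-ins for whatever data-access layer the application actually uses.

        // Sketch only: a collection that always reads through to the database.
        // IItemStore is a hypothetical placeholder for the real DAL.
        using System.Collections;
        using System.Collections.Generic;

        public class Item { public int Id { get; set; } }

        public interface IItemStore
        {
            Item GetItemByPosition(int userId, int index);
            void InsertItem(int userId, Item item);
            IEnumerable<Item> GetItemsForUser(int userId);
        }

        public class ItemCollection : IEnumerable<Item>
        {
            private readonly int userId;
            private readonly IItemStore store;

            public ItemCollection(int userId, IItemStore store)
            {
                this.userId = userId;
                this.store = store;
            }

            // Each indexer access round-trips to the database, so data is never stale.
            public Item this[int index]
            {
                get { return store.GetItemByPosition(userId, index); }
            }

            public void Add(Item item)
            {
                store.InsertItem(userId, item);   // write straight through
            }

            public IEnumerator<Item> GetEnumerator()
            {
                // Query the current rows at enumeration time, not at construction time.
                foreach (Item item in store.GetItemsForUser(userId))
                    yield return item;
            }

            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
        }

    The trade-off is one round-trip per access; if that proves too chatty, a short-lived cache with an explicit refresh method is the usual middle ground.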

    Read the article

  • Connecting to database on web host in Visual Studio

    - by Anders Svensson
    I have a web site developed locally with a local SQL Server database. I also have a web host that provides one SQL Server database for my site. Now I want to deploy the application, and I would like to be able to manage the remote database from the Server Explorer in Visual Studio. I have the connection string used in the application, which works fine for adding, say, a datasource to a control, etc. But I don't know if there's any way to use it to connect to the database inside the Server Explorer so that I can add tables, etc. I have read that you're supposed to be able to do this instead of using SQL Server Management Studio, but I haven't read anything about how to connect to the remote database with it. What I have tried so far is this: I selected Add Database in Server Explorer. This brings up first a dialog where I choose SQL Server, and then a dialog where I can set the Server name (which I tried filling with the IP address in the connection string below) and Authentication (where I chose SQL Server Authentication, with the user id and password from below). But when I test the connection it fails. Here's the connection string, which works fine when used for datasources in the application (obviously with a different user name and password): Any help appreciated!
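    As a sanity check, it can help to try the same server address and credentials in a bare SqlConnection outside Server Explorer; if that fails too, the problem is connectivity rather than the Visual Studio dialog. The values below are placeholders, not real host details.

        // Minimal connectivity check outside Server Explorer.
        // Server address, database name and credentials are placeholders.
        using System;
        using System.Data.SqlClient;

        class ConnectionTest
        {
            static void Main()
            {
                string connStr = "Data Source=203.0.113.10;Initial Catalog=mydb;" +
                                 "User ID=myuser;Password=mypassword;";   // placeholder values

                try
                {
                    using (SqlConnection conn = new SqlConnection(connStr))
                    {
                        conn.Open();                                      // throws if unreachable or blocked
                        Console.WriteLine("Connected, server version: " + conn.ServerVersion);
                    }
                }
                catch (SqlException ex)
                {
                    // A firewall blocking port 1433, or a host that disallows remote
                    // connections, typically surfaces here.
                    Console.WriteLine("Failed: " + ex.Message);
                }
            }
        }

    If this succeeds while Server Explorer still fails, the problem is in the dialog settings; many shared hosts also expect a "server,port" value or a specific instance name in the Server name box.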

    Read the article

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices chained together for one store, I would love to have a local backend database on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma about which local database is going to be the most suitable for me. Here are some notes to help point me in the right direction: it should be light (my POS devices are usually old and suffer performance-wise); it should be free (I have a lot of devices and I do not want additional cost beside the main SQL Server); and one day I would love to try porting all of it to Mono and Linux. Here is what I've researched so far: simple XML (light, but I am afraid of performance; my main items table averages 10K records); SQL Server Express (I am afraid my POS devices have hardware too poor for it, and it is also hard to install and configure on each device); the lesser-known Advantage Database Server, which has a free distribution of its offline ADT system; DBF with an extended library (respect for good old DBFs, but that era is behind me, along with Clipper); MS Access; and SQLite (my favorite for now, but I am afraid of how it is going to pair with MS SQL - do they have the same data types?). I know that there is a lot of subjective data on SO, but can someone at least recommend some other lite database systems, or things I should pay the most attention to before I choose a database?
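    One pattern that may fit the off-line scenario, sketched below under assumptions: write the data access against the provider-neutral System.Data.Common classes, so the same code can target either SQL Server or a local embedded engine by swapping the provider name and connection string. The "System.Data.SQLite" invariant name assumes that ADO.NET provider is installed and registered; treat it as an assumption.

        // Sketch: provider-agnostic data access so the POS app can fall back
        // from the central SQL Server to a local embedded database.
        // "System.Data.SQLite" assumes that ADO.NET provider is registered.
        using System.Data.Common;

        class DataAccess
        {
            private readonly DbProviderFactory factory;
            private readonly string connectionString;

            public DataAccess(bool mainServerAvailable)
            {
                if (mainServerAvailable)
                {
                    factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
                    connectionString = "Data Source=central-server;Initial Catalog=pos;Integrated Security=True;";
                }
                else
                {
                    factory = DbProviderFactories.GetFactory("System.Data.SQLite");  // assumption: provider installed
                    connectionString = "Data Source=pos-local.db;";
                }
            }

            public int CountItems()
            {
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = connectionString;
                    conn.Open();
                    using (DbCommand cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = "SELECT COUNT(*) FROM Items";  // keep SQL to the common dialect subset
                        return System.Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
        }

    Keeping the SQL to a dialect-neutral subset sidesteps most of the data-type pairing worries, at the cost of forgoing engine-specific features.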

    Read the article

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single user and multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single user setup and, of course, the server in the multiuser setup. Past releases have been based on MSDE, but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign, I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application, so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered: SQL Server / SQL Server Compact Edition; and Firebird (the same DB engine is available in two different server modes and as an embedded DLL). Each has its own merits, but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose, so easy configuration and deployment hold more weight than performance.
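    Given that the database layer is already abstracted, the per-engine implementations can hang off one narrow seam chosen at startup. The sketch below is illustrative only; the interface name and members are invented for illustration, not taken from the application in question.

        // Illustrative only: one narrow interface, one implementation per engine.
        using System.Collections.Generic;

        public interface IDataStore
        {
            void SaveCustomer(string name);
            IList<string> LoadCustomerNames();
        }

        // Multiuser setup: full SQL Server.
        public class SqlServerStore : IDataStore
        {
            public void SaveCustomer(string name) { /* SqlClient code here */ }
            public IList<string> LoadCustomerNames() { return new List<string>(); }
        }

        // Single-user setup: embedded engine (SQL Server CE, Firebird Embedded, ...).
        public class EmbeddedStore : IDataStore
        {
            public void SaveCustomer(string name) { /* embedded-provider code here */ }
            public IList<string> LoadCustomerNames() { return new List<string>(); }
        }

        class Program
        {
            static void Main(string[] args)
            {
                // Pick the implementation once at startup, e.g. from a config switch.
                bool multiUser = args.Length > 0 && args[0] == "/server";
                IDataStore store = multiUser ? (IDataStore)new SqlServerStore()
                                             : new EmbeddedStore();
                store.SaveCustomer("example");
            }
        }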

    Read the article

  • connecting to secure database from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question relates to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host, in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available that allow my .NET web service to connect to the private network in as painless and transparent a way as possible. The option of switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, each with its own tables. So in other words, the databases our MySQL server contains look like:

        main-web-app-db (our web app, containing tables for users' account info, billing, etc)
        user_1_db (baseball_cards_table)
        user_2_db (recipes_table)
        ....

    And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data in and then decide they want to change a column, we'd do an "alter table ....". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling this.

    1) My first concern is that switching databases on every request - first to our main app's database for authentication etc., and then to the user's personal database - is going to be inefficient.

    2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more?

    3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how this affects things like replication and other methods of scaling.

    To me, it seems like MySQL wasn't built to be used in this way, but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives, because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?
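    For comparison, the document-store shape of the same problem might look like the sketch below, using the official MongoDB C# driver. The single-collection layout with user_id and collection fields is one possible design for illustration, not a recommendation.

        // Sketch: user-defined records as documents in a single collection,
        // instead of one MySQL database per user. Collection and field names
        // are illustrative design choices, not a prescribed layout.
        using MongoDB.Bson;
        using MongoDB.Driver;

        class DocumentStoreSketch
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var db = client.GetDatabase("main_web_app");
                var records = db.GetCollection<BsonDocument>("user_records");

                // Two users with completely different "schemas" share one collection.
                records.InsertOne(new BsonDocument
                {
                    { "user_id", 1 },
                    { "collection", "baseball_cards" },
                    { "player", "Ken Griffey Jr." },
                    { "year", 1989 }
                });
                records.InsertOne(new BsonDocument
                {
                    { "user_id", 2 },
                    { "collection", "recipes" },
                    { "title", "Pad Thai" },
                    { "ingredients", new BsonArray { "rice noodles", "tamarind" } }
                });

                // "Altering a column" is just writing documents with a new field;
                // no ALTER TABLE, and no per-user database switching.
                var filter = Builders<BsonDocument>.Filter.Eq("user_id", 1);
                long count = records.CountDocuments(filter);
            }
        }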

    Read the article

  • connecting to secure database on private network from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question relates to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host, in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available that allow my .NET web service to connect to the private network in as painless and transparent a way as possible. [edit] The web service will also be available to the retail website in order for it to look up product info as well as allocate stock transfers to the same SQL Server db; it will therefore be located on the same domain as the retail site. The option of switching the database onto public hosting is not feasible, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Modifying SQL Database on Shared Hosting

    - by apocalypse9
    I have a live database on a shared hosting server. I am making some major changes to my site's code and I would like to fix some stupid mistakes I made in initially designing the database. These changes involve altering the size of a large number of fields and enforcing referential integrity between tables properly. I would like to make the changes on both my local test server and the remote server if possible. I should note that while I'm fairly comfortable with writing complex queries to handle data, I have very little experience modifying database structure without a graphical interface. I can access the remote database in the Visual Studio database explorer, but I can not use that for anything other than data manipulation. I installed SQL Server Management Studio Express last night and after 40+ crashes I gave up - I couldn't even patch the damn thing. The remote server is SQL 2005, and the MyLittleAdmin web interface is available. So my question is: what is the best way to accomplish these changes? Is there a graphical interface I can use on the remote server? If not, is there an easy way to copy the database to my local machine, fix it, and re-upload it? Finally, if none of the above are viable, does anyone have links to decent info on fixing referential integrity via queries? Sorry for the somewhat general question - I feel like I am making this far harder than it should be, but after searching and trying all night I haven't gotten anywhere. Thanks in advance for the help. I really appreciate it. ...Also, does anyone have a time machine I can borrow? I need to go kick my past self's ass for this.

    Read the article

  • Database choices

    - by flobadob
    I have a prickly design issue regarding the choice of database technologies to use for a group of new applications. The final suite of applications would have the following database requirements: central databases (more than one database) using MySQL (it must be MySQL due to justhost.com); an application to be written which accesses the multiple MySQL databases on the web host; and this application will also write to a local serverless database (SQLite/Firebird/VistaDB/whatever). Different flavors of this application will be created for Windows (.NET), Windows Mobile, Android if possible, and iPhone if possible. So, the design task is to minimise the quantity of code needed to achieve this. This is going to be tricky, since the languages used are already C# / Java (Android) and Objective-C (iPhone). I'm not too worried about that, but can the work required to implement the various database access layers be minimised? The serverless database will hold similar data to the MySQL server, so some kind of inheritance in the DAL would be useful. Looking at Hibernate/NHibernate, and there is LINQ-to-whatever. So many choices!

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same - too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let's see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        (
            NAME = VLF_Test,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
            SIZE = 100,
            MAXSIZE = 500,
            FILEGROWTH = 50
        )
        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5MB,
            MAXSIZE = 250MB,
            FILEGROWTH = 5MB
        );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 1GB,
            MAXSIZE = 25GB,
            FILEGROWTH = 1GB
        );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5GB,
            MAXSIZE = 250GB,
            FILEGROWTH = 5GB
        );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria - what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

        - If the file growth is lower than 64MB then 4 VLFs are created
        - If the growth is between 64MB and 1GB then 8 VLFs are created
        - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let's add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data so, a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a', 2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a', 2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress, so that you don't affect system users. The steps are:

        1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
        2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
        3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE (NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * - If you don't know the file name of your log file, then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.

    Hassle free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
        MSDN - http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
        Kimberly Tripp from SQLSkills.com - http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
        Thomas LaRock at Simple-Talk.com - http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Database testing and refactoring tools and examples. This is the third and last part of the series, and in it we'll take a look at tools we can test and refactor with, plus an example of both.

    Tools of the trade

    First, a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and for applications using the database should all be in one place, using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test system complexity. Let's take a look at some testing tools.

    1. NUnit, xUnit, MbUnit - All three are .NET testing frameworks meant to unit test .NET applications, but we can test databases with them just fine. I use NUnit because I've always used it for work and personal projects. One day this might change, so the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much problem.

    2. TSQLUnit - As much as this framework is helpful for the non-C#-savvy folks, I don't like it for the reason I stated above: it lives in the database and thus fragments the testing infrastructure. Also it appears that it's not being actively developed anymore.

    3. DbFit - I haven't had the pleasure of trying this tool just yet, but it's on my to-do list. From what I've read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it.

    4. Red Gate SQL Refactor and Apex SQL Refactor - Neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worthwhile looking into. I've only used Red Gate's Refactor and was quite impressed with it.

    5. Reverting the database state - I've talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven't changed my mind. Also make sure to read the comments, as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate.

    Testing and refactoring example

    We'll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We'll use a single table PhoneNumbers with ID and Phone columns. Then we'll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we'll remove the original Phone column. Then we'll check how the view behaves with tests in NUnit. The comments in the code explain the problem, so be sure to read them. I'm assuming you know NUnit and C#.

    T-SQL code:

        USE tempdb
        GO
        CREATE TABLE PhoneNumbers
        (
            ID INT IDENTITY(1,1),
            Phone VARCHAR(20)
        )
        GO
        INSERT INTO PhoneNumbers(Phone)
        SELECT '111 222333 444' UNION ALL
        SELECT '555 666777 888'
        GO
        -- notice we don't have WITH SCHEMABINDING
        CREATE VIEW vPhoneNumbers
        AS
        SELECT * FROM PhoneNumbers
        GO
        -- Let's take a look at what the view returns
        -- If we add new columns and rows, both tests will fail
        SELECT *
        FROM vPhoneNumbers
        GO
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- refactor to split Phone column into 3 parts
        ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
        ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
        ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
        GO
        -- update the new columns
        UPDATE PhoneNumbers
        SET Prefix = LEFT(Phone, 3),
            Number = SUBSTRING(Phone, 5, 6),
            Suffix = RIGHT(Phone, 3)
        GO
        -- remove the old column
        ALTER TABLE PhoneNumbers DROP COLUMN Phone
        GO
        -- This returns unexpected results!
        -- It returns 2 columns, ID and Phone, even though
        -- we don't have a Phone column anymore.
        -- Notice that the data is from the Prefix column.
        -- This is a danger of SELECT *
        SELECT *
        FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will FAIL

        -- for a fix we have to call sp_refreshview
        -- to refresh the view definition
        EXEC sp_refreshview 'vPhoneNumbers'
        -- after the refresh the view returns 4 columns;
        -- this breaks the input/output behavior of the database,
        -- which refactoring MUST NOT do
        SELECT *
        FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will FAIL
        -- DoesViewReturnCorrectData test will FAIL

        -- to fix the input/output behavior change problem
        -- we have to concat the 3 columns into one named Phone
        ALTER VIEW vPhoneNumbers
        AS
        SELECT ID, Prefix + ' ' + Number + ' ' + Suffix AS Phone
        FROM PhoneNumbers
        GO
        -- now it works as expected
        SELECT *
        FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- clean up
        DROP VIEW vPhoneNumbers
        DROP TABLE PhoneNumbers

    C# test code:

        // usings needed for the tests
        using System.Data;
        using System.Data.SqlClient;
        using NUnit.Framework;

        [Test]
        public void DoesViewReturnCorrectColumns()
        {
            // conn is a valid SqlConnection to the server's tempdb
            // note the SET FMTONLY ON, with which we return only schema and no data
            using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

                // test returned schema: number of columns, column names and data types
                Assert.AreEqual(dt.Columns.Count, 2);
                Assert.AreEqual(dt.Columns[0].Caption, "ID");
                Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
                Assert.AreEqual(dt.Columns[1].Caption, "Phone");
                Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
            }
        }

        [Test]
        public void DoesViewReturnCorrectData()
        {
            // conn is a valid SqlConnection to the server's tempdb
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

                // test returned data: number of rows and their values
                Assert.AreEqual(dt.Rows.Count, 2);
                Assert.AreEqual(dt.Rows[0]["ID"], 1);
                Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
                Assert.AreEqual(dt.Rows[1]["ID"], 2);
                Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
            }
        }

    With this simple example we've seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn't have tests. Imagine what would happen if some outside process would depend on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and its intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions - "How many days have you had this?", "Is there any pattern?", "Did you get drenched in rain?", "Do you have any other symptom?" - and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find out whether the server's data / files are in a consistent state: the DBCC commands.

    Back to SQL Server

    In real life, the database consistency check is one of the critical operations a DBA generally doesn't give much priority. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out if everything is right or not? My common answer to all of them is - look at the bottom of the CHECKDB (or CHECKTABLE) output and look for the line below.

        CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a "good sign", because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

        CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
        repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the above option (repair_allow_data_loss), namely that we would lose the data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one which can show the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose "Database Consistency History". The information in this report is picked from the default trace. If the default trace is disabled, or there has been no CHECKDB run, or the information is no longer in the default trace (because it rolled over), we would get a report like the one below. As the report says very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of the databases and did the steps below: run CHECKDB so that errors are reported; fix the corruption by losing the data using the repair option; run CHECKDB again to check that the corruption is cleared. After that I launched the report, and below is what we would see. If you are lazy like me and don't want to run the report manually for each database, then the query below is handy: it provides the same report for all databases. This is the query the report runs behind the scenes; all I have done is remove the filter on database name (commented out at the end).

        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6, PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9, PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6, PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6, PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8, PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
        -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it does is read the trace files and parse entries like the one below, extracting the underlined words.

        DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL
    Tagged: SQL Reports

    Read the article

  • Local LINQtoSQL Database For Your Windows Phone 7 Application

    - by Tim Murphy
    There aren't many applications that are of value without having some form of data store. In Windows Phone development we have a few options. You can store text directly to isolated storage. You can also use a number of third party libraries to create or mimic databases in isolated storage. With Mango we gained the ability to have a native .NET database approach which uses LINQ to SQL. In this article I will try to bring together the components needed to implement this last type of data store and fill in some of the blanks that I think other articles have left out.

    Defining A Database

    The first thing you are going to need to do is define classes that represent your tables, and a data context class that is used as the overall database definition. The table class consists of column definitions, as you would expect. They can have relationships and constraints as with any relational DBMS. Below is an example of a table definition.

    First you will need to add some assembly references to the code file.

        using System.ComponentModel;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

    You can then add the table class and its associated columns. It needs to implement INotifyPropertyChanged and INotifyPropertyChanging. Each level of the class needs to be decorated with the attribute appropriate for that part of the definition; where the class represents the table, the properties represent the columns. In this example you will see that the column is marked as a primary key, not nullable, with an auto-generated value. You will also notice that the column property's set method uses the NotifyPropertyChanging and NotifyPropertyChanged methods, in order to make sure that the proper events are fired.

        [Table]
        public class MyTable : INotifyPropertyChanged, INotifyPropertyChanging
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private void NotifyPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public event PropertyChangingEventHandler PropertyChanging;
            private void NotifyPropertyChanging(string propertyName)
            {
                if (PropertyChanging != null)
                {
                    PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
                }
            }

            private int _TableKey;
            [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity", CanBeNull = false, AutoSync = AutoSync.OnInsert)]
            public int TableKey
            {
                get { return _TableKey; }
                set
                {
                    NotifyPropertyChanging("TableKey");
                    _TableKey = value;
                    NotifyPropertyChanged("TableKey");
                }
            }
        }

    The last part of the database definition that needs to be created is the data context. This is a simple class that takes an isolated storage location connection string in its constructor and then instantiates tables as public properties.

        public class MyDataContext : DataContext
        {
            public MyDataContext(string connectionString) : base(connectionString)
            {
                MyRecords = this.GetTable<MyTable>();
            }

            public Table<MyTable> MyRecords;
        }

    Creating A New Database Instance

    Now that we have a database definition, it is time to create an instance of the data context within our Windows Phone app. When your app fires up, it should check if the database already exists and create an instance if it does not. I would suggest that this be part of the constructor of your ViewModel.

        db = new MyDataContext(connectionString);
        if (!db.DatabaseExists())
        {
            db.CreateDatabase();
        }

    The next thing you have to know is how the connection string for isolated storage should be constructed. The main sticking point I have found is that the database cannot be created unless the file mode is read/write. You may have different connection strings, but the initial one needs to be similar to the following.

        string connString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";

    Using your database

    Now that you have done all the up-front work, it is time to put the database to use. To make your life a little easier and keep proper separation between your view and your viewmodel, you should add a couple of methods to the viewmodel. These will do the CRUD work of your application. What you will notice is that the SubmitChanges method is the secret sauce in all of the methods that change data.

        private MyDataContext myDb;

        private ObservableCollection<MyTable> _viewRecords;
        public ObservableCollection<MyTable> ViewRecords
        {
            get { return _viewRecords; }
            set
            {
                _viewRecords = value;
                NotifyPropertyChanged("ViewRecords");
            }
        }

        public void LoadMedstarDbData()
        {
            var tempItems = from MyTable myRecord in myDb.LocalScans
                            select myRecord;
            ViewRecords = new ObservableCollection<MyTable>(tempItems);
        }

        public void SaveChangesToDb()
        {
            myDb.SubmitChanges();
        }

        public void AddMyTableItem(MyTable newScan)
        {
            myDb.LocalScans.InsertOnSubmit(newScan);
            myDb.SubmitChanges();
        }

        public void DeleteMyTableItem(MyTable newScan)
        {
            myDb.LocalScans.DeleteOnSubmit(newScan);
            myDb.SubmitChanges();
        }

    Updating an existing database

    What happens when you need to change the structure of your database? Unfortunately you have to add code to your application that checks the version of the database, which over time will create some pollution in your code base. On the other hand, it does give you control of the update. In this example you will see the DatabaseSchemaUpdater in action. Assuming we added a "Notes" field to the MyTable structure, the following code will check if the database is the latest version and add the field if it isn't.

        if (!myDb.DatabaseExists())
        {
            myDb.CreateDatabase();
        }
        else
        {
            DatabaseSchemaUpdater dbUpdater = myDb.CreateDatabaseSchemaUpdater();
            if (dbUpdater.DatabaseSchemaVersion < 2)
            {
                dbUpdater.AddColumn<MyTable>("Notes");
                dbUpdater.DatabaseSchemaVersion = 2;
                dbUpdater.Execute();
            }
        }

    Summary

    This approach does take a fairly large amount of work, but I think the end product is robust and very native for .NET developers. It turns out to be worth the investment.

    del.icio.us Tags: Windows Phone, Windows Phone 7, LINQ to SQL, LINQ, Database, Isolated Storage

    Read the article

  • User defined datatypes CANNOT be returned in web service in Jboss 5.0.1

    - by user1503117
    I am using JBoss 5.0.1 and JDK 1.6.0 update 31, and I am implementing an EJB as a web service. The web service method returns an array of JavaBean objects (a BenefitLevel[] array in my example). When executed in JBoss it throws the following exception:

    08:57:08,552 ERROR [ServiceProxy] Service error
    javax.xml.rpc.ServiceException: Cannot create proxy
        at org.jboss.ws.core.jaxrpc.client.ServiceImpl.getPort(ServiceImpl.java:359)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.jboss.ws.core.jaxrpc.client.ServiceProxy.invoke(ServiceProxy.java:127)
        at $Proxy105.getCarrierWSSEIPort(Unknown Source)
        at org.apache.jsp.index_jsp._jspService(index_jsp.java:92)
        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:322)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:249)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
        at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
        at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
        at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:601)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.IllegalStateException: Cannot synchronize to any of these methods:
    public abstract stubs.BenefitLevel[] stubs.CarrierWSSEI.getActiveBenData() throws java.rmi.RemoteException
    OperationMetaData:
        qname={urn:CarrierWS/wsdl}getActiveBenData
        javaName=getActiveBenData
        style=rpc/literal
        oneWay=false
        soapAction=
    ReturnMetaData:
        xmlName=result
        partName=result
        xmlType={urn:CarrierWS/types/arrays/com/test/cas/carrier/plan/info}BenefitLevelArray
        javaType=com.benefitpartnersinc.cas.carrier.plan.info.BenefitLevel[]
        mode=OUT
        inHeader=false
        index=-1
        at org.jboss.ws.metadata.umdm.OperationMetaData.eagerInitialize(OperationMetaData.java:491)
        at org.jboss.ws.metadata.umdm.EndpointMetaData.eagerInitializeOperations(EndpointMetaData.java:557)
        at org.jboss.ws.metadata.umdm.EndpointMetaData.initializeInternal(EndpointMetaData.java:541)
        at org.jboss.ws.metadata.umdm.EndpointMetaData.setServiceEndpointInterfaceName(EndpointMetaData.java:220)
        at org.jboss.ws.core.jaxrpc.client.ServiceImpl.getPort(ServiceImpl.java:345)
        ... 33 more

    The same exception is then echoed frame by frame to STDERR (08:57:08,567 ERROR [STDERR] ...) with an identical stack trace; the only difference is that the ReturnMetaData there reports javaType=com.test.cas.carrier.plan.info.BenefitLevel[].

    My web client code is as follows:

    <%@page import="java.util.Hashtable"%>
    <%@page import="javax.naming.*,com.q4.*,javax.xml.rpc.Stub,stubs.CarrierWS,stubs.CarrierWSSEI,stubs.CarrierWSSEI_Impl"%>
    <%@page contentType="text/html" pageEncoding="UTF-8"%>
    <!DOCTYPE html>
    <html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>JSP Page</title>
    </head>
    <body>
        <h1>Hello World!</h1>
        <%
        try {
            // JNDI lookup of the JAX-RPC service
            InitialContext ic = new InitialContext();
            CarrierWS carrierws = (CarrierWS) ic.lookup("java:comp/env/service/CarrierWS");
            out.println("========================" + carrierws);

            // Obtaining the port proxy is where the ServiceException above is thrown
            CarrierWSSEI sei = carrierws.getCarrierWSSEIPort();
            out.println("Invoking the service please wait ............." + carrierws.getCarrierWSSEIPort());
            ((Stub) sei)._setProperty(Stub.ENDPOINT_ADDRESS_PROPERTY, "http://localhost:8080/TestWS3WAR/CarrierWS");
            out.println("Invoking the service please wait ............." + sei.getActiveBenData().length);
        } catch (Exception e) {
            out.println("Exception occurred : " + e.getMessage());
            e.printStackTrace();
        }
        %>
    </body>
    </html>

    Please help me work out where I am going wrong.

    Read the article

  • Advice on software/database design to avoid using cursors when updating the database

    - by Remnant
    I have a database that logs when an employee has attended a course and when they are next due to attend it (courses tend to be annual). As an example, the following employee attended course '1' on 1st Jan 2010 and, as the course is annual, is due to attend next on 1st Jan 2011. As today is 20th May 2010, the course status reads 'Complete', i.e. they have done the course and do not need to do it again until next year:

    EmployeeID  CourseID  AttendanceDate  DueDate     Status
    123456      1         01/01/2010      01/01/2011  Complete

    I calculate the DueDate in SQL when I update the employee's record, e.g. DueDate = AttendanceDate + CourseFrequency (I pull the course frequency from a separate table). In my web-based app (ASP.NET MVC) I pull back this data for all employees and display it in a grid-like format for HR managers to review, which lets HR work out who needs to go on courses.

    The issue I have is as follows. Taking the example above, suppose today is 2nd Jan 2011. Employee 123456 is now overdue for the course, and I would like to set the Status to 'Incomplete' so that the HR manager can see they need to action this, i.e. get the employee on the course. I could set up an overnight database job (rather than a trigger, which fires on data changes and not on a schedule) to update the Status field for all employees based on the current date. From what I have read, though, I would need a cursor to loop over each row and amend the status, and cursors are considered bad practice / inefficient, or at least something to avoid if you can. Alternatively, I could compute the Status in my C# code after I have pulled the data back from the database and before I display it on screen. The trouble with that is the Status in the database would not necessarily match what is shown on screen, which just feels plain wrong to me. Does anybody have advice on the best-practice approach to this? If it helps: were I to use a cursor, I doubt I would be looping over more than 1000 records at any given time. Maybe the volume is small enough that a cursor is okay?
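
    For what it's worth, the set-based alternative I have in mind is sketched below; the table and column names are illustrative (taken from the sample row above), not my real schema.

    -- Overnight job (e.g. a SQL Agent job): one set-based statement, no cursor.
    -- Assumes a table EmployeeCourse(EmployeeID, CourseID, AttendanceDate, DueDate, Status).
    UPDATE EmployeeCourse
    SET    Status = 'Incomplete'
    WHERE  Status = 'Complete'
      AND  DueDate < GETDATE();

    -- Or derive the status at read time, so the stored value and the screen
    -- can never disagree:
    SELECT EmployeeID, CourseID, AttendanceDate, DueDate,
           CASE WHEN DueDate < GETDATE() THEN 'Incomplete' ELSE Status END AS Status
    FROM   EmployeeCourse;

    Either statement works on the whole table in one pass, so at roughly 1000 rows the volume would be irrelevant.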

    Read the article

  • Square Peg Web: Gets you traffic where it matters most: your website!

    - by demetriusalwyn
    Have you decided to take your business online, or is your existing site not reaching its target audience? Come to Square Peg Web, where you will find what you need to take your business to new heights. The team at Square Peg Web are professionals who understand what you want and make sure you get it right. Our confidence stems from thousands of satisfied clients who keep referring friends and business associates to us, and we do not let our clients down. Many companies promise the sky, but how far does their work live up to those promises? We cannot speak for others; what we can say is that we pour all our ideas and effort into making your website rank among the top. Web hosting needs a personal touch, and Square Peg Web customizes everything to suit your requirements so that you do not have to look further.

    With Square Peg Web you get a host of features to help your business go viral. The product catalogue supports unlimited product options, variants, and properties, with price modifiers on each; unlimited customized input fields for your products; and customer-defined pricing. You can attach multiple product images with zoom features, and list a single product in several categories.

    Every sale of yours is important to you and to us, so each sale is tracked by product, along with a list of bestsellers that appeal to your audience. Other comprehensive statistics include searchable order data, an interface for shipments and order fulfilment, export of sales and customer data for use in a spreadsheet, and the ability to export orders to QuickBooks format.

    With Square Peg Web the admin panel is a lot simpler: administrative access is completely password protected, changes take effect in real time, and you have absolute control of the cart from anywhere in the world using your web browser; the icing on the cake is the unlimited number of admin accounts that can be created for you.

    Square Peg Web offers you a world of experience, from online marketing websites to e-commerce and from customized applications to community-oriented sites. Projects in the Square Peg Web portfolio include online marketing web sites, e-commerce web sites, customized web applications, blog design and programming, video sharing and download web sites, online advertisements, Flash animation, customer and product support web sites, web site re-design and planning, and complete information architecture.

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >