Search Results

Search found 31356 results on 1255 pages for 'database backups'.

  • Persisting objects to a database using a loop

    - by ChaoticLoki
    I have a form that returns multiple values, and I would like to store each value in a database. I created a test form for this purpose:

        <form method="post" action="{{ path('submit_exercise', {'page': 1}) }}">
            <input type="hidden" name="answers[]" value="Answer 1" />
            <input type="hidden" name="answers[]" value="Answer 2" />
            <input type="hidden" name="answers[]" value="Answer 3" />
            <input type="hidden" name="answers[]" value="Answer 4" />
            <input type="hidden" name="answers[]" value="Answer 5" />
            <input type="hidden" name="answers[]" value="Answer 6" />
            <input type="hidden" name="answers[]" value="Answer 7" />
            <input type="submit" name="submit" />
        </form>
        </body>
        </html>

    My submitAnswersAction is currently written like so:

        public function submitAnswersAction($page)
        {
            //get submitted data
            $data = $this->get('request')->request->all();
            $answers = $data['answers'];

            //get student ID
            $user = $this->get('security.context')->getToken()->getUser();
            $studentID = $user->getId();

            //Get Current time
            $currentTime = new \DateTime(date("Y-m-d H:m:s"));

            //var_dump($answers);
            //var_dump($studentID);
            //var_dump($currentTime);

            for($index = 0; $index < count($answers); $index++) {
                /*echo "Question ". ($index + 1) ."<br />";
                echo "Student ID: ". $studentID."<br />";
                echo "Page Number: $page <br />";
                echo "Answer: $answers[$index]"."<br />";
                echo "<br />";*/

                $studentAnswer = new StudentAnswer();
                $studentAnswer->setStudentID($studentID);
                $studentAnswer->setPageID($page);
                $studentAnswer->setQuestionID($index+1);
                $studentAnswer->setAnswer($answers[$index]);
                $studentAnswer->setDateCreated($currentTime);
                $studentAnswer->setReadStatus(0);

                $database = $this->getDoctrine()->getManager();
                $database->persist($studentAnswer);
                $database->flush();
            }

            return new Response('Answers saved for Student: '.$user->getFullName().' for page: '.$page);
        }

    When I do a var_dump, everything seems to be associated correctly: the answers array is populated with the right data, and so is every other variable. My problem is actually persisting it to the database. When run, it fails with this error, which looks to me as if it doesn't know which values to put into the row:

        An exception occurred while executing 'INSERT INTO Student_Answer (student_id, page_id, question_id, answer, read, date_created) VALUES (?, ?, ?, ?, ?, ?)' with params {"1":2,"2":"1","3":1,"4":"Answer 1","5":0,"6":"2012-12-11 12:12:20"}:

    Any help would be greatly appreciated, as this is a personal project to help me understand web development a bit more.

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and it provides mechanisms for how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table and writing typesafe queries. There are several well-known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL databases were designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is better known for what it is not rather than what it is. The basic principles of NoSQL databases are:

    - No need to have a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of the data at any time without downtime or reduced service levels. There are also no joins happening on the server, because there is no structure and thus no relations between entities.
    - Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. These databases provide eventual consistency and/or transactions restricted to single items, with more focus on CRUD.
    - Not restricted to SQL for accessing the information stored in the backing database.
    - Designed to scale out (horizontally) instead of scaling up (vertically). This is important knowing that databases, and everything else as well, are moving into the cloud. An RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart.
    - Unlike an RDBMS, which requires a separate caching tier, most NoSQL databases come with integrated caching.
    - Designed for less management; simpler data models lead to lower administration as well.

    There are primarily three types of NoSQL databases:

    - Key-Value stores (e.g. Cassandra and Riak)
    - Document databases (MongoDB or CouchDB)
    - Graph databases (Neo4J)

    You may think NoSQL is a panacea, but as mentioned above they are not meant to replace the mainstream databases, and here is why:

    - RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about it.
    - Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises.
    - NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data, but that was not the original intention of NoSQL databases, which are lacking in that area. Generating any meaningful information beyond CRUD requires extensive programming.
    - Not suited for complex transactions, such as banking systems or other highly transactional applications requiring 2-phase commit.
    - SQL cannot be used with NoSQL databases, and writing simple queries can be involved.

    Enough talking, let's take a look at some code. This blog has published multiple posts on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

        arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
        ./mongod --help for help and startup options
        Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
        Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
        Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
        Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
        Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

    The default directory for the database is /data/db and needs to be created as:

        sudo mkdir -p /data/db/
        sudo chown `id -u` /data/db

    You can specify a different directory using the "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like:

        <h1>Add a new movie</h1>
        <h:form>
            Name: <h:inputText value="#{movie.name}" size="20"/><br/>
            Year: <h:inputText value="#{movie.year}" size="6"/><br/>
            Language: <h:inputText value="#{movie.language}" size="20"/><br/>
            <h:commandButton actionListener="#{movieSessionBean.createMovie}"
                             action="show" title="Add" value="submit"/>
        </h:form>

    This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like:

        <head>
            <title><h1>List of movies</h1></title>
        </head>
        <body>
            <h:form>
                <h:dataTable value="#{movieSessionBean.movies}" var="m">
                    <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
                    <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
                    <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
                </h:dataTable>
            </h:form>

    This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
    - Try the interactive online shell.
    - The cookbook provides common ways of using MongoDB.

    In terms of this project, here are some tasks that can be tried:

    - Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different?
    - MongoDB uses the "BSONObject" class for its JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated. This would make the fromXXX and toXXX methods redundant.
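    Another easy experiment, not part of the original post, is a filtered query. A hedged sketch of an extra finder method for the MovieSessionBean above, using the same 2.6.3 driver API; the method name and the hard-coded field are illustrative assumptions:

        // Sketch only: rough equivalent of "SELECT * FROM movies WHERE language = ?"
        public List<Movie> getMoviesByLanguage(String language) {
            List<Movie> movies = new ArrayList<Movie>();
            DBCursor cur = movieColl.find(new BasicDBObject("language", language));
            for (DBObject dbo : cur.toArray()) {
                movies.add(Movie.fromDBObject(dbo));     // reuse the document-to-POJO converter
            }
            return movies;
        }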

    Read the article

  • How do I sell Oracle? It all starts with the database

    - by swiesner
    Partner sales reps often ask me how to start a conversation with their customers around Oracle. The first question should be, "Are you running an Oracle database?" Much of what we do at Oracle is intended to optimize our customers' investments in the Oracle database. Many of our acquisitions, new features in existing products, and new applications are designed to improve the experience of the 400,000 Oracle database customers worldwide. Once you find an Oracle database customer, the next set of questions is much easier, but depends on your expertise: License Management, Upsell, and Services. Let's start with License Management:

    - Have there been any changes to your infrastructure since you purchased the database licenses?
    - Have you upgraded the servers on which the Oracle database is running?
    - Has the number of users or employees increased since the last license purchase?

    A yes to any of these questions will lead to investigating correct licensing. The goal is to provide a "soft" license review. Oracle generally does not require any license keys to install our software, so we need to help our customers with compliance. Correct licensing is essential to managing costs and can be a great way to efficiently manage IT spend. You might want to contact your local Oracle Channel Manager or VAD for licensing assistance, or review the Software Investment Guide. Naturally, these questions can lead to upsell opportunities. If your customer has invested in Oracle technology to manage their data, that data is essential to running their business. We'll take a look at those upsell questions in a later blog post.

    Read the article

  • How to choose a light version of a database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. Since we have a lot of POS devices in a chain for one store, I would love to have a local backend database on each POS device. The scenario is as follows: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes back up. Now I am in a dilemma about which local database is going to be the most suitable for me. Here are some notes to help point me in the right direction:

    - It must be light: my POS devices are usually old and suffer from poor performance.
    - It must be free: I have a lot of devices and do not want additional costs beside the main SQL Server.
    - One day I would love to port all of this to Mono and Linux.

    Here is what I've researched so far:

    - Simple XML: light, but I am afraid of performance; my main items table averages 10K records.
    - SQL Server Express: I am afraid my POS devices have too little hardware for SQL Express, and it is also hard to install and configure on each device.
    - Advantage Database Server: less known, but it has a free distribution of its offline ADT system.
    - DBF with an extended library: respect for good old DBFs, but that era is behind me, along with Clipper.
    - MS Access.
    - SQLite: my favourite for now, but I am afraid of how it will pair with MS SQL Server: do they have the same data types?

    I know this question involves a lot of subjective data, but can someone at least recommend some other lite database systems, or things I should pay the most attention to before I choose a database?

    Read the article

  • Hibernate: Check if object exists/changed

    - by swalkner
    Assume I have a Person object with:

    - long id
    - String firstName
    - String lastName
    - String address

    I'm generating a Person object somewhere in my application. Now I'd like to check whether the person exists in the database (i.e. the firstname/lastname combination is in the database). If not, insert it. If yes, check whether the address is the same; if not, update the address. Of course, I can do several requests (first try to load the object by firstname/lastname, then, if it exists, compare the address), but isn't there a simpler, cleaner approach? I've got several different classes and do not want to have so many queries. I'd like to use annotations to say: firstname/lastname are effectively the primary key, check against them whether the object exists; address is the property to compare to see whether it stayed the same or not. Does Hibernate/JPA (or another framework) support something like that? Pseudo-code:

        if (database.containsObject(person)) { // containment according to the compound key
            if (database.containsChangedObject(person)) {
                database.updateObject(person);
            }
        } else {
            database.insertObject(person);
        }
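    For reference, a minimal sketch of the manual pattern the question is trying to avoid, written with plain JPA; the accessor names and the saveOrUpdate method are assumptions for illustration, not an existing Hibernate feature:

        // Assumes: import javax.persistence.EntityManager; import java.util.List;
        public void saveOrUpdate(EntityManager em, Person incoming) {
            List<Person> matches = em.createQuery(
                    "SELECT p FROM Person p WHERE p.firstName = :fn AND p.lastName = :ln",
                    Person.class)
                .setParameter("fn", incoming.getFirstName())
                .setParameter("ln", incoming.getLastName())
                .getResultList();
            if (matches.isEmpty()) {
                em.persist(incoming);                            // not in the database yet: INSERT
            } else {
                Person existing = matches.get(0);
                if (!existing.getAddress().equals(incoming.getAddress())) {
                    existing.setAddress(incoming.getAddress());  // dirty checking issues the UPDATE
                }
            }
        }

    Hibernate's @NaturalId annotation can mark firstName/lastName as the lookup key, but the compare-and-update step still has to be written somewhere, much as in the sketch.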

    Read the article

  • Set service dependencies after install

    - by Dennis
    I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support various types of databases (SQL Server, Oracle, MySQL, etc). Oftentimes end users choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, SQL Server is often started after my service, so my service errors out on startup because it can't connect to the database. I know that I can specify dependencies for my service to help guide the Windows service manager to start the appropriate services before mine. However, I don't know which services to depend upon at install time (when my service is registered), since the user can change databases later on. So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking? I've thought about trying to do something like waiting 30 seconds after my service starts up before connecting to the database, but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon startup, since the database contains various pieces of vital info that my service needs when it first starts. Any ideas?

    Read the article

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events... I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc. So I'm wondering: what is the recommended way of retrieving the data for the end user so that it's reasonably maintainable and optimized? Here are the approaches I'm having trouble deciding between:

    - My app keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever an admin creates an object (e.g. through Eventbrite or through our admin panel), we sync it and store a copy in our local database. When a user goes to the site's /events page, the app retrieves the events from the local database.
    - Read events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.

    Basically, I'm wondering: if we're storing all of the data on a remote service, do we really need to have a local database copy of the data? When would we need a local database, and when wouldn't we?

    Read the article

  • Cannot connect to MySQL server using JSP

    - by Dibya
    I have just started with JSP. I began by writing simple programs to display dates and system info. Then I tried to connect to a MySQL database on a free hosting account, but I am not able to connect to it. Here is my code:

        <%@ page import="java.sql.*" %>
        <%@ page import="java.io.*" %>
        <html>
        <head>
        <title>Connection with mysql database</title>
        </head>
        <body>
        <h1>Connection status</h1>
        <%
        try {
            String connectionURL = "jdbc:mysql://mysql2.000webhost.com/a3932573_product";
            Connection connection = null;
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            connection = DriverManager.getConnection(connectionURL, "a3932573_dibya", "******");
            if(!connection.isClosed())
                out.println("Successfully connected to " + "MySQL server using TCP/IP...");
            connection.close();
        } catch(Exception ex) {
            out.println("Unable to connect to database.");
        }
        %>
        </font>
        </body>
        </html>

    The page just prints "Unable to connect to database." under "Connection status". I have tested this connection from PHP using the same username, password, and database name. Where am I making a mistake?
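    The catch block above hides the actual failure reason. A small, hedged tweak (a sketch, not a fix for the connection itself) is to print the exception so the underlying JDBC error (unknown host, connection refused, access denied, and so on) becomes visible on the page and in the server log:

        <%
        try {
            String connectionURL = "jdbc:mysql://mysql2.000webhost.com/a3932573_product";
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            Connection connection = DriverManager.getConnection(connectionURL, "a3932573_dibya", "******");
            out.println("Successfully connected to MySQL server using TCP/IP...");
            connection.close();
        } catch (Exception ex) {
            // Surface the real cause instead of a generic message (sketch).
            out.println("Unable to connect to database: "
                    + ex.getClass().getName() + ": " + ex.getMessage());
            ex.printStackTrace();   // full stack trace goes to the server log
        }
        %>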

    Read the article

  • EGODatabase does not retain inserted record

    - by user135775
    Hi guys! I'm using EGODatabase to process my data. When I try to insert data into the database, which I created with Base and attached to the project, the data inserted programmatically does not show up when I restart my app. However, the data inserted manually using Base shows up just fine. Following is the code for inserting data into the database:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"Database" ofType:@"db"];
        db = [EGODatabase databaseWithPath:path];

        if ([db open]) {
            NSLog(@"Database is opened successfully!");
        } else {
            NSLog(@"Database is failed to open!");
        }

        NSArray *params = [NSArray arrayWithObjects:[aFeed feedID], [aFeed title], nil];
        [db executeUpdate:@"Insert INTO Feed (FeedID, Title) VALUES (?, ?)" parameters:params];

        EGODatabaseResult *results = [db executeQuery:@"SELECT * FROM Feed"];
        for (EGODatabaseRow *row in results) {
            NSLog(@"ID: %@ Title: %@", [row stringForColumn:@"FeedID"], [row stringForColumn:@"Title"]);
        }

    I would appreciate it if anyone could share a working sample of EGODatabase code. I'm using EGODatabase because I might have multi-threaded code that writes to the database. Thanks in advance.

    Read the article

  • PostgreSQL lots of Tables

    - by strife911
    Hi, we are at the point where we have more than a thousand tables in our PostgreSQL database server. I remember reading that there was a way to speed up the database once it has more than a thousand tables, but I cannot seem to find any mention of this on the Web with Google. Any help would be nice. Thanks

    Read the article

  • Nagios NDOUtils config_type field

    - by danilo
    Can someone tell me what the config_type field in the nagios_hosts table of the Nagios NDOUtils database means? Unfortunately, it is not included in the database model documentation: http://nagios.sourceforge.net/docs/ndoutils/NDOUtils_DB_Model.pdf

    Read the article

  • Memory Usage for Databases on Linux

    - by Kyle Brandt
    With free output, what we care about for application memory usage is generally the amount of free memory shown in the -/+ buffers/cache line. What about database applications such as Oracle: is it important to have a good amount of cache and buffers available for the database to run well with all of its IO? If that makes any sense, how do you figure out just how much?

    Read the article

  • cluster live postgres 8.3 server

    - by bobinabottle
    Our web application is getting more and more traffic, which is making our poor pg8.3 database server struggle a little to keep up. I've had a look into using pgpool-II for clustering the db to relieve a little strain, and I was wondering how this should be done to minimise downtime, considering I would be clustering a live database. Has anyone had experience with this, or does anyone know of any guides to follow? Cheers :)

    Read the article

  • Do I need to recreate statistics if I had to drop them to add a foreign key

    - by Adam J.R. Erickson
    I have a database which had all its foreign-key relationships dropped at some unknown time in the past (don't ask). I have an old copy of the database which isn't good to restore from, but its schema has the relationships. I'm working from that to create a script to restore the keys. In updating the tables, I've had to drop statistics from several tables. Do I need to manually recreate those, or can I just run the statistics update procedure once all the tables are updated?

    Read the article

  • Anyone heard of a custom report builder program?

    - by user19189
    Hi, I'm looking for a program to create and store custom reports. What I want to be able to do is build a report by adding fields for end users to fill out and then have the program create the appropriate database (or update the appropriate database table rows). So, just a simple report that can be created entirely by an end user from the front end. Thanks in advance

    Read the article

  • High availability for databases (DRBD + GFS)?

    - by EvanAlm
    Does it work to have something like MySQL (or any other relational database) on GFS (with DRBD) and have multiple nodes reading and writing to it? If so, is that the "best" way of providing an HA database/application setup? Is RHEL (Cluster Suite) a good way to set this up? (Or CentOS?)

    Read the article

  • Creating a very detailed report using Excel and/or Access

    - by AgainstClint
    I have a database/spreadsheet of information that I need to make a very detailed report from. My knowledge of Access is quite limited, so I started doing a mock-up of the report layout in Excel (screenshot omitted). From there, I need the information from the database to be placed in the properly labeled cells. There are over 2,500 entries in the spreadsheet/database, so if creating a report in Access that looks pretty close to the mock-up is doable, that might be an easier route.

    Read the article

  • SQL Server suddenly using only a small portion of CPU.

    - by hermiod
    We've got a Windows 2008 R2 server running SQL Server 2008. All of a sudden, the SQL Server process is refusing to go above 20% CPU usage. As of last week, when running a heavy query against the db, it would rise to 100% usage as I would expect. We've had this server for a while, and it seems strange that it would suddenly hit this limit. The limit is causing our queries to take a lot longer than they normally would. No one has (knowingly at least) made any changes to the server configuration. After a bit of investigation, I discovered the sys.dm_os_sys_memory view. This shows 'available physical memory is high', but at the same time the available physical memory is 339552 KB whereas the total is 4193848 KB. It is worth noting that this is a virtual server running on VMware. Is there a setting somewhere within SQL Server that sets the maximum CPU usage? I've found the settings in Resource Governor, although this is currently off, as it always has been. We have recently started using Spotlight for SQL Server by Quest Software. Its playback database was located on this server for a short time this morning; I first noticed the problem shortly afterwards, although I hadn't been doing any queries prior to this, so I don't know if this is the point at which the problem began. The database was working as expected on Friday afternoon. The Windows log shows that the following settings were applied to the SpotlightPlaybackDatabase when it was created:

        02/21/2011 08:45:02,spid60,Unknown,Setting database option TORN_PAGE_DETECTION to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option MULTI_USER to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option READ_WRITE to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_UPDATE_STATISTICS to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CREATE_STATISTICS to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option ANSI_WARNINGS to OFF for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option CONCAT_NULL_YIELDS_NULL to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option RECOVERY to SIMPLE for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option QUOTED_IDENTIFIER to OFF for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CLOSE to OFF for database SpotlightPlaybackDatabase.

    Could any of these settings changes have modified the settings applied to the whole server?

    Read the article

  • SQL Server 2008 Restore hangs on 100%

    - by CL4NCY
    Hi, I have backed up a large database from SQL 2005 and am trying to restore it to a SQL 2008 database. It seems to work ok until it gets to 100% when it hangs indefinitely. I've managed to restore smaller databases to this server ok. Any ideas?

    Read the article

  • Apache Mina FTPServer issue: unable to log in to the Apache FTP server while using the database user manager

    - by piyush
    I am unable to log in to the Apache FTP server while using the database user manager. When entering the username and password, I get the following error in the log file:

        [ INFO] 2013-02-07 20:51:07,779 [] [0:0:0:0:0:0:0:1] RECEIVED: USER piyush
        [ INFO] 2013-02-07 20:51:07,781 [piyush] [0:0:0:0:0:0:0:1] SENT: 331 User name okay, need password for piyush.
        [ INFO] 2013-02-07 20:51:07,784 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: PASS *****
        [ WARN] 2013-02-07 20:51:07,785 [piyush] [0:0:0:0:0:0:0:1] User failed to log in
        [ WARN] 2013-02-07 20:51:08,285 [piyush] [0:0:0:0:0:0:0:1] Login failure - piyush
        [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] SENT: 530 Authentication failed.
        [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: QUIT
        [ INFO] 2013-02-07 20:51:08,290 [piyush] [0:0:0:0:0:0:0:1] SENT: 221 Goodbye.
        [ INFO] 2013-02-07 20:51:08,291 [piyush] [0:0:0:0:0:0:0:1] CLOSED

    Here is my configuration file, ftpd-typical.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Apache License 2.0 header omitted -->
        <server xmlns="http://mina.apache.org/ftpserver/spring/v1"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:beans="http://www.springframework.org/schema/beans"
                xsi:schemaLocation="
                    http://mina.apache.org/ftpserver/spring/v1
                    http://mina.apache.org/ftpserver/ftpserver-1.0.xsd"
                id="Prometheus">
            <listeners>
                <nio-listener name="default" port="2121" />
            </listeners>
            <db-user-manager encrypt-passwords="salted">
                <data-source>
                    <beans:bean class="org.apache.commons.dbcp.BasicDataSource">
                        <beans:property name="driverClassName" value="com.mysql.jdbc.Driver" />
                        <beans:property name="url" value="jdbc:mysql://localhost/apache_test" />
                        <beans:property name="username" value="amy" />
                        <beans:property name="password" value="piyush" />
                    </beans:bean>
                </data-source>
                <insert-user>INSERT INTO FTP_USER (userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate) VALUES ('{userid}', '{userpassword}', '{homedirectory}', {enableflag}, {writepermission}, {idletime}, {uploadrate}, {downloadrate})</insert-user>
                <update-user>UPDATE FTP_USER SET userpassword='{userpassword}', homedirectory='{homedirectory}', enableflag={enableflag}, writepermission={writepermission}, idletime={idletime}, uploadrate={uploadrate}, downloadrate={downloadrate} WHERE userid='{userid}'</update-user>
                <delete-user>DELETE FROM FTP_USER WHERE userid = '{userid}'</delete-user>
                <select-user>SELECT userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate, maxloginnumber, maxloginperip FROM FTP_USER WHERE userid = '{userid}'</select-user>
                <select-all-users>SELECT userid FROM FTP_USER ORDER BY userid</select-all-users>
                <is-admin>SELECT userid FROM FTP_USER WHERE userid='{userid}' AND userid='admin'</is-admin>
                <authenticate>SELECT userpassword from FTP_USER WHERE userid='{userid}'</authenticate>
            </db-user-manager>
        </server>
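    One detail worth checking, offered as an assumption rather than a confirmed diagnosis: with encrypt-passwords="salted", the database user manager compares the supplied password against a salted hash stored in the userpassword column, so rows inserted by hand with plaintext passwords will not authenticate. A minimal sketch of generating a compatible value with the FtpServer 1.0.x API (treat the exact wiring as an assumption):

        import org.apache.ftpserver.usermanager.SaltedPasswordEncryptor;

        public class GenerateFtpPassword {
            public static void main(String[] args) {
                // Produces a salted hash in the format the salted encryptor expects (sketch).
                String hashed = new SaltedPasswordEncryptor().encrypt("my-plaintext-password");
                System.out.println(hashed); // store this value in FTP_USER.userpassword
            }
        }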

    Read the article

  • how to troubleshoot using rsyslog to output to a mysql database

    - by ChrisNZ
    Using FreeBSD 8.0 32-bit. I have installed rsyslogd 5.5.5 with ommysql (installed ports /usr/ports/sysutils/rsyslog55 and /usr/ports/sysutils/rsyslog55-mysql). My rsyslog.conf file looks like:

        $ModLoad imudp
        $ModLoad imtcp
        $ModLoad ommysql
        $ModLoad immark.so
        $ModLoad imuxsock.so
        $ModLoad imklog.so
        $OptimizeForUniprocessor on
        $AllowedSender UDP, 10.0.0.0/8
        $UDPServerAddress 0.0.0.0
        $UDPServerRun 514
        $UDPServerTimeRequery 2
        # +SG560
        *.* :ommysql:127.0.0.1,Syslog,sysloguser,mypassword

    My command line flags for rsyslogd are: -c5 -4

    Checking the configuration with -c5 -N1 returns no errors. I have confirmed that rsyslogd is working by changing the last line to say:

        *.* /var/log/snapgear.log

    which results in messages appearing in the snapgear.log file. So it is probably something to do with my MySQL setup. If I do:

        mysql -u sysloguser -p Syslog
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 56
        Server version: 5.0.86 FreeBSD port: mysql-server-5.0.86

        mysql> select * from SystemEvents;
        Empty set (0.00 sec)

        mysql> :-(

    I have confirmed that sysloguser has full privileges on the Syslog database. If I run rsyslogd on the console in debug mode:

        /usr/local/sbin/rsyslogd -f /usr/local/etc/rsyslog.conf -c5 -n -d

    I can see this sequence of events each time a message is received:

        9244.376687256:28359280: main Q: entry added, size now log 1, phys 1 entries
        9244.376705694:28359280: main Q: EnqueueMsg advised worker start
        9244.376726647:28359280: Listening on UDP syslogd socket 4 (IPv4/port 514).
        9244.376728602:28359280: --------imUDP calling select, active file descriptors (max 4): 4
        9244.376890075:283593c0: wti 0x28306e80: worker awoke from idle processing
        9244.376892031:283593c0: we deleted 0 objects and enqueued 0 objects
        9244.376893986:283593c0: delete batch from store, new sizes: log 1, phys 1
        9244.376895942:283593c0: msgConsumer processes msg 0/1
        9244.376897898:283593c0: msg parser: flags 70, from '~NOTRESOLVED~', msg 'Jun 29 17:32:24 SG560 kernel: (20000629T1732244'
        9244.376900132:283593c0: parse using parser list 0x283080e8 (the default list).
        9244.376902088:283593c0: dropped LF at very end of message (DropTrailingLF is set)
        9244.376904044:283593c0: Parser 'rsyslog.rfc5424' returned -2160
        9244.376905999:283593c0: Message will now be parsed by the legacy syslog parser (one size fits all... ;)).
        9244.376907955:283593c0: Parser 'rsyslog.rfc3164' returned 0
        9244.376909910:283593c0: testing filter, f_pmask 255
        9244.376911866:283593c0: Called action, logging to ommysql
        9244.376918012:283593c0: actionTryResume: action state: susp, next retry (if applicable): 1277869250 [now 1277869244]
        9244.376919967:283593c0: action call returned -2123
        9244.376921923:283593c0: tryDoAction: unexpected error code -2123, finalizing
        9244.376926113:283593c0: actionTryResume: action state: susp, next retry (if applicable): 1277869250 [now 1277869244]
        9244.376928069:283593c0: ruleset: get iRet 0 from rule.ProcessMsg()
        9244.376930024:283593c0: ruleset.ProcessMsg() returns 0
        9244.376931980:283593c0: regular consumer finished, iret=0, szlog 0 sz phys 1
        9244.376933936:283593c0: XXX: enqueueing data element 0 of 1
        9244.376935891:283593c0: we deleted 1 objects and enqueued 0 objects
        9244.376938126:283593c0: delete batch from store, new sizes: log 0, phys 0
        9244.376940082:283593c0: regular consumer finished, iret=4, szlog 0 sz phys 0
        9244.376942037:283593c0: main Q:Reg/w0: worker IDLE, waiting for work.
        ....

    I can see the action call to ommysql returns the unexpected error code -2123. Now I am stuck! Any ideas on what to look for next? Perhaps there are extra ports I need to install? I will be very grateful for any assistance here!

    Read the article

  • Importing Sql Server 2005 database into Sql Server express 2008

    - by Matthew Kanwisher
    Is there any way to import a database backup from 2005 into 2008 Express Edition? What I've had to resort to is scripting the database and then importing all the data through DTS. Whenever I try to import straight from a backup file, it either says something about not being able to import into a newer version of SQL Server, or I get the error below:

        title: Microsoft SQL Server Management Studio
        Specified cast is not valid. (SqlManagerUI)

    Read the article
