Search Results

Search found 30300 results on 1212 pages for 'temporal database'.


  • Ruby on Rails login using legacy user database

    - by ricsmania
    Hello, I have a Rails application that connects to a legacy database (Oracle) and displays some information from a particular user. Right now the user is passed as a URL parameter, but this has obvious security issues because users should only be able to see their own data. To solve that, I want to implement a user login. I did some research and came across two components for this, restful_authentication and authlogic. The problem is that I need to use an existing user/password database instead of creating a new one, which is how those components are usually used. The password is encrypted by a custom Oracle package, but let's assume it is stored as plain text to make things simpler. I only need very basic functionality: log a user in and keep them logged in until they log out. No changes to the database will be made by this application, so there's no need for sign-up, e-mail activation, password reset, etc. Can someone point me in the right direction on how to do that? Is either of those two components a good solution? If not, what would be recommended? Thanks!

    Read the article

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on: 1. Copying of the sdf file to bin. My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf is copied to bin\Debug, and that copy is where all future reads/writes operate. At some point when this is deployed the file will go into a data folder using ClickOnce, but during development, where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations? 2. Updating the sdf. It appears that writing to the sdf file does not immediately update the database. I am using Linq-to-SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, or something else? Update: 3. Unit tests. I have an MSTest project, and the sdf file is not being copied to the correct output directory. I have the settings Build Action: Content and Copy to Output Directory: Copy Always. The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database. I appreciate any guidance on these questions, thanks. If you know of a tutorial beyond what is on MSDN, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

    Read the article

  • Trouble tunneling my local Wordpress install to the mysql database on appfog

    - by alanmoo
    I've set up a WordPress install on AppFog (using Rackspace), and cloned the install to my local machine for development. I know the install works (using MAMP) because I created a local MySQL database and changed wp-config.php to point to it. However, I want to develop without having to change wp-config.php every time I commit. After doing some research, it seems like the AppFog service Caldecott lets me tunnel into the MySQL database on the server, using af tunnel. Unfortunately, I'm having issues getting it working. Even if I change my MAMP MySQL port to something like 8889 and tunnel MySQL through port 3306, it looks like it's connected, but I still get "Error establishing a database connection" when loading my localhost WordPress. When I quit the mysql monitor (using ctrl+x, ctrl+c), I get a message stating "Error: 'mysql' execution failed; is it in your $PATH?". Originally, no, it wasn't, but I've since fixed the PATH variable on my local machine so that when I go to Terminal and just type mysql, it loads up. So I guess my question has two parts: 1) am I taking the right approach for WordPress development on my local machine, and 2) if so, why is the tunnel not working?
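    On the first part, one common way to avoid editing wp-config.php on every commit is to have it pick credentials based on where it is running. A minimal sketch, assuming MAMP answers on a localhost host name and its MySQL listens on 8889; every credential below is a placeholder:

      <?php
      // Sketch only: choose database settings by host so the same wp-config.php
      // works both locally (MAMP) and on the server. All values are placeholders.
      if (isset($_SERVER['HTTP_HOST']) && strpos($_SERVER['HTTP_HOST'], 'localhost') === 0) {
          define('DB_NAME', 'wp_local');
          define('DB_USER', 'root');
          define('DB_PASSWORD', 'root');
          define('DB_HOST', 'localhost:8889');   // MAMP's MySQL port
      } else {
          define('DB_NAME', 'wp_production');    // placeholder
          define('DB_USER', 'prod_user');        // placeholder
          define('DB_PASSWORD', 'prod_secret');  // placeholder
          define('DB_HOST', 'localhost');        // or the tunnelled host:port
      }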

    Read the article

  • Selecting Date Range on a PHP form and displaying results from MySQL database

    - by Sarah HSL
    This may be something simple but I cant understand why this wouldn't work.. I have a php form where you can select a date range from drop downs. I've given the field names day, month year, and day1, month1, year1. When clicking submit it takes you to a second php form. Here is the code for second form: <?php $username="***"; $password="***"; $database="****"; mysql_connect('localhost',$username,$password); @mysql_select_db($database) or die( "Unable to select database"); $day = $_GET['day']; $month = $_GET['month']; $year = $_GET['year']; $day1 = $_GET['day1']; $month1 = $_GET['month1']; $year1 = $_GET['year1']; $date1 = "$year-$month-$day"; $date2 = "$year1-$month1-$day1"; $query = "SELECT * FROM main_stock WHERE curr_timestamp BETWEEN '$date1' AND '$date2'"; $result=mysql_query($query); $num=mysql_num_rows($result); ?> <table border="1" cellspacing="2" cellpadding="2"> <tr> <td><b><font face="Arial, Helvetica, sans-serif">Product Description</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Category</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Master Category</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Barcode</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Status</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">TimeStamp</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">New Own</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Serial No.</font></b></td> </tr> <?php $i=0; while ($i < $num) { $f1=mysql_result($result,$i,"product_desc"); $f2=mysql_result($result,$i,"category"); $f3=mysql_result($result,$i,"mastercategory"); $f4=mysql_result($result,$i,"barcode"); $f5=mysql_result($result,$i,"status"); $f6=mysql_result($result,$i,"curr_timestamp"); $f7=mysql_result($result,$i,"newown"); $f8=mysql_result($result,$i,"serial"); ?> <tr> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f1; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f2; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f3; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f4; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f5; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f6; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f7; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f8; ?></font></td> </tr> <?php $i++; } $num_rows = mysql_num_rows($result); echo "$num_rows Rows\n"; mysql_close(); ?> Is there any reason this wouldn't work? I'm not sure where I am going wrong. It displays results when there is another option as well as the date such as 'status' but when this is taken out and I just want to display all the results between the date range it doesn't work.. 
This works: <?php $username="+++"; $password="+++"; $database="+++"; mysql_connect('localhost',$username,$password); @mysql_select_db($database) or die( "Unable to select database"); $day = $_GET['day']; $month = $_GET['month']; $year = $_GET['year']; $day1 = $_GET['day1']; $month1 = $_GET['month1']; $year1 = $_GET['year1']; $status = $_GET['status']; $date1 = "$year-$month-$day"; $date2 = "$year1-$month1-$day1"; $query = "SELECT * FROM main_stock WHERE status = '$status' AND curr_timestamp BETWEEN '$date1' AND '$date2'"; $result=mysql_query($query); $num=mysql_num_rows($result); ?> <table border="1" cellspacing="2" cellpadding="2"> <tr> <td><b><font face="Arial, Helvetica, sans-serif">Product Description</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Category</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Master Category</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Barcode</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Status</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">TimeStamp</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">New Own</font></b></td> <td><b><font face="Arial, Helvetica, sans-serif">Serial No.</font></b></td> </tr> <?php $i=0; while ($i < $num) { $f1=mysql_result($result,$i,"product_desc"); $f2=mysql_result($result,$i,"category"); $f3=mysql_result($result,$i,"mastercategory"); $f4=mysql_result($result,$i,"barcode"); $f5=mysql_result($result,$i,"status"); $f6=mysql_result($result,$i,"curr_timestamp"); $f7=mysql_result($result,$i,"newown"); $f8=mysql_result($result,$i,"serial"); ?> <tr> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f1; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f2; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f3; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f4; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f5; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f6; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f7; ?></font></td> <td><font face="Arial, Helvetica, sans-serif"><?php echo $f8; ?></font></td> </tr> <?php $i++; } $num_rows = mysql_num_rows($result); echo "$num_rows Rows\n"; mysql_close(); ?> But when the 'status' field is taken out (and obviously the serial drop down in the first form) it stops working...
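    One thing worth checking when the status filter is removed: if curr_timestamp is a DATETIME/TIMESTAMP column, an end date such as '2012-03-31' means midnight of that day, so rows stamped later that day fall outside the range, and single-digit days or months coming from the drop-downs should be zero-padded. A minimal sketch of the same query with those two points handled; credentials are placeholders, table and column names come from the question:

      <?php
      // Sketch: zero-pad the drop-down values and compare on the DATE part so the
      // end day itself is included. Credentials below are placeholders.
      $pdo = new PDO('mysql:host=localhost;dbname=yourdb;charset=utf8', 'user', 'password');
      $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

      $date1 = sprintf('%04d-%02d-%02d', $_GET['year'],  $_GET['month'],  $_GET['day']);
      $date2 = sprintf('%04d-%02d-%02d', $_GET['year1'], $_GET['month1'], $_GET['day1']);

      $stmt = $pdo->prepare(
          'SELECT * FROM main_stock WHERE DATE(curr_timestamp) BETWEEN :d1 AND :d2'
      );
      $stmt->execute(array(':d1' => $date1, ':d2' => $date2));
      foreach ($stmt as $row) {
          echo htmlspecialchars($row['product_desc']), "\n";
      }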

    Read the article

  • SQL query error while trying to put a file in the database

    - by DaGhostman Dimitrov
    Hey guys, I have a big problem that I have no idea how to solve. I have a few forms that upload files to the database; all of them work fine except one. I use the same query in all of them (a simple insert). I think it has something to do with the files I am trying to upload, but I am not sure. Here is the code: if ($_POST['action'] == 'hndlDocs') { $ref = $_POST['reference']; // Numeric value of $doc = file_get_contents($_FILES['doc']['tmp_name']); $link = mysqli_connect('localhost','XXXXX','XXXXX','documents'); mysqli_query($link,"SET autocommit = 0"); $query = "INSERT INTO documents ({$ref}, '{$doc}', '{$_FILES['doc']['type']}') ;"; mysqli_query($link,$query); if (mysqli_error($link)) { var_dump(mysqli_error($link)); mysqli_rollback($link); } else { print("<script> window.history.back(); </script>"); mysqli_commit($link); } } The database has only these fields: DATABASE documents ( reference INT(5) NOT NULL, //it is unsigned zerofill doc LONGBLOB NOT NULL, //this should contain the binary data mime_type TEXT NOT NULL // the mime type of the file; php allows only application/pdf and image/jpeg ); And the error I get is: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '00001, '????' at line 1 I will appreciate any help. Cheers!
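    For reference, an INSERT normally names its target columns, and binary data is safest handed over as a bound parameter rather than spliced into the SQL string. A minimal sketch assuming the column names shown above; connection credentials are placeholders:

      <?php
      // Sketch: prepared-statement insert for the documents table described above.
      // Column names come from the question's schema; credentials are placeholders.
      $link = mysqli_connect('localhost', 'user', 'password', 'documents');

      $ref  = (int) $_POST['reference'];
      $doc  = file_get_contents($_FILES['doc']['tmp_name']);
      $mime = $_FILES['doc']['type'];

      $stmt = mysqli_prepare(
          $link,
          'INSERT INTO documents (reference, doc, mime_type) VALUES (?, ?, ?)'
      );
      mysqli_stmt_bind_param($stmt, 'iss', $ref, $doc, $mime); // 's' also carries blob bytes
      mysqli_stmt_execute($stmt);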

    Read the article

  • No database connection when trying to use IIS locally with asp.net MVC 1.0

    - by mark4asp
    Login failed for user ''. The user is not associated with a trusted SQL Server connection. When I try to use IIS locally instead of Cassini I get this error. The ASP.NET MVC 1.0 site is running on WinXP. The database is local and has SQL Server and Windows Authentication mode enabled. The website runs OK using Cassini, with the same connection string. It fails when I try to use IIS instead of Cassini. These permissions are set on the virtual directory which IIS points to: ASP.NET Machine Account [Full Control] Internet Guest Account [Full Control] System [Full Control] This virtual directory is the same as the directory holding my project files. I am using Linq and the database connection string is stored in the App.config file of my data project. I get the same error whether I set the connection string to use Windows or SQL Server authentication. My SQL Server has both [MyMachineName\ASPNET] and SqlServerUser logins and a user on the database. CREATE LOGIN [MyMachineName\ASPNET] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english] Use My_database CREATE USER [MyMachineName\ASPNET] FOR LOGIN [MyMachineName\ASPNET] WITH DEFAULT_SCHEMA=[dbo] CREATE LOGIN [MwMvcLg] WITH PASSWORD=N'blahblah', DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[British], CHECK_EXPIRATION=OFF, CHECK_POLICY=ON Use My_database CREATE USER [MwMvcLg] FOR LOGIN [MwMvcLg] WITH DEFAULT_SCHEMA=[dbo] How come I have no problem running this website on IIS6 remotely? Why does IIS5.1, running locally, need these extra logins? PS: My overwhelming preference is to use SQL Server authentication - as this is how it runs when deployed.

    Read the article

  • Uploading a new file: first check whether it already exists in the database, and save it only if it does not

    - by Hala Qaseam
    I'm trying to create sql database that contains Image Id (int) Imagename (varchar(50)) Image (image) and in aspx write in upload button this code: protected void btnUpload_Click(object sender, EventArgs e) { //Condition to check if the file uploaded or not if (fileuploadImage.HasFile) { //getting length of uploaded file int length = fileuploadImage.PostedFile.ContentLength; //create a byte array to store the binary image data byte[] imgbyte = new byte[length]; //store the currently selected file in memeory HttpPostedFile img = fileuploadImage.PostedFile; //set the binary data img.InputStream.Read(imgbyte, 0, length); string imagename = txtImageName.Text; //use the web.config to store the connection string SqlConnection connection = new SqlConnection(strcon); connection.Open(); SqlCommand cmd = new SqlCommand("INSERT INTO Image (ImageName,Image) VALUES (@imagename,@imagedata)", connection); cmd.Parameters.Add("@imagename", SqlDbType.VarChar, 50).Value = imagename; cmd.Parameters.Add("@imagedata", SqlDbType.Image).Value = imgbyte; int count = cmd.ExecuteNonQuery(); connection.Close(); if (count == 1) { BindGridData(); txtImageName.Text = string.Empty; ScriptManager.RegisterStartupScript(this, this.GetType(), "alertmessage", "javascript:alert('" + imagename + " image inserted successfully')", true); } } } When I'm uploading a new image I need to first check if this image already exists in database and if it doesn't exist save that in database. Please how I can do that?

    Read the article

  • Error in inserting data into database

    - by Matthew
    When I try to run this code it will not insert the data into the database? <?php class Database { private $dsn; function __construct($dbname, $host, $user, $password, $enckey) { $this->dsn = "mysql:dbname=" . $dbname . ';host=' . $host; $this->user = $user; $this->password = $password; } private function createDSN() { return $this->dsn; } public function createConnection() { try { $dbh = new PDO(self::createDSN(), $this->user, $this->password); } catch (PDOException $e) { echo 'Connection failed: ' . $e->getMessage(); } return $dbh; } } $db = new Database('mytest', 'localhost', 'root', 'hashedpassword', null); $dbh = $db->createConnection(); $sql = $dbh->prepare("INSERT INTO contacts (firstname, lastname) VALUES (?,?)"); $sql->execute(array("abc", "xyz")); ?>

    Read the article

  • Entering data from a PHP page into the database

    - by Rahima farzana.S
    I have a PHP form with two text boxes, and I want to enter the text box values into the database. I have created the table (with two columns, namely webmeasurementsuite id and webmeasurements id) using the following syntax: CREATE TABLE `radio` ( `webmeasurementsuite id` INT NOT NULL, `webmeasurements id` INT NOT NULL ); Using the tutorial at the link below, I wrote the PHP code, but unfortunately the data is not getting entered into the database. I am getting an error in the INSERT SQL syntax. I checked it but I am not able to trace the error. Can anyone correct me? I got the code from http://www.webune.com/forums/php-how-to-enter-data-into-database-with-php-scripts.html $sql = "INSERT INTO $db_table(webmeasurementsuite id,webmeasurements id) values ('".mysql_real_escape_string(stripslashes($_REQUEST['webmeasurementsuite id']))."','".mysql_real_escape_string(stripslashes($_REQUEST['webmeasurements id']))."')"; echo $sql; My error is as follows: INSERT INTO radio(webmeasurementsuite id,webmeasurements id) values ('','') ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'id,webmeasurements id) values ('','')' at line 1
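    The error message points at the space inside the column names: identifiers containing spaces have to be wrapped in backticks in MySQL, just as they are in the CREATE TABLE statement. A minimal sketch assuming the connection and $db_table from the question; the $_REQUEST keys below are assumptions, since PHP rewrites spaces in incoming field names to underscores:

      <?php
      // Sketch: backtick-quote identifiers that contain spaces. The connection and
      // $db_table are assumed to exist as in the question; the request keys are
      // assumptions because PHP turns spaces in incoming field names into underscores.
      $suite = mysql_real_escape_string($_REQUEST['webmeasurementsuite_id']);
      $meas  = mysql_real_escape_string($_REQUEST['webmeasurements_id']);

      $sql = "INSERT INTO `$db_table` (`webmeasurementsuite id`, `webmeasurements id`)
              VALUES ('$suite', '$meas')";
      mysql_query($sql) or die('ERROR: ' . mysql_error());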

    Read the article

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server, and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck: I've confirmed that at the command line I can log in with the MySQL credentials given in database.php; I've confirmed the table exists; I've tried using the MySQL root credentials (temporarily) to see if the problem lies with the user's permissions, and the same error appeared; my debug level is currently set to 3; I've deleted the entire contents of /app/tmp/cache; I've set 777 permissions on /app/tmp*; I've confirmed that I can run DESCRIBE commands in the command-line MySQL client when logged in with the MySQL credentials used by the application; I've verified that the CakePHP log file only contains the error I'm seeing in the browser window; I've tried all the suggestions I could find in similar postings on SO; and I've Googled around and didn't find any other ideas. I think I've eliminated the obvious problems and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?
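    For comparison, a minimal CakePHP 1.x app/config/database.php is sketched below; every value is a placeholder. The prefix entry is worth checking in particular, since a mismatch between it and the production table names can produce this same "Missing Database Table" error.

      <?php
      // Sketch of app/config/database.php for CakePHP 1.x; all values are placeholders.
      class DATABASE_CONFIG {
          var $default = array(
              'driver'     => 'mysql',
              'persistent' => false,
              'host'       => 'localhost',
              'login'      => 'app_user',
              'password'   => 'secret',
              'database'   => 'app_production',
              'prefix'     => '',        // must match any prefix the production tables use
              'encoding'   => 'utf8',
          );
      }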

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using Persistent Red-Black trees to do it. My main reasons for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. Also it will be easier to implement ACID transactions and even being able to abstract the database to work in parallel on a cluster of some kind. The great thing of this approach is that it makes possible implementing temporal databases almost for free. And this is something quite nice to have, specially for web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously, so a response is always immediate, I don't want to build all application under a false premise, only to realize it isn't really a good way to do it. Here's my line of thought: - Since all writes are done asynchronously, and using a persistent data structure will enable not to invalidate the previous - and currently valid - structure, the write time isn't really a bottleneck. - There are some literature on structures like this that are exactly for disk usage. But it seems to me that these techniques will add more read overhead to achieve faster writes. But I think that exactly the opposite is preferable. Also many of these techniques really do end up with a multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistent overhead. - I know there still will have to be some kind of locking when appending values to the database, and I also know there should be a good garbage collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). Also a delta compression system could be thought about. - Of all search trees structures, I really think Red-Blacks are the most close to what I need, since they offer the least number of rotations. But there are some possible pitfalls along the way: - Asynchronous writes -could- affect applications that need the data in real time. But I don't think that is the case with web applications, most of the time. Also when real-time data is needed, another solutions could be devised, like a check-in/check-out system of specific data that will need to be worked on a more real-time manner. - Also they could lead to some commit conflicts, though I fail to think of a good example of when it could happen. Also commit conflicts can occur in normal RDBMS, if two threads are working with the same data, right? - The overhead of having an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this all is a bad idea. Any thoughts? Thanks! edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

    Read the article

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules, and the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (eg, not possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database. There are two main ways we are looking at implementing this, and me and a co-worker disagree on a fundamental level which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective view points here. Business objects are base classes, and data storage objects inherit business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance to that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated. Business objects encapsulate data storage objects. In this case, business objects are directly used by client application. Client application passes along base connection information to business layer. Decision about which data storage method a given object uses is made by business object code. Connection information would be a chunk of data taken from a config file (client app does not really know/care about details of it), which may be a single connection string for a database, or several pieces connection strings for various data storage types. Additional data storage connection types could also be read from another spot - eg, a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if data storage method for a given object changes. Business objects are base classes, data source objects inherit from business objects. Client code deals primarily with base classes. This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is the decision about which data storage object to use for a given business object is made by the business layer, not the client code. 
I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.

    Read the article

  • PHP: Proper way of using a PDO database connection in a class

    - by Cortopasta
    Trying to organize all my code into classes, and I can't get the database queries to work inside a class. I tested it without the class wrapper, and it worked fine. Inside the class = no dice. What about my classes is messing this up? class ac { public function dbConnect() { global $dbcon; $dbInfo['server'] = "localhost"; $dbInfo['database'] = "sn"; $dbInfo['username'] = "sn"; $dbInfo['password'] = "password"; $con = "mysql:host=" . $dbInfo['server'] . "; dbname=" . $dbInfo['database']; $dbcon = new PDO($con, $dbInfo['username'], $dbInfo['password']); $dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $error = $dbcon->errorInfo(); if($error[0] != "") { print "<p>DATABASE CONNECTION ERROR:</p>"; print_r($error); } } public function authentication() { global $dbcon; $plain_username = $_POST['username']; $md5_password = md5($_POST['password']); $ac = new ac(); if (is_int($ac->check_credentials($plain_username, $md5_password))) { ?> <p>Welcome!</p> <!--go to account manager here--> <?php } else { ?> <p>Not a valid username and/or password. Please try again.</p> <?php unset($_POST['username']); unset($_POST['password']); $ui = new ui(); $ui->start(); } } private function check_credentials($plain_username, $md5_password) { global $dbcon; $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1'); $userid->bindParam(':username', $plain_username); $userid->bindParam(':password', $md5_password); $userid->execute(); print_r($dbcon->errorInfo()); $id = $userid->fetch(); Return $id; } } And if it's any help, here's the class that's calling it: require_once("ac/acclass.php"); $ac = new ac(); $ac->dbconnect(); class ui { public function start() { if ((!isset($_POST['username'])) && (!isset($_POST['password']))) { $ui = new ui(); $ui->loginform(); } else { $ac = new ac(); $ac->authentication(); } } private function loginform() { ?> <form id="userlogin" action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> User:<input type="text" name="username"/><br/> Password:<input type="password" name="password"/><br/> <input type="submit" value="submit"/> </form> <?php } }
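    A common alternative to sharing the handle through global is to create the PDO object once and pass it into the class, for example through the constructor. A minimal sketch, reusing the table and column names from the question; connection values are placeholders:

      <?php
      // Sketch: inject the PDO handle instead of reaching for a global inside the class.
      class ac
      {
          private $dbcon;

          public function __construct(PDO $dbcon)
          {
              $this->dbcon = $dbcon;
          }

          public function check_credentials($plain_username, $md5_password)
          {
              $stmt = $this->dbcon->prepare(
                  'SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1'
              );
              $stmt->execute(array(
                  ':username' => $plain_username,
                  ':password' => $md5_password,
              ));
              return $stmt->fetchColumn();   // false when no row matches
          }
      }

      // Usage: build one connection and hand it to every object that needs it.
      $dbcon = new PDO('mysql:host=localhost;dbname=sn', 'sn', 'password');
      $dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
      $ac = new ac($dbcon);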

    Read the article

  • Add User to Database not working

    - by user1850189
    I'm really new to ASP.net and I am currently trying to create a registration page on a site. I was successful in adding a user to the database but I decided to add another feature into the code to check what userID's were available. For example, if a user deleted their account their userID would become available for use again. I'm trying to find the min value and the max value and add or subtract 1 depending on whether it is min or max. I can run the code I have written for this with no errors but the user is not added to the database. Can anyone help me figure out what I'm missing from my code to do this? EDIT Code adds a user to database but it adds the new user at -1 instead. I don't seem to be able to see where the issue is. If (aDataReader2.Read() = False) Then aConnection1 = New OleDbConnection(aConnectionString) aConnection1.Open() aQuery = "Insert Into UserDetails " aQuery = aQuery & "Values ('" & userID & "','" & userFName & "','" & userLName & "','" & userEmail & "','" & userUsername & "','" & userPassword & "')" aCommand = New OleDbCommand(aQuery, aConnection1) aCommand.ExecuteNonQuery() aConnection1.Close() ElseIf (min = 1) Then aConnection2 = New OleDbConnection(aConnectionString) aConnection2.Open() aCommand = New OleDbCommand(aQuery3, aConnection2) aDataReader2 = aCommand.ExecuteReader() userID = max + 1 aQuery = "Insert Into UserDetails " aQuery = aQuery & "Values ('" & userID & "','" & userFName & "','" & userLName & "','" & userEmail & "','" & userUsername & "','" & userPassword & "')" aCommand = New OleDbCommand(aQuery, aConnection2) aCommand.ExecuteNonQuery() aConnection2.Close() Else aConnection3 = New OleDbConnection(aConnectionString) aConnection3.Open() aCommand = New OleDbCommand(aQuery2, aConnection3) aDataReader2 = aCommand.ExecuteReader userID = min - 1 aQuery = "Insert Into UserDetails " aQuery = aQuery & "Values ('" & userID & "','" & userFName & "','" & userLName & "','" & userEmail & "','" & userUsername & "','" & userPassword & "')" aCommand = New OleDbCommand(aQuery, aConnection3) aCommand.ExecuteNonQuery() aConnection3.Close() lblResults.Text = "User Account successfully created" btnCreateUser.Enabled = False End If Here's the code I used to get the max and min values from the database. I'm getting a value of 0 for both of them - when min should be 1 and max should be 5 Dim minID As Integer Dim maxID As Integer aQuery2 = "Select Min(UserID) AS '" & [minID] & "' From UserDetails" aQuery3 = "Select Max(UserID) AS ' " & [maxID] & "' From UserDetails"

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA because our DBA resigned, so I'm a total newbie at DBA work... You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform the migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I have to first create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there? Thanks beforehand for any guidance.

    Read the article

  • Oracle on Windows / .NET (December 2010)

    - by Yusuke.Yamamoto
    A Japanese-language round-up (December 2010) on running Oracle Database with Windows Server and .NET. Topics covered: Oracle Database and Windows Server (integration with Active Directory, MSCS and VSS; Oracle Database 11g on Windows Server 2008); Oracle Database and .NET (Oracle Data Provider for .NET, Oracle Data Access Components (ODAC), development from Visual Studio); Oracle Database 11g Release 2 for Windows downloads; migrating from SQL Server to Oracle Database; pointers to OTN Windows Server System Center, the OTN .NET Developer Center, and Oracle Direct Seminar sessions; RSS feed available.

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week - and we are really looking forward to see you in one of our presentations, especially theDatabase Upgrade on SteriodsReal Speed, Real Customers, Real Secretson Monday, Oct 1, 12:15pm in Moscone South 307(just skip the lunch - the boxed food is not healthy at all): Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307 Database Upgrade on Steroids:Real Speed, Real Customers, Real Secrets Mike Dietrich - Consulting Member Technical Staff, Oracle Georg Winkens - Technical Manager, Amadeus Data Processing Carol Tagliaferri - Senior Development Manager, Oracle  Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn’t everyone? In this session, you will learn directly from Oracle’s Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade. . And don't forget: we already start on Sunday so if you'd like to learn about the SAP database upgrades at Deutsche Messe: Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001Oracle Database Upgrade to 11g Release 2 with SAP Applications Andreas Ellerhoff - DBA, Deutsche Messe AG Mike Dietrich - Consulting Member Technical Staff, Oracle Jan Klokkers - Sr.Director SAP Development, Oracle Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011, all SAP production systems there run successfully with Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the operational top aspects from a DBA perspective. . And unfortunately the Hands-On-Lab is sold out already ... We would like to apologize but we have absolutely ZERO influence on either the number of runs or the number of available seats.  Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13 Hands On Lab:Upgrading an Oracle Database Instance, Using Best Practices Roy Swonger - Senior Director, Software Development, Oracle Carol Tagliaferri - Senior Development Manager, Oracle Mike Dietrich - Consulting Member Technical Staff, Oracle Cindy Lim - PMTS, Oracle Carol Palmer - Principal Product Manager, Oracle This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team. . See you soon

    Read the article

  • New channels for Exadata 11.2.3.1.1

    - by Rene Kundersma
    With the release of Exadata 11.2.3.1.0 back in April 2012 Oracle has deprecated the minimal pack for the Exadata Database Servers (compute nodes). From that release the Linux Database Server updates will be done using ULN and YUM. For the 11.2.3.1.0 release the ULN exadata_dbserver_11.2.3.1.0_x86_64_base channel was made available and Exadata operators could subscribe their system to it via linux.oracle.com. With the new 11.2.3.1.1 release two additional channels are added: a 'latest' channel (exadata_dbserver_11.2_x86_64_latest) a 'patch' channel (exadata_dbserver_11.2_x86_64_patch) The patch channel has the new or updated packages updated in 11.2.3.1.1 from the base channel. The latest channel has all the packages from 11.2.3.1.0 base and patch channels combined.  From here there are three possible situations a Database Server can be in before it can be updated to 11.2.3.1.1: Database Server is on Exadata release < 11.2.3.1.0 Database Server is patched to 11.2.3.1.0 Database Server is freshly imaged to 11.2.3.1.0 In order to bring a Database Server to 11.2.3.1.1 for all three cases the same approach for updating can be used (using YUM), but there are some minor differences: For Database Servers on a release < 11.2.3.1.0 the following high-level steps need to be performed: Subscribe to el5_x86_64_addons, ol5_x86_64_latest and  exadata_dbserver_11.2_x86_64_latest Create local repository Point Database Server to the local repository* install the update * during this process a one-time action needs to be done (details in the README) For Database Servers patched to 11.2.3.1.0: Subscribe to patch channel  exadata_dbserver_11.2_x86_64_patch Create local repository Point Database Server to the local repository Update the system For Database Servers freshly imaged to 11.2.3.1.0: Subscribe to patch channel  exadata_dbserver_11.2_x86_64_patch Create local  repository Point Database Server to the local repository Update the system The difference between 'situation 2' (Database Server is patched to 11.2.3.1.0) and 'situation 3' (Database Server is freshly imaged to 11.2.3.1.0) is that in situation 2 the existing Exadata-computenode.repo file needs to be edited while in situation 3 this file is not existing  and needs to be created or copied. Another difference is that you will end up with more OFA packages installed in situation 2. This is because none are removed during the updating process.  The YUM update functionality with the new channels is a great enhancements to the Database Server update procedure. As usual, the updates can be done in a rolling fashion so no database service downtime is required.  For detailed and up-to-date instructions always see the patch README's 1466459.1 patch 13998727 888828.1 Rene Kundersma

    Read the article

  • Assertion failure when trying to write (INSERT, UPDATE) to sqlite database on iPhone.

    - by Mark McFarlane
    I have a really frustrating error that I've spent hours looking at and cannot fix. I can get data from my db no problem with this code, but inserting or updating gives me these errors: *** Assertion failure in +[Functions db_insert_answer:question_type:score:], /Volumes/Xcode/Kanji/Classes/Functions.m:129 *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Error inserting: db_insert_answer:question_type:score:' Here is the code I'm using: [Functions db_insert_answer:[[dict_question objectForKey:@"JISDec"] intValue] question_type:@"kanji_meaning" score:arc4random() % 100]; //update EF, Next_question, n here [Functions db_update_EF:[dict_question objectForKey:@"question"] EF:EF]; To call these functions: +(sqlite3_stmt *)db_query:(NSString *)queryText{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; NSLog(queryText); if (sqlite3_prepare_v2(database, [queryText UTF8String], -1, &statement, nil) == SQLITE_OK) { } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } sqlite3_close(database); return statement; } +(void)db_insert_answer:(int)obj_id question_type:(NSString *)question_type score:(int)score{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; char *errorMsg; char *update = "INSERT INTO Answers (obj_id, question_type, score, date) VALUES (?, ?, ?, DATE())"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, obj_id); sqlite3_bind_text(statement, 2, [question_type UTF8String], -1, NULL); sqlite3_bind_int(statement, 3, score); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error inserting: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Answer saved"); } +(void)db_update_EF:(NSString *)kanji EF:(int)EF{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; //NSLog(queryText); char *errorMsg; char *update = "UPDATE Kanji SET EF = ? WHERE Kanji = '?'"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, EF); sqlite3_bind_text(statement, 2, [kanji UTF8String], -1, NULL); } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error updating: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Update saved"); } +(sqlite3 *)get_db{ sqlite3 *database; NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *copyFrom = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"/kanji_training.sqlite"]; if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB FILE ALREADY EXISTS"); } else { [fileManager copyItemAtPath:copyFrom toPath:[self dataFilePath] error:nil]; NSLog(@"COPIED DB TO DOCUMENTS BECAUSE IT DIDNT EXIST: NEW INSTALL"); } if (sqlite3_open([[self dataFilePath] UTF8String], &database) != SQLITE_OK) { sqlite3_close(database); NSAssert(0, @"Failed to open database"); NSLog(@"FAILED TO OPEN DB"); } else { if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB PATH:"); //NSLog([self dataFilePath]); } } return database; } + (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:@"kanji_training.sqlite"]; } I really can't work it out! Can anyone help me? Many thanks.

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Sensible Way to Pass Web Data to Sql Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.NET at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already-working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k+).

    UPDATE: I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements: Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements: JavaScript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

        var newid = 0;

        function metadataAdd(reference, nameid, value) {
            var t = document.createElement('input');
            t.setAttribute('name', nameid);
            t.setAttribute('id', nameid);
            t.setAttribute('type', 'hidden');
            t.setAttribute('value', value);
            reference.appendChild(t);
        }

        function multiAdd(target, parentelementid, attrid, elementtypeid) {
            var proto = document.getElementById('a' + attrid + '_proto');
            var instance = document.createElement('p');
            target.parentNode.parentNode.insertBefore(instance, target.parentNode);
            var thisid = ++newid;
            instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
            instance.id = 'n' + thisid;
            instance.className += ' new';
            metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
            metadataAdd(instance, 'n' + thisid + '_c', attrid);
            metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
            return false;
        }

    Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678); all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:

        n1_t, value is the elementtype of the element to be created
        n1_p, value is the parent id of the element (if it is a relationship)
        n1_c, value is the child id of the element (if it is a relationship)

    Deleting elements: A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete.

    With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
    When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):

        Set Data = Server.CreateObject("ADODB.Recordset")
        Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
        Data.CursorLocation = adUseClient
        Data.CursorType = adOpenDynamic
        Data.Open

    This is the recordset for values; the other is for the elements themselves. I step through the posted form and, for the element recordset, use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use a regular expression to tear apart the form keys:

        "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

        Data.AddNew
        ElementID.Value = DataID
        AttrID.Value = Integerize(Matches(0).SubMatches(3))
        AttrValue.Value = Request.Form(Key)
        Data.Update

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

        Set Cmd = Server.CreateObject("ADODB.Command")
        With Cmd
            Set .ActiveConnection = MyDBConn
            .CommandType = adCmdStoredProc
            .CommandText = "DataPost"
            .Prepared = False
            .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
            .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
        End With
        Result.Open Cmd ' previously created recordset object with options set

    Here's the function that does the XML conversion:

        Private Function XMLFromRecordset(Recordset)
            Dim Stream
            Set Stream = Server.CreateObject("ADODB.Stream")
            Stream.Open
            Recordset.Save Stream, adPersistXML
            Stream.Position = 0
            XMLFromRecordset = Stream.ReadText
        End Function

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

        CREATE PROCEDURE [dbo].[DataPost]
            @ElementMetaData ntext,
            @ElementData ntext
        AS

        DECLARE @hdoc int

        --- snip ---

        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
        ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

        EXEC sp_xml_removedocument @hdoc

        --- snip ---

        UPDATE E
        SET E.ElementTypeID = M.ElementTypeID
        FROM Element E
        INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
        WHERE E.ElementID >= 1
            AND M.ElementTypeID >= 1

    The following query does the correlation of the negative new element ids to the newly inserted ones:

        UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
        SET NewElementID = Scope_Identity() - @@RowCount + DataID
        WHERE ElementID < 0

    Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
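    For completeness, the shredding of the @ElementData parameter (elided above) follows the same pattern. Here is a rough sketch, in which the #ElementData temp table and its column types are assumptions rather than the actual code:

        -- Sketch only: assumed counterpart for the value data; the real procedure's version is elided above.
        -- Same ADO rowset namespace and rs:insert path, since this recordset is also built entirely with AddNew.
        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementData (ElementID, AttrID, Value)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            AttrID int,
            Value ntext -- assumed: adLongVarWChar maps to ntext
        )

        EXEC sp_xml_removedocument @hdoc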

    Read the article

  • Oracle Database 11g Release 2 is SAP certified on Unix and Linux platforms

    - by ?? ?
    From the US blog: Oracle Database 11g Release 2 is SAP certified for Unix and Linux platforms. SAP has certified Oracle Database 11g Release 2 on the following UNIX and Linux platforms:
    Linux x86 and x86-64
    AIX
    HP-UX IA64
    Solaris SPARC and x64
    In addition, the following features are now certified:
    Advanced Compression Option (table, RMAN backup, expdp, DG Network)
    Real Application Testing
    Oracle Database 11g Release 2 Database Vault
    Oracle Database 11g Release 2 RAC
    Advanced Encryption for tablespaces, RMAN backups, expdp, DG Network
    Direct NFS
    Deferred Segments
    Online Patching
    For details, see SAP Note 1398634.

    Read the article

  • Sharepoint bug database - any good?

    - by jkohlhepp
    We've been using Sharepoint as a poor man's bug tracking database for the last couple of projects that we did. No one is really happy with the solution so I'm looking for alternatives. I happened to stumble upon the Bug Database Template for Sharepoint. If it is halfway decent it might be a good choice for us since the transition would be smooth as the team is already used to Sharepoint. Anyone have any experience using this template? Any major problems? Any major missing features? Is there any documentation out there beyond that download page? Thanks for the help.

    Read the article

  • SQLite vs Firebird

    - by rwallace
    The scenario I'm looking at is "This program uses Postgres. Oh, you want to just use it single-user for the moment, and put off having to deal with installing a database server? Okay, in the meantime you can use it with the embedded single-user database." The question is then which embedded database is best. As I understand it, the two main contenders are SQLite and Firebird; so which is better? Criteria:
    Full SQL support, or as close as reasonably possible.
    Full text search.
    Easy to call from C#.
    Locks, or allows you to lock, the database file to make sure nobody tries to run it multiuser and ends up six months down the road with intermittent data corruption in all their backups.
    Last but far from least, reliability.
    As I understand it, the disadvantages of SQLite are:
    No right outer join. Workaround: use left outer join instead (see the example below).
    Not much integrity checking. Workaround: be really careful in the application code.
    No decimal numbers. Workaround: lots of aspirin.
    None of the above are showstoppers. Are there any others I'm missing? (I know it doesn't support some administrative and code-within-database SQL features, which aren't relevant for this kind of use case.) I don't know anything much about Firebird. What are its disadvantages?
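    A quick sketch of the right outer join workaround mentioned above, using hypothetical table names that are not from the question:

        -- Hypothetical tables for illustration: customers(id, name) and orders(id, customer_id).
        -- SQLite does not accept a RIGHT OUTER JOIN, so instead of:
        --   SELECT o.id, c.name
        --   FROM orders o
        --   RIGHT OUTER JOIN customers c ON c.id = o.customer_id;
        -- swap the table order and use a LEFT OUTER JOIN:
        SELECT o.id, c.name
        FROM customers c
        LEFT OUTER JOIN orders o ON o.customer_id = c.id;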

    Read the article

  • Windows based Apache/PHP/Perl/database stacks

    - by GreenMatt
    Does anyone know if XAMPP's Perl also provides Tk support? Does anyone know of a similar stack which provides PostgreSQL in addition to, or instead of, MySQL? I'd like this to run on Win 7 and XP. Edit: I do know I could install the other components I want "by hand", for example getting Tk from CPAN; I'd just prefer to get everything at one time if possible.

    Read the article
