Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

Page 229/1252 | < Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >

  • Entering Content Into A MySQL Database Via A Form

    - by ThatMacLad
    I've been working on creating a form that submits content into my database but I decided that rather than using a drop down menu to select the date I'd rather use a textfield. I was wondering what changes I will need to make to my table creation file. <?php mysql_connect ('localhost', 'root', 'root') ; mysql_select_db ('tmlblog'); $sql = "CREATE TABLE php_blog ( id int(20) NOT NULL auto_increment, timestamp int(20) NOT NULL, title varchar(255) NOT NULL, entry longtext NOT NULL, PRIMARY KEY (id) )"; $result = mysql_query($sql) or print ("Can't create the table 'php_blog' in the database.<br />" . $sql . "<br />" . mysql_error()); mysql_close(); if ($result != false) { echo "Table 'php_blog' was successfully created."; } ?> It's the timestamp that I need to edit to enter in via a textfield. The Title and Entry are currently being entered via that method anyway.
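
    A note on the schema: the timestamp column is already an int(20), so the table itself arguably needs no change; the textfield input just has to be converted to a unix timestamp before the INSERT (in PHP, strtotime() does this). A minimal sketch of the conversion, shown in Python purely for illustration and assuming a "YYYY-MM-DD HH:MM" input format:

        from datetime import datetime

        # Assumed textfield format; strtotime() in PHP is more forgiving about formats.
        user_input = "2010-05-12 14:30"
        epoch_seconds = int(datetime.strptime(user_input, "%Y-%m-%d %H:%M").timestamp())
        print(epoch_seconds)  # value to store in the int(20) `timestamp` column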

    Read the article

  • Locking database edit by key name

    - by Will Glass
    I need to prevent simultaneous edits to a database field. Users are executing a push operation on a structured data field, so I want to sequence the operations, not simply ignore one edit and take the second. Essentially I want to do synchronized(key name) { push value onto the database field } and set up the synchronized item so that only one operation on "key name" will occur at a time (note: I'm simplifying, it's not always a simple push). A crude way to do this would be global synchronization, but that bottlenecks the entire app. All I need to do is sequence two simultaneous writes with the same key, which is a rare but annoying occurrence. This is a web-based Java app, written with Spring (and using JPA/MySQL). The operation is triggered by a user web service call (the root cause is a user sending two simultaneous HTTP requests with the same key). I've glanced through Java Concurrency in Practice (Goetz, Lea, Bloch, et al.), but don't see an obvious solution. Still, this seems simple enough that I feel there must be an elegant way to do it.
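
    One common pattern is a registry of locks keyed by name, so that only writers sharing a key serialize. A minimal sketch of the idea (shown in Python for brevity; in Java the analogue would be a ConcurrentHashMap of ReentrantLock objects), assuming a single application process; with several app servers a database-level lock such as SELECT ... FOR UPDATE would be needed instead:

        import threading
        from collections import defaultdict

        _guard = threading.Lock()                 # protects the lock registry itself
        _key_locks = defaultdict(threading.Lock)  # one lock per key name

        def push_value(key, value):
            with _guard:                          # short critical section: just fetch the lock
                lock = _key_locks[key]
            with lock:                            # only writers using the same key queue here
                _append_to_field(key, value)      # hypothetical persistence call (the JPA push)

        def _append_to_field(key, value):
            pass  # placeholder for the actual "push value onto the database field" operation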

    Read the article

  • Parse files in directory/insert in database

    - by jakesankey
    Hey there, Here is my dillema... I have a directory full of .txt comma delimited files arranged as shown below. What I want to do is to import each of these into a SQL or SQLite database, appending each one below the last. (1 table)... I am open to C# or VB scripting and just not sure how to accomplish this. I want to only extract and import the data starting BELOW the 'Feat. Type,Feat. Name, etc' line. These are stored in a \mynetwork\directory\stats folder on my network drive. Ideally I will be able to add functionality that will make the software/script know not to re-add the file to the database once it has already done so as well. Any guidance or tips is appreciated! $$ SAMPLE= $$ FIXTURE=- $$ OPERATOR=- $$ INSPECTION PROCESS=CMM #4 $$ PROCESS OPERATION=- $$ PROCESS SEQUENCE=- $$ TRIAL=- Feat. Type,Feat. Name,Value,Actual,Nominal,Dev.,Tol-,Tol+,Out of Tol.,Comment Point,_FF_PLN_A_1,X,-17.445,-17.445,0.000,-999.000,999.000,, Point,_FF_PLN_A_1,Y,-195.502,-195.502,0.000,-999.000,999.000,, Point,_FF_PLN_A_1,Z,32.867,33.500,-0.633,-0.800,0.800,, Point,_FF_PLN_A_2,X,-73.908,-73.908,0.000,-999.000,999.000,, Point,_FF_PLN_A_2,Y,-157.957,-157.957,0.000,-999.000,999.000,, Point,_FF_PLN_A_2,Z,32.792,33.500,-0.708,-0.800,0.800,, Point,_FF_PLN_A_3,X,-100.180,-100.180,0.000,-999.000,999.000,, Point,_FF_PLN_A_3,Y,-142.797,-142.797,0.000,-999.000,999.000,, Point,_FF_PLN_A_3,Z,32.768,33.500,-0.732,-0.800,0.800,, Point,_FF_PLN_A_4,X,-160.945,-160.945,0.000,-999.000,999.000,, Point,_FF_PLN_A_4,Y,-112.705,-112.705,0.000,-999.000,999.000,, Point,_FF_PLN_A_4,Z,32.719,33.500,-0.781,-0.800,0.800,, Point,_FF_PLN_A_5,X,-158.096,-158.096,0.000,-999.000,999.000,, Point,_FF_PLN_A_5,Y,-73.821,-73.821,0.000,-999.000,999.000,, Point,_FF_PLN_A_5,Z,32.756,33.500,-0.744,-0.800,0.800,, Point,_FF_PLN_A_6,X,-195.670,-195.670,0.000,-999.000,999.000,, Point,_FF_PLN_A_6,Y,-17.375,-17.375,0.000,-999.000,999.000,, Point,_FF_PLN_A_6,Z,32.767,33.500,-0.733,-0.800,0.800,, Point,_FF_PLN_A_7,X,-173.759,-173.759,0.000,-999.000,999.000,, Point,_FF_PLN_A_7,Y,14.876,14.876,0.000,-999.000,999.000,,
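
    A sketch of one way to structure the import loop, assuming SQLite; the table and column names are illustrative rather than an existing schema, and the UNC path is the one mentioned above. Each file is skipped until the 'Feat. Type' header line is seen, and processed file names are recorded so a re-run does not import them twice:

        import csv, os, sqlite3

        STATS_DIR = r"\\mynetwork\directory\stats"   # assumed UNC path from the question
        HEADER_PREFIX = "Feat. Type"

        conn = sqlite3.connect("cmm_results.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS measurements (
            source_file TEXT, feat_type TEXT, feat_name TEXT, axis TEXT,
            actual REAL, nominal REAL, dev REAL, tol_minus REAL, tol_plus REAL,
            out_of_tol TEXT, comment TEXT)""")
        conn.execute("CREATE TABLE IF NOT EXISTS imported_files (name TEXT PRIMARY KEY)")

        for fname in os.listdir(STATS_DIR):
            if not fname.lower().endswith(".txt"):
                continue
            if conn.execute("SELECT 1 FROM imported_files WHERE name = ?", (fname,)).fetchone():
                continue                              # already imported on a previous run
            rows, seen_header = [], False
            with open(os.path.join(STATS_DIR, fname), newline="") as f:
                for record in csv.reader(f):
                    if not seen_header:               # skip the $$ preamble and the header itself
                        seen_header = bool(record) and record[0].startswith(HEADER_PREFIX)
                        continue
                    if len(record) >= 10:
                        rows.append([fname] + record[:10])
            conn.executemany("INSERT INTO measurements VALUES (?,?,?,?,?,?,?,?,?,?,?)", rows)
            conn.execute("INSERT INTO imported_files (name) VALUES (?)", (fname,))
            conn.commit()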

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically need to get the data from a multi-table join into a table on my side. The current query is something like this: select a.* from db1.dbo.tbl1 a inner join db1.dbo.tbl2 on ... inner join db1.dbo.tbl3 on ... inner join db1.dbo.tbl4 on ... inner join db2.dbo.myside on ... db1 = linked server db2 = my own database After this, I use an insert into + select to add this data to my table, which is located in db2 (usually a few hundred records - this import runs once a minute). My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1 - the linked server), for example in a stored procedure, then, even if the query looks the same, it would run faster. Is that right? This is kinda hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use in order to make this run faster? Thanks

    Read the article

  • UITableView not getting populated with sqlite database column

    - by Ingila Ejaz
    Here's the code of a method I used to populate UITableView with the contents of a column in sqlite database. However, this is giving no errors at run-time but still does not give any data in UITableView. If anyone could help, it'll be highly appreciated. - (void)viewDidLoad{ [super viewDidLoad]; self.title = @"Strings List"; UITableViewCell *cell; NSString *docsDir; NSArray *dirPaths; sqlite3_stmt *statement; // Get the documents directory dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); docsDir = [dirPaths objectAtIndex:0]; // Build the path to the database file databasePath = [[NSString alloc]initWithString: [docsDir stringByAppendingPathComponent:@"strings.sqlite"]]; NSFileManager *filemgr = [NSFileManager defaultManager]; if ([filemgr fileExistsAtPath: databasePath ] == NO) { const char *dbpath = [databasePath UTF8String]; if (sqlite3_open(dbpath, &contactDB) == SQLITE_OK) { NSString * query = @"SELECT string FROM strings"; const char * stmt = [query UTF8String]; if (sqlite3_prepare_v2(contactDB, stmt, -1,&statement, NULL)==SQLITE_OK){ if (sqlite3_step(statement) == SQLITE_ROW) { NSString *string = [[NSString alloc]initWithUTF8String:(const char*)sqlite3_column_text(statement, 0)]; cell.textLabel.text = string; } } else NSLog(@"Failed"); sqlite3_finalize(statement); } sqlite3_close(contactDB); } }

    Read the article

  • query sql database for specific value in vb.net

    - by user2952298
    I am trying to convert VBA code to VB.NET. I'm having trouble trying to search the database for a specific value around an if statement; any suggestions would be greatly appreciated. The database is called confirmation, Type is the column and Email is the value I'm looking for. Could datasets work? Function SendEmails() As Boolean Dim objOutlook As Outlook.Application Dim objOutlookMsg As Outlook.MailItem Dim objOutlookRecip As Outlook.Recipient Dim objOutlookAttach As Outlook.Attachment Dim intResponse As Integer Dim confirmation As New ADODB.Recordset Dim details As New ADODB.Recordset On Error GoTo Err_Execute Dim MyConnObj As New ADODB.Connection Dim cn As New ADODB.Connection() MyConnObj.Open( _ "Provider = sqloledb;" & _ "Server=myserver;" & _ "Database=Email_Text;" & _ "User Id=bla;" & _ "Password=bla;") confirmation.Open("Confirmation_list", MyConnObj) details.Open("MessagesToSend", MyConnObj) If details.EOF = False Then confirmation.MoveFirst() Do While Not confirmation.EOF If confirmation![Type] = "Email" Then ' Create the Outlook session. objOutlook = CreateObject("Outlook.Application") ' Create the message. End IF

    Read the article

  • Merging ILists to bind on datagridview to avoid using a database view

    - by P.Bjorklund
    In the form we have this, where IntaktsBudgetsType is a poorly named enum that only specifies whether to populate the datagridview by customer or by product (you do the budgeting either by product or by customer): private void UpdateGridView() { bs = new BindingSource(); bs.DataSource = intaktsbudget.GetDataSource(this.comboBoxKundID.Text, IntaktsBudgetsType.PerKund); dataGridViewIntaktPerKund.DataSource = bs; } This populates the datagridview with a database view that merges the product, budget and customer tables. The logic has the following method to get the correct set of IList from the repository, which only does GetTable<T>.ToList<T> public IEnumerable<IntaktsBudgetView> GetDataSource(string id, IntaktsBudgetsType type) { IList<IntaktsBudgetView> list = repository.SelectTable<IntaktsBudgetView>(); switch (type) { case IntaktsBudgetsType.PerKund: return from i in list where i.kundId == id select i; case IntaktsBudgetsType.PerProdukt: return from i in list where i.produktId == id select i; } return null; } Now I don't want to use a database view since that is read-only and I want to be able to perform CRUD actions on the datagridview. I could build a class that acts as a wrapper for the whole thing and bind the different table values to class properties, but that doesn't seem quite right since I would have to do this for every single thing that requires "the merge". Something pretty important (and probably basic) is missing from the thought process, but after spending a weekend on Google and in books I give up and turn to the SO community.

    Read the article

  • Connecting a django application to a drupal database?

    - by Hans
    I have 3-4000 nodes in a Drupal 6 installation on MySQL and want to access these data through my Django application. I have used manage.py inspectdb to get a skeleton of a model structure. I guess that there are good/historical reasons for Drupal's database schema, but I find that there is some hard-to-understand structure and that there are some challenges in applying Django models to the database. Some experiences so far: node and node revision are intertwined, and I solved this by using a OneToOneField (I don't need the versions). This means that the node's body is accessible through node.vid.body, but it works. Foreign keys need to define the proper db_column to sort out the primary keys. Terms need to use an intermediary table with ManyToManyField.through. Drupal stores both the original and the thumbnailed/resized versions of any image as files in the files table. Does anyone have experience accessing Drupal data in Django? Are there better solutions to, for example, the node <- node revision relationship? Drupal stores times/dates as unix-style timestamps in integer fields. Any recommendations? How about time zones?
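
    A hedged sketch of what the unmanaged models might look like; the column names follow the stock Drupal 6 schema but should be checked against the actual inspectdb output. The OneToOneField on vid gives the node.vid.body access described above, and a small property converts the integer timestamps:

        from datetime import datetime, timezone
        from django.db import models

        class NodeRevision(models.Model):
            vid = models.AutoField(primary_key=True)
            body = models.TextField()
            timestamp = models.IntegerField()          # unix timestamp

            class Meta:
                managed = False
                db_table = "node_revisions"

        class Node(models.Model):
            nid = models.AutoField(primary_key=True)
            title = models.CharField(max_length=255)
            created = models.IntegerField()            # unix timestamp
            vid = models.OneToOneField(NodeRevision, db_column="vid",
                                       on_delete=models.DO_NOTHING)

            class Meta:
                managed = False
                db_table = "node"

            @property
            def created_at(self):
                # interpret the stored epoch seconds as UTC; adjust if the Drupal
                # site used a different server timezone
                return datetime.fromtimestamp(self.created, tz=timezone.utc)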

    Read the article

  • uploading database file in assets not returning a record

    - by Alexander
    I have a problem with a database file not being read I have added the database file in assets called mydb but when i run my code it says its not being located. It is calling this toast Toast.makeText(this, "No contact found", Toast.LENGTH_LONG).show(); This is being called because no records are being returned. This is an example form Android Application Development book. public class DatabaseActivity extends Activity { /** Called when the activity is first created. */ TextView quest, response1, response2; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView quest = (TextView) findViewById(R.id.quest); try { String destPath = "/data/data/" + getPackageName() + "/databases/MyDB"; File f = new File(destPath); if (!f.exists()) { CopyDB( getBaseContext().getAssets().open("mydb"), new FileOutputStream(destPath)); } } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } DBAdapter db = new DBAdapter(this); //---get a contact--- db.open(); Cursor c = db.getContact(2); if (c.moveToFirst()) DisplayContact(c); else Toast.makeText(this, "No contact found", Toast.LENGTH_LONG).show(); db.close(); } public void CopyDB(InputStream inputStream, OutputStream outputStream) throws IOException { //---copy 1K bytes at a time--- byte[] buffer = new byte[1024]; int length; while ((length = inputStream.read(buffer)) > 0) { outputStream.write(buffer, 0, length); } inputStream.close(); outputStream.close(); } public void DisplayContact(Cursor c) { quest.setText(String.valueOf(c.getString(1))); //quest.setText(String.valueOf("this is a text string")); } }

    Read the article

  • formmail to email in database

    - by aarontb
    Here's what I need to set up... and I am not well versed in PHP/SQL... but this is what I'm trying to do. On the New User database, I will have a section where users can have information sent to their phone using the provider's default e-mail-to-text gateway (i.e. [email protected].) New User: Input number: ______________ (2225551212 format, no hyphens, etc.) Select Provider: (drop-down menu with the proper @provider.ext...) Then the formmail, when sent for a specific user, will build ($phone."@".$provider); or something like that, and send a preset message like: $user." requested information on ".$product." on ".$date." at ".$time."."; The $user, $product, $date, $time are all generated directly from the most recent input page for a database. Is this possible? Please help!
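
    A minimal sketch of the sending side, assuming a mail server is reachable from the web host to relay the message; the variable names mirror the form fields described above and the sender address is made up:

        import smtplib
        from email.message import EmailMessage

        def send_product_alert(phone, provider, user, product, date, time):
            msg = EmailMessage()
            msg["From"] = "alerts@example.com"          # assumed sender address
            msg["To"] = f"{phone}@{provider}"           # number plus the chosen gateway domain
            msg["Subject"] = "Product information"
            msg.set_content(f"{user} requested information on {product} on {date} at {time}.")
            with smtplib.SMTP("localhost") as smtp:     # assumed local relay
                smtp.send_message(msg)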

    Read the article

  • sqlite database help

    - by mamila
    Hi all, I have some questions regarding databases in Android. 1. In my project, I would like to create a database with a single table. I would also like to insert data into this table from a Word or Excel document. Is it possible? If so, how can I do it? Some code snippets would be of great help. 2. I have four different activities which I would like to interact with this single table. The first is the main activity and its purpose is just to launch the other three activities according to the user's choice, so it has no interaction with the db. But the other three activities will read from the table whenever called. Can you please tell me how to call the dbhelper in these activities? I am kind of unable to do this, and am currently creating one db per activity, which is not the optimal way. Any help is really appreciated, and would be better if very soon. Thanks

    Read the article

  • ASP.NET MVC3: how to edit a SQL database

    - by user1662380
    MovieStoreEntities MovieDb = new MovieStoreEntities(); public ActionResult Edit(int id) { //var EditMovie1 = MovieDb AddMovieModel EditMovie = (from M in MovieDb.Movies from C in MovieDb.Categories where M.CategoryId == C.Id where M.Id == id select new AddMovieModel { Name = M.Name, Director = M.Director, Country = M.Country, categorie = C.CategoryName, Category = M.CategoryId }).FirstOrDefault(); //AddMovieModel EditMovie1 = MovieDb.Movies.Where(m => m.Id == id).Select(m => new AddMovieModel {m.Id }).First(); List<CategoryModel> categories = MovieDb.Categories .Select(category => new CategoryModel { Category = category.CategoryName, id = category.Id }) .ToList(); ViewBag.Category = new SelectList(categories, "Id", "Category"); return View(EditMovie); } // // POST: /Default1/Edit/5 [HttpPost] public ActionResult Edit(AddMovieModel Model2) { List<CategoryModel> categories = MovieDb.Categories .Select(category => new CategoryModel { Category = category.CategoryName, id = category.Id }) .ToList(); ViewBag.Category = new SelectList(categories, "Id", "Category"); if (ModelState.IsValid) { //MovieStoreEntities model = new MovieStoreEntities(); MovieDb.SaveChanges(); return View("Thanks2", Model2); } else return View(); } This is the Code that I have wrote to edit Movie details and update database in the sql server. This dont have any compile errors, But It didnt update sql server database.

    Read the article

  • Trying to Insert Text into an Access Database with ExecuteNonQuery in VB

    - by user3673701
    I'm working on a feature where I save text from a textbox on my form to an Access database file. With the click of my "store" button I want to save the text in the corresponding textbox fields to the fields in my Access database for searching and retrieving purposes. I'm doing this in VB. The problem I run into is the INSERT INTO error with my "store" button. Any help or insight as to why I'm getting the error would be greatly appreciated. Here is my code: Dim con As System.Data.OleDb.OleDbConnection Dim cmd As System.Data.OleDb.OleDbCommand Dim dr As System.Data.OleDb.OleDbDataReader Dim sqlStr As String Private Sub btnStore_Click(sender As Object, e As EventArgs) Handles btnStore.Click con = New System.Data.OleDb.OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Users\Eric\Documents\SPC1.2.accdb") con.Open() MsgBox("Initialized") sqlStr = "INSERT INTO Master FADV Calll info (First Name, Last Name, Phone Number, Email, User Id) values (" & txtFirstName.Text & " ' ,'" & txtLastName.Text & "','" & txtPhone.Text & "', '" & txtEmail.Text & "', '" & txtUserID.Text & "', '" & txtOtherOptionTxt.Text & "','" & txtCallDetail.Text & "');" cmd = New OleDb.OleDbCommand(sqlStr, con) cmd.ExecuteNonQuery() <------- **I'M RECEIVING THE ERROR HERE** MsgBox("Stored") ''pass ther parameter as sq; query, con " connection Obj)" con.Close()'
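
    The usual culprits with this kind of INSERT are identifiers containing spaces (they need square brackets) and a column list that does not match the number of values (five columns are listed above but seven values are supplied). A hedged sketch of the same statement with bracketed names and bound parameters, shown with pyodbc; the table, column names and file path are the ones quoted above, and the values are placeholders:

        import pyodbc

        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
            r"DBQ=C:\Users\Eric\Documents\SPC1.2.accdb")
        sql = ("INSERT INTO [Master FADV Calll info] "
               "([First Name], [Last Name], [Phone Number], [Email], [User Id]) "
               "VALUES (?, ?, ?, ?, ?)")
        cur = conn.cursor()
        # placeholder values; bound parameters also avoid the quoting problems above
        cur.execute(sql, ("Jane", "Doe", "5551234567", "jane@example.com", "jdoe"))
        conn.commit()
        conn.close()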

    Read the article

  • Storing Number in an integer from sql Database

    - by ar31an
    I am using a database with a table RESUME and a column PageIndex in it, whose type is Number in the database, but when I want to store this PageIndex value in an integer I get the exception "Specified cast is not valid." Here is the code: string sql; string conString = "Provider=Microsoft.ACE.OLEDB.12.0; Data Source=D:\\Deliverable4.accdb"; protected OleDbConnection rMSConnection; protected OleDbDataAdapter rMSDataAdapter; protected DataSet dataSet; protected DataTable dataTable; protected DataRow dataRow; On button click: sql = "select PageIndex from RESUME"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql, rMSConnection); dataSet = new DataSet("pInDex"); rMSDataAdapter.Fill(dataSet, "RESUME"); dataTable = dataSet.Tables["RESUME"]; int pIndex = (int)dataTable.Rows[0][0]; rMSConnection.Close(); if (pIndex == 0) { Response.Redirect("Create Resume-1.aspx"); } else if (pIndex == 1) { Response.Redirect("Create Resume-2.aspx"); } else if (pIndex == 2) { Response.Redirect("Create Resume-3.aspx"); } } I am getting the error on this line: int pIndex = (int)dataTable.Rows[0][0];

    Read the article

  • Exception thrown when creating database

    - by Bob
    For some reason, I can't get embedded firebird sql to work on Windows using C#/.NET. Here's my code: string BuildConnectionString() { FbConnectionStringBuilder builder = new FbConnectionStringBuilder(); builder.DataSource = "localhost"; builder.UserID = "SYSDBA"; builder.Password = "masterkey"; builder.Database = "database.fdb"; builder.ServerType = FbServerType.Embedded; return builder.ConnectionString; } private void OnConnectClicked(object sender, EventArgs e) { string cString = BuildConnectionString(); FbConnection.CreateDatabase( cString ); FbConnection connection = new FbConnection( cString ); connection.Open(); //CreateTable(); //FillListView(); connection.Close(); } When I call FbConnection.CreateDatabase, I get the following exception: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B) I'm very new to SQL and Firebird in general, so I'm not sure how to resolve this issue. Anyone?

    Read the article

  • PHP & MySQL storing multiple values in a database.

    - by R.I.P.coalMINERS
    I'm basically a front-end designer and new to PHP and MySQL, but I want a user to be able to store multiple names and their meanings in a MySQL database table named names using PHP. I will dynamically create form fields with jQuery every time a user clicks on a link, so a user can enter 1 to 1,000,000 different names and their meanings, which will be stored in a table called names. All I want to know is: how can I store multiple names and their meanings in a database and then display them back to the user for them to edit or delete, using PHP and MySQL? I created the MySQL table already. CREATE TABLE names ( id INT UNSIGNED NOT NULL AUTO_INCREMENT, userID INT NOT NULL, name VARCHAR(255) NOT NULL, meaning VARCHAR(255) NOT NULL, PRIMARY KEY (id) ); Here is the HTML. <form method="post" action="index.php"> <ul> <li><label for="name">Name: </label><input type="text" name="name" id="name" /></li> <li><label for="meaning">Meaning: </label><input type="text" name="meaning" id="meaning" /></li> <li><input type="submit" name="submit" value="Save" /> <input type="hidden" name="submit" value="true" /></li> </ul> </form>
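
    The key on the form side is to repeat the inputs as array-style fields (e.g. name="name[]" and name="meaning[]" in the HTML) so the submitted values arrive as lists, and then loop over them with a parameterized INSERT. A sketch of the server side, shown in Python against the names table above; in PHP the same shape works with a prepared statement executed once per pair. The connection details and sample values are made up:

        import mysql.connector   # pip install mysql-connector-python

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="...", database="mydb")
        cur = conn.cursor()

        user_id = 7                               # assumed logged-in user's id
        pairs = [("Avery", "ruler of the elves"), # one tuple per repeated form row
                 ("Kai", "sea")]
        cur.executemany("INSERT INTO names (userID, name, meaning) VALUES (%s, %s, %s)",
                        [(user_id, n, m) for n, m in pairs])
        conn.commit()

        cur.execute("SELECT id, name, meaning FROM names WHERE userID = %s", (user_id,))
        for row in cur.fetchall():                # rows to display back for edit/delete
            print(row)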

    Read the article

  • Associate activity with database ID

    - by Mohit Deshpande
    I have a main ListView that is based on an adapter from my database. Each database id is "assigned" to an Activity via the ListView. And in my AndroidManifest, each activity has an intent filter with a custom action. Now with this, I have had to create this class: public final class ActivityLauncher { private ActivityLauncher() { } public static void launch(Context c, int id) { switch(id) { case 1: Intent intent = new Intent(); intent.setAction(SomeActivity.ACTION_SOMEACTIVITY); c.startActivity(intent); break; case 2: ... break; ... } } private static void st(Context context, String action) { Intent intent = new Intent(); intent.setAction(action); context.startActivity(intent); } } So I have to manually create a new case for the switch statement. This would get troublesome if I have to rearrange or delete an id. Is there any way to get around this?

    Read the article

  • Speed up csv export when using php from mysql database query

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via the system. The queries run very fast, but when it comes to applying that query to the database and exporting it to CSV, once the datasets get to around the 30,000 record mark (each row has around 40 columns, of which about 20 are populated with on average 20 chars of data per cell) it can take 5 or so minutes to export to CSV. So, my question is: what is the main cause of the slowness? Is it that the resultset of data from the query is so large that it is running into memory issues? Should I therefore allocate much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive, due to the way that it stores the resultset? Any advice is welcome! Thank you for reading!
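
    Two things usually help here: letting MySQL write the file itself (SELECT ... INTO OUTFILE), or streaming the result set row by row instead of materializing it all in memory, which is what CodeIgniter's result arrays (and a buffered query) do; in PHP the analogue is an unbuffered query written out with fputcsv() as rows are fetched. A sketch of the streaming idea, shown in Python with mysql-connector-python, whose default cursor is unbuffered; the table and column names are made up:

        import csv
        import mysql.connector   # pip install mysql-connector-python

        conn = mysql.connector.connect(host="localhost", user="export",
                                       password="...", database="addresses")
        cur = conn.cursor()       # unbuffered: rows are pulled as they are read
        cur.execute("SELECT * FROM postal_addresses WHERE region_id = %s", (42,))

        with open("export.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(cur.column_names)   # header row
            for row in cur:                     # write each row as it arrives
                writer.writerow(row)

        cur.close()
        conn.close()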

    Read the article

  • creating xml from database

    - by Prady
    Hi, I am creating XML from the Salesforce database. Everything works fine except when there is a & in the data being fetched. <apex:page contenttype="text/xml" controller="Test2ab"> <data wiki-section="Timeline"> <apex:repeat value="{!lsttask}" var="e"> <event start="{!e.ActivityDate}" title="{!e.Subject}"> <apex:outputText value="{!e.Subject}" /> </event> </apex:repeat> </data> </apex:page> And in the controller I am just querying: lsttask = [Select OwnerId,WhoId,Status,Subject,ActivityDate from Task where Status = 'Completed' Order By ActivityDate Desc]; How can I use an escape for the value retrieved from the database? Thanks, Prady

    Read the article

  • Storing data from database [mysql_num_rows]

    - by user1717305
    So I have this code to pass items from database to my order table. When I'm echoing the session. The session variable contains something so there's no problem with that. But when I echo those variables under numrows, it only shows nothing. Is there something wrong? <?php error_reporting(E_ALL ^ E_NOTICE); session_start(); require("connect.php"); $UserID = $_SESSION['CustNum']; $UserN = $_SESSION['UserName']; $ProdGTotal = $_SESSION['ProdGTotal']; $queryord = mysql_query("SELECT * FROM customer WHERE UserName = '$UserN'"); $numrows = mysql_num_rows($queryord); if(numrows == 1){ $row = mysql_fetch_assoc($queryord)or die ('Unable to run query:'.mysql_error()); // fetch associated: get function from a query for a database $dbstreet = $row['Street']; $dhousenum = $row['HouseNum']; $dbcnum = $row['CelNum']; $dbarea = $row['Area']; $dbbuilding = $row['Building']; $dbcity = $row['City']; $dbpnum = $row['PhoneNum']; $dbfname = $row['FName']; $dblname = $row['LName']; } else die(mysql_error()); $query4=mysql_query("INSERT INTO orderdetails VALUES ('', '$UserID', Now(), '$dbhousenum', '$dbstreet', '$dbarea', '$dbbuilding', '$dbcity', '$dbfname', '$dblname', '$dbcnum', '$dbpnum', '$ProdGTotal')",$connect); if ($query4){ header("location:index.php"); } else die(mysql_error()); ?>

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertId column is the primary key, and is referenced by AlertId in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertID of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy: Cluster: "7:Cluster,1,4:Name,s7:granger," SqlServer: "9:SqlServer,1,4:Name,s0:," Database: "8:Database,1,4:Name,s14:SqlMonitorData," Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following: "7:Cluster," – This identifies the level in the hierarchy. "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name). "4:Name,s14:SqlMonitorData," – This represents the Name property, and its corresponding value, SqlMonitorData. It’s split up like this: "4:Name," – Indicates the name of the key property. "s" – Indicates the type of the key property, in this case, it’s a string. "14:SqlMonitorData," – Indicates the value of the property. At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well an encoding scheme is used, which consists of the following: "7" – This is the length of the string "Cluster" ":" – This is a delimiter between the length of the string and the actual string’s contents. "Cluster" – This is the string itself. 7 characters. "," – This is a final terminating character that indicates the end of the encoded string. You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows: "I" – Denotes a bigint value. For example, "I65432,". "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,". "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repostory in a future post. I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in: SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType WHERE AlertId = 65769;  AlertIdAlertTypeNameTargetObjectReadSubType 16576940Custom metric7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table. 
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully however immediately after the job success is written to the event log there is a run of entries that say: SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller. There's five of these logged - the server itself has more than 20 databases on it which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a re-boot on Friday to install some patches which included KB960089. Edit: After getting the errors for a few days they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows: I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • CSV engine on MySQL server

    - by Jeff
    I don't think this is a programming question, so I am going to ask it here. Reading the book High Performance MySQL, I came across the CSV engine. The paragraph says: "The CSV engine can treat comma-separated values (CSV) files as tables, but it does not support indexes on them. This engine lets you copy files in and out of the database while the server is running. If you export a CSV file from a spreadsheet and save it in the MySQL server's data directory, the server can read it immediately. Similarly, if you write data to a CSV table, an external program can read it right away. CSV tables are especially useful as a data interchange format and for certain kinds of logging." What I get from this paragraph is that I can copy a .CSV file into the database's data directory and it should show up as a table that can be read from. However, whenever I copy a test .csv file into the directory, it does not appear as a table; I can't access it. I am using MySQL 5.5. Does anyone know why this is not working, or what I am doing wrong? Thanks
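
    The missing step is that the CSV engine only exposes files for which a table definition already exists; dropping a .csv file into the data directory on its own does not register anything in the data dictionary. A hedged sketch of declaring the table first (the table and column names are made up; CSV-engine tables also require every column to be NOT NULL and support no indexes):

        import mysql.connector   # pip install mysql-connector-python

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="...", database="test")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE log_import (
                logged_at DATETIME NOT NULL,
                message   VARCHAR(255) NOT NULL
            ) ENGINE=CSV
        """)
        # MySQL now maintains test/log_import.CSV in its data directory; replacing
        # that file with an externally produced CSV (same column layout) makes the
        # new rows visible to queries like the one below.
        cur.execute("SELECT COUNT(*) FROM log_import")
        print(cur.fetchone()[0])
        cur.close()
        conn.close()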

    Read the article

< Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >