Search Results

Search found 27337 results on 1094 pages for 't sql'.


  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting for almost all applications. I am currently designing a system where almost all the fields necessary for data storage are not known at design/compile time and are defined by the end-user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system as well. I think I've come up with a way around this but would like to pose the question to the SO community. Given that a typical normalized (OLTP) database still isn't always the best option for running reports, a good practice seems to be having a "reporting" database (OLAP) where the data from the normalized database is copied to, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to the reporting database, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly insurmountable and seems to be an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (e.g. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying? EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued and I'm also required to track history for each. The normalized design for this ends up being 1 table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on but I think it illustrates the point well):

    EAV Tables

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_Value
    -------------------------------------------------------------------
    - PersonId - Source - Field       - Value         - EffectiveDate -
    -------------------------------------------------------------------
    - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
    - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
    - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
    -------------------------------------------------------------------

    Reporting Table

    Person_Denormalized
    ----------------------------------------------------------------------------------------
    - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
    ----------------------------------------------------------------------------------------
    - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
    ----------------------------------------------------------------------------------------

    Normalized Design

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_HomeAddress
    ------------------------------------------------------
    - PersonId - Source - Value         - Effective Date -
    ------------------------------------------------------
    - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
    - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
    - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
    ------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) using SQL so my most common operation besides inserting new values will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model as I can do a single query. In the normalized design, I end up having to do 1 query per field to avoid a massive cartesian product from joining them all together.
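
    A minimal sketch of that "query once per person, pivot in code" step, assuming the sample schema above and a plain ADO.NET connection; ComputeConfidence is a hypothetical placeholder for the scoring logic that can't be expressed in SQL:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        static class EavReportBuilder
        {
            // Pull every EAV row for one person in a single query, then pivot
            // in code before writing the denormalized reporting row.
            public static void RebuildReportingRow(string connectionString, int personId)
            {
                var rows = new List<(string Field, string Source, string Value, DateTime Effective)>();
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT Field, Source, Value, EffectiveDate " +
                        "FROM Person_Value WHERE PersonId = @id", conn);
                    cmd.Parameters.AddWithValue("@id", personId);
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            rows.Add((reader.GetString(0), reader.GetString(1),
                                      reader.GetString(2), reader.GetDateTime(3)));
                }

                foreach (var field in rows.GroupBy(r => r.Field))
                {
                    var best = ComputeConfidence(field.ToList());
                    // ... UPDATE/INSERT the matching Person_Denormalized columns
                    //     (e.g. HomeAddress, HomeAddress_Confidence) here ...
                }
            }

            // Placeholder: the real logic weighs sources, recency, agreement, etc.
            static (string Value, double Score, DateTime Effective) ComputeConfidence(
                IReadOnlyList<(string Field, string Source, string Value, DateTime Effective)> values)
            {
                var latest = values.OrderByDescending(v => v.Effective).First();
                return (latest.Value, 0.5, latest.Effective);
            }
        }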

    Read the article

  • Recommended ASP.NET Shared Hosting (USA)

    - by coffeeaddict
    Ok, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side, however it's getting ridiculously expensive for how little you get. I mean, here's my scenario: 1) I am running 2 SQL Server databases which cost me $10/ea per month, so that's $20/month for 2, and I only get 500 MB disk space, which is horrible 2) I am paying $10/mo just for the hosting itself, for which I only get 1 gig of disk space! I mean, come on! 3) I am simply running 2 small apps (Screwturn Wiki & Subtext Blog)... so I don't really care if it's up 99% or not; it's not worth paying a total of $300 just to keep these 2 apps running on discountasp.net. Anyone else feel the same? Yes, I know they have great support, and probably have great servers running behind this, but in the end I really don't care as long as my site is up 95% or better. Yes, the hosting toolset rocks. But you know, I bet I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and I can control my own app pool etc. That's very powerful and essential. But does anyone have any good alternatives to discountasp that give me close to the same at a much more reasonable price point? I mean, http://www.m6.net/prices.aspx gives you 10 SQL databases for $7 and 200 gigs disk space! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts; for example, it would be nice if my 2nd SQL Server database were only $5/month, not $10... stuff like this, to make it much more realistic and fair. Opinions (people who do have discountasp.net, people who have left them, or people who have another host they like)? But geez, $300 just to host a couple DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that enables me to get a decent dedicated server! I really don't care about beta ASP.NET framework support. Not a big deal to me. If you have alternative suggestions rather than your own experience with discountasp, I'd like to know how their toolset is. Do you have complete control over your DB in terms of adding users, and same goes for the web app pool, etc.? Discountasp.net's control panel rocks. I don't want to lose the ability to at least control and add virtual directories, recycle my dedicated app pool myself, and back up my SQL database myself through tools, which is what discountasp does give you. I'd also want to know that the host at least gets the latest and greatest in terms of non-beta ASP.NET-related frameworks available to its shared hosters.

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) public int[] GetValidItemIds(int userId) Initially I'm thinking storage of the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId. AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item IDs are of the form: 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item IDs had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the ID, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly. Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions: Table with two columns (userid, itemid), both Int Clustered index on the two columns Added ~800,000 items for 180 users - a total of 144 million rows Allocated 4 GB RAM for SQL Server Dual Core 2.66 GHz laptop SSD disk Use a SqlDataReader to read all itemids into a List Loop over all users If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of that time due to the tight loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
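
    A minimal sketch of that bit-per-item idea over memory-mapped files, assuming the 4-digit-year / 6-digit-sequence split described above (the file layout and naming here are hypothetical):

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;

        class UserItemBitmap : IDisposable
        {
            const int BitsPerYear = 1_000_000;   // 6-digit sequence space, ~125 KB per file
            readonly MemoryMappedFile _mmf;
            readonly MemoryMappedViewAccessor _view;

            // One memory-mapped file per (user, year); the OS page cache then
            // does the caching the poster wants to avoid writing by hand.
            public UserItemBitmap(int userId, int year)
            {
                string path = $"user_{userId}_{year}.bits";   // hypothetical naming scheme
                _mmf = MemoryMappedFile.CreateFromFile(path, FileMode.OpenOrCreate,
                                                       null, BitsPerYear / 8);
                _view = _mmf.CreateViewAccessor();
            }

            // 10-digit ID: first 4 digits are the year, last 6 the sequence.
            static (int Year, int Seq) Split(long itemId) =>
                ((int)(itemId / 1_000_000), (int)(itemId % 1_000_000));

            public void Set(long itemId, bool present)
            {
                var (_, seq) = Split(itemId);
                byte b = _view.ReadByte(seq / 8);
                b = present ? (byte)(b | (1 << (seq % 8)))
                            : (byte)(b & ~(1 << (seq % 8)));
                _view.Write(seq / 8, b);
            }

            public bool Contains(long itemId)
            {
                var (_, seq) = Split(itemId);
                return (_view.ReadByte(seq / 8) & (1 << (seq % 8))) != 0;
            }

            public void Dispose() { _view.Dispose(); _mmf.Dispose(); }
        }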

    Read the article

  • Multiple pages in PHP: how?

    - by sandy
    Here I have a problem in this code. I want to display 4 records on each page, but I cannot get it to work. I think the problem is in the LIMIT clause. <?php if(!isset($_GET["page"])) { $page=1; include ("connect.php"); } else { $page= intval($_GET['page']); } $max=4; $from=($max*$page)- $max; $sql = mysql_query("select * from uploads,members,rooms where uploads.MemberID = members.MemberID and uploads.RoomID = rooms.RoomID and UploadCategory='L' order by UploadID desc limit $from,$max "); $num_sql = mysql_num_rows($sql); ?> <table width="400" border="0" cellspacing="1" cellpadding="0"> <tr> <td><form name="form1" method="post" action=""> <div align="center"> <table width="400" border="0" cellpadding="3" cellspacing="1" bgcolor="#CCCCCC"> <tr> <td bgcolor="#FFFFFF">&nbsp;</td> <td colspan="4" bgcolor="#FFFFFF"> <p align="center"><font size="4"> file</font></td> </tr> <tr> <td align="center" bgcolor="#FFFFFF">#</td> <td align="center" bgcolor="#FFFFFF"><strong>member</strong></td> <td align="center" bgcolor="#FFFFFF"><strong> Name</strong></td> </tr> </div> <?php //$sql1 = mysql_query("select * from book order by id desc limit $from,$max "); //$num_sql1 = mysql_num_rows($sql1); $pages=ceil($num_sql/$max); $num = mysql_num_rows($sql); //$result = mysql_query("SELECT * FROM book order by id"); while($row = mysql_fetch_array($sql)) { ?> <tr> <td align="center" bgcolor="#FFFFFF"><input name="checkbox[]" type="checkbox" id="checkbox[]" value="<?php echo $row['UploadID']; ?>"></td> <td bgcolor="#FFFFFF"> <?php echo $row[MemberName]. "<br />"; ?></td> <td bgcolor="#FFFFFF"> <?php echo $row[UpoadName]."<br />"; ?></td> </tr> <?php } ?> <tr> <td colspan="5" align="center" bgcolor="#FFFFFF"><input name="delete" type="submit" id="delete" value="delete"></td> </tr> <?php if($page>1) { $prev=$page-1; echo"<a href=".$PHP_SELF."?page=$prev\"></a>"; } for($i=1;$i<=$pages;$i++) { if($page==$i) echo" [$i] "; else { echo" <a href=".$PHP_SELF."?page=$i\">$i</a> "; } } if($page<$pages) { $next=$page+1; echo"<a href=".$PHP_SELF."?page=$next\"></a>"; } ?> </html> Thanks a lot
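
    For what it's worth, the paging arithmetic in the snippet is the standard offset formula; a small sketch of just that math (written here in C#, since the arithmetic itself is language-agnostic):

        // Offset for 1-based page numbers: rows to skip before the current page.
        // ($max * $page) - $max in the PHP above is the same as (page - 1) * pageSize.
        static int PageOffset(int page, int pageSize)
        {
            if (page < 1) page = 1;          // guard against page=0 from the query string
            return (page - 1) * pageSize;
        }
        // e.g. pageSize = 4: page 1 -> LIMIT 0,4; page 2 -> LIMIT 4,4; page 3 -> LIMIT 8,4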

    Read the article

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks! We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?) However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on: I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables and so it's only a 3-4 second compile, not 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas? Profiling points to ELinqQueryState.GetExecutionPlan as the place to start and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads rather than all 32 tables at once likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement! Thanks for any suggestions!
We're running against SQL Server 2008, in case that matters, on XP / 7 / Server 2008 R2, using RTM VS2010.
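
    A minimal sketch of the background warm-up idea the poster floats, assuming ASP.NET hosting; MyEntities and the query shown are hypothetical stand-ins for the real context and its heavy queries:

        using System.Linq;
        using System.Threading;

        public static class QueryWarmup
        {
            // Pay the query-compilation cost on a background thread at startup
            // so the first real request doesn't block for 12-15 seconds.
            public static void Start()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        using (var ctx = new MyEntities())   // hypothetical context type
                        {
                            // Touch each expensive query once; EF caches the compiled
                            // plan per AppDomain, so later calls skip compilation.
                            var warm = ctx.Orders.Include("Customer").FirstOrDefault();
                        }
                    }
                    catch { /* warm-up is best effort; ignore failures */ }
                });
            }
        }
        // Call QueryWarmup.Start() from Application_Start in Global.asax.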

    Read the article

  • Is it a missing implementation in Hibernate's JPA implementation?

    - by Jegan
    Hi all, while trying to understand the transaction-type attribute of persistence.xml, I came across an issue / discrepancy between Hibernate Core and JPA-Hibernate which looks weird. I am not quite sure whether it is a missing implementation in Hibernate's JPA support. Let me post the comparison between the outcome of the JPA implementation and the Hibernate implementation of the same concept. Environment Eclipse 3.5.1 JSE v1.6.0_05 Hibernate v3.2.3 [for Hibernate Core] Hibernate-EntityManager v3.4.0 [for JPA] MySQL DB v5.0 Issue 1. Hibernate Core package com.expt.hibernate.core; import java.io.Serializable; public final class Student implements Serializable { private int studId; private String studName; private String studEmailId; public Student(final String studName, final String studEmailId) { this.studName = studName; this.studEmailId = studEmailId; } public int getStudId() { return this.studId; } public String getStudName() { return this.studName; } public String getStudEmailId() { return this.studEmailId; } private void setStudId(int studId) { this.studId = studId; } private void setStudName(String studName) { this.studName = studName; } private void setStudEmailId(int studEmailId) { this.studEmailId = studEmailId; } } 2. JPA implementation of Hibernate package com.expt.hibernate.jpa; import java.io.Serializable; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.Table; @Entity @Table(name = "Student_Info") public final class Student implements Serializable { @Id @GeneratedValue @Column(name = "STUD_ID", length = 5) private int studId; @Column(name = "STUD_NAME", nullable = false, length = 25) private String studName; @Column(name = "STUD_EMAIL", nullable = true, length = 30) private String studEmailId; public Student(final String studName, final String studEmailId) { this.studName = studName; this.studEmailId = studEmailId; } public int getStudId() { return this.studId; } public String getStudName() { return this.studName; } public String getStudEmailId() { return this.studEmailId; } } Also, I have provided the DB configuration properties in the associated hibernate-cfg.xml [in case of Hibernate Core] and persistence.xml [in case of JPA (Hibernate EntityManager)]. I then created a driver that adds a student, queries for the list of students, and prints their details. The issue comes when you run the driver program. Hibernate Core - output Exception in thread "main" org.hibernate.InstantiationException: No default constructor for entity: com.expt.hibernate.core.Student at org.hibernate.tuple.PojoInstantiator.instantiate(PojoInstantiator.java:84) at org.hibernate.tuple.PojoInstantiator.instantiate(PojoInstantiator.java:100) at org.hibernate.tuple.entity.AbstractEntityTuplizer.instantiate(AbstractEntityTuplizer.java:351) at org.hibernate.persister.entity.AbstractEntityPersister.instantiate(AbstractEntityPersister.java:3604) .... .... This exception is thrown the very first time the driver is executed. JPA Hibernate - output First execution of the driver on a fresh DB provided the following output. DEBUG SQL:111 - insert into student.Student_Info (STUD_EMAIL, STUD_NAME) values (?, ?) 17:38:24,229 DEBUG SQL:111 - select student0_.STUD_ID as STUD1_0_, student0_.STUD_EMAIL as STUD2_0_, student0_.STUD_NAME as STUD3_0_ from student.Student_Info student0_ student list size == 1 1 || Jegan || [email protected] The second execution of the driver provided the following output.
DEBUG SQL:111 - insert into student.Student_Info (STUD_EMAIL, STUD_NAME) values (?, ?) 17:40:25,254 DEBUG SQL:111 - select student0_.STUD_ID as STUD1_0_, student0_.STUD_EMAIL as STUD2_0_, student0_.STUD_NAME as STUD3_0_ from student.Student_Info student0_ Exception in thread "main" javax.persistence.PersistenceException: org.hibernate.InstantiationException: No default constructor for entity: com.expt.hibernate.jpa.Student at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:614) at org.hibernate.ejb.QueryImpl.getResultList(QueryImpl.java:76) at driver.StudentDriver.main(StudentDriver.java:43) Caused by: org.hibernate.InstantiationException: No default constructor for entity: com.expt.hibernate.jpa.Student .... .... Could anyone please let me know if you have encountered this sort of inconsistency? Also, could anyone please let me know if the issue is a missing implementation in JPA-Hibernate? ~ Jegan

    Read the article

  • PHP: table structure

    - by A3efan
    I'm developing a website that has some audio courses; each course can have multiple lessons. I want to display each course in its own table with its different lessons. These are my tables:

    Table: courses
      id, title
    Table: lessons
      id, cid (course id), title, date, file

    And this is my SQL statement: $sql = "SELECT lessons.*, courses.title AS course FROM lessons INNER JOIN courses ON courses.id = lessons.cid GROUP BY lessons.id ORDER BY lessons.id" ; Can someone help me with the PHP code? This is the code I have written: mysql_select_db($database_config, $config); mysql_query("set names utf8"); $sql = "SELECT lessons.*, courses.title AS course FROM lessons INNER JOIN courses ON courses.id = lessons.cid GROUP BY lessons.id ORDER BY lessons.id" ; $result = mysql_query($sql) or die(mysql_error()); while ($row = mysql_fetch_assoc($result)) { echo "<p><span class='heading1'>" . $row['course'] . "</span> </p> "; echo "<p class='datum'>Posted onder <a href='*'>*</a>, latest update on " . strftime("%A %d %B %Y %H:%M", strtotime($row['date'])); } echo "</p>"; echo "<class id='text'>"; echo "<p>...</p>"; echo "<table border: none cellpadding='1' cellspacing='1'>"; echo "<tr>"; echo "<th>Nr.</th>"; echo "<th width='450'>Lesso</th>"; echo "<th>Date</th>"; echo "<th>Download</th>"; echo "</tr>"; echo "<tr>"; echo "<td>" . $row['nr'] . "</td>"; echo "<td>" . $row['title'] . "</td>"; echo "<td>" . strftime("%d/%m/%Y", strtotime($row['date'])) . "</td>"; echo "<td><a href='audio/" . rawurlencode($row['file']) . "'>MP3</a></td>"; echo "</tr>"; echo "</table>"; echo "<br>"; } ?>
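
    The shape being asked for (one table per course with its lessons inside) is a classic group-by / control-break loop; a small sketch of just that grouping logic, written here in C# with hypothetical row fields mirroring the query's columns:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        record Lesson(string Course, string Title, DateTime Date);

        class GroupingSketch
        {
            static void Main()
            {
                var rows = new List<Lesson>
                {
                    new("Course A", "Lesson 1", DateTime.Today),
                    new("Course A", "Lesson 2", DateTime.Today),
                    new("Course B", "Lesson 1", DateTime.Today),
                };

                // One "table" per course: group the flat result set once,
                // then loop over each group's rows, rather than querying per course.
                foreach (var course in rows.GroupBy(r => r.Course))
                {
                    Console.WriteLine($"== {course.Key} ==");   // table heading
                    foreach (var lesson in course)
                        Console.WriteLine($"{lesson.Title}  {lesson.Date:d}");
                }
            }
        }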

    Read the article

  • combobox with for each loop

    - by Mary
    Hi, I am a student. Does anybody know, after populating the combobox with the database values, how to display the same value in the text box? When I select a name in the combobox, the same name should be displayed in the text box; I am using SelectedItem. Here is the code. I am getting the following error from the foreach loop: foreach statement cannot operate on variables of type 'object' because 'object' does not contain a public definition for 'GetEnumerator' using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Data.OleDb; namespace DBExample { public partial class Form1 : Form { private OleDbConnection dbConn; // Connectionn object private OleDbCommand dbCmd; // Command object private OleDbDataReader dbReader;// Data Reader object private Member aMember; private string sConnection; // private TextBox tb1; // private TextBox tb2; private string sql; public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { try { // Construct an object of the OleDbConnection // class to store the connection string // representing the type of data provider // (database) and the source (actual db) sConnection = "Provider=Microsoft.Jet.OLEDB.4.0;" + "Data Source=c:member.mdb"; dbConn = new OleDbConnection(sConnection); dbConn.Open(); // Construct an object of the OleDbCommand // class to hold the SQL query. Tie the // OleDbCommand object to the OleDbConnection // object sql = "Select * From memberTable Order " + "By LastName , FirstName "; dbCmd = new OleDbCommand(); dbCmd.CommandText = sql; dbCmd.Connection = dbConn; // Create a dbReader object dbReader = dbCmd.ExecuteReader(); while (dbReader.Read()) { aMember = new Member (dbReader["FirstName"].ToString(), dbReader["LastName"].ToString(), dbReader["StudentId"].ToString(), dbReader["PhoneNumber"].ToString()); // tb1.Text = dbReader["FirstName"].ToString(); // tb2.Text = dbReader["LastName"].ToString(); // tb1.Text = aMember.X().ToString(); //tb2.Text = aMember.Y(aMember.ID).ToString(); this.comboBox1.Items.Add(aMember.FirstName.ToString()); // this.listBox1.Items.Add(aMember.ToString()); // MessageBox.Show(aMember.ToString()); // Console.WriteLine(aMember.ToString()); } dbReader.Close(); dbConn.Close(); } catch (System.Exception exc) { MessageBox.Show("show" + exc); } } private void DbGUI_Load(object sender, EventArgs e) { } private void comboBox1_SelectedIndexChanged(object sender, EventArgs e) { this.textBox1.Text = comboBox1.SelectedItem.ToString(); textBox2.Text = string.Empty; foreach (var item in comboBox1.SelectedItem) textBox2.Text += item.ToString(); } private void textBox2_TextChanged(object sender, EventArgs e) { } private void textBox1_TextChanged(object sender, EventArgs e) { } } }
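
    For reference, a sketch of why the compiler complains and the usual fix: SelectedItem is a single object, not a collection, so it cannot be the target of foreach; copying the current selection needs no loop at all, and walking every entry means iterating Items instead:

        private void comboBox1_SelectedIndexChanged(object sender, EventArgs e)
        {
            // The selection is one object; just render it.
            this.textBox1.Text = comboBox1.SelectedItem?.ToString() ?? string.Empty;

            // If the goal really is to list all entries, iterate Items,
            // which does implement IEnumerable.
            textBox2.Text = string.Empty;
            foreach (var item in comboBox1.Items)
                textBox2.Text += item.ToString() + Environment.NewLine;
        }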

    Read the article

  • PHP database selection issue

    - by Citroenfris
    I'm in a bit of a pickle freshening up my PHP; it's been about 3 years since I last coded in PHP. Any insights are welcome! I'll give you as much information as I possibly can to resolve this error, so here goes!

    Files
      config.php, database.php, news.php, BLnews.php, index.php

    Includes
      config.php -> news.php
      database.php -> news.php
      news.php -> BLnews.php
      BLnews.php -> index.php

    Now the problem with my current code is that the database connection is being made but my database refuses to be selected. The query I have should work, but since my database never gets selected it's kind of annoying to get any data exchange going! database.php <?php class Database { //------------------------------------------- // Connects to the database //------------------------------------------- function connect() { if (isset($dbhost) && isset($dbuser) && isset($dbpass)) { $con = mysql_connect($dbhost, $dbuser, $dbpass) or die("Could not connect: " . mysql_error()); } }// end function connect function selectDB() { if (isset($dbname) && isset($con)) { $selected_db = mysql_select_db($dbname, $con) or die("Could not select test DB"); } } } // end class Database ?> News.php <?php // include the config file and database class include 'config.php'; include 'database.php'; ... ?> BLnews.php <?php // include the news class include 'news.php'; // create an instance of the Database class and call it $db $db = new Database; $db -> connect(); $db->selectDB(); class BLnews { function getNews() { $sql = "SELECT * FROM news"; if (isset($sql)) { $result = mysql_query($sql) or die("Could not execute query. Reason: " .mysql_error()); } return $result; } ?> index.php <?php ... include 'includes/BLnews.php'; $blNews = new BLnews(); $news = $blNews->getNews(); ?> ... <?php while($row = mysql_fetch_array($news)) { echo '<div class="post">'; echo '<h2><a href="#"> ' . $row["title"] .'</a></h2>'; echo '<p class="post-info">Posted by <a href="#"> </a> | <span class="date"> Posted on <a href="#">' . $row["date"] . '</a></span></p>'; echo $row["content"]; echo '</div>'; } ?> Well, this is pretty much everything that should get the information going; however, due to the mysql_error in $result = mysql_query($sql) or die("Could not execute query. Reason: " .mysql_error()); I can see the error and it says: Could not execute query. Reason: No database selected I honestly have no idea why it would not work and I've been fiddling with it for quite some time now. Help is most welcome and I thank you in advance! Greets, Lemon

    Read the article

  • Need help to properly remove duplicates in NHibernate

    - by Michael D. Kirkpatrick
    Here is the problem I am having. I have a database with over 100 records in it. I am paging through the data to get 9 results at a time. When I added a check to see if items are active, it caused the results to start doubling up. A little background: "Product" is the actual product line "ProductSkus" are the actual products that exist in the product line When there is more than 1 ProductSku within a Product, it causes a duplicate entry to be returned. See the NHibernate query below: result = this.Session.CreateCriteria<Model.Product>() .Add(Expression.Eq("IsActive", true)) .AddOrder(new Order("Name", true)) .SetFirstResult(indexNumber).SetMaxResults(maxNumber) // This part of the query duplicates the products .CreateAlias("ProductSkus", "ProdSkus", JoinType.InnerJoin) .Add(Expression.Eq("ProdSkus.IsActive", true)) .CreateAlias("ProductToSubcategory", "ProdToSubcat") .CreateAlias("ProdToSubcat.ProductSubcategory", "ProdSubcat") .Add(Expression.Eq("ProdSubcat.ID", subCatId)) // This part takes out the duplicate products - Removes too many items... // Turns out that with .SetFirstResult(indexNumber).SetMaxResults(maxNumber) // it gets 9 records back then the duplicates are removed. // Example: // Total Records over 100 // Max = 9 // 4 Duplicates removed // Yields 5 records when there should be 9 // Why??? This line is ran in NHibernate on the data after it has been extracted from the SQL server. .SetResultTransformer(new NHibernate.Transform.DistinctRootEntityResultTransformer()) .List<Model.Product>(); I added the DistinctRootEntityResultTransformer to clean up the duplicates. The problem is that it pulls 9 records back that contain duplicates; DistinctRootEntityResultTransformer then cleans up the duplicates in those 9 records. What I basically need is a DISTINCT statement run on the SQL server to begin with. However, DISTINCT in SQL is not going to work, since NHibernate by default wants to add every field from every table in the select part of the statement. I am only using the fields that belong to the root table to begin with (Model.Product). If I could tell NHibernate not to add the fields of the joined tables to the select part of the statement, along with adding DISTINCT, it would work.
I use NHibernate Profiler to see the actual query: SELECT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_, prodskus1_.ID as ID373_0_, prodskus1_.Description as Descript2_373_0_, prodskus1_.PartNumber as PartNumber373_0_, prodskus1_.Price as Price373_0_, prodskus1_.IsKit as IsKit373_0_, prodskus1_.IsActive as IsActive373_0_, prodskus1_.IsFeaturedProduct as IsFeatur7_373_0_, prodskus1_.DateAdded as DateAdded373_0_, prodskus1_.Weight as Weight373_0_, prodskus1_.TimesViewed as TimesVi10_373_0_, prodskus1_.TimesOrdered as TimesOr11_373_0_, prodskus1_.ProductID as ProductID373_0_, prodskus1_.OverSizedBoxID as OverSiz13_373_0_, prodtosubc2_.ID as ID362_1_, prodtosubc2_.MasterSubcategory as MasterSu2_362_1_, prodtosubc2_.ProductID as ProductID362_1_, prodtosubc2_.ProductSubcategoryID as ProductS4_362_1_, prodsubcat3_.ID as ID352_2_, prodsubcat3_.Name as Name352_2_, prodsubcat3_.ProductCategoryID as ProductC3_352_2_, prodsubcat3_.ImageID as ImageID352_2_, prodsubcat3_.TriggerShow as TriggerS5_352_2_ FROM Product this_ inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1) inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */ ORDER BY this_.Name asc If I hand-modify the query and run it directly on the SQL server I get the result set I want (I removed all the extra fields in the select section and added DISTINCT): SELECT DISTINCT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_ FROM Product this_ inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1) inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */ ORDER BY this_.Name asc The big question I now must ask is... What must I change in the NHibernate query to ultimately get the exact same result? Thanks in advance.
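
    One common workaround for exactly this pattern is to page over DISTINCT root IDs in a detached subquery, then load the full entities for just those IDs; a sketch against the criteria shown above (property and alias names assumed to match the question's mappings):

        // Subquery: distinct Product ids that satisfy the filters.
        var productIds = DetachedCriteria.For<Model.Product>("p")
            .Add(Restrictions.Eq("p.IsActive", true))
            .CreateAlias("p.ProductSkus", "sku", JoinType.InnerJoin)
            .Add(Restrictions.Eq("sku.IsActive", true))
            .CreateAlias("p.ProductToSubcategory", "p2s")
            .CreateAlias("p2s.ProductSubcategory", "subcat")
            .Add(Restrictions.Eq("subcat.ID", subCatId))
            .SetProjection(Projections.Distinct(Projections.Property("p.ID")));

        // Outer query: no joins, so no duplicate rows, and the paging is applied
        // to whole products rather than to joined rows.
        var result = this.Session.CreateCriteria<Model.Product>()
            .Add(Subqueries.PropertyIn("ID", productIds))
            .AddOrder(Order.Asc("Name"))
            .SetFirstResult(indexNumber)
            .SetMaxResults(maxNumber)
            .List<Model.Product>();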

    Read the article

  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want to sort based on the values in another associated model and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes class SortFields < Sequel::Model(:sort_fields) set_primary_key :objectid end class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid end Some backstory: the data is imported from a legacy system into MySQL. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes. If I set the dataset to sort on a field in items like so class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset(order(:sortnumber)) end then the expected clause is added to the generated SQL, e.g.: >> Items.limit(1).sql => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1" and queries still return objects: >> Items.limit(1).first.class => Items If I order it by the associated fields though... class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). order(:sort1, :sort2, :sort3) ) end ...I get hashes ?> Items.limit(1).first.class => Hash My first thought was that this happens because all fields from sort_fields are included in the results, and that maybe if I selected only the fields from items I would get Items objects again: class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). select(:items.*). order(:sort1, :sort2, :sort3) ) end The generated SQL is what I would expect: >> Items.limit(1).sql => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1" It returns the same rows as the set_dataset(order(:sortnumber)) version but it still doesn't work: >> Items.limit(1).first.class => Hash Before I add the sort fields to the items table so that they can all live happily in the same model, is there a way to tell Sequel to return an object when it wants to return a hash?

    Read the article

  • How to select chosen columns from two different entities into one DTO using NHibernate?

    - by Pawel Krakowiak
    I have two classes (just recreating the problem): public class User { public virtual int Id { get; set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual IList<OrgUnitMembership> OrgUnitMemberships { get; set; } } public class OrgUnitMembership { public virtual int UserId { get; set; } public virtual int OrgUnitId { get; set; } public virtual DateTime JoinDate { get; set; } public virtual DateTime LeaveDate { get; set; } } There's a Fluent NHibernate map for both, of course: public class UserMapping : ClassMap<User> { public UserMapping() { Table("Users"); Id(e => e.Id).GeneratedBy.Identity(); Map(e => e.FirstName); Map(e => e.LastName); HasMany(x => x.OrgUnitMemberships) .KeyColumn(TypeReflector<OrgUnitMembership> .GetPropertyName(p => p.UserId)).ReadOnly().Inverse(); } } public class OrgUnitMembershipMapping : ClassMap<OrgUnitMembership> { public OrgUnitMembershipMapping() { Table("OrgUnitMembership"); CompositeId() .KeyProperty(x=>x.UserId) .KeyProperty(x=>x.OrgUnitId); Map(x => x.JoinDate); Map(x => x.LeaveDate); References(oum => oum.OrgUnit) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.OrgUnitId)).ReadOnly(); References(oum => oum.User) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.UserId)).ReadOnly(); } } What I want to do is to retrieve some users based on criteria, but I would like to combine all columns from the Users table with some columns from the OrgUnitMemberships table, analogous to a SQL query: select u.*, m.JoinDate, m.LeaveDate from Users u inner join OrgUnitMemberships m on u.Id = m.UserId where m.OrgUnitId = :ouid I am totally lost; I have tried many different options. Using a plain SQL query almost works, but because there are some nullable enums in the User class, AliasToBean fails to transform; otherwise, wrapping a SQL query would work like this: return Session .CreateSQLQuery(sql) .SetParameter("ouid", orgUnitId) .SetResultTransformer(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>() I tried the code below as a test (a few different variants), but I'm not sure what I'm doing. It works partially: I get instances of UserDTO back, and the properties coming from OrgUnitMembership (the dates) are filled, but all the properties from User are null: User user = null; OrgUnitMembership membership = null; UserDTO dto = null; var users = Session.QueryOver(() => user) .SelectList(list => list .Select(() => user.Id) .Select(() => user.FirstName) .Select(() => user.LastName)) .JoinAlias(u => u.OrgUnitMemberships, () => membership) //.JoinQueryOver<OrgUnitMembership>(u => u.OrgUnitMemberships) .SelectList(list => list .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate) .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate)) .TransformUsing(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>();
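
    A sketch of the usual fix for the null User properties: AliasToBean only populates DTO properties whose projection alias matches, so the user columns need WithAlias too, in a single SelectList (this assumes UserDTO has Id, FirstName, LastName, JoinDate and LeaveDate properties):

        User user = null;
        OrgUnitMembership membership = null;
        UserDTO dto = null;

        var users = Session.QueryOver(() => user)
            .JoinAlias(u => u.OrgUnitMemberships, () => membership)
            .SelectList(list => list
                // Without WithAlias, these projections have no alias matching a
                // UserDTO property, so the transformer leaves them null.
                .Select(() => user.Id).WithAlias(() => dto.Id)
                .Select(() => user.FirstName).WithAlias(() => dto.FirstName)
                .Select(() => user.LastName).WithAlias(() => dto.LastName)
                .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate)
                .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate))
            .TransformUsing(Transformers.AliasToBean<UserDTO>())
            .List<UserDTO>();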

    Read the article

  • Doesn't get the output in Java Database Connectivity

    - by Dooree
    I'm working on Java Database Connectivity through the Eclipse IDE. I built a database through the Ubuntu terminal, and I need to connect to and work with it. However, when I try to run the following code, I don't get any error, but the following output is shown instead; does anybody know why I don't get the output from the code? //STEP 1. Import required packages import java.sql.*; public class FirstExample { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost/EMP"; // Database credentials static final String USER = "username"; static final String PASS = "password"; public static void main(String[] args) { Connection conn = null; Statement stmt = null; try{ //STEP 2: Register JDBC driver Class.forName("com.mysql.jdbc.Driver"); //STEP 3: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL,USER,PASS); //STEP 4: Execute a query System.out.println("Creating statement..."); stmt = conn.createStatement(); String sql; sql = "SELECT id, first, last, age FROM Employees"; ResultSet rs = stmt.executeQuery(sql); //STEP 5: Extract data from result set while(rs.next()){ //Retrieve by column name int id = rs.getInt("id"); int age = rs.getInt("age"); String first = rs.getString("first"); String last = rs.getString("last"); //Display values System.out.print("ID: " + id); System.out.print(", Age: " + age); System.out.print(", First: " + first); System.out.println(", Last: " + last); } //STEP 6: Clean-up environment rs.close(); stmt.close(); conn.close(); }catch(SQLException se){ //Handle errors for JDBC se.printStackTrace(); }catch(Exception e){ //Handle errors for Class.forName e.printStackTrace(); }finally{ //finally block used to close resources try{ if(stmt!=null) stmt.close(); }catch(SQLException se2){ }// nothing we can do try{ if(conn!=null) conn.close(); }catch(SQLException se){ se.printStackTrace(); }//end finally try }//end try System.out.println("Goodbye!"); }//end main }//end FirstExample <ConnectionProperties> <PropertyCategory name="Connection/Authentication"> <Property name="user" required="No" default="" sortOrder="-2147483647" since="all"> The user to connect as </Property> <Property name="password" required="No" default="" sortOrder="-2147483646" since="all"> The password to use when connecting </Property> <Property name="socketFactory" required="No" default="com.mysql.jdbc.StandardSocketFactory" sortOrder="4" since="3.0.3"> The name of the class that the driver should use for creating socket connections to the server. This class must implement the interface 'com.mysql.jdbc.SocketFactory' and have public no-args constructor. </Property> <Property name="connectTimeout" required="No" default="0" sortOrder="9" since="3.0.1"> Timeout for socket connect (in milliseconds), with 0 being no timeout. Only works on JDK-1.4 or newer. Defaults to '0'. </Property> ...

    Read the article

  • UITableview has problem reloading

    - by seelani
    Hi guys, I've kinda finished my application for a school project but have run into a major "bug". It's an account management application. I'm unable to insert a picture here, so here's a link: http://i232.photobucket.com/albums/ee112/seelani/Screenshot2010-12-22atPM075512.png Here's the problem: when I click on the plus sign, I push a nav controller to load another view to handle the adding and deleting of categories. When I add and return back to the view above, it doesn't update. It only updates after I hit the button on the right, which leads to another view used to change some settings, and return back to the page. I did some research on viewWillAppear and such but I'm still confused as to why it doesn't work properly. This problem also affects my program when I delete a category: when I return back to this view it crashes because the view has not reloaded successfully. I get this error when deleting and returning to the view: "*** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSMutableArray objectAtIndex:]: index 4 beyond bounds [0 .. 3]'". [EDIT] Table View Code: @class LoginViewController; @implementation CategoryTableViewController @synthesize categoryTableViewController; @synthesize categoryArray; @synthesize accountsTableViewController; @synthesize editAccountTable; @synthesize window; CategoryMgmtTableController *categoryMgmtTableController; ChangePasswordView *changePasswordView; - (void) save_Clicked:(id)sender { /* UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Category Management" message:@"Load category management table view" delegate:self cancelButtonTitle: @"OK" otherButtonTitles:nil]; [alert show]; [alert release]; */ KeyCryptAppAppDelegate *appDelegate = (KeyCryptAppAppDelegate *)[[UIApplication sharedApplication] delegate]; categoryMgmtTableController = [[CategoryMgmtTableController alloc]initWithNibName:@"CategoryMgmtTable" bundle:nil]; [appDelegate.categoryNavController pushViewController:categoryMgmtTableController animated:YES]; } - (void) change_Clicked:(id)sender { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Change Password" message:@"Change password View" delegate:self cancelButtonTitle: @"OK" otherButtonTitles:nil]; [alert show]; [alert release]; KeyCryptAppAppDelegate *appDelegate = (KeyCryptAppAppDelegate *)[[UIApplication sharedApplication] delegate]; changePasswordView = [[ChangePasswordView alloc]initWithNibName:@"ChangePasswordView" bundle:nil]; [appDelegate.categoryNavController pushViewController:changePasswordView animated:YES]; /* KeyCryptAppAppDelegate *appDelegate = (KeyCryptAppAppDelegate *)[[UIApplication sharedApplication] delegate]; categoryMgmtTableController = [[CategoryMgmtTableController alloc]initWithNibName:@"CategoryMgmtTable" bundle:nil]; [appDelegate.categoryNavController pushViewController:categoryMgmtTableController animated:YES]; */ } #pragma mark - #pragma mark Initialization /* - (id)initWithStyle:(UITableViewStyle)style { // Override initWithStyle: if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. 
if ((self = [super initWithStyle:style])) { } return self; } */ -(void) initializeCategoryArray { sqlite3 *db= [KeyCryptAppAppDelegate getNewDBConnection]; KeyCryptAppAppDelegate *appDelegate = (KeyCryptAppAppDelegate *)[[UIApplication sharedApplication] delegate]; const char *sql = [[NSString stringWithFormat:(@"Select Category from Categories;")]cString]; const char *cmd = [[NSString stringWithFormat:@"pragma key = '%@' ", appDelegate.pragmaKey]cString]; sqlite3_stmt *compiledStatement; sqlite3_exec(db, cmd, NULL, NULL, NULL); if (sqlite3_prepare_v2(db, sql, -1, &compiledStatement, NULL)==SQLITE_OK) { while(sqlite3_step(compiledStatement) == SQLITE_ROW) [categoryArray addObject:[NSString stringWithUTF8String:(char*) sqlite3_column_text(compiledStatement, 0)]]; } else { NSAssert1(0,@"Error preparing statement", sqlite3_errmsg(db)); } sqlite3_finalize(compiledStatement); } #pragma mark - #pragma mark View lifecycle - (void)viewDidLoad { // Uncomment the following line to display an Edit button in the navigation bar for this view controller. // self.navigationItem.rightBarButtonItem = self.editButtonItem; [super viewDidLoad]; } - (void)viewWillAppear:(BOOL)animated { self.title = NSLocalizedString(@"Categories",@"Types of Categories"); categoryArray = [[NSMutableArray alloc]init]; [self initializeCategoryArray]; self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(save_Clicked:)] autorelease]; self.navigationItem.leftBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAction target:self action:@selector(change_Clicked:)] autorelease]; [super viewWillAppear:animated]; } - (void)viewDidAppear:(BOOL)animated { NSLog (@"view did appear"); [super viewDidAppear:animated]; } - (void)viewWillDisappear:(BOOL)animated { NSLog (@"view will disappear"); [super viewWillDisappear:animated]; } - (void)viewDidDisappear:(BOOL)animated { [categoryTableView reloadData]; NSLog (@"view did disappear"); [super viewDidDisappear:animated]; } /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ #pragma mark - #pragma mark Table view data source - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. return [self.categoryArray count]; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell... NSUInteger row = [indexPath row]; cell.text = [categoryArray objectAtIndex:row]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; return cell; } /* // Override to support conditional editing of the table view. - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. 
return YES; } */ /* // Override to support editing the table view. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:YES]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view } } */ /* // Override to support rearranging the table view. - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. return YES; } */ #pragma mark - #pragma mark Table view delegate - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSString *selectedCategory = [categoryArray objectAtIndex:[indexPath row]]; NSLog (@"AccountsTableView.xib is called."); if ([categoryArray containsObject: selectedCategory]) { if (self.accountsTableViewController == nil) { AccountsTableViewController *aAccountsView = [[AccountsTableViewController alloc]initWithNibName:@"AccountsTableView"bundle:nil]; self.accountsTableViewController =aAccountsView; [aAccountsView release]; } NSInteger row =[indexPath row]; accountsTableViewController.title = [NSString stringWithFormat:@"%@", [categoryArray objectAtIndex:row]]; // This portion pushes the categoryNavController. KeyCryptAppAppDelegate *delegate = [[UIApplication sharedApplication] delegate]; [self.accountsTableViewController initWithTextSelected:selectedCategory]; KeyCryptAppAppDelegate *appDelegate = (KeyCryptAppAppDelegate *)[[UIApplication sharedApplication] delegate]; appDelegate.pickedCategory = selectedCategory; [delegate.categoryNavController pushViewController:accountsTableViewController animated:YES]; } } #pragma mark - #pragma mark Memory management - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. 
// For example: self.myOutlet = nil; } - (void)dealloc { [accountsTableViewController release]; [super dealloc]; } @end And the code that i used to delete rows(this is in a totally different tableview): - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source NSString *selectedCategory = [categoryArray objectAtIndex:indexPath.row]; [categoryArray removeObjectAtIndex:indexPath.row]; [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:YES]; [deleteCategoryTable reloadData]; //NSString *selectedCategory = [categoryArray objectAtIndex:indexPath.row]; sqlite3 *db= [KeyCryptAppAppDelegate getNewDBConnection]; KeyCryptAppAppDelegate *appDelegate = (KeyCryptAppAppDelegate *)[[UIApplication sharedApplication] delegate]; const char *sql = [[NSString stringWithFormat:@"Delete from Categories where Category = '%@';", selectedCategory]cString]; const char *cmd = [[NSString stringWithFormat:@"pragma key = '%@' ", appDelegate.pragmaKey]cString]; sqlite3_stmt *compiledStatement; sqlite3_exec(db, cmd, NULL, NULL, NULL); if (sqlite3_prepare_v2(db, sql, -1, &compiledStatement, NULL)==SQLITE_OK) { sqlite3_exec(db,sql,NULL,NULL,NULL); } else { NSAssert1(0,@"Error preparing statement", sqlite3_errmsg(db)); } sqlite3_finalize(compiledStatement); } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view } }

    Read the article

  • graph with database

    - by Flip_novidade
    I need to make two graphs with data coming from the database, and I do not know where I am going wrong. If someone could show me the correct way, or provide any examples, I would appreciate it. There must be two graphs: one graph for a specific student and another graph for all students. Thank you. public class NotasBean { private Notas notas; private Notas selectedNotas; private List<Notas> filtroNotass; public Notas getNotas() { return notas; } public void setNotas(Notas notas) { this.notas = notas; } public Notas getSelectedNotas() { return selectedNotas; } public void setSelectedNotas(Notas selectedNotas) { this.selectedNotas = selectedNotas; } public List<Notas> getFiltroNotass() { return filtroNotass; } public void setFiltroNotass(List<Notas> filtroNotass) { this.filtroNotass = filtroNotass; } public void prepararAdicionarNotas(){ notas = new Notas(); } public void adicionarNotas(){ dao.NotasDao obj_dao = new dao.NotasDao(); obj_dao.save(notas); } } package dao; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Statement; import java.util.ArrayList; import java.util.List; import javax.faces.application.FacesMessage; import javax.faces.context.FacesContext; import model.Aluno; import model.Notas; import model.Notas; public class NotasDao { private Conexao obj_conexao; public NotasDao() { obj_conexao = new Conexao(); } public List<Notas> list() { List<Notas> array_registros = new ArrayList<Notas>(); try { String sql = "Select alu_in_ra, dis_st_sigla, ald_fl_p1, ald_fl_p2, ald_fl_p3, ald_fl_trab1, ald_fl_trab2 from cad_aluno_disciplina"; Statement comando_sql = (Statement) obj_conexao.getConexao() .createStatement(); ResultSet obj_result = comando_sql.executeQuery(sql); while (obj_result.next()) { Notas obj_notas = new Notas(); obj_notas.setAlura(obj_result.getInt("alu_in_ra")); obj_notas.setDiscsigla(obj_result.getString("dis_st_sigla")); obj_notas.setP1(obj_result.getInt("ald_fl_p1")); obj_notas.setP2(obj_result.getInt("ald_fl_p2")); obj_notas.setP3(obj_result.getInt("ald_fl_p3")); obj_notas.setTrb1(obj_result.getInt("ald_fl_trab1")); obj_notas.setTrb2(obj_result.getInt("ald_fl_trab2")); array_registros.add(obj_notas); } } catch (Exception e) { System.out.println("Erro no select" + e.getMessage()); } finally { obj_conexao.fecharConexao(); } return array_registros; } public void select(Aluno obj_aluno){ FacesContext mensagem = FacesContext.getCurrentInstance(); try{ String comando_sql = "Select alu_in_ra, dis_st_sigla, ald_fl_p1, ald_fl_p2, ald_fl_p3, ald_fl_trab1, ald_fl_trab2 from cad_aluno_disciplina where alu_in_ra=?"; PreparedStatement obj_sql = (PreparedStatement) obj_conexao.getConexao().prepareStatement(comando_sql); obj_sql.setInt(1, obj_aluno.getRa()); obj_sql.executeUpdate(); mensagem.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_WARN, "Erro ao selecionar aluno!","Snif")); }catch(Exception e){ mensagem.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_FATAL, "Erro na inclusão: "+e.getMessage()," Ocoreu o erro: "+e.getMessage())); }finally{ obj_conexao.fecharConexao(); } return; } }//fecha a classe package model; import java.io.Serializable; public class Notas implements Serializable{ private static final long serialVersionUID = 1L; private int alura; private String discsigla; private float p1; private float p2; private float p3; private float trb1; private float trb2; public Notas() { } public Notas (int alura, String discsigla, float p1, float p2, float p3, float trb1, float trb2){ super(); this.alura=alura; this.discsigla=discsigla; this.p1=p1; this.p2=p2; this.p3=p3; 
this.trb1=trb1; this.trb2=trb2; } public int getAlura() { return alura; } public void setAlura(int alura) { this.alura = alura; } public String getDiscsigla() { return discsigla; } public void setDiscsigla(String discsigla) { this.discsigla = discsigla; } public float getP1() { return p1; } public void setP1(float p1) { this.p1 = p1; } public float getP2() { return p2; } public void setP2(float p2) { this.p2 = p2; } public float getP3() { return p3; } public void setP3(float p3) { this.p3 = p3; } public float getTrb1() { return trb1; } public void setTrb1(float trb1) { this.trb1 = trb1; } public float getTrb2() { return trb2; } public void setTrb2(float trb2) { this.trb2 = trb2; } } <p:panel header="Grafico Notas Aluno" style="width: 550px"> <p:lineChart id="linear" value="#{notasDao.aluno.alura}" var="notas" xfield="#{notas.alura}" height="300px" width="500px" style="chartStyle"> <p:chartSeries label="Prova 1" value="#{notas.p1}" /> <p:chartSeries label="Prova 2" value="#{notas.p2}" /> <p:chartSeries label="Prova 3" value="#{notas.p3}" /> <p:chartSeries label="Trabalho 1" value="#{notas.trb1}" /> <p:chartSeries label="Trabalho 2" value="#{notas.trb2}" /> </p:lineChart> </p:panel> <p:panel header="Grafico Notas" style="width: 550px"> <p:lineChart id="linear" value="#{notasDao.natas}" var="notas" xfield="#{notas.p1}" height="300px" width="500px" style="chartStyle"> <p:chartSeries label="Prova 1" value="#{notas.p1}" /> <p:chartSeries label="Prova 2" value="#{notas.p2}" /> <p:chartSeries label="Prova 3" value="#{notas.p3}" /> <p:chartSeries label="Trabalho 1" value="#{notas.trb1}" /> <p:chartSeries label="Trabalho 2" value="#{notas.trb2}" /> </p:lineChart> </p:panel>

    Read the article

  • Finding the problem on a partially succeeded build

    - by Martin Hinshelwood
Now that I have the build failing because of a genuine bug and not just because of a test framework failure, let's see if we can trace through to find why the first test in our new application failed. Let's look at the build and see if we can work out why there is a red cross on it. First, let's open that build list. In Team Explorer, expand your Team Project Collection | Team Project and then Builds, and double-click the offending build.

Figure: Opening the Build list is a key way to see what the current state of your software is.
Figure: A test is failing, but we can now view the Test Results to find the problem.
Figure: You can quite clearly see that the test has failed with "The device is not ready".

To me, "The device is not ready" smacks of a System.IO exception, but it passed on my local computer, so why not on the build server? It's a FaultException, so it is most likely coming from the Service and not the client, so let's take a look at the client method that the test is calling:

bool IProfileService.SaveDefaultProjectFile(string strComputerName)
{
    ProjectFile file = new ProjectFile()
    {
        ProjectFileName = strComputerName + "_"
            + System.DateTime.Now.ToString("yyyyMMddhhmmsss") + ".xml",
        ConnectionString = "persist security info=False; pooling=False; data source=(local); application name=SSW.SQLDeploy.vshost.exe; integrated security=SSPI; initial catalog=SSWSQLDeployNorthwindSample",
        DateCreated = System.DateTime.Now,
        DateUpdated = System.DateTime.Now,
        FolderPath = @"C:\Program Files\SSW SQL Deploy\SampleData\", // hard-coded install path
        IsComplete = false,
        Version = "1.3",
        NewDatabase = true,
        TimeOut = 5,
        TurnOnMSDE = false,
        Mode = "AutomaticMode"
    };
    string strFolderPath = "D:\\"; //LocalSettings.ProjectFileBasePath; // hard-coded drive
    string strFileName = strFolderPath + file.ProjectFileName;
    try
    {
        using (FileStream fs = new FileStream(strFileName, FileMode.Create))
        {
            DataContractSerializer serializer = new DataContractSerializer(typeof(ProjectFile));
            using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(fs))
            {
                serializer.WriteObject(writer, file);
            }
        }
    }
    catch (Exception ex)
    {
        //TODO: Log the exception
        throw ex;
        return false; // unreachable: the throw above always exits first
    }
    return true;
}

Figure: You can see, in the FolderPath initializer and the strFolderPath assignment, that there are calls being made to specific folders and disks.

What is wrong with this code? What mistaken assumptions could the developer have made for this to look OK?

- That every install would be to "C:\Program Files\SSW SQL Deploy"
- That every computer would have a "D:\"
- That checking in code at 6pm because they had to go home was a good idea

Let's solve each of these problems (a short sketch of the first fix follows below):

- We are in a web service, so let's store data within the web root. We can call Server.MapPath("~/App_Data/SSW SQL Deploy/SampleData") instead.
- Never reference an explicit path. If you need some storage for your application, use IsolatedStorage.
- Shelve your code instead.

What else could have been done?

- Code review before check-in: the developer should have shelved their code and asked another dev to look at it.
- Use defensive programming: make sure that any code that has the possibility of failing has checks.

Any more options? Let me know and I will add them.
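Here is a minimal sketch of that first fix, keeping storage under the web root. The folder name comes from the article, but the helper class name and the use of HostingEnvironment.MapPath (which, unlike Server.MapPath, does not require a current HttpContext) are my assumptions rather than the article's actual code:

// Hedged sketch: resolve storage under the web root instead of hard-coded
// "C:\Program Files\..." or "D:\" paths. Requires a reference to System.Web.
using System.IO;
using System.Web.Hosting;

public static class ProjectFileStorage
{
    public static string ResolvePath(string projectFileName)
    {
        // "~/App_Data" is the conventional writable folder inside the web root.
        string folder = HostingEnvironment.MapPath("~/App_Data/SSW SQL Deploy/SampleData");
        Directory.CreateDirectory(folder); // no-op if the folder already exists
        return Path.Combine(folder, projectFileName);
    }
}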
What do we do? The correct thing to do is to add a Bug to the backlog, but as this is probably going to be fixed in the sprint, I will add it directly to the sprint backlog. Right-click the failing test and select "Create Work Item | Bug".

Figure: Create an associated bug to add to the backlog.

Set the values for the Bug, making sure that it goes into the right sprint and Area. Make your steps to reproduce as explicit as possible, but "See test" is valid under these circumstances.

Figure: Add it to the correct Area, and set the Iteration to the Area name, or to the Sprint if you think it will be fixed in the sprint; make sure you bring it up at the next Scrum Meeting.

Note: make sure you leave the "Assigned To" field blank, because in Scrum team members sign up for work; you do not give it to them. The developer who broke the test will most likely either sign up for the bug or say that they are stuck and need help.

Note: Visual Studio has taken care of associating the failing test with the Bug.

Save…

Technorati Tags: WCF,MSTest,MSBuild,Team Build 2010,Team Test 2010,Team Build,Team Test

    Read the article

  • Tutorial: Criando um Componente para o UCM

    - by Denisd
So you have already installed UCM, following the tutorial at http://blogs.oracle.com/ecmbrasil/2009/05/tutorial_de_instalao_do_ucm.html, you have also done the hands-on at http://blogs.oracle.com/ecmbrasil/2009/10/tutorial_de_ucm.html, and now you want to go beyond the basics? You want to start building functionality for UCM? You want to become a UCM developer? You want to shape the Content Server in your own image?! Then today is your lucky day! In this tutorial, we will learn how to create a component for the Content Server. Our first component, although not entirely simple, will be built using Content Server resources only. In a future tutorial, we will learn how to use Java classes as part of our components. In this tutorial, we will develop a Favorites feature, where users can mark certain documents as favorites and later review those documents in a list. We will not build the component with every possible feature, but with what you will see here it will be straightforward to improve this component, even for production environments.

The MyFavorites component. Some characteristics of our Favorites component:

- For reasons of space, we will build this component in a "quick and dirty" fashion, that is, without necessarily following the best practices of component development. To better understand component development practice, I recommend reading the Working With Components guide.
- It will be developed for Brazilian Portuguese only. Other languages can be added later.
- It will add an "Add to Favorites" option to the "Content Actions" menu (Content Information page), so the user can mark a file as one of their favorites.
- When clicking this link, the user will be taken to a page where they can type a comment about the favorite, to make it easier to review later.
- Favorites will be stored in a database table that we will create as part of the component.
- The "My Content Server" tray will get a new option called "My Favorites", which opens a page listing the favorites and letting the user delete the links.
- Some features are left out of this exercise, again for reasons of space, but we will list them at the end as complementary exercises.

Resources of our component. The Favorites component will be built from a few resource types. Let's take a closer look at what these resources are and what they do:

- Query: a query is any activity that must be executed against the database, the famous CRUD: Create, Read, Update, Delete. There are different ways to invoke a query, depending on the purpose:
  - Select Query: executes a SQL command but discards the result. Used only to test whether the database connection is OK. It will not be used in our exercise.
  - Execute Query: executes a SQL command that changes data in the database (INSERT, UPDATE or DELETE) and discards the results. We will use Execute Queries to create, change and delete favorites.
  - Select Cache Query: executes a SQL SELECT and stores the results in a ResultSet. This ResultSet is returned by the service and can be manipulated in IDOC, Java or other languages. We will use a Select Cache Query to return a user's list of favorites.
- Service: services are responsible for executing the queries (or Java classes, but that is a topic for another tutorial...). The service receives the input parameters, executes the query and returns the ResultSet (in the case of a SELECT). Services can be invoked through templates, IDOC pages, other applications (through the API), or directly in the browser URL. In this exercise we will create services to create, edit, delete and list a user's favorites.
- Template: templates are the graphical interfaces (pages) presented to the users. For example, before executing the service that deletes a document from the favorites, I want the user to see a page with the document ID and a Confirm button, so they are sure they are deleting the correct record. That page can be created as a template. In this exercise we will build templates for the main services, plus the page that lists all of the user's favorites and offers the edit and delete actions. Templates are nothing more than HTML pages with IDOC scripts.

Our sequence of activities for developing this component will be:

- Create the database table
- Create the component using the Component Wizard
- Create the queries to insert, edit, delete and list favorites
- Create the services that execute these queries
- Create the templates, the pages that will interact with the users
- Create the links on the content information page and in the My Content Server tray

Well then, let's get started! Read this tutorial in full by clicking this link: http://blogs.oracle.com/ecmbrasil/Tutorial_Componente_Banco.pdf   Happy coding!  :-)

    Read the article

  • Chart Control in ASP.Net 4 – Second Part

    - by sreejukg
A couple of weeks ago, I wrote an introduction to the chart control available in the .NET Framework. In that article, I explained the basic usage of the chart control with a simple example; you can read it at http://weblogs.asp.net/sreejukg/archive/2010/12/31/getting-started-with-chart-control-in-asp-net-4-0.aspx. In this article I am going to demonstrate various types of charts that can be generated easily using the ASP.NET chart control. Let us recollect the data sample we were working with in the previous article:

id  SaleAmount  SalesPerson  SaleType     SaleDate    CompletionStatus (%)
1   1000        Jack         Development  2010-01-01  100
2   300         Mills        Consultancy  2010-04-14  90
3   4000        Mills        Development  2010-05-15  80
4   2500        Mike         eMarketting  2010-06-15  40
5   1080        Jack         Development  2010-07-15  30
6   6500        Mills        Consultancy  2010-08-24  65

I am going to demonstrate various graphical reports generated from this data with the help of the chart control. The following are the reports I am going to generate:

1. Representation of the share of sales by each salesperson.
2. Representation of the share of sales according to sale type.
3. Representation of sales progress over a time period.

I am going to demonstrate how to bind the chart control programmatically. To facilitate this, I added an aspx page named "SalesAnalysis.aspx" to my project. On the page I added the following controls:

1. A DropDownList control, with id ddlAnalysisType; the user will use this to choose the type of chart they want to see.
2. A Button control, with id btnSubmit; clicking this button shows the chart based on the drop-down selection.
3. A Label control, with id lblMessage, to display a message to the user; initially this asks the user to select an option and click the button.
4. A Chart control, with id chrtAnalysis; by default I set Visible = false so that during page load the chart is hidden.

Generating the chart for salesperson share: From Visual Studio, I double-clicked the button, which created the event handler btnSubmit_Click. In the handler I use a switch statement to execute the corresponding SQL statement and bind the results to the chart control. The code for the salesperson share chart uses a pie chart; the article shows it only as a screenshot, so a hedged reconstruction follows.
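The following sketch is consistent with the steps described in the text; the connectionString variable and the exact SQL are my assumptions, not the article's own code:

// Hedged sketch of the btnSubmit_Click pie-chart branch; connectionString is assumed
// to come from web.config. Requires System.Data, System.Data.SqlClient and
// System.Web.UI.DataVisualization.Charting.
ChartArea c = new ChartArea("SalesAnalysisArea");
chrtAnalysis.ChartAreas.Add(c);

Series s = new Series("SalesPersonShare");
s.ChartType = SeriesChartType.Pie;
chrtAnalysis.Series.Add(s);

string sql = "SELECT SUM(SaleAmount) amount, SalesPerson FROM SalesData GROUP BY SalesPerson";
using (SqlConnection cn = new SqlConnection(connectionString))
using (SqlDataAdapter da = new SqlDataAdapter(sql, cn))
{
    DataTable dt = new DataTable();
    da.Fill(dt);                      // the adapter opens and closes the connection itself
    chrtAnalysis.DataSource = dt;
    s.XValueMember = "SalesPerson";   // slice labels
    s.YValueMembers = "amount";       // slice sizes
    chrtAnalysis.DataBind();
}
chrtAnalysis.Visible = true;          // the chart starts out hidden, per the page setup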
The steps for creating the chart can be summarized as follows: you specify a chart area and a series, and bind the chart to some X and Y values. That is it. If you want to control the chart size and position, you can set the properties of the ChartArea.Position element. For example, in the previous code, after instantiating the chart area, the following will give you a bigger pie chart:

c.Position.Width = 100;
c.Position.Height = 100;

The width and height values are percentages; with these values the chart is generated using all the width and height of the chart object.

Generating the chart for sale type share: To generate the chart according to the sale type, you just need to change the SQL query and the X and Y values of the chart. The SQL query used is "SELECT SUM(saleAmount) amount, SaleType FROM SalesData GROUP BY SaleType", and the series is bound with SaleType as the X value member and amount as the Y value member:

s.XValueMember = "SaleType";
s.YValueMembers = "amount";

Generating the chart for sales progress over a time period: For charting the progress of sales (sales amount against time), a line chart is the ideal tool. To get a line chart, use the chart type System.Web.UI.DataVisualization.Charting.SeriesChartType.Line. We also need to retrieve the amount and sale date from the data source; I used the following query:

SELECT SaleAmount, SaleDate FROM SalesData

A hedged sketch of this line-chart variant appears at the end of this excerpt. Now you have seen how easily you can build various types of charts. The chart control is an excellent tool that helps you bring business intelligence to your applications, and what I demonstrated is only a small part of what you can do with it. Refer to http://msdn.microsoft.com/en-us/library/dd456632.aspx for further reading. If you want to get the project files in zip format, post your email below. Hope you enjoyed reading this article.
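As promised above, a similarly hedged sketch of the line-chart case; the article shows only the resulting image, the SQL is taken from the text, and the rest is my reconstruction:

// Hedged sketch of the sales-progress line chart; reuses the page's chart control
// and the chart area created earlier.
Series s = new Series("SalesProgress");
s.ChartType = SeriesChartType.Line;
s.XValueType = ChartValueType.Date;   // SaleDate on the X axis
chrtAnalysis.Series.Add(s);

string sql = "SELECT SaleAmount, SaleDate FROM SalesData";
using (SqlConnection cn = new SqlConnection(connectionString))
using (SqlDataAdapter da = new SqlDataAdapter(sql, cn))
{
    DataTable dt = new DataTable();
    da.Fill(dt);
    chrtAnalysis.DataSource = dt;
    s.XValueMember = "SaleDate";
    s.YValueMembers = "SaleAmount";
    chrtAnalysis.DataBind();
}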

    Read the article

  • LINQ und ArcObjects

    - by Marko Apfel
LINQ and ArcObjects

Motivation: LINQ [1] (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML and many more. Like SQL, LINQ offers a declarative notation for problem solving; that is, you do not describe in detail how a task is to be solved, only what is to be solved. That frees the developer, on the query side, from error-prone iterator constructs. Ideally, these capabilities would also be available for features in ArcObjects programming. The following construct would then be conceivable:

var largeFeatures =
    from feature in features
    where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
    select feature;

or its equivalent as a lambda expression:

var largeFeatures = features.Where(feature =>
    (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

For this, a suitable provider must be available to manage the iterator logic. That is easier than it looks at first glance: you only have to deliver the desired entities as IEnumerable<IFeature>. (Note: don't be surprised, I declared the methods GetValue() and ToDouble() as extension methods along the way.) Behind the scenes, LINQ automatically builds a state machine [2] whose execution is deferred (deferred execution) [3]; that is, instantiation and processing only take place when entities are actually requested (foreach, Count(), ToList(), ...), even though the assignment happened somewhere else entirely. Especially when iterating several times over the entities, you will rub your eyes in surprise during the first debugging sessions when the execution pointer jumps back into the iterator logic as if by magic.

Implementation: A very concise piece of logic for constructing an IEnumerable<IFeature> can be implemented by walking an IFeatureCursor and emitting the individual features with yield. For ease of use, I put the logic into an extension method GetFeatures() for IFeatureClass:

public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
{
    IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
    IFeature feature;
    while (null != (feature = featureCursor.NextFeature()))
    {
        yield return feature;
    }
    // this is skipped in unit tests with cursor-mock
    if (Marshal.IsComObject(featureCursor))
    {
        Marshal.ReleaseComObject(featureCursor);
    }
}

With that, the IEnumerable<IFeature> can be created very simply:

IEnumerable<IFeature> features = _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

Some care is needed when using a "recycling cursor". After deferred execution, you must not iterate over the features a second time in the same context: in that case only the content of the last (recycled) feature is delivered, and all features within the set are identical. The construct

largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

would therefore be critical, because ToList() already iterates through the list once, and the cursor has thus been moved through the features once; the extension method ForEach then always delivers the same feature. In such situations, no recycling cursor may be used. Running foreach several times, on the other hand, is no problem, because the state machine is re-instantiated each time and the cursor is therefore traversed anew; that is the magic mentioned above.

Outlook: You can go a step further and tackle your own implementations of the IEnumerable<IFeature> interface. Only the method and the property for accessing the enumerator have to be implemented. In the enumerator itself, the Reset() method triggers the search to run again; for this, you pass an appropriate delegate into the constructor:

new FeatureEnumerator(
    _featureClass,
    featureClass => featureClass.Search(_filter, isRecyclingCursor));

and call it on Reset:

public void Reset()
{
    _featureCursor = _resetCursor(_t);
}

In this way, enumerators can be implemented for completely different scenarios, all used client-side in an absolutely identical manner following the scheme above. Cursors, SelectionSets and so on thus merge into a single concept, and the reusability of code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests: a big step toward higher software quality. [4]

Conclusion: Nevertheless, caution is advised with these constructs in performance-critical queries. Because a state machine is managed behind the scenes, there is some overhead whose processing costs additional time, roughly 20 to 100 percent. Beyond that, working without recycling can also quickly become a performance gap. However, declarative LINQ code is much more elegant, less error-prone and more maintainable than manually iterating, comparing and building a result list. In my experience, the code size shrinks on average by 75 to 90 percent! For that I am happy to wait a few milliseconds longer. As so often, maintainability and performance must be weighed against each other, although for me maintainability is increasingly gaining priority. Usually it is not the code but the user that is the bottleneck in the process anyway.

Demo source code: support.esri.de

[1] Wikipedia: LINQ http://de.wikipedia.org/wiki/LINQ
[2] Wikipedia: State machine http://de.wikipedia.org/wiki/Endlicher_Automat
[3] Charlie Calvert's Blog: LINQ and Deferred Execution http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx
[4] Clean Code Developer, yellow grade / automated unit tests http://www.clean-code-developer.de/Gelber-Grad.ashx#Automatisierte_Unit_Tests_8

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus my time, primarily which queries were being called most frequently or were running the longest. But you don't want to just run off and start tuning queries. Remember, the foundation for query tuning is the server itself, so I want to be sure I'm not looking at some major hardware or configuration issue that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see whether we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be; trying to drill down before you have detailed information is just bad planning.

The first thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both. Looking at the last 30 days for both servers, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady and predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server is all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server; it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks from about a week ago.

Looking at Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks only reach 550 pages/sec. Remember, this isn't a performance measure, just a load measurement, but from this I don't think we're looking at major memory issues, though I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent; in fact, there was a point earlier in the year that looks pretty severe. Plus, the spikes here are twice the size of the other system's. We've got a lot more load going on here, and I will probably need to drill down on memory usage on this server.

Taking a look at disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02; I wonder if the office was closed over that period or a system was down for maintenance. If I saw spikes in memory or disk that corresponded to the drop in CPU, you could assume something was using those other resources and causing the drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking in exactly the same way as the memory and CPU, so there's a good chance (a chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month: several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.

    Read the article

  • Interview with Ronald Bradford about MySQL Connect

    - by Keith Larson
Ronald Bradford, an Oracle ACE Director, has been busy with database consulting and book writing (Effective MySQL) while traveling and speaking around the world in support of MySQL. I was able to take some of his time to get an interview on his thoughts about the MySQL Connect conference.

Keith Larson: What were your thoughts when you heard that Oracle was going to provide the community with the MySQL Connect conference?
Ronald Bradford: Oracle has already been providing various local community events, including OTN Tech Days and MySQL community days. These are great for local regions, both in the US and abroad. In previous years there has been an increase of content at Oracle OpenWorld; however, that benefits the Oracle community far more than the MySQL community. It is good to see that Oracle realizes the benefit in providing a large-scale dedicated event for the MySQL community that includes speakers from the MySQL development teams, invested companies in the ecosystem, and other community evangelists. I fully expect a successful event and look forward to hopefully seeing MySQL Connect at the upcoming Brazil and Japan OOW conferences, and perhaps an event on the East Coast.

Keith Larson: Since you are part of the content committee, what did you think of the submissions that were received during the call for papers?
Ronald Bradford: There was a large number of quality submissions for the number of available presentation sessions. As with previous years' annual MySQL conferences, there was a large variety of common cornerstone MySQL features as well as new products and upcoming companies sharing their MySQL experiences. All of the usual major players in the ecosystem will be presenting at MySQL Connect, including Facebook, Twitter, Yahoo, Continuent, Percona, Tokutek, Sphinx and Amazon, to name a few. This ensures the event will have a large number of quality speakers, and attendees will have a difficult time choosing what to attend.

Keith Larson: What sessions do you look forward to attending?
Ronald Bradford: As with most quality conferences, you can only be in one place at one time, so with multiple tracks per session slot it is always difficult to decide. The continued work and success with MySQL Cluster is covered in a number of sessions that I am sure will be popular. The features that interest me the most are around the optimizer, where there are several sessions on new features, and the importance of backups, where there are three presentations to choose from.

Keith Larson: Are you going to cover any of the content in your books at your MySQL Connect sessions?
Ronald Bradford: I will be giving two presentations at MySQL Connect. The first will include the techniques available for creating better indexes, where I will touch on some aspects of the first Effective MySQL book, Optimizing SQL Statements. In my second presentation, on experiences of managing 500+ AWS MySQL instances, I will touch on areas including SQL tuning, backup and recovery, and scale-out with replication. These are the key topics of the initial books in the Effective MySQL series, which focus on performance, scalability and business continuity. The books, however, cover a far greater amount of detail than can be presented in a one-hour session.

Keith Larson: What features of MySQL 5.6 do you look forward to the most?
Ronald Bradford: I am very impressed with the optimizer trace feature. The ability to see this exposed information is invaluable, not just for MySQL 5.6 but also for applying what you learn to optimizing SQL statements in earlier versions of MySQL. Not everybody understands that it is easy to deploy a MySQL 5.6 slave into an existing topology running an older version of MySQL for evaluation of many new features. You can use the new mysqlbinlog streaming feature for duplicating master binary logs from an older version onto a MySQL 5.6 slave. The improvements in instrumentation in the Performance Schema are exciting. However, as covered in my upcoming Replication Techniques in Depth title, which will be available for sale at MySQL Connect, there are numerous replication features, some long overdue, that provide significant management benefits: crash-safe slaves, global transaction identifiers (GTIDs) and checksums, just to mention a few.

Keith Larson: You have been to numerous conferences; what would you recommend for people at the conference?
Ronald Bradford: Make the time to meet and introduce yourself to the speakers that cover the topics that most interest you. The MySQL ecosystem has a very strong community. The relationships you build with presenters, developers and architects in MySQL can be invaluable; however, they are created over time. Get to know these people and interact with them over time. This is the opportunity to learn more than just the content of a one-hour session.

Keith Larson: Any additional tips for handling the long hours?
Ronald Bradford: Conferences can be hard, especially with all the post-event drinking. This is a two-day event, and I am sure it will include additional events on Friday and Saturday night, so come well prepared and leave work behind. Take the time to learn something new. You can always catch up on sleep later.

Keith Larson: Thank you so much for taking some time to do this. I look forward to seeing you at the MySQL Connect conference. Please stay tuned here for more updates on MySQL.

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory) to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system, there is technically a third, project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA, distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries, based on naming conventions in the description, were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

client-libs (Predominantly discussed on 2d-dev, awt-dev and swing-dev.)
- 2d
- demo
- java.awt
- java.awt:i18n
- java.beans (See beans-dev.)
- javax.accessibility
- javax.imageio
- javax.sound (See sound-dev.)
- javax.swing

core-libs (See core-libs-dev.)
- java.io
- java.io:serialization
- java.lang
- java.lang.invoke
- java.lang:class_loading
- java.lang:reflect
- java.math
- java.net
- java.nio (Discussed on nio-dev.)
- java.nio.charsets
- java.rmi
- java.sql
- java.sql:bridge
- java.text
- java.util
- java.util.concurrent
- java.util.jar
- java.util.logging
- java.util.regex
- java.util:collections
- java.util:i18n
- javax.annotation.processing
- javax.lang.model
- javax.naming (JNDI)
- javax.script
- javax.script:javascript
- javax.sql
- org.openjdk.jigsaw (See jigsaw-dev.)

security-libs (See security-dev.)
- java.security
- javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC)
- javax.crypto:pkcs11 (JCE: PKCS11 only)
- javax.net.ssl (JSSE, includes javax.security.cert)
- javax.security
- javax.smartcardio
- javax.xml.crypto
- org.ietf.jgss
- org.ietf.jgss:krb5

other-libs
- corba
- corba:idl
- corba:orb
- corba:rmi-iiop
- javadb
- other (When no other subcomponent is more appropriate; use judiciously.)

Most of the subcomponents in the xml component are related to jaxp.

xml
- jax-ws
- jaxb
- javax.xml.parsers (JAXP)
- javax.xml.stream (JAXP)
- javax.xml.transform (JAXP)
- javax.xml.validation (JAXP)
- javax.xml.xpath (JAXP)
- jaxp (JAXP)
- org.w3c.dom (JAXP)
- org.xml.sax (JAXP)

For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

hotspot (See hotspot-dev.)
- build
- compiler (See hotspot-compiler-dev.)
- gc (garbage collection, see hotspot-gc-dev.)
- jfr (Java Flight Recorder)
- jni (Java Native Interface)
- jvmti (JVM Tool Interface)
- mvm (Multi-Tasking Virtual Machine)
- runtime (See hotspot-runtime-dev.)
- svc (Serviceability)
- test

core-svc (See serviceability-dev.)
- debugger
- java.lang.instrument
- java.lang.management
- javax.management
- tools

The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot, as well as retired APIs.

vm-legacy
- jit (Sun Exact VM)
- jit_symantec (Symantec VM, before Exact VM)
- jvmdi (JVM Debug Interface)
- jvmpi (JVM Profiler Interface)
- runtime (Exact VM Runtime)

Notable command-line tools in the $JDK/bin directory have corresponding subcomponents.

tools
- appletviewer
- apt (See compiler-dev.)
- hprof
- jar
- javac (See compiler-dev.)
- javadoc(tool) (See compiler-dev.)
- javah (See compiler-dev.)
- javap (See compiler-dev.)
- jconsole
- launcher
- updaters (Timezone updaters, etc.)
- visualvm

Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

infrastructure
- build (See build-dev and build-infra-dev.)
- licensing (Covers updates to the third-party readme, licenses, and similar files.)
- release_eng (Release engineering)
- staging (Staging of web pages related to JDK releases.)

The specification component encompasses the formal language and virtual machine specifications.

specification
- language (The Java Language Specification)
- vm (The Java Virtual Machine Specification)

The code for the deploy and install areas is not currently included in OpenJDK.

deploy
- deployment_toolkit
- plugin
- webstart

install
- auto_update
- install
- servicetags

In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

docs
- doclet
- guides
- hotspot
- release_notes
- tools
- tutorial

embedded
- build
- hotspot
- libraries

globalization
- locale-data
- translation

performance
- hotspot
- libraries

The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • First Foray&ndash;About timeout

    - by SQLMonger
It has been quite a while since I signed up for this blog site, and it is high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

A bit of background about me: My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges. Enough about me; on to a real problem.

SSIS connection timeouts versus command timeouts: Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task, originally posted by Vidas Matelis, that automates the process of adding new partitions to, and dropping old partitions from, a cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if any partitions present are no longer needed in a rolling 60-month window.

My client uses Tivoli, not SQL Agent, for running all their production jobs, so I had to build a command-line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsConfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again it failed. After a number more failed attempts, and after getting the Teradata DBA involved to monitor and see whether we were connecting and then failing, or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point, one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table, and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds. The CommandTimeout property can also be edited in the SSIS Properties sheet.

In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

The lesson learned for me was two-fold. First, always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even seemingly simple queries can vary greatly. Second, SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding. (The same distinction exists in plain ADO.NET; a hedged sketch at the end of this excerpt illustrates it.) Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I could recall only one other time when I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
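The connection-versus-command split is easiest to see in plain ADO.NET, which is what the SSIS properties wrap. This is an illustrative analogy, not the SSIS API itself; the server, database and query names are made up:

// Hedged sketch: the two timeouts live in different places in ADO.NET too.
// "Connect Timeout" governs establishing the session; CommandTimeout governs
// how long to wait for the query to start returning results.
using System.Data.SqlClient;

using (var cn = new SqlConnection(
    "Server=PRODDW;Database=Warehouse;Integrated Security=SSPI;Connect Timeout=120"))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.PartitionList", cn))
{
    cmd.CommandTimeout = 600; // seconds; 0 would mean wait indefinitely
    cn.Open();
    object rowCount = cmd.ExecuteScalar();
}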

    Read the article

  • Database Owner Conundrum

    - by Johnm
Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database.

The Scenario: Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: one was a restored version of a production database that we will call "RestoreDB"; the second was a brand-new database that does not yet exist in the production environment, which we will call "DevDB". The goal was to set up a communication path between RestoreDB and DevDB that would later be implemented in the production database. After implementing all of the Service Broker objects required to communicate within a database, as well as between two databases on the same instance, I found myself a bit confounded. My testing showed that communication was successful when it occurred internally within DevDB, but communication between RestoreDB and DevDB did not appear to be working.

Profiler to the rescue: After carefully reviewing my code for any misspellings, missing commas, or any other minor syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross-database messaging, I noticed the following error appearing in Profiler:

An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement.

Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. Executing the following query, which utilizes the catalog view sys.transmission_queue, revealed the same error message for each communication attempt:

SELECT *
FROM   sys.transmission_queue;

Seeing the situation as a learning opportunity, I dove a bit deeper.

Reviewing the database properties: The owner of a specific database can easily be viewed by right-clicking the database in SQL Server Management Studio and selecting the "Properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database on the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much about the SID (security identifier) and its existence, or lack thereof, in the master database, as the error suggested. I pulled together the following query to gather more interesting information:

SELECT
    a.name
    , a.owner_sid
    , b.sid
    , b.name
    , b.type_desc
FROM
    master.sys.databases a
    LEFT OUTER JOIN master.sys.server_principals b
        ON a.owner_sid = b.sid
WHERE
    a.name NOT IN ('master', 'tempdb', 'model', 'msdb');

This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there was no SID in server_principals matching the owner SID for my database. What login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins:

EXEC sp_helplogins;

Fixing a hole: The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows:

ALTER AUTHORIZATION ON DATABASE::[Database Name Here] TO [Login Name];

Another option is to execute the following statement using the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in a future version of SQL Server:

EXEC dbo.sp_changedbowner @loginname = [Login Name];

And They Lived Happily Ever After: Upon changing the database owner to an existing login and simulating the internal and cross-database messaging, the errors ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:

- Support for Oracle TimesTen In-Memory Database
- Support for Date-Time using R's POSIXct/POSIXlt data types
- RAW, BLOB and BFILE data type support
- Option to specify number of rows per fetch operation
- Option to prefetch LOB data
- Break support using Ctrl-C
- Statement caching support

TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

- Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
- Analytic clauses: OVER PARTITION BY and OVER ORDER BY
- Multidimensional grouping operators:
  - Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS
  - Grouping functions: GROUP, GROUPING_ID, GROUP_ID
- WITH clause, which allows repeated references to a named subquery block
- Aggregate expressions over DISTINCT expressions
- General expressions that return a character string in the source or a pattern within the LIKE predicate
- Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

> install.packages("ROracle")
> library(ROracle)
Loading required package: DBI
> drv <- dbDriver("Oracle")

Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct-driver DSN as the OS user.

> conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

You have the option to report the server type: Oracle or TimesTen?

> print (paste ("Server type =", dbGetInfo (conn)$serverType))
[1] "Server type = TimesTen IMDB"

To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

> dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
[1] TRUE

Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

> dbListTables (conn)
[1] "IRIS"
> dbListFields (conn, "IRIS")
[1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

To retrieve a summary of the data from the database, we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

> iris.summary <- dbGetQuery(conn, 'SELECT SPECIES,
      AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
      AVG ("SEPAL.WIDTH") AS AVG_SWIDTH,
      AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
      AVG ("PETAL.WIDTH") AS AVG_PWIDTH
      FROM IRIS GROUP BY ROLLUP (SPECIES)')
> iris.summary
     SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
1     setosa    5.006000   3.428000       1.462   0.246000
2 versicolor    5.936000   2.770000       4.260   1.326000
3  virginica    6.588000   2.974000       5.552   2.026000
4       <NA>    5.843333   3.057333       3.758   1.199333

Finally, disconnect from the TimesTen database.

> dbCommit (conn)
[1] TRUE
> dbDisconnect (conn)
[1] TRUE

We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

< Previous Page | 825 826 827 828 829 830 831 832 833 834 835 836  | Next Page >