Search Results

Search found 10424 results on 417 pages for 'persisted column'.

Page 367/417

  • rails: has_many :through validation?

    - by ramonrails
    Rails 2.1.0 (cannot upgrade for now due to several constraints). I am trying to achieve the following; any hints? A project has many users through a join model, and a user has many projects through the same join model. Admin inherits from User and adds some Admin-specific behaviour; Supervisor and Operator inherit in the same way. A project has one admin, one supervisor and many operators. Now I want to: 1) submit data for the project, admin, supervisor and operators in a single project form, and 2) validate everything and show the errors on that form.

        Project       has_many :projects_users ; has_many :users, :through => :projects_users
        User          has_many :projects_users ; has_many :projects, :through => :projects_users
        ProjectsUser  id :integer, user_id :integer, project_id :integer, user_type :string
                      belongs_to :project ; belongs_to :user
        Admin < User        # User has a 'type' string column for STI
        Supervisor < User
        Operator < User

    Is the approach correct? Any and all suggestions are welcome.

    Read the article

  • Show last 4 table entries mysql php

    - by user272899
    I have a movie database (kind of like a blog) and I want to display the last four created entries. I have a timestamp column in my table called 'dateadded'. Using the code below, how would I display only the four most recent entries in the table?

        <?php
        //connect to database
        mysql_connect($mysql_hostname,$mysql_user,$mysql_password);
        @mysql_select_db($mysql_database) or die("<b>Unable to connect to specified database</b>");

        //query database
        $query = "SELECT * FROM movielist";
        $result=mysql_query($query) or die('Error, insert query failed');

        $row=0;
        $numrows=mysql_num_rows($result);
        while($row<$numrows) {
            $id=mysql_result($result,$row,"id");
            $imgurl=mysql_result($result,$row,"imgurl");
            $imdburl=mysql_result($result,$row,"imdburl");
        ?>
        <div class="moviebox rounded">
            <a href="http://<?php echo $domain; ?>/viewmovie?movieid=<?php echo $id; ?>" rel="facebox">
                <img src="<?php echo $imgurl; ?>" />
                <form method="get" action="">
                    <input type="text" name="link" class="link" style="display:none"
                           value="http://us.imdb.com/Title?<?php echo $imdburl; ?>"/>
                </form>
            </a>
        </div>
        <?php
            $row++;
        }
        ?>

    Read the article

  • How to implement an ID field on a POCO representing an Identity field in MS SQL?

    - by Dr. Zim
    If I have a domain model with an ID that maps to a SQL identity column, what does the POCO that contains that field look like?

    Candidate 1 allows anyone to get and set the ID. I don't think we want anyone setting the ID except the repository, from the SQL table:

        public class Thing {
            public int ID { get; set; }
        }

    Candidate 2 allows the ID to be set on creation, but we won't know the ID until after we create the object (a factory creates a blank Thing where ID = 0 until we persist it). How would we set the ID after persisting?

        public class Thing {
            public Thing() : this(0) { }
            public Thing(int id) { _ID = id; }

            private int _ID;
            public int ID { get { return _ID; } }
        }

    Candidate 3: methods to set the ID? Somehow we would need to allow the repository to set the ID without allowing the consumer to change it. Any ideas? Is this barking up the wrong tree? Do we send the object to the repository, save it, throw it away, then create a new object from the loaded version and return that as a new object?
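
    One common compromise, sketched below purely as an illustration (the Thing/ThingRepository names, the Things table, and the ADO.NET details are assumptions, not the poster's code), is to give the ID a setter that only the persistence layer can reach - for example an internal setter when the repository lives in the same assembly as the model - and have the repository write the generated identity back after the INSERT:

        using System.Data.SqlClient;

        // Sketch: ID is publicly readable but only settable from the model's own assembly,
        // so a repository in that assembly can write back the identity value after INSERT.
        public class Thing
        {
            public int ID { get; internal set; }   // assumption: repository shares the assembly
            public string Name { get; set; }
        }

        public class ThingRepository
        {
            private readonly string _connectionString;
            public ThingRepository(string connectionString) { _connectionString = connectionString; }

            public void Save(Thing thing)
            {
                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Things (Name) VALUES (@name); SELECT CAST(SCOPE_IDENTITY() AS int);", conn))
                {
                    cmd.Parameters.AddWithValue("@name", thing.Name);
                    conn.Open();
                    // Write the generated identity back onto the same instance.
                    thing.ID = (int)cmd.ExecuteScalar();
                }
            }
        }

    The same idea can be expressed with a constructor used only when materializing from the database, if the repository lives in a different assembly.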

    Read the article

  • User preferences using SQL and JavaScript

    - by Shyam
    Hi, I am using server-side JavaScript - yes, really. To make things even more complex, I use Oracle (10g) as the backend database. With some crazy XSLT and mutant-like HTML generation I can build really fancy web forms - yes, I am aware of Rails and similar frameworks, and I chose the path of horror instead. I have no jQuery or other fancy framework at my disposal, just plain old JavaScript supported by the underlying engine, Mozilla Rhino. Yes, it is insane and I love it.

    So, I have a bunch of tables at my disposal, some of them filled with associative keys that link to values. As I am a people pleaser, I want to add some nifty user-preference-driven features. My users all have a unique user_id, and this user_id is available during the entire session. My initial idea is a user preference table with three columns: user_id, feature and pref_string. Using a delimiter such as ':' or '-' (I haven't settled on a suitable one yet), I could store a bunch of preferences as a list and split its elements into an array using the .split method (similar to PHP's explode function). The feature column would hold the table name or some identifier for the feature I want to link preferences to. I hate hardcoding objects, especially as I want to be able to back these up and reuse this functionality application-wide. Of course I would love better ideas - just keep in mind I cannot simply add a library. These preferences could be "joined" to the table, so I can query it and use its values. I hope it doesn't sound too complex, because it is basically something really simple that I need. Thanks!

    Read the article

  • Separating columnName and Value in C#

    - by KungfuPanda
    Hi, I have an employee object as shown below:

        class emp {
            public int EmpID { get; set; }
            public string EmpName { get; set; }
            public int deptID { get; set; }
        }

    I need to create a mapping, either in this class or in a different class, from the properties to the column names in my SQL table, for example:

        EmpID   -> "employeeID"
        EmpName -> "EmployeeName"
        deptID  -> "DepartmentID"

    From my ASP.NET page I create the employee object and pass it to a function, for example:

        emp e = new emp();
        e.EmpID = 1;
        e.EmpName = "tommy";
        e.deptID = 10;

    When the populated emp object is passed to the buildValues function, it should return an array of ColumnName:Value pairs, e.g. employeeID:1, EmployeeName:tommy, DepartmentID:10.

        string[] values = buildValues(e);

        public string[] buildValues(emp e)
        {
            string[] values = null;
            return values;
        }

    I have two questions: 1) Where do I specify the mappings? 2) How do I use the mappings in the buildValues function shown above to build the values string array? I would really appreciate any help with this.
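
    One possible direction, sketched under the assumption that a small custom attribute is acceptable (the ColumnAttribute type and Mapper class below are illustrative, not an existing library API): decorate each property with its column name and read the mapping via reflection.

        using System;
        using System.Linq;
        using System.Reflection;

        // Assumption: a small custom attribute carries the SQL column name for each property.
        [AttributeUsage(AttributeTargets.Property)]
        class ColumnAttribute : Attribute
        {
            public string Name { get; private set; }
            public ColumnAttribute(string name) { Name = name; }
        }

        class emp
        {
            [Column("employeeID")]   public int EmpID { get; set; }
            [Column("EmployeeName")] public string EmpName { get; set; }
            [Column("DepartmentID")] public int deptID { get; set; }
        }

        static class Mapper
        {
            // Builds "ColumnName:Value" strings for every property that carries the attribute.
            public static string[] BuildValues(object o)
            {
                return o.GetType().GetProperties()
                    .Select(p => new { Prop = p, Col = (ColumnAttribute)Attribute.GetCustomAttribute(p, typeof(ColumnAttribute)) })
                    .Where(x => x.Col != null)
                    .Select(x => x.Col.Name + ":" + x.Prop.GetValue(o, null))
                    .ToArray();
            }
        }

    Usage would then be something like: string[] values = Mapper.BuildValues(e); // { "employeeID:1", "EmployeeName:tommy", "DepartmentID:10" }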

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: store arbitrary performance statistics about things you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time.

    Assumptions: a database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like.

    Some ideal attributes of a solution: causes no noticeable performance hit on the server being monitored; has a very high precision of measurement; does not store useless or redundant information; is easy to query (lends itself to gathering and displaying useful information); lends itself to being graphed easily; is accurate; is elegant.

    Primary questions: 1) What is a good design/method/scheme for triggering the storing of statistics? 2) What is a good database design for how to actually store the data?

    Example answers, which are sort of vague and lame:
    1) Once per [fixed time interval], store a row of data with all the performance measurements I care about in the columns of one big flat table, indexed by timestamp and/or server.
    2) Have a daemon monitoring the performance figures I care about and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1.
    3) Trigger as in #2, but store information about each aspect of performance I'm measuring in separate tables, opening up the possibility of adding many rows for often-changing items and few rows for seldom-changing items.

    Etc. In the end I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
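
    As a minimal sketch of option 1 (fixed-interval sampling into one wide table) - the Metrics table, the metric names, the connection string and the collector methods below are all illustrative assumptions, not part of the original question - a background timer writing one row per interval might look like this:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class MetricSampler
        {
            // Assumption: a table Metrics(SampleTime datetime, Server nvarchar, LoggedOnUsers int, QueueDepth int)
            static void Main()
            {
                var timer = new Timer(_ => WriteSample(), null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
                Console.ReadLine();   // keep the process alive while the timer fires
                GC.KeepAlive(timer);
            }

            static void WriteSample()
            {
                // Hypothetical collectors; replace with however you already gather the numbers.
                int loggedOnUsers = GetLoggedOnUsers();
                int queueDepth = GetQueueDepth();

                using (var conn = new SqlConnection("Server=.;Database=Monitoring;Integrated Security=true"))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Metrics (SampleTime, Server, LoggedOnUsers, QueueDepth) " +
                    "VALUES (@t, @server, @users, @queue)", conn))
                {
                    cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
                    cmd.Parameters.AddWithValue("@server", Environment.MachineName);
                    cmd.Parameters.AddWithValue("@users", loggedOnUsers);
                    cmd.Parameters.AddWithValue("@queue", queueDepth);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            static int GetLoggedOnUsers() { return 0; }   // placeholder
            static int GetQueueDepth() { return 0; }      // placeholder
        }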

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, Start Time and End Time columns in the first table below) associated with time and production quantity data (the Output and Time columns), to which aggregate functions (SUM, COUNT, AVG) are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" / "Event End Time") granularity. The reprocessed rows would look like:

        +---------+---------+------------+---------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute   | Output |
        +---------+---------+------------+---------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34    | 133    |
        +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant Event End Time and Time columns as it goes. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.

    Read the article

  • Problem with WHERE columnName = Data in MySQL query in C#

    - by Ryan Sullivan
    I have a C# web service on a Windows server that I am interfacing with from a Linux server with PHP. The PHP grabs information from the database, and the page offers a "more information" button which calls the web service, passing in the name field of the record as a parameter. I then use a WHERE clause in my query so I only pull the extra fields for that record. I am getting the error:

        System.Data.SqlClient.SqlException: Invalid column name '42'

    where 42 is the value from the name field in the database. My query is:

        string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";

    I do not know whether the problem is with my query or with the database, but here is the rest of my code for reference. Note: this all works perfectly when I grab all of the records; I only want to grab the record that I ask my web service for.

        public class ktvService : System.Web.Services.WebService
        {
            [WebMethod]
            public string moreInfo(string show)
            {
                string connectionStr = "MyConnectionString";
                string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";
                SqlConnection conn = new SqlConnection(connectionStr);
                SqlDataAdapter da = new SqlDataAdapter(selectStr, conn);
                DataSet ds = new DataSet();
                da.Fill(ds, "tableName");
                DataTable dt = ds.Tables["tableName"];
                DataRow theShow = dt.Rows[0];
                string response = "Name: " + theShow["name"].ToString() +
                                  " Cast: " + theShow["castNotes"].ToString() +
                                  " Trivia: " + theShow["triviaNotes"].ToString();
                return response;
            }
        }
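
    For context: under SQL Server's default QUOTED_IDENTIFIER setting, double quotes delimit identifiers rather than string literals, which is why the value 42 is interpreted as a column name. A sketch of the usual fix - a parameterized query, keeping the table and column names from the post - looks like this:

        // Sketch: the same query, but the value is passed as a parameter instead of being
        // concatenated into the SQL text, so quoting (and SQL injection) stop being an issue.
        string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name = @show";

        using (SqlConnection conn = new SqlConnection(connectionStr))
        using (SqlDataAdapter da = new SqlDataAdapter(selectStr, conn))
        {
            da.SelectCommand.Parameters.AddWithValue("@show", show);
            DataSet ds = new DataSet();
            da.Fill(ds, "tableName");
            // ... read ds.Tables["tableName"].Rows[0] as before
        }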

    Read the article

  • convert MsSql StoredProcedure to MySql

    - by karthik
    I need to convert the following SQL Server stored procedure to MySQL. I am new to MySQL; help needed.

        CREATE PROC InsertGenerator
        (@tableName varchar(100)) as
        --Declare a cursor to retrieve column specific information
        --for the specified table
        DECLARE cursCol CURSOR FAST_FORWARD FOR
        SELECT column_name,data_type FROM information_schema.columns WHERE table_name = @tableName
        OPEN cursCol
        DECLARE @string nvarchar(3000)      --for storing the first half of the INSERT statement
        DECLARE @stringData nvarchar(3000)  --for storing the data (VALUES) related statement
        DECLARE @dataType nvarchar(1000)    --data types returned for respective columns
        SET @string='INSERT '+@tableName+'('
        SET @stringData=''
        DECLARE @colName nvarchar(50)
        FETCH NEXT FROM cursCol INTO @colName,@dataType
        IF @@fetch_status<0
        begin
            print 'Table '+@tableName+' not found, processing skipped.'
            close curscol
            deallocate curscol
            return
        END
        WHILE @@FETCH_STATUS=0
        BEGIN
            IF @dataType in ('varchar','char','nchar','nvarchar')
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+'
            END
            ELSE if @dataType in ('text','ntext') --if the datatype is text or something else
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
            END
            ELSE IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
            BEGIN
                SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
            END
            ELSE IF @dataType='datetime'
            BEGIN
                SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
            END
            ELSE IF @dataType='image'
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
            END
            ELSE --presuming the data type is int,bit,numeric,decimal
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
            END
            SET @string=@string+@colName+','
            FETCH NEXT FROM cursCol INTO @colName,@dataType
        END

    Read the article

  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that an antialiasing filter is automatically applied to sprites when you draw them at non-integer positions:

        love.graphics.draw( sprite, x, y )

    So when x or y is not a whole number (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite: a sprite that is 31x30 pixels will appear blurred again, because its pixels are painted at non-integer positions. Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by this effect. The workaround I have used so far is to force the coordinates to be whole numbers by littering the code with calls to math.floor(), and to force all the sprites to have even sizes by adding a row or column of transparent pixels in the paint program where needed. Is there some command to deactivate the antialiasing that I can call at program startup?

    Read the article

  • libpcap read packet size

    - by spicyramen
    I have started to write an application which reads RTP/H.264 video packets from an existing .pcap file, and I need to read the packet size. I tried to use the packet length and the header length fields, but neither displays the right number of bytes per packet (I'm using Wireshark to verify the packet size, under the Length column). How do I do it? This is part of my code:

        while (packet = pcap_next(handle, &header)) {
            u_char *pkt_ptr = (u_char *)packet;
            struct ip *ip_hdr = (struct ip *)pkt_ptr;    // point to an IP header structure
            struct pcap_pkthdr *pkt_hdr = (struct pcap_pkthdr *)packet;
            unsigned int packet_length = pkt_hdr->len;
            unsigned int ip_length = ntohs(ip_hdr->ip_len);
            printf("Packet # %i IP Header length: %d bytes, Packet length: %d bytes\n",
                   pkt_counter, ip_length, packet_length);

    The output looks like:

        Packet # 0 IP Header length: 180 bytes, Packet length: 104857664 bytes
        Packet # 1 IP Header length: 52 bytes, Packet length: 104857600 bytes
        Packet # 2 IP Header length: 100 bytes, Packet length: 104857600 bytes
        Packet # 3 IP Header length: 100 bytes, Packet length: 104857664 bytes
        Packet # 4 IP Header length: 52 bytes, Packet length: 104857600 bytes
        Packet # 5 IP Header length: 100 bytes, Packet length: 104857600 bytes

    Another option I tried is to use pkt_ptr, but then I get:

        read_pcapfile.c:67:43: error: request for member 'len' in something not a structure or union

    Read the article

  • MVC : Checkboxes generated using JavaScript not appearing in FormCollection on postback

    - by Andy Evans
    I took over a project (written by one contractor, modified by another, and now it's not working) built with ASP.NET MVC/C#. It has a view containing a table (see below) that is dynamically populated via JSON/JavaScript; the first column of the table is a checkbox.

    View (Spark view engine):

        <table id='component_list' name='component_list' cellpadding='0' border='0' cellspacing='0'>
          <thead>
            <tr>
              <th>&nbsp;</th>
              <th>Component</th>
              <th>Component Type</th>
              <th>Evenflo Part #</th>
              <th>Supplier Part #</th>
              <th>Supplier</th>
              <th>Requirement</th>
              <th>Location</th>
              <th>Region</th>
            </tr>
          </thead>
          <tbody>
          </tbody>
        </table>

    When the page is rendered and I look at the source, I do not see the table data (I wouldn't expect to). However, when the form is posted back to the controller, the FormCollection is empty. Supposedly this had been working before the last contractor got their hands on it - which is another post altogether. My goal right now is getting the checkboxes into the FormCollection. Any suggestions would be greatly appreciated. Thanks,
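
    For context: browsers only submit checkboxes that are checked, have a name attribute, and sit inside the posted form element, so script-generated inputs that lack a name (or that land outside the form) never reach the FormCollection. A hedged sketch of the server side - the action name and the "selectedComponents" field name are illustrative assumptions - might be:

        // Sketch (assumptions: the generated checkboxes are rendered inside the posted <form>
        // and share the name "selectedComponents"; action and field names are illustrative).
        [HttpPost]
        public ActionResult SaveComponents(FormCollection form)
        {
            // Repeated names arrive as a single comma-separated value in a FormCollection.
            string raw = form["selectedComponents"];          // e.g. "3,7,12" when several boxes are checked
            string[] selected = string.IsNullOrEmpty(raw)
                ? new string[0]
                : raw.Split(',');

            // ... act on the selected component ids ...
            return RedirectToAction("Index");
        }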

    Read the article

  • Association and model data saving problem

    - by Zhlobopotam
    I'm developing with CakePHP 1.3 (latest from GitHub). There are two models bound with hasAndBelongsToMany: Document and Tag - in other words, a document can have many tags. I've added a new document submission form where the user can enter a list of tags separated by commas (a new tag will be added if it does not already exist). I looked at the CakePHP Bakery 2.0 source code on GitHub and found a solution, but it seems that something is wrong.

        class Document extends AppModel {
            public $hasAndBelongsToMany = array('Tag');

            public function beforeSave($options = array()) {
                if (isset($this->data[$this->alias]['tags']) && !empty($this->data[$this->alias]['tags'])) {
                    $tagIds = $this->Tag->saveDocTags($this->data[$this->alias]['tags']);
                    unset($this->data[$this->alias]['tags']);
                    $this->data[$this->Tag->alias][$this->Tag->alias] = $tagIds;
                }
                return true;
            }
        }

        class Tag extends AppModel {
            public $hasAndBelongsToMany = array('Document');

            public function saveDocTags($commalist = '') {
                if ($commalist == '') return null;
                $tags = explode(',', $commalist);
                if (empty($tags)) return null;

                $existing = $this->find('all', array(
                    'conditions' => array('title' => $tags)
                ));
                $return = Set::extract($existing, '/Tag/id');
                if (sizeof($existing) == sizeof($tags)) {
                    return $return;
                }
                $existing = Set::extract($existing, '/Tag/title');
                foreach ($tags as $tag) {
                    if (!in_array($tag, $existing)) {
                        $this->create(array('title' => $tag));
                        $this->save();
                        $return[] = $this->id;
                    }
                }
                return $return;
            }
        }

    New tag creation works well, but the Document model can't save the association data and reports:

        SQL Error: 1054: Unknown column 'Array' in 'field list'
        Query: INSERT INTO documents (title, content, shortnfo, date, status) VALUES ('Document with tags', '', '', Array, 1)

    Any ideas how to solve this problem?

    Read the article

  • oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy';

    This returns the single result 'Testerson, Testy', 'N'. However, if I use the query:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy'
        and has_garnishment = 'Y';

    this returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (note: for clarity I removed everything from the select clause except the parts of interest; in the full query all joined tables are required):

        SELECT e.fullname empname
             , NVL2(ded.has_garn, 'Y', 'N') has_garnishment
        FROM timecard tc
           , orderdetail od
           , orderassign oa
           , employee e
           , employee3 e3
           , customer10 c10
           , order_misc om
           , (SELECT COUNT(*) has_garn, v_ssn
              FROM deductions
              WHERE yymmdd_stop = 0
                 OR (LENGTH(yymmdd_stop) = 7 AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') sysdate)
              GROUP BY v_ssn
             ) ded
        WHERE oa.lrn(+) = tc.lrn_order
          AND om.lrn(+) = od.lrn
          AND od.orderno = oa.orderno
          AND e.ssn = tc.ssn
          AND c10.custno = tc.custno
          AND e.lrn = e3.lrn
          AND e.ssn = ded.v_ssn(+)

    One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual column on the deductions table. I am not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks.

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with three tables that hold related data: one table has transactions, and the other two describe transaction categories. Basically it's financial data, so each transaction has a category (e.g. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this:

        Transactions Table:
        _________________________________
        | ID | Type | Amount | Category |
        ---------------------------------

    I also have two more tables relating a category to its parent: every Category entry in the Transactions table belongs to a parent category (e.g. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents I have two tables:

        Category Children:
        ____________________________________________
        | ID | Parent Category ID | Child Category |
        --------------------------------------------

        Category Parent:
        ________________________
        | ID | Parent Category |
        ------------------------

    What I'm trying to do is query the database and have it return total spending grouped by parent category. To count as "spending", the Type of the transaction must be 'Debit'. I tried the following statement:

        SELECT category_parents.parent_category, SUM(amount) AS totals
        FROM (transactions
              INNER JOIN category_children
                ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
          ON category_children.parent_category_id = category_parents._id
        WHERE trans_type = 'Debit'
        GROUP BY parent_category
        ORDER BY totals DESC

    but it gives me the following exception:

        12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException:
        no such column: category_children.parent_category_id: , while compiling:
        SELECT category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN
        category_children ON transactions.category='category_children.child_category') INNER JOIN
        category_parents ON category_children.parent_category_id=category_parents._id
        where trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (Extra credit: I also need another statement to get spending by child category, given the parent category.)

    Read the article

  • UML Modelling in C++Builder 2010 Professional

    - by Gordon Brandly
    I'd like to do some basic class diagram UML models in the Pro version of C++Builder 2010. Embarcadero has a C++Builder Features Matrix document, one line of which says "UML Code Visualization – at any time, get a UML model view of your source code" and has a check in the "Professional" column of that table -- I assume this means it should be available to me. Yet, when I open an existing project and do a View | Model View, there's nothing in the Model View window. The only diagram I can find is on the Graph tab of the C++ Class Explorer. I wouldn't call that a UML diagram myself -- is that what Embarcadero is referring to? Embarcadero's table shows that many UML diagrams are not available in Pro, but it looks to me like Class Diagrams should be available. Other lines in that same table indicate that both "Full two-way class diagrams with synchronization between code and diagrams" and "Diagram hyper-linking and annotations" are also supposed to be available in Pro. The Class Explorer graph is one-way only as far as I can tell, so I hope they're referring to something else I haven't been able to find so far. Thanks for any insight into this.

    Read the article

  • Oracle command hangs when using view for "WHILE x IN..." subquery

    - by Calvin Fisher
    I'm working on a web service that fetches data from an Oracle data source in chunks and passes it back to an indexing/search tool in XML format. I'm the C#/.NET guy, and am kind of fuzzy on parts of Oracle. Our Oracle team gave us the following script to run, and it works well:

        SELECT ROWID, [columns] FROM [table]
        WHERE ROWID IN (
            SELECT ROWID FROM (
                SELECT ROWID FROM [table]
                WHERE ROWID > '[previous_batch_last_rowid]'
                ORDER BY ROWID
            )
            WHERE ROWNUM <= 10000
        )
        ORDER BY ROWID

    10,000 rows is an arbitrary but reasonable chunk size, and ROWID is sufficiently unique for our purposes to use as a UID, since each indexing run hits only one table at a time. Bracketed values are filled in programmatically by the web service. Now we're going to start adding views to the indexing, each of which will union a few separate tables. Since ROWID would no longer function as a unique identifier, they added a column to the views (VIEW_UNIQUE_ID) that concatenates the ROWIDs from the component tables to construct a UID for each union. But this script does not work, even though it follows the same form as the previous one:

        SELECT VIEW_UNIQUE_ID, [columns] FROM [view]
        WHERE VIEW_UNIQUE_ID IN (
            SELECT VIEW_UNIQUE_ID FROM (
                SELECT VIEW_UNIQUE_ID FROM [view]
                WHERE ROWID > '[previous_batch_last_view_unique_id]'
                ORDER BY VIEW_UNIQUE_ID
            )
            WHERE ROWNUM <= 10000
        )
        ORDER BY VIEW_UNIQUE_ID

    It hangs indefinitely with no response from the Oracle server. I've waited more than 20 minutes, and the SQLTools dialog box indicating a running query remains the same, with no progress or updates. I've tested each subquery independently; each works fine and takes very little time (<= 1 second), so the view itself is sound. But as soon as the inner two SELECT queries are added with "WHERE VIEW_UNIQUE_ID IN...", it hangs. Why doesn't this query work for views? In what important way are they not interchangeable here?

    Read the article

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing. :) Imagine the following scenario: a user can create a post, and other users can reply to that post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected to each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster:

    1. I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20, I lock the thread.
    2. After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread.

    For further information: I am using a SQL Server database and a LINQ to SQL entity model. I'd be glad if you could tell me your opinions on the two approaches or share another, faster approach. Best regards, Kiril
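
    A small sketch of approach 2 with LINQ to SQL, purely illustrative (the ForumDataContext type and the ThreadID/IsThreadStarter/IsLocked property names are assumptions, not from the post): Count() with a predicate is translated to a server-side SELECT COUNT(*), so the check does not pull the whole thread back to the application.

        // Sketch: count the replies on the server, then lock the thread's initial post if needed.
        using (var db = new ForumDataContext())
        {
            int replies = db.Posts.Count(p => p.ThreadID == threadId);

            if (replies >= 20)
            {
                // Assumption: the initial post carries a flag used to lock the thread.
                var firstPost = db.Posts.Single(p => p.ThreadID == threadId && p.IsThreadStarter);
                firstPost.IsLocked = true;
                db.SubmitChanges();
            }
        }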

    Read the article

  • Virgin STI Help

    - by Mutuelinvestor
    I am working on a horse racing application and I'm trying to use STI to model a horse's connections. A horse's connections consist of its owner, trainer and jockey. Over time, connections can change for a variety of reasons: the horse is sold to another owner; the owner switches trainers or jockeys; the horse is claimed by a new owner. As it stands now, I have modelled this with the following tables: horses, connections (join table), and stakeholders (stakeholder has three subclasses: Jockey, Trainer and Owner). Here are my classes and associations:

        class Horse < ActiveRecord::Base
          has_one :connection
          has_one :owner_stakeholder, :through => :connection
          has_one :jockey_stakeholder, :through => :connection
          has_one :trainer_stakeholder, :through => :connection
        end

        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner_stakeholder
          belongs_to :jockey_stakeholder
          belongs_to :trainer_stakeholder
        end

        class Stakeholder < ActiveRecord::Base
          has_many :connections
          has_many :horses, :through => :connections
        end

        class Owner < Stakeholder
          # Owner specific code goes here.
        end

        class Jockey < Stakeholder
          # Jockey specific code goes here.
        end

        class Trainer < Stakeholder
          # Trainer specific code goes here.
        end

    On the database end, I have added a Type column to the connections table. Have I modelled this correctly? Is there a better/more elegant approach? Thanks in advance for your feedback. Jim

    Read the article

  • Using OUTPUT/INTO within instead of insert trigger invalidates 'inserted' table

    - by Dan
    I have a problem using a table with an INSTEAD OF INSERT trigger. The table I created contains an identity column, and I need to use an INSTEAD OF INSERT trigger on this table. I also need to see the value of the newly inserted identity from within my trigger, which requires the use of OUTPUT ... INTO within the trigger. The problem is that clients performing INSERTs then cannot see the inserted values. For example, I create a simple table:

        CREATE TABLE [MyTable](
            [MyID] [int] IDENTITY(1,1) NOT NULL,
            [MyBit] [bit] NOT NULL,
            CONSTRAINT [PK_MyTable_MyID] PRIMARY KEY NONCLUSTERED
            (
                [MyID] ASC
            ))

    Next I create a simple INSTEAD OF trigger:

        create trigger [trMyTableInsert] on [MyTable]
        instead of insert
        as
        BEGIN
            DECLARE @InsertedRows table( MyID int, MyBit bit);

            INSERT INTO [MyTable] ([MyBit])
            OUTPUT inserted.MyID, inserted.MyBit INTO @InsertedRows
            SELECT inserted.MyBit FROM inserted;

            -- LOGIC NOT SHOWN HERE THAT USES @InsertedRows
        END;

    Lastly, I attempt to perform an insert and retrieve the inserted values:

        DECLARE @tbl TABLE (myID INT)

        insert into MyTable (MyBit)
        OUTPUT inserted.MyID INTO @tbl
        VALUES (1)

        SELECT * from @tbl

    The issue is that all I ever get back is zero. I can see the row was correctly inserted into the table. I also know that if I remove the OUTPUT ... INTO from within the trigger, this problem goes away. Any thoughts as to what I'm doing wrong? Or is what I want to do not feasible? Thanks.

    Read the article

  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables:

        OutputPackages (master)
        |PackageID|

        OutputItems (detail)
        |ItemID|PackageID|

    OutputItems has an index called 'idxPackage' set on the PackageID column, and ItemID is set to auto-increment. Here's the code I'm using to insert masters/details into these tables:

        //fill packages table
        for i := 1 to 10 do
        begin
          Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
          if Package.PackageLoaded then
          begin
            with tblOutputPackages do
            begin
              Insert;
              FieldByName('PackageID').AsInteger := Package.ourNum;
              FieldByName('Description').AsString := Package.Title;
              FieldByName('Total').AsCurrency := Package.Total;
              Post;
            end;

            //fill items table
            for ii := 1 to 10 do
            begin
              Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
              if Item.Activated then
              begin
                with tblOutputItems do
                begin
                  Append;
                  FieldByName('PackageID').AsInteger := Package.ourNum;
                  FieldByName('Description').AsString := Item.Description;
                  FieldByName('Comment').AsString := Item.Comment;
                  FieldByName('Price').AsCurrency := Item.Price;
                  Post; //this causes the primary key exception
                end;
              end;
            end;
          end;

    This works fine as long as I don't touch the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error saying I have a duplicate primary key 'ItemID'. I'm not sure what's going on - this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly?

    Update: I removed the primary key constraint in my database, and I see that for some reason the auto-increment feature of the OutputItems table isn't working as I expected. Here's how the OutputItems table looks after running the above code:

        ItemID|PackageID|
        1     |1        |
        1     |1        |
        2     |2        |
        2     |2        |

    I still don't see why all the ItemID values aren't unique... Any ideas?

    Read the article

  • Objective-C Getter Memory Management

    - by Marian André
    I'm fairly new to Objective-C and am not sure how to deal correctly with memory management in the following scenario. I have a Core Data entity with a to-many relationship for the key "children". In order to access the children as an array, sorted by the column "position", I wrote the model class this way:

        @interface AbstractItem : NSManagedObject
        {
            NSArray *arrangedChildren;
        }

        @property (nonatomic, retain) NSSet *children;
        @property (nonatomic, retain) NSNumber *position;
        @property (nonatomic, retain) NSArray *arrangedChildren;

        @end

        @implementation AbstractItem

        @dynamic children;
        @dynamic position;
        @synthesize arrangedChildren;

        - (NSArray*)arrangedChildren
        {
            NSArray* unarrangedChildren = [[self.children allObjects] retain];
            NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"position" ascending:YES];
            [arrangedChildren release];
            arrangedChildren = [unarrangedChildren sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]];
            [sortDescriptor release];
            [unarrangedChildren release];
            return [arrangedChildren retain];
        }

        @end

    I'm not sure whether or not to retain unarrangedChildren and the returned arrangedChildren (the first and last lines of the arrangedChildren getter). Does the NSSet allObjects method already return a retained array? It's probably too late and I have a coffee overdose. I'd be really thankful if someone could point me in the right direction. I guess I'm missing vital parts of memory management knowledge, and I will definitely look into it thoroughly.

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background

    My group has 4 SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin, who promotes to UAT. After successful user testing, the same admin promotes to Production.

    The Problem

    The entire process is awkward for a few reasons. Each person must manually track their changes: if I update, add or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the testers' time anyway. Lots of the changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes, at least count-wise, are minor things: changing the data type of a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, and so on.

    The Question

    People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is to be able to run a diff between two databases to see how the structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • Retrieve a dynamically created TextBox element and focus it

    - by user335444
    Hi, I have a collection (VariableValueCollection) of custom VariableValueViewModel objects bound to a ListView. The WPF follows:

        <ListView ItemsSource="{Binding VariableValueCollection}" Name="itemList">
          <ListView.Resources>
            <DataTemplate DataType="{x:Type vm:VariableValueViewModel}">
              <Grid>
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="180"></ColumnDefinition>
                </Grid.ColumnDefinitions>
                <TextBox TabIndex="{Binding Path=Index, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}"
                         Grid.Column="0" Name="tbValue" Focusable="True"
                         LostFocus="tbValue_LostFocus" GotFocus="tbValue_GotFocus" KeyDown="tbValue_KeyDown">
                  <TextBox.Text>
                    <Binding Path="Value" UpdateSourceTrigger="PropertyChanged" Mode="TwoWay">
                      <Binding.ValidationRules>
                        <ExceptionValidationRule></ExceptionValidationRule>
                      </Binding.ValidationRules>
                    </Binding>
                  </TextBox.Text>
                </TextBox>
              </Grid>
            </DataTemplate>
          </ListView.Resources>
        </ListView>

    My goal is to add a new row when I press Enter on the last row, and then focus the new row. To do that, I check whether the current row is the last row and, if so, add a new row. But I don't know how to focus the newly created TextBox. Here is the KeyDown handler:

        private void tbValue_KeyDown(object sender, KeyEventArgs e)
        {
            if (e.Key == System.Windows.Input.Key.Enter)
            {
                DependencyObject obj = itemList.ContainerFromElement((sender as TextBox));
                int index = itemList.ItemContainerGenerator.IndexFromContainer(obj);
                if (index == (VariableValueCollection.Count - 1))
                {
                    // Create a VariableValueViewModel object and add it to the collection.
                    // Through the binding, this creates a new list item with a new TextBox.
                    ViewModel.AddNewRow();

                    // How do I set the cursor and focus on the last row created?
                }
            }
        }

    Thanks in advance...
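
    A hedged sketch of one way to do the focusing (not from the original post): after adding the item, defer with the dispatcher until the ListView has generated the new item container, then walk that container's visual tree to find the TextBox and focus it. It assumes the added item is the last one in VariableValueCollection, and that this code lives in the same code-behind class as the handler above.

        // Minimal sketch: wait until containers/layout exist, then focus the TextBox of the last row.
        private void FocusLastRow()
        {
            Dispatcher.BeginInvoke(System.Windows.Threading.DispatcherPriority.ContextIdle, new Action(() =>
            {
                int last = VariableValueCollection.Count - 1;
                itemList.ScrollIntoView(itemList.Items[last]);
                var container = itemList.ItemContainerGenerator.ContainerFromIndex(last) as ListViewItem;
                if (container == null) return;

                TextBox textBox = FindDescendant<TextBox>(container);   // helper defined below
                if (textBox != null)
                {
                    textBox.Focus();
                    Keyboard.Focus(textBox);
                }
            }));
        }

        // Walks the visual tree looking for the first descendant of type T.
        private static T FindDescendant<T>(DependencyObject root) where T : DependencyObject
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(root); i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(root, i);
                T typed = child as T;
                if (typed != null) return typed;
                T result = FindDescendant<T>(child);
                if (result != null) return result;
            }
            return null;
        }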

    Read the article

  • Drill through table does not show correct count when used with a dimension having parent child hierarchy

    - by Arun Singhal
    Hi all, I have a dimension with a parent-child hierarchy, as shown in the code block below. The issue I am facing: if I have a filter on the parent-child dimension, the drill-through table does not show the filtered data - instead it shows all the data for that dimension. Here is an example.

        <Dimension type="StandardDimension" name="page_type_d" caption="Page Type">
          <Hierarchy name="page_type_h" hasAll="true" allMemberName="all_page_types"
                     allMemberCaption="All Page Types" primaryKey="id">
            <Table name="npg_page_type_view" alias="pt">
            </Table>
            <Level name="Page Type" column="id" nameColumn="display_name" parentColumn="parent_id"
                   nullParentValue="0" type="Integer" uniqueMembers="true" levelType="Regular"
                   hideMemberIf="Never" caption="Page Type">
              <Closure parentColumn="parent_id" childColumn="page_type_id">
                <Table name="dim_page_types_closure">
                </Table>
              </Closure>
            </Level>
          </Hierarchy>

    Now suppose I have 4 rows in the npg_page_type_view table:

        id   display_name   parent_id
        19   HTML           100
        20   PDF            100
        21   XML            0
        100  Total          0

    And suppose my fact table has the following records:

        id   count
        19   2
        20   3
        21   1

    The following is my analysis view:

        Total (HTML and PDF) - 5
        HTML - 2
        PDF - 3
        XML - 1

    Now if I add a filter (say, Total) to this analysis view using the OLAP cube, my analysis view shows:

        Total (HTML and PDF) - 5

    Up to this point everything works fine. But if I click on 5 (to view the drill-through table), it shows me data for all page types, i.e. HTML, PDF and XML, whereas per the filter it should show only HTML and PDF. Is this an existing issue, or am I doing something wrong here? Please help me.

    Read the article
