Search Results

Search found 5380 results on 216 pages for 'primary'.

Page 70 of 216

  • Zend_Db Enum Values [Closed]

    - by scopus
    Hi, I want to get the enum values of a column in Zend_Db. My code:

        $select = $this->select();
        $result = $select->fetchAll();
        print_r($result->getTable());

    Output (abridged):

        Example Object (
            [_name] => country
            [query] => Zend_Db_Table_Select Object (
                [_info:protected] => Array (
                    [schema] =>
                    [name] => country
                    [cols] => Array ( [0] => Code [1] => Continent )
                    [primary] => Array ( [1] => Code )
                    [metadata] => Array (
                        [Continent] => Array (
                            [SCHEMA_NAME] =>
                            [TABLE_NAME] => country
                            [COLUMN_NAME] => Continent
                            [COLUMN_POSITION] => 3
                            [DATA_TYPE] => enum('Asia','Europe','North America','Africa','Oceania','Antarctica','South America')
                            [DEFAULT] => Asia
                            [NULLABLE] =>
                            [LENGTH] =>
                            [SCALE] =>
                            [PRECISION] =>
                            [UNSIGNED] =>
                            [PRIMARY] =>
                            [PRIMARY_POSITION] =>
                            [IDENTITY] =>
                        )
                    )
                )
            )
        )

    I can see the enum values in DATA_TYPE, but I can't get at them. How can I read DATA_TYPE? Update: I found this solution:

        $metadata = $result->getTable()->info('metadata');
        echo $metadata['Continent']['DATA_TYPE'];
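
    A SQL-side alternative (a minimal sketch, assuming a MySQL backend and that the country table lives in the connection's current schema):

        -- Read the raw enum definition straight from information_schema
        -- instead of Zend_Db's metadata cache; COLUMN_TYPE holds the full
        -- enum('Asia','Europe',...) string, which still needs parsing.
        SELECT COLUMN_TYPE
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = DATABASE()
          AND TABLE_NAME   = 'country'
          AND COLUMN_NAME  = 'Continent';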

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since the page is relatively high traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. So, I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all
        FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
            `id` int(11) NOT NULL auto_increment,
            `name` varchar(255) default NULL,
            `perma_name` varchar(255) default NULL,
            `author_id` int(11) default NULL,
            `created_at` datetime default NULL,
            `updated_at` datetime default NULL,
            `blog_picture_id` int(11) default NULL,
            `blog_picture2_id` int(11) default NULL,
            `page_id` int(11) default NULL,
            `blog_picture3_id` int(11) default NULL,
            `active` tinyint(1) default '1',
            PRIMARY KEY (`id`),
            KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    And

        CREATE TABLE IF NOT EXISTS `blog_views` (
            `id` int(11) NOT NULL auto_increment,
            `blog_id` int(11) default NULL,
            `ip` varchar(255) default NULL,
            `created_at` datetime default NULL,
            `updated_at` datetime default NULL,
            PRIMARY KEY (`id`),
            KEY `index_blog_views_on_blog_id` (`blog_id`),
            KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
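
    One hedged suggestion, not from the original post: MySQL will generally use only one index per table reference, so the separate indexes on blog_id and created_at cannot satisfy this WHERE clause together. A composite index over both columns usually can:

        -- Lets the count be answered by an index range scan on
        -- (blog_id = 1, created_at >= ...) instead of scanning rows.
        ALTER TABLE blog_views
            ADD INDEX index_blog_views_on_blog_id_and_created_at (blog_id, created_at);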

    Read the article

  • Problem installing a Windows service

    - by prateeksaluja20
    Hello friends, I made a Windows service and added a project installer. It only contains this code, inside a timer tick event with a 60-second interval (I just wanted to try running a Windows service):

        System.Diagnostics.Process.Start(@"C:\Windows\system32\notepad.exe");

    First, on serviceProcessInstaller1 I changed the account setting to Local System. Second, on serviceInstaller1 I changed the startup type to Automatic. Then I created a setup: I added another project, right-clicked it, chose Add Project Output, added the primary output, and pressed OK. Then I went to right-click on the project > View > Custom Actions, right-clicked Install > Add Custom Action, selected Application Folder and added the primary output; I did the same for all the remaining options (Commit, Rollback, Uninstall). After that I built the setup. It built successfully, and it installed properly into Program Files, creating one .exe file and one install file. But the problem is that when I search for the service in "services.msc", the service is not there; it is not showing up. I have tried but am not getting the answer. Please help me solve this problem.

    Read the article

  • Updating a Database from DataBound Controls

    - by Avatar_Squadron
    Hi. I'm currently creating a WinForm in VB.NET bound to an Access database. Basically, what I have are two forms: one is a search form used to search the database, and the other is a details form. You run a search on the search form and it returns a list of primary keys and a few other identifying values. You then double-click on the entry you want to view, and it loads the details form. The details form has a collection of databound controls to display the data: mostly text boxes and checkboxes. The way I've set it up is that I used the UI to build the form and then set the DataBindings property of each control to "TblPropertiesBindingSource - <value name>", where <value name> is one of the values in the table (such as PropertyID or HasWoodFloor). Then, when you double-click an entry in the search form, I handle the event by parsing the primary key (PropertyID) out of the selected row and storing it on the details form (note: detail is the details form that is opened to display the info):

        Private Sub propView_CellDoubleClick(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles propView.CellDoubleClick
            Dim detail As frmPropertiesDetail = New frmPropertiesDetail
            detail.id = propView.Rows(e.RowIndex).Cells(0).Value
            detail.Show()
        End Sub

    Then, upon loading the details form, it sets the filter on the BindingSource as such:

        TblPropertiesBindingSource.Filter() = "PropertyID=" & id

    This works great so far. All the controls on the details form display the correct info. The problem is updating changes. Scenario: if I have the user load the details for, say, property 10001, it will show a description in a text box named descriptionBox which is identical to the description value for that entry in the database. I want the user to then be able to change the text of the text box (which they can currently do), click the save button (saveBut), and have the form update all the values in the controls to the database. Theoretically it should do this, since the controls are databound; thus I can avoid writing code that tells each entry in the database row to take the value of the aligned control. I've tried calling PropertiesTableAdapter.Update(PropertiesBindingSource.DataSource), but that doesn't seem to do it.

    Read the article

  • Immutability and shared references - how to reconcile?

    - by davetron5000
    Consider this simplified application domain: a criminal investigative database. A Person is anyone involved in an investigation. A Report is a bit of info that is part of an investigation. A Report references a primary Person (the subject of an investigation). A Report has accomplices who are secondarily related (and could certainly be primary in other investigations or reports). These classes have ids that are used to store them in a database, since their info can change over time (e.g. we might find new aliases for a person, or add persons of interest to a report). If these are stored in some sort of database and I wish to use immutable objects, there seems to be an issue regarding state and referencing. Suppose that I change some metadata about a Person. Since my Person objects are immutable, I might have some code like:

        class Person(
            val id:UUID,
            val aliases:List[String],
            val reports:List[Report]) {
          def addAlias(name:String) = new Person(id, name :: aliases, reports)
        }

    So my Person with a new alias becomes a new object, also immutable. If a Report refers to that person, but the alias was changed elsewhere in the system, my Report now refers to the "old" person, i.e. the person without the new alias. Similarly, I might have:

        class Report(val id:UUID, val content:String) {
          /** Adding more info to our report */
          def updateContent(newContent:String) = new Report(id, newContent)
        }

    Since these objects don't know who refers to them, it's not clear to me how to let all the "referrers" know that there is a new object available representing the most recent state. This could be done by having all objects "refresh" from a central data store, with all operations that create new, updated objects storing to that central data store, but this feels like a cheesy reimplementation of the underlying language's referencing; i.e. it would be clearer to just make these "secondary storable objects" mutable. Then, if I add an alias to a Person, all referrers see the new value without doing anything. How is this dealt with when we want to avoid mutability, or is this a case where immutability is not helpful?

    Read the article

  • Storing data in a MySQL database using PHP

    - by comma
    I'm new to PHP and MySQL, and I'm trying to store data a user entered in the fields $skill, $experience, and $years. A user can also add additional $skill, $experience, and $years fields if needed, so instead of one of each field there might be multiples of each. I was wondering: how can I store the fields in my MySQL database using PHP? I have the following script, but I know it's wrong. Can someone help me fix it? Here is the PHP and MySQL code:

        $skill = serialize($_POST['skill']);
        $experience = serialize($_POST['experience']);
        $years = serialize($_POST['years']);
        for (($s = 0; $s < count($skill); $s++) && ($x = 0; $x < count($experience); $x++) && ($g = 0; $g < count($years); $g++)) {
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $query1 = "INSERT INTO learned_skills (skill, experience, years) VALUES ('" . $skill[$s] . "', '" . $experience[$x] . "', '" . $years[$g] . "')";
            if (!mysqli_query($mysqli, $query1)) {
                print mysqli_error($mysqli);
                return;
            }
        }

    Here is my MySQL table:

        CREATE TABLE learned_skills (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            skill TEXT NOT NULL,
            experience TEXT NOT NULL,
            years INT NOT NULL,
            PRIMARY KEY (id)
        );

        CREATE TABLE u_skills (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            skill_id INT UNSIGNED NOT NULL,
            users_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );
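
    For reference, a hedged sketch of what the loop should ultimately send to MySQL (the skill values below are invented for illustration): one row per skill/experience/years triple, which a single loop index can produce because the three POST arrays are parallel, and which can even be batched into one statement:

        -- Multi-row insert; each parenthesized triple is one
        -- (skill, experience, years) combination from the form.
        INSERT INTO learned_skills (skill, experience, years)
        VALUES ('PHP',   'intermediate', 2),
               ('MySQL', 'beginner',     1),
               ('HTML',  'advanced',     4);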

    Read the article

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query. I've worked at it for a while now and can't get it any faster:

        SELECT date, count(id) as 'visits'
        FROM dates
        LEFT OUTER JOIN visits
            ON (dates.date = DATE(visits.start) and account_id = 40)
        WHERE date >= '2010-12-13' AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id, and visits.start+visits.account_id, and can't get it to run any faster. Table structure (only showing relevant columns in the visits table):

        create table visits (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `account_id` int(11) NOT NULL,
            `start` DATETIME NOT NULL,
            `end` DATETIME NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE `dates` (
            `date` date NOT NULL,
            PRIMARY KEY (`date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the dates table so the join will return 0 visits for days with no visits. Results I want, for reference:

        +------------+--------+
        | date       | visits |
        +------------+--------+
        | 2010-12-13 |    301 |
        | 2010-12-14 |    356 |
        | 2010-12-15 |    423 |
        | 2010-12-16 |    332 |
        | 2010-12-17 |    346 |
        | 2010-12-18 |    226 |
        | 2010-12-19 |    213 |
        | 2010-12-20 |    311 |
        | 2010-12-21 |    273 |
        | 2010-12-22 |    286 |
        | 2010-12-23 |    241 |
        | 2010-12-24 |    149 |
        | 2010-12-25 |    102 |
        | 2010-12-26 |    174 |
        | 2010-12-27 |    258 |
        | 2010-12-28 |    348 |
        | 2010-12-29 |    392 |
        | 2010-12-30 |    395 |
        | 2010-12-31 |    278 |
        | 2011-01-01 |    241 |
        | 2011-01-02 |    295 |
        | 2011-01-03 |    369 |
        | 2011-01-04 |    438 |
        | 2011-01-05 |    393 |
        | 2011-01-06 |    368 |
        | 2011-01-07 |    435 |
        | 2011-01-08 |    313 |
        | 2011-01-09 |    250 |
        | 2011-01-10 |    345 |
        | 2011-01-11 |    387 |
        | 2011-01-12 |      0 |
        | 2011-01-13 |      0 |
        +------------+--------+

    Thanks in advance for any help!
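
    A hedged observation, not from the original post: wrapping the indexed column in DATE(visits.start) prevents MySQL from using any index on start, so the join must examine all ~400k visit rows. Comparing start against day boundaries keeps the predicate index-friendly, and a composite index then covers the whole join:

        ALTER TABLE visits ADD INDEX idx_visits_account_start (account_id, start);

        -- Rewritten join condition (same result, but sargable):
        SELECT date, count(id) AS visits
        FROM dates
        LEFT OUTER JOIN visits
            ON (visits.start >= dates.date
                AND visits.start < dates.date + INTERVAL 1 DAY
                AND account_id = 40)
        WHERE date >= '2010-12-13' AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC;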

    Read the article

  • C++: Maybe you know this pitfall?

    - by Martijn Courteaux
    Hi, I'm developing a game. I have a header GameSystem (just methods like the game loop, no class) with two variables: int mouseX and int mouseY. These are updated in my game loop. Now I want to access them from the Game.cpp file (a class built from a header file and the source file). So I #include "GameSystem.h" in Game.h. After doing this I get a lot of compile errors. When I remove the include, the compiler says, of course:

        Game.cpp:33: error: ‘mouseX’ was not declared in this scope
        Game.cpp:34: error: ‘mouseY’ was not declared in this scope

    where I want to access mouseX and mouseY. All my .h files have header guards, generated by Eclipse. I'm using SDL, and if I remove the lines that want to access the variables, everything compiles and runs perfectly (*). I hope you can help me... This is the error log when I #include "GameSystem.h" (all the code it refers to works, as explained by the (*)):

        In file included from ../trunk/source/domein/Game.h:14,
                         from ../trunk/source/domein/Game.cpp:8:
        ../trunk/source/domein/GameSystem.h:30: error: expected constructor, destructor, or type conversion before ‘*’ token
        ../trunk/source/domein/GameSystem.h:46: error: variable or field ‘InitGame’ declared void
        ../trunk/source/domein/GameSystem.h:46: error: ‘Game’ was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: ‘g’ was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘char’
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘bool’
        ../trunk/source/domein/FPS.h:46: warning: ‘void FPS_SleepMilliseconds(int)’ defined but not used

    This is the code which tries to access the two variables:

        SDL_Rect pointer;
        pointer.x = mouseX;
        pointer.y = mouseY;
        pointer.w = 3;
        pointer.h = 3;
        SDL_FillRect(buffer, &pointer, 0xFF0000);

    Read the article

  • NHibernate + Cannot insert the value NULL into...

    - by mybrokengnome
    I've got an MS SQL database with a table created with this code:

        CREATE TABLE [dbo].[portfoliomanager](
            [idPortfolioManager] [int] NOT NULL PRIMARY KEY IDENTITY,
            [name] [varchar](45) NULL
        )

    so that idPortfolioManager is my primary key and also auto-incrementing. Now, in my Windows WPF application, I'm using NHibernate to help with adding/updating/removing/etc. data from the database. Here is the class that should be connecting to the portfoliomanager table:

        namespace PortfolioManager
        {
            [Class(Table="portfoliomanager", NameType=typeof(PortfolioManagerClass))]
            public class PortfolioManagerClass
            {
                [Id(Name = "idPortfolioManager")]
                [Generator(1, Class = "identity")]
                public virtual int idPortfolioManager { get; set; }

                [NHibernate.Mapping.Attributes.Property(Name = "name")]
                public virtual string name { get; set; }

                public PortfolioManagerClass() { }
            }
        }

    and some short code to try and insert something:

        PortfolioManagerClass portfolio = new PortfolioManagerClass();
        portfolio.name = "Brad's Portfolios";

    The problem is that when I try running this, I get this error:

        System.Data.SqlClient.SqlException: Cannot insert the value NULL into column 'idPortfolioManager', table 'PortfolioManagementSystem.dbo.portfoliomanager'; column does not allow nulls. INSERT fails. The statement has been terminated...

    with an outer exception of:

        could not insert: [PortfolioManager.PortfolioManagerClass][SQL: INSERT INTO portfoliomanager (name) VALUES (?); select SCOPE_IDENTITY()]

    I'm hoping this is the last error I'll have to solve with NHibernate just to get it to do something; it's been a long process. Just as a note, I've also tried setting Class="native" and unsaved-value="0", with the same error. Thanks! Edit: OK, removing the "1," from Generator actually allows the program to run (not sure why that was even in the samples I was looking at), but the row doesn't actually get added to the database. I logged in to the server and ran the SQL Server Profiler tool, and I never see the connection coming through or the SQL it's trying to run, but NHibernate isn't throwing an error anymore. Starting to think it would be easier to just write SQL statements myself :(
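
    For what it's worth, a hedged illustration of what a correct identity mapping makes NHibernate emit (it matches the SQL already visible in the exception): the IDENTITY column is omitted from the INSERT so SQL Server assigns it, and the new key is read back immediately:

        INSERT INTO portfoliomanager (name) VALUES ('Brad''s Portfolios');
        SELECT SCOPE_IDENTITY();

    The missing piece in the edit is usually an explicit session.Save(portfolio) followed by a transaction commit or session.Flush(); until the session flushes, no SQL reaches the server, which would also explain the silent Profiler trace.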

    Read the article

  • paypal_adaptive gem in Rails: Dynamic Receiver "Population" (Chained Payments)

    - by Jmlevick
    Note: I didn't find a better title for this O.o Hello... Look, what I want to do is have a Rails app where a visitor can click a button/link to make a "special" chained payment using PayPal. Currently I have a user registration form that has one field for the user to enter his/her PayPal account email, and as I saw here: http://marker.to/XGg9MR it is possible to specify the primary receiver and the secondary ones by adding such info in a controller action when using the paypal_adaptive gem in a Rails app. The thing is, I don't want to hard-code the secondary receiver, as I need to specify a different secondary receiver from time to time. (To be specific: my primary receiver will always be the same, but depending on which button/link the visitor clicks, the secondary one is going to change.) And I want that secondary receiver email to be the PayPal account email of one of the registered users when the visitor clicks on their specific button/link... My question is: is it possible to create such functionality in my app using the current implementation of the paypal_adaptive gem? Could someone point me in the right direction on how to accomplish this? I'm still learning Rails, and I'm also really new to the PayPal handling universe with this framework! XD P.S. Thanks! :)

    Read the article

  • ODP.NET SQL query: retrieve a set of rows from two input arrays

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based on two input arrays, each corresponding to one primary key column:

        select pkt1.id, pkt1.id2, ...
        from PrimaryKeyTable pkt1, table(:1) t1, table(:2) t2
        where pkt1.id = t1.column_value
        and pkt1.id2 = t2.column_value

    I then bind the values with two int[] in ODP.NET. This returns all different combinations of my resulting rows. So if I am expecting 13 rows, I receive 169 rows (13*13). The problem is that each value in t1 and t2 should be linked: value t1[4] should be used with t2[4], not with all the different values in t2. Using distinct solves my problem, but I'm wondering if my approach is wrong. Anyone have any pointers on how to solve this the best way? One way might be to use a for-loop accessing each index in t1 and t2 sequentially, but I wonder what would be more efficient. Edit: actually, distinct won't solve my problem; it only appeared to, because of my input values (all values in t2 = 0).
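
    A hedged sketch of the pairing idea, with a caveat: Oracle does not formally guarantee ROWNUM enumeration order over a collection, so this needs verifying before relying on it. Number each array's elements first, join the two numbered sets on position, and only then join to the table:

        select pkt1.id, pkt1.id2
        from PrimaryKeyTable pkt1,
             (select rownum rn, column_value id  from table(:1)) t1,
             (select rownum rn, column_value id2 from table(:2)) t2
        where t1.rn = t2.rn          -- pair t1[i] with t2[i]
          and pkt1.id  = t1.id
          and pkt1.id2 = t2.id2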

    Read the article

  • jQuery animate negative top and back to 0 - starts messing up after 3rd click

    - by Daniel Takyi
    The site in question is this one: http://www.pickmixmagazine.com/wordpress/ When you click on one of the posts (any of the boxes), an iframe slides down from the top with the content in it. Once the "Home" button in the top left-hand corner of the iframe is clicked, the iframe slides back up. This works perfectly the first 2 times. On the 3rd click of a post, the content will slide down, but when the home button is clicked, the content slides back up normally; then, once it has slid all the way up to the position it should be in, the iframe drops straight back down to where it was before the home button was clicked. I click it again and then it works. Here is the code I've used for both the slide-up and slide-down functions:

        /* slide down function */
        var $div = $('iframe.primary');
        var height = $div.height();
        var width = parseInt($div.width());
        $div.css({ height : height });
        $div.css('top', -($div.width()));
        $('.post').click(function () {
            $('iframe.primary').load(function(){
                $div.animate({ top: 0 }, { duration: 1000 });
            })
            return false;
        });

        /* slide up function */
        var elm = parent.document.getElementsByTagName('iframe')[0];
        var jelm = $(elm);     // convert to jQuery element
        var htmlElm = jelm[0]; // convert to HTML element
        $('.homebtn').click(function(){
            $(elm).animate({ top: -height }, { duration: 1000 });
            return false;
        })

    Read the article

  • Delete all previous records and insert new ones

    - by carlos
    When updating an employee with id = 1, for example, what is the best way to delete all previous records in the CERTIFICATE table for this employee_id and insert the new ones?

        create table EMPLOYEE (
            id INT NOT NULL auto_increment,
            first_name VARCHAR(20) default NULL,
            last_name VARCHAR(20) default NULL,
            salary INT default NULL,
            PRIMARY KEY (id)
        );

        create table CERTIFICATE (
            id INT NOT NULL auto_increment,
            certificate_name VARCHAR(30) default NULL,
            employee_id INT default NULL,
            PRIMARY KEY (id)
        );

    Hibernate mapping:

        <?xml version="1.0" encoding="utf-8"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD//EN"
            "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
        <hibernate-mapping>
            <class name="Employee" table="EMPLOYEE">
                <id name="id" type="int" column="id">
                    <generator class="sequence">
                        <param name="sequence">employee_seq</param>
                    </generator>
                </id>
                <set name="certificates" lazy="false" cascade="all">
                    <key column="employee_id" not-null="true"/>
                    <one-to-many class="Certificate"/>
                </set>
                <property name="firstName" column="first_name"/>
                <property name="lastName" column="last_name"/>
                <property name="salary" column="salary"/>
            </class>
            <class name="Certificate" table="CERTIFICATE">
                <id name="id" type="int" column="id">
                    <generator class="sequence">
                        <param name="sequence">certificate_seq</param>
                    </generator>
                </id>
                <property name="employee_id" column="employee_id" insert="false" update="false"/>
                <property name="name" column="certificate_name"/>
            </class>
        </hibernate-mapping>
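
    On the SQL side, a hedged sketch of the replace-all step (certificate names invented for illustration), wrapped in a transaction so no reader sees the employee with zero certificates in between:

        START TRANSACTION;
        DELETE FROM CERTIFICATE WHERE employee_id = 1;
        INSERT INTO CERTIFICATE (certificate_name, employee_id)
        VALUES ('MCSD', 1), ('SCJP', 1);
        COMMIT;

    In Hibernate terms, the same effect usually comes from clearing the certificates collection and re-adding the new items before saving; note that orphan rows are only deleted automatically with cascade="all-delete-orphan" rather than cascade="all".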

    Read the article

  • What kind of data do I pass into a Django Model.save() method?

    - by poswald
    Let's say that we are getting POSTed a form like this in Django:

        rate=10
        items=[23,12,31,52,83,34]

    The items are primary keys of an Item model. I have a bunch of business logic that will run and create more items based on this data, the results of some db lookups, and some business logic. I want to put that logic into a save signal or an overridden Model.save() method of another model (let's call it Inventory). The business logic will run when I create a new Inventory object using this form data. Inventory will look like this:

        class Inventory(models.Model):
            picked_items = models.ManyToManyField(Item, related_name="items_picked_set")
            calculated_items = models.ManyToManyField(Item, related_name="items_calculated_set")
            rate = models.DecimalField()
            # ... other fields here ...

    New calculated_items will be created based on the passed-in items, which will be stored as picked_items. My question is this: is it better for the save() method on this model to accept:

    1. the request object (I don't really like this coupling),
    2. the form data as arguments or kwargs (a list of primary keys and the other form fields),
    3. a list of Items (the caller form or view will look up the list of Items, create the list, and pass in the other form fields), or
    4. some other approach?

    I know this is a bit subjective, but I was wondering what the general idea is. I've looked through a lot of code but I'm having a hard time finding a pattern I like.

    Read the article

  • INSERT..ON DUPLICATE KEY UPDATE - but NOT using the duplicate key to compare.

    - by calumbrodie
    I am trying to solve a problem I have inherited from poor treatment of different data sources. I have a user table that contains BOTH good and evil users:

        create table `users` (
            `user_id` int(13) NOT NULL AUTO_INCREMENT,
            `email` varchar(255),
            `name` varchar(255),
            PRIMARY KEY (`user_id`)
        );

    In this table the primary key is currently set to be user_id. I have another table ('users_evil') which contains ONLY the evil users (all the users from this table are included in the first table) - but the user_id's in this table do NOT correspond to those in the first table. I want to have all my users in one table and simply flag which are good and which are evil. What I want to do is alter the users table and add a column ('evil') which defaults to 0. I then want to dump the data from my 'users_evil' table and run an INSERT..ON DUPLICATE KEY UPDATE with this data into the first table (setting 'evil'=1 where the emails match). The problem is that the PK is set to user_id and not email. Any suggestions, or even another strategy to successfully achieve this? Can I run this statement but treat another column as the PK only for the duration of the statement?
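
    There is no per-statement key, but ON DUPLICATE KEY UPDATE fires on any unique index, not just the primary key. A hedged sketch, assuming email currently has no duplicates among the good users:

        ALTER TABLE users ADD COLUMN evil TINYINT(1) NOT NULL DEFAULT 0;
        ALTER TABLE users ADD UNIQUE KEY uniq_email (email);

        -- Matching emails flip to evil=1; evil users missing from
        -- `users` are inserted fresh with new auto-increment ids.
        INSERT INTO users (email, name, evil)
        SELECT email, name, 1 FROM users_evil
        ON DUPLICATE KEY UPDATE evil = 1;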

    Read the article

  • How to find missing rows in Oracle

    - by user203212
    Hi, I have 2 tables:

        create table ORDERS (
            ORDER_NO NUMBER(38,0) not null,
            ORDER_DATE DATE not null,
            SHIP_DATE DATE null,
            SHIPPING_METHOD VARCHAR2(12) null,
            TAX_STATUS CHAR(1) null,
            SUBTOTAL NUMBER null,
            TAX_AMT NUMBER null,
            SHIPPING_CHARGE NUMBER null,
            TOTAL_AMT NUMBER null,
            CUSTOMER_NO NUMBER(38,0) null,
            EMPLOYEE_NO NUMBER(38,0) null,
            BRANCH_NO NUMBER(38,0) null,
            constraint ORDERS_ORDERNO_PK primary key (ORDER_NO)
        );

    and

        create table PAYMENTS (
            PAYMENT_NO NUMBER(38,0) NOT NULL,
            CUSTOMER_NO NUMBER(38,0) null,
            ORDER_NO NUMBER(38,0) null,
            AMT_PAID NUMBER NULL,
            PAY_METHOD VARCHAR(10) NULL,
            DATE_PAID DATE NULL,
            LATE_DAYS NUMBER NULL,
            LATE_FEES NUMBER NULL,
            constraint PAYMENTS_PAYMENTNO_PK primary key (PAYMENT_NO)
        );

    I am trying to find how many late orders each customer has. The LATE_DAYS column in the PAYMENTS table holds how many days the customer is late in making payments for any particular order. So I am running this query:

        SELECT C.CUSTOMER_NO, C.lname, C.fname, sysdate,
               COUNT(P.ORDER_NO) as number_LATE_ORDERS
        FROM CUSTOMER C, orders o, PAYMENTS P
        WHERE C.CUSTOMER_NO = o.CUSTOMER_NO
          AND P.order_no = o.order_no
          AND P.LATE_DAYS > 0
        group by C.CUSTOMER_NO, C.lname, C.fname

    That means I am counting the orders that have any late payments, with late_days > 0. But this gives me only the customers who have orders with late_days > 0; the customers who do not have any late orders do not show up. So if one customer has 5 orders with late payments, it shows 5 for that customer, but if a customer has 0 late orders, that customer is not selected by this query. Is there any way to select all the customers, showing the count for those who have late orders and 0 for those who do not?
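
    A hedged sketch of one standard fix: use outer joins and move the lateness test into a conditional count, so customers with no late orders survive the join and count as 0:

        SELECT c.customer_no, c.lname, c.fname,
               COUNT(CASE WHEN p.late_days > 0 THEN p.order_no END) AS number_late_orders
        FROM customer c
        LEFT JOIN orders o   ON o.customer_no = c.customer_no
        LEFT JOIN payments p ON p.order_no = o.order_no
        GROUP BY c.customer_no, c.lname, c.fname;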

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow / how to make it faster?
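
    A hedged reading of that plan, beyond what the original post says: 62 of the 74 seconds go into the sequential scan of files, and the planner expects 1,140,941 rows there while only 805,270 come back - a gap that often points at table bloat and stale statistics. The usual first aid:

        -- Rewrites the table without dead tuples and refreshes the
        -- planner's statistics (takes an exclusive lock while running).
        VACUUM FULL ANALYZE files;

    On newer PostgreSQL versions (9.2+), a narrow index can additionally allow an index-only scan, since the query needs nothing from files except id and filetype:

        CREATE INDEX idx_files_id_filetype ON files (id, filetype);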

    Read the article

  • Database table design

    - by e.b.white
    I designed the tables below for a system that looks like a package-delivery system. For example, after a user receives a package, the postman should record it in the system: the state (history table) is "delivered", the operator is this postman, and the current state (state table) is of course "delivered".

    History table:

        +---------------+--------------------------+
        | Field         | Desc                     |
        +---------------+--------------------------+
        | id            | PRIMARY KEY              |
        | package_id    | package_tracking_id      |
        | state         | package_state            |
        | operators     | operators                |
        | create_time   | create_time              |
        +---------------+--------------------------+

    State table:

        +---------------+--------------------------+
        | Field         | Desc                     |
        +---------------+--------------------------+
        | id            | PRIMARY KEY              |
        | package_id    | package_tracking_id      |
        | state         | latest_package_state     |
        +---------------+--------------------------+

    The above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types, like s1 and s2: for s1 the invoice does not need to be recorded, but for s2 it does, and maybe s1 needs some other information recorded (like the tel of the end user). On top of that, at delivery way stations there is additional information to record, and for different service types the information to record is different. My questions are: 1. For different service types, shall I declare different tables (option A) or just one big table which can record all information for all types (option B)? 2. If option A, since the basic information above is a MUST, how can I avoid declaring duplicate fields in the different tables?
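
    One hedged middle ground between the two options (table and column names here are invented for illustration): keep the shared MUST fields in the history table only, and push the per-service-type extras into a key/value extension table, so neither option A's duplicated basic fields nor option B's mostly-NULL wide rows appear:

        CREATE TABLE history_ext (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            history_id INT          NOT NULL,   -- FK to history.id
            field_name VARCHAR(64)  NOT NULL,   -- e.g. 'invoice', 'end_user_tel'
            field_val  VARCHAR(255) NULL,
            UNIQUE KEY uniq_hist_field (history_id, field_name)
        );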

    Read the article

  • Too many columns to index - use MySQL partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches, only a small subset of those rows needs to be searched - quite a small number, say 50,000 rows. So we have considered using MySQL partitioned tables, with a column that is basically isActive as what we divide the two partitions by. Most search queries would be run with isActive=1; they would then hit the small 50,000-row partition and be quick without other indexes. The only issue is that the set of rows where isActive=1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that - we will need to update isActive based on the use of the data in that row. As I understand it, that is no problem though; the data would just be moved from one partition to the other during the UPDATE query. We do have a PK on id for the row, though, and I am not sure if this is a problem; the manual seemed to suggest the partitioning had to be based on the primary key. This would be a huge problem for us, because the primary key id has no bearing on whether the row isActive.
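
    For reference, MySQL's actual rule is that every unique key (the primary key included) must contain all columns used in the partitioning expression. A hedged sketch of what that implies - items is a hypothetical table name, and the PK becomes composite, which is usually acceptable since id alone still identifies the row:

        ALTER TABLE items DROP PRIMARY KEY, ADD PRIMARY KEY (id, isActive);

        -- Partitioning clauses cannot be combined with other alterations,
        -- so this runs as a second statement. Rows then migrate between
        -- partitions automatically whenever isActive is updated.
        ALTER TABLE items PARTITION BY LIST (isActive) (
            PARTITION p_active   VALUES IN (1),
            PARTITION p_inactive VALUES IN (0)
        );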

    Read the article

  • Identity alternative for SQL Azure Federation: are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the debate about GUIDs or not; it's not my question - I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id which is "farm-ready". I've read this SO post: "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure from MSDN Magazine", but this solution sounds a bit complicated to me. I'm thinking about (automatically) creating one Azure queue for each SQL table, containing a pre-loaded list of consecutive integers. When I want an Id value, I just get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. About the choice between Windows Azure Queues and Windows Azure Service Bus Queues, I prefer Windows Azure Queues, due to the "high" latency of Service Bus Queues. I don't think the lack of an ordering guarantee in Azure Queues is a problem. What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.
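
    For comparison, a hedged sketch of the classic hi-lo alternative kept inside SQL Azure itself (IdSequences and all names here are hypothetical): one atomic UPDATE hands the caller a whole block of ids, so the queue infrastructure and its pre-loading job disappear:

        DECLARE @next BIGINT, @blockSize INT = 100;

        -- T-SQL's compound assignment sets @next to the post-update value
        -- atomically, so no two callers can receive overlapping blocks.
        UPDATE IdSequences
        SET @next = NextId = NextId + @blockSize
        WHERE TableName = 'Orders';

        -- This caller now owns ids (@next - @blockSize) through (@next - 1)
        -- and can hand them out in SaveChanges with no further round trips.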

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Orders.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB. So loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: here are the table definitions for the example I used in my question (with the foreign key on the Order side, matching the Customer-has-many-Orders relationship described above):

        create table Customer (
            Id int identity(1, 1) primary key
        )

        create table [Order] (
            Id int identity(1, 1) primary key,
            CustomerId int null references Customer (Id),
            ProductName ntext null
        )

    Read the article

  • Beginner Ques: How to delete records permanently in case of linked tables?

    - by Serenity
    Let's say I have these 2 tables, QuesType and Ques:

        QuesType
        QuesTypeID | QuesType     | Active
        -----------------------------------
        101        | QuesType1    | True
        102        | QuesType2    | True
        103        | XXInActiveXX | False

        Ques
        QuesID | Ques  | Answer | QuesTypeID | Active
        ----------------------------------------------
        1      | Ques1 | Ans1   | 101        | True
        2      | Ques2 | Ans2   | 102        | True
        3      | Ques3 | Ans3   | 101        | True

    In the QuesType table, QuesTypeID is a primary key. In the Ques table, QuesID is a primary key and QuesTypeID is a foreign key that references QuesTypeID in the QuesType table. Now, I am unable to delete records from the QuesType table; I can only make a QuesType inactive by setting Active=False. I am unable to delete QuesTypes permanently because of the foreign key relation with the Ques table. So I just set the column Active=False, and those QuesTypes then don't show on my grid when it is bound. What I want is to be able to delete any QuesType permanently. Now, it can only be deleted if it's not being used anywhere in the Ques table, right? So to delete any QuesType permanently, I thought I could do this: in the grid that displays QuesTypes, I have a check box for Active and a button for delete. When a user makes some QuesType inactive, the OnCheckChanged() event will run code to delete all the questions in the Ques table that use that QuesTypeID. Then, on the QuesType grid, that QuesType would show as deactivated, and only then can a user delete it permanently. Am I thinking correctly? Currently, in my DeleteQuesType stored procedure, what I am doing is setting Active=False and setting QuesType to some string like XXInactiveXX. Is there any other way?
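
    A hedged sketch of the hard-delete step itself, using QuesTypeID 103 from the sample data: the dependent questions must go first (or the foreign key can be declared with ON DELETE CASCADE, which collapses this into the second statement alone):

        DELETE FROM Ques     WHERE QuesTypeID = 103;
        DELETE FROM QuesType WHERE QuesTypeID = 103;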

    Read the article

  • SQL: find entries in a 1:n relation that don't comply with a condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem that is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL, MySQL):

        CREATE TABLE a (
            a_id INT PRIMARY KEY
        );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
            b_id INT PRIMARY KEY,
            a_id INT,
            name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
            (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
            (4, 2, 'foo'), (5, 2, 'bar'),
            (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something like:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id,
               CONCAT('|', GROUP_CONCAT(name ORDER BY name ASC SEPARATOR '|'), '|') as names
        FROM a
        LEFT JOIN b USING (a_id)
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it's any fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
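
    A hedged, portable alternative (plain SQL, no GROUP_CONCAT, so it should run as-is on SQLite, PostgreSQL, and MySQL): pair every a row with every required name, then anti-join against b to keep only the missing combinations - one output row per missing name:

        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') req
        LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
        WHERE b.b_id IS NULL;

    With indexes on b(a_id, name), each probe of the anti-join is an index lookup, which tends to scale better than grouping and concatenating all of b.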

    Read the article

  • object references an unsaved transient instance

    - by developer
    Hi, I have 2 tables, user and userprofile, both with almost identical fields. The user table references the userprofile table by the primary key ID. My requirement is that, on the click of a button, I need to dump the user table record into the userprofile table. Now, for a particular user record, if there is a corresponding userprofile entry, I am able to dump the data successfully; but if there is no record in the userprofile table, then I need to create a new record by dumping all the data. My problem is that I am able to update the data when the record is present in the userprofile table, but in the case where I have to create a new record, I get the error below:

        object references an unsaved transient instance - save the transient instance before flushing

    The mapping:

        <class name="User">
            <id name="ID" type="Int32">
                <generator class="native" />
            </id>
            <many-to-one name="Pid" class="UserProfile" />
        </class>

    UserProfile is another table, and Pid above references the primary key ID of the UserProfile table.

    Read the article
