Search Results

Search found 3888 results on 156 pages for 'star schema'.

  • Login fails when recreating database with Code First

    - by Mun
    I'm using ASP.NET Entity Framework's Code First to create my database from the model, and the login seems to fail when the database needs to be recreated after the model changes. In Global.asax, I've got the following:

        protected void Application_Start()
        {
            Database.SetInitializer(new DropCreateDatabaseIfModelChanges<EntriesContext>());
            // ...
        }

    In my controller, I've got the following:

        public ActionResult Index()
        {
            // This is just to force the database to be created
            var context = new EntriesContext();
            var all = (from e in context.Entries select e).ToList();
        }

    When the database doesn't exist, it is created with no problems. However, when I make a change to the model, rebuild and refresh, I get the following error:

        Login failed for user 'sa'.

    My connection string looks like this:

        <add name="EntriesContext"
             connectionString="Server=(LOCAL);Database=MyDB;User Id=sa;Password=password"
             providerName="System.Data.SqlClient" />

    The login definitely works, as I can connect to the server and the database from Management Studio using these credentials. If I delete the database manually, everything works correctly and the database is recreated as expected, with the schema reflecting the changes made to the model. It seems like either the password or access to the database is being lost. Is there something else I need to do to get this working?
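
    One thing worth trying (an assumption based on how SqlClient behaves, not a confirmed diagnosis of this exact setup): by default, SqlClient strips security-sensitive parts of the connection string after the first open, which can leave a later drop-and-recreate without credentials. Adding Persist Security Info=True keeps the password available:

        <add name="EntriesContext"
             connectionString="Server=(LOCAL);Database=MyDB;User Id=sa;Password=password;Persist Security Info=True"
             providerName="System.Data.SqlClient" />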

  • Checking if user owns file before deleting it

    - by Martin Hoe
    I'm building an API for my site that allows users to delete the files they upload. Obviously, I want to check that the user owns a file before they delete it through the API. I have a files table and a users table; here's the schema:

        files: f_id, s_id, u_id, name, size, uploaded
        users: u_id, username, password, email, activated, activation_code

    u_id is a foreign key: the u_id field in the files table points to the u_id in the users table. Given the user's username, I want to find the user's u_id, and then check whether they own the file through the file ID (f_id). I wrote this SQL:

        $sql = 'SELECT u.username
                FROM `users` u
                JOIN `files` f ON u.u_id = f.u_id
                WHERE f_id = ? AND u.u_id = ?
                LIMIT 1';

    I'm assuming that would work if I were given the user's u_id in the API request, but alas, I'm given only their username. How can I modify that SQL to find their user ID and use it? Thanks.

    Edit: Alright, I've got this query, but it's always returning an empty result set even though both the file ID and username exist:

        SELECT u.username
        FROM `users` u
        JOIN `files` f ON u.u_id = f.u_id
        WHERE f.f_id = ? AND u.username = ?
        LIMIT 1
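
    The edited query looks structurally sound, so one way to narrow down the empty result (a sketch, assuming PHP with mysqli prepared statements, which the $sql variable suggests; $db, $fileId and $username are hypothetical names): check the bound values themselves, since an exact-match username comparison will miss stray whitespace or a wrong parameter type.

        $username = trim($username);        // stray whitespace makes the exact match fail
        $stmt = $db->prepare($sql);
        $stmt->bind_param('is', $fileId, $username);  // "i" = integer f_id, "s" = string username
        $stmt->execute();
        $stmt->store_result();
        echo $stmt->num_rows;               // 0 here means the join/where really matched nothing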

  • Ruby on Rails: How do I access variables of a model inside itself, as in this example?

    - by banditKing
    I have a model like so:

        # == Schema Information
        #
        # Table name: s3_files
        #
        #  id                        :integer   not null, primary key
        #  owner                     :string(255)
        #  notes                     :text
        #  created_at                :datetime  not null
        #  updated_at                :datetime  not null
        #  last_accessed_by_user     :string(255)
        #  last_accessed_time_stamp  :datetime
        #  upload_file_name          :string(255)
        #  upload_content_type       :string(255)
        #  upload_file_size          :integer
        #  upload_updated_at         :datetime
        #

        class S3File < ActiveRecord::Base
          # Paperclip methods
          attr_accessible :upload
          attr_accessor :owner

          Paperclip.interpolates :prefix do |attachment, style|
            # I WOULD LIKE TO ACCESS THE VARIABLE owner HERE - HOW DO I DO THAT?
          end

          has_attached_file(
            :upload,
            :path => ":prefix/:basename.:extension",
            :storage => :s3,
            :s3_credentials => { :access_key_id => "ZXXX", :secret_access_key => "XXX" },
            :bucket => "XXX"
          )

          # Used to connect to users through the join table
          has_many :user_resource_relationships
          has_many :users, :through => :user_resource_relationships
        end

    I'm setting this variable in the controller like so:

        # POST /s3_files
        # POST /s3_files.json
        def create
          @s3_file = S3File.new(params[:s3_file])
          @s3_file.owner = current_user.email

          respond_to do |format|
            if @s3_file.save
              format.html { redirect_to @s3_file, notice: 'S3 file was successfully created.' }
              format.json { render json: @s3_file, status: :created, location: @s3_file }
            else
              format.html { render action: "new" }
              format.json { render json: @s3_file.errors, status: :unprocessable_entity }
            end
          end
        end

    Thanks, any help would be appreciated.
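
    A sketch of one way to get at the model from inside the interpolation, using Paperclip's documented block arguments, where attachment.instance returns the model the attachment belongs to:

        Paperclip.interpolates :prefix do |attachment, style|
          # attachment.instance is the S3File record, so its attributes are reachable here
          attachment.instance.owner
        end

    Separately, attr_accessor :owner may be worth removing: since owner is a real database column, attr_accessor shadows the ActiveRecord-generated accessor, and reading owner back after a reload could return nil.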

  • Allowing XForms controls for optional XML elements

    - by Cam
    Hi,

    In designing an XForms interface to an XML database (using eXist and XSLTForms), I'd like to include an input control for an optional element. The XML data records already exist, and while some contain the optional element, others don't. To update a record, I'm using the existing XML record as the model instance. The problem is that the form control is not displayed when the optional element is not present, which is logical, but presents a problem when a user wants to add data to the optional element. To be more explicit, here's an example data record, data.xml:

        <a>
          <b>content</b>
        </a>

    with RNC schema:

        start = element a {
          element b { text },
          element notes { text }?
        }

    XForms model:

        <xf:model>
          <xf:instance xmlns="" src="data.xml"/>
          <xf:submission id="save" method="post" action="update.xq"/>
        </xf:model>

    And control:

        <xf:input ref="/a/notes">
          <xf:label>Notes (optional): </xf:label>
        </xf:input>

    The problem is that the 'Notes' input control is simply not displayed. An obvious solution is to add a trigger button to allow the user to insert the element if needed, but it is preferable to just have the input control appear, and be empty. My question is: is there some subtle combination of lesser-known attributes/binds/multiple instances/XPath expressions that will cause the control to always be displayed? Thanks
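
    One direction to explore (a sketch only, under the assumption that XSLTForms supports the XForms 1.1 insert action with a template instance, and that the ev namespace is declared): insert an empty <notes/> into the instance as soon as the form loads, but only when it is missing, so the input always has a node to bind to and no trigger is needed.

        <xf:model>
          <xf:instance xmlns="" src="data.xml"/>
          <xf:instance id="notes-template" xmlns="">
            <notes/>
          </xf:instance>
          <xf:submission id="save" method="post" action="update.xq"/>
          <!-- Fires once when the form is ready; adds <notes/> only if absent -->
          <xf:action ev:event="xforms-ready">
            <xf:insert context="/a" origin="instance('notes-template')"
                       if="not(/a/notes)"/>
          </xf:action>
        </xf:model>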

  • calculating change (over a period) for a dated field

    - by morpheous
    I have two tables with the following schema:

        CREATE TABLE sales_data (
          sales_time date NOT NULL,
          product_id integer NOT NULL,
          sales_amt double NOT NULL
        );

        CREATE TABLE date_dimension (
          id integer NOT NULL,
          datestamp date NOT NULL,
          day_part integer NOT NULL,
          week_part integer NOT NULL,
          month_part integer NOT NULL,
          qtr_part integer NOT NULL,
          year_part integer NOT NULL
        );

    I want to write two types of queries that will allow me to calculate:

        1. period-on-period change (e.g. week-on-week change)
        2. change in period-on-period change (e.g. change in week-on-week change)

    I would prefer to write this in ANSI SQL, since I don't want to be tied to any particular db.

    [Edit] In light of some of the comments: if I have to be tied to a single database (in terms of SQL dialect), it will have to be PostgreSQL.

    The queries I want to write are of the form (pseudo-SQL, of course):

        Query Type 1 (Period on Period Change)
        =======================================
        a) select product_id,
                  ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as week_on_week_change
           from sales_data sd1, sales_data sd2, date_dimension dd
           where {SOME CRITERIA}

        b) select product_id,
                  ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as month_on_month_change
           from sales_data sd1, sales_data sd2, date_dimension dd
           where {SOME CRITERIA}

        Query Type 2 (Change in Period on Period Change)
        =================================================
        a) select product_id,
                  ((a2.week_on_week_change - a1.week_on_week_change)/a1.week_on_week_change)
                    as change_on_week_on_week_change
           from
             (select product_id,
                     ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as week_on_week_change
              from sales_data sd1, sales_data sd2, date_dimension dd
              where {SOME CRITERIA}) as a1,
             (select product_id,
                     ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as week_on_week_change
              from sales_data sd1, sales_data sd2, date_dimension dd
              where {SOME CRITERIA}) as a2
           WHERE {SOME OTHER CRITERIA}
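
    For reference, a sketch of the week-on-week case using a window function (available in PostgreSQL 8.4+ and part of the SQL standard, though not portable to every database; column names assumed from the schema above):

        SELECT product_id, year_part, week_part,
               (week_sales - LAG(week_sales) OVER w) / LAG(week_sales) OVER w
                 AS week_on_week_change
        FROM (
          SELECT sd.product_id, dd.year_part, dd.week_part,
                 SUM(sd.sales_amt) AS week_sales
          FROM sales_data sd
          JOIN date_dimension dd ON dd.datestamp = sd.sales_time
          GROUP BY sd.product_id, dd.year_part, dd.week_part
        ) AS weekly
        WINDOW w AS (PARTITION BY product_id ORDER BY year_part, week_part);

    The second query type then falls out naturally: wrap this in another subquery and apply LAG to week_on_week_change itself.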

  • Attribute value nil

    - by mridula
    Can someone tell me why this is happening? I have created a social networking website using Ruby on Rails. This is my first time programming with RoR. I have a model named "Friendship" which contains an attribute "blocked" to indicate whether the user has blocked another user. When I run the following in IRB:

        friendship = u.friendships.where(:friend_id => 22).first

    IRB gives me:

        Friendship Load (0.6ms)  SELECT `friendships`.* FROM `friendships` WHERE `friendships`.`user_id` = 17 AND `friendships`.`friend_id` = 22 LIMIT 1
        => #<Friendship id: 33, user_id: 17, friend_id: 22, created_at: "2012-04-07 10:29:49", updated_at: "2012-04-07 10:29:49", blocked: 1>

    As you can see, the "blocked" attribute has the value '1'. But when I run the following:

        1.9.2-p290 :030 > friendship.blocked
        => nil

    it says the value of blocked is 'nil' and not '1'. Why is this happening? This could be a very silly mistake, but I am new to RoR, so kindly help me! I initially didn't include the accessor method for 'blocked'. I tried that, and still it gives the same result. Following is the Friendship model:

        class Friendship < ActiveRecord::Base
          belongs_to :friend, :class_name => "User"
          validates_uniqueness_of :friend_id, :scope => :user_id
          attr_accessor :blocked
          attr_accessible :blocked
        end

    Here is the schema of the table:

        1.9.2-p290 :009 > friendship.class
        => Friendship(id: integer, user_id: integer, friend_id: integer, created_at: datetime, updated_at: datetime, blocked: integer)
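
    The likely culprit, stated here as a strong assumption rather than a verified diagnosis: attr_accessor :blocked defines plain Ruby reader/writer methods that shadow the database-backed accessor ActiveRecord generates from the blocked column, so friendship.blocked reads an instance variable that was never set. A minimal sketch of the fix:

        class Friendship < ActiveRecord::Base
          belongs_to :friend, :class_name => "User"
          validates_uniqueness_of :friend_id, :scope => :user_id
          # attr_accessor removed: ActiveRecord already generates #blocked / #blocked=
          # from the `blocked` column; attr_accessible only controls mass assignment
          attr_accessible :blocked
        end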

  • Copy subset of xml input using xslt

    - by mdfaraz
    I need an XSLT file to transform input XML to another document containing a subset of the nodes in the input XML. For example, if the input has 10 nodes, I need to create output with about 5 of them.

    Input:

        <Department diffgr:id="Department1" msdata:rowOrder="0">
          <Department>10</Department>
          <DepartmentDescription>BABY PRODUCTS</DepartmentDescription>
          <DepartmentSeq>7</DepartmentSeq>
          <InsertDateTime>2011-09-29T13:19:28.817-05:00</InsertDateTime>
        </Department>

    Output:

        <Department diffgr:id="Department1" msdata:rowOrder="0">
          <Department>10</Department>
          <DepartmentDescription>BABY PRODUCTS</DepartmentDescription>
        </Department>

    I found one way to suppress the nodes that we don't need:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes"/>
          <xsl:template match="node()|@*">
            <xsl:copy>
              <xsl:apply-templates select="node()|@*"/>
            </xsl:copy>
          </xsl:template>
          <xsl:template match="Department/DepartmentSeq"/>
          <xsl:template match="Department/InsertDateTime"/>
        </xsl:stylesheet>

    However, I need an XSLT that helps me select the nodes I want, not "copy all and filter out what I don't need", since I may have to change my XSLT whenever the input schema adds more nodes.
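
    A sketch of the whitelist approach (element names taken from the sample above; the diffgr/msdata namespace declarations are assumed to exist on an ancestor in the real input): match the wrapper element, copy its attributes, and pull in only the explicitly listed children. New elements in the input schema are then ignored automatically.

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes"/>
          <!-- The wrapper is the Department element that itself contains a Department child -->
          <xsl:template match="Department[Department]">
            <xsl:copy>
              <xsl:apply-templates select="@*"/>
              <xsl:copy-of select="Department | DepartmentDescription"/>
            </xsl:copy>
          </xsl:template>
          <xsl:template match="@*">
            <xsl:copy/>
          </xsl:template>
        </xsl:stylesheet>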

  • :include and table aliasing

    - by dondo
    I'm suffering from a variant of the problem described here: ActiveRecord assigns table aliases for association joins fairly unpredictably. The first association to a given table keeps the table name; further joins with associations to that table use aliases including the association names in the path... but it is common for app developers not to know about other joins at coding time.

    In my case I'm being bitten by a toxic mix of has_many and :include. Many tables in my schema have a state column, and the has_many wants to specify conditions on that column:

        has_many :foo, :conditions => {:state => 1}

    However, since the state column appears in many tables, I disambiguate by explicitly specifying the table name:

        has_many :foo, :conditions => "this_table.state = 1"

    This has worked fine until now, when for efficiency I want to add an :include to preload a fairly deep tree of data. This causes the table to be aliased inconsistently in different code paths. My reading of the tickets referenced above is that this problem is not and will not be fixed in Rails 2.x. However, I don't see any way to apply the suggested workaround (to specify the aliased table name explicitly in the query). I'm happy to specify the table alias explicitly in the has_many statement, but I don't see any way to do so. As such, the workaround doesn't appear applicable to this situation (nor, I presume, in many named_scope scenarios). Is there a viable workaround?

  • What am I doing wrong with my ItemsControl & databinding?

    - by Joel
    I'm reworking my simple hex editor to practice using what I've recently learned about data binding in WPF. I'm not sure what I'm doing wrong here. As I understand it, for each byte in the collection "backend" (which inherits from ObservableCollection), my ItemsControl should apply the DataTemplate under resources. This template is just a textbox with a binding to a value converter. So I'm expecting to see a row of textboxes, each containing a string representation of one byte. When I use this XAML, all I get is a single line of uneditable text, which as far as I can tell doesn't use a textbox. What am I doing wrong? I've pasted my XAML in below, with the irrelevant parts (Menu declaration, schema, etc.) removed.

        <Window ...>
          <Window.Resources>
            <local:Backend x:Key="backend" />
            <local:ByteConverter x:Key="byteConverter" />
            <DataTemplate DataType="byte">
              <TextBox Text="{Binding Converter={StaticResource byteConverter}}" />
            </DataTemplate>
          </Window.Resources>
          <StackPanel>
            <ItemsControl ItemsSource="{Binding Source={StaticResource backend}}">
              <ItemsControl.ItemsPanel>
                <ItemsPanelTemplate>
                  <WrapPanel />
                </ItemsPanelTemplate>
              </ItemsControl.ItemsPanel>
            </ItemsControl>
          </StackPanel>
        </Window>
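
    One suspect worth checking (an assumption based on how implicit DataTemplates resolve, not a confirmed diagnosis of this exact XAML): DataType="byte" is parsed as a plain string, not as the CLR type System.Byte, so the template never matches and the ItemsControl falls back to calling ToString() on each item, which would produce exactly the run of uneditable text described. A sketch of the fix:

        <Window ...
                xmlns:sys="clr-namespace:System;assembly=mscorlib">
          <Window.Resources>
            <local:Backend x:Key="backend" />
            <local:ByteConverter x:Key="byteConverter" />
            <!-- An implicit template only applies when DataType resolves to the
                 actual CLR type of the items, i.e. System.Byte -->
            <DataTemplate DataType="{x:Type sys:Byte}">
              <TextBox Text="{Binding Converter={StaticResource byteConverter}}" />
            </DataTemplate>
          </Window.Resources>
          ...
        </Window>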

  • MySQL Join Question

    - by rbaker86
    Hi, I'm struggling to write a particular MySQL join query. I have a table containing product data; each product can belong to multiple categories. This m:m relationship is satisfied using a link table. For this particular query I wish to retrieve all products belonging to a given category, but with each product record I also want to return the other categories that product belongs to. Ideally I would like to achieve this using an inner join on the categories table, rather than performing an additional query for each product record, which would be quite inefficient. My simplified schema is designed roughly as follows:

        products table:         product_id, name, title, description, is_active, date_added, publish_date, etc.
        categories table:       category_id, name, title, description, etc.
        product_category table: product_id, category_id

    I have written the following query, which allows me to retrieve all the products belonging to the specified category_id. However, I'm really struggling to work out how to retrieve the other categories a product belongs to.

        SELECT p.product_id, p.name, p.title, p.description
        FROM prod_products AS p
        LEFT JOIN prod_product_category AS pc ON pc.product_id = p.product_id
        WHERE pc.category_id = $category_id
          AND UNIX_TIMESTAMP(p.publish_date) < UNIX_TIMESTAMP()
          AND p.is_active = 1
        ORDER BY p.name ASC

    I'd be happy just retrieving the category IDs related to each returned product row, as I will have all category data stored in an object, and my application code can take care of the rest. Many thanks, Richard
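
    One sketch using a second join on the link table plus GROUP_CONCAT (MySQL-specific; table names assumed from the query above). The second join is a LEFT JOIN so products that belong only to the requested category still come back, with a NULL list of other categories:

        SELECT p.product_id, p.name, p.title, p.description,
               GROUP_CONCAT(pc2.category_id) AS other_category_ids
        FROM prod_products AS p
        JOIN prod_product_category AS pc       ON pc.product_id  = p.product_id
        LEFT JOIN prod_product_category AS pc2 ON pc2.product_id = p.product_id
                                              AND pc2.category_id <> pc.category_id
        WHERE pc.category_id = $category_id
          AND UNIX_TIMESTAMP(p.publish_date) < UNIX_TIMESTAMP()
          AND p.is_active = 1
        GROUP BY p.product_id, p.name, p.title, p.description
        ORDER BY p.name ASC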

  • What is the fastest way to insert 100 000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records. It's updated frequently. When we sync, we need to copy from one database to another. I have attached the second database to the main one and run:

        insert into table select * from sync.table

    This is extremely slow; it takes about 10 minutes, I think. I noticed that the journal file gets increased step by step. How can I speed this up?

    EDIT 1: I have indexes off, and I have the journal off. Using insert into table select * from sync.table, it still takes 10 minutes.

    EDIT 2: If I run a query like

        select id, invitem, invid, cost from inventory where itemtype = 1 order by invitem limit 50

    it takes 15-20 seconds. The table schema is:

        CREATE TABLE inventory (
          'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
          'serverid' INTEGER NOT NULL DEFAULT 0,
          'itemtype' INTEGER NOT NULL DEFAULT 0,
          'invitem' VARCHAR,
          'instock' FLOAT NOT NULL DEFAULT 0,
          'cost' FLOAT NOT NULL DEFAULT 0,
          'invid' VARCHAR,
          'categoryid' INTEGER DEFAULT 0,
          'pdacategoryid' INTEGER DEFAULT 0,
          'notes' VARCHAR,
          'threshold' INTEGER NOT NULL DEFAULT 0,
          'ordered' INTEGER NOT NULL DEFAULT 0,
          'supplier' VARCHAR,
          'markup' FLOAT NOT NULL DEFAULT 0,
          'taxfree' INTEGER NOT NULL DEFAULT 0,
          'dirty' INTEGER NOT NULL DEFAULT 1,
          'username' VARCHAR,
          'version' INTEGER NOT NULL DEFAULT 15
        )

    Indexes are created like this:

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    I am wondering: isn't insert into ... select * from the fastest built-in way to do a massive data copy?

    EDIT 3: SQLite is serverless, so please stop voting for a particular answer, because that is not the answer, I'm sure.
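
    A sketch of the usual bulk-copy settings worth trying (these are standard SQLite pragmas; whether they help on this particular device is an open question):

        PRAGMA synchronous = OFF;      -- skip fsync on every page write
        PRAGMA journal_mode = MEMORY;  -- keep the rollback journal out of flash storage
        BEGIN;                         -- one transaction, not one implicit one per row
        INSERT INTO inventory SELECT * FROM sync.inventory;
        COMMIT;
        PRAGMA synchronous = FULL;     -- restore durability afterwards

    Dropping the three indexes before the copy and recreating them afterwards is also usually cheaper than maintaining them row by row during the insert.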

  • EF code first error "The specified index already exists. [ IX_Id ]" for object tree

    - by PascalN
    Using EF Code First 4.3, I'm trying to model an object tree with a required-required relationship and a required-optional relationship. Here is a simple representation of those classes:

        public class Top
        {
            public int Id { get; set; }
            public virtual Middle Middle { get; set; }
        }

        public class Middle
        {
            public int Id { get; set; }
            public virtual Child Child { get; set; }
        }

        public class Child
        {
            public int Id { get; set; }
        }

    Here is the OnModelCreating code:

        modelBuilder.Entity<Top>().HasRequired(t => t.Middle).WithRequiredPrincipal().WillCascadeOnDelete();
        modelBuilder.Entity<Middle>().HasRequired(t => t.Child).WithOptional().WillCascadeOnDelete();

    This produces the error "The specified index already exists. [ IX_Id ]" on SQL CE. After checking the db schema, both model binder fluent API configuration lines create an index IX_Id on the Middles table. Does anyone know how to work around this problem? Is there a way to set the index name? Thank you! Pascal

  • Website badge system

    - by linkyndy
    I am currently working on a widget-based website, built entirely on user socialization. Since a reputation system pays off for attracting users, I decided to implement one. Now, I would like to hear some solutions on how this should be implemented the right way (take, for example, Foursquare's badge system). Basically, I need to be able to do the following:

        - have a badges table, where I can add, edit and delete badges;
        - be able to enable and disable a badge;
        - be able to introduce a new badge, but without writing new code - simply give some parameters to the "add badge" form regarding what should be tracked in order for a user to receive the badge;
        - be able to give badges in real time - meaning that whenever a user accomplishes whatever is needed to receive a badge, the system should immediately give the badge to that user;
        - also, the system should not be overloaded with "badge listeners" - I believe interrogating each user request against every badge's requirements is time-consuming.

    These being said, I would like to hear your opinions on the right way to implement a badge system (logic, database schema, methods, etc.). Thank you very much!
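
    For the schema side, a minimal sketch (table and column names are illustrative, not a recommendation from the thread): store each badge's rule as a countable event type plus a threshold, bump a per-user counter as events happen, and award when the counter crosses the threshold.

        CREATE TABLE badges (
          badge_id   INT PRIMARY KEY AUTO_INCREMENT,
          name       VARCHAR(100) NOT NULL,
          enabled    TINYINT(1) NOT NULL DEFAULT 1,
          event_type VARCHAR(50) NOT NULL,   -- e.g. 'comment_posted', 'widget_shared'
          threshold  INT NOT NULL            -- how many events earn the badge
        );

        CREATE TABLE user_event_counts (
          user_id    INT NOT NULL,
          event_type VARCHAR(50) NOT NULL,
          counter    INT NOT NULL DEFAULT 0,
          PRIMARY KEY (user_id, event_type)
        );

        CREATE TABLE user_badges (
          user_id    INT NOT NULL,
          badge_id   INT NOT NULL,
          awarded_at DATETIME NOT NULL,
          PRIMARY KEY (user_id, badge_id)
        );

    This keeps the "listener" cost low: when an event arrives, only the enabled badges whose event_type matches it need to be compared against the user's counter, rather than every badge on every request.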

  • update record only works when there is no auto_increment

    - by every_answer_gets_a_point
    I am accessing a MySQL table through an ODBC connection in Excel. Here is how I am updating the table:

        With rs
            .AddNew ' create a new record
            ' add values to each field in the record
            .Fields("datapath") = dpath
            .Fields("analysistime") = atime
            .Fields("reporttime") = rtime
            .Fields("lastcalib") = lcalib
            .Fields("analystname") = aname
            .Fields("reportname") = rname
            .Fields("batchstate") = "bstate"
            .Fields("instrument") = "NA"
            .Update ' stores the new record
        End With

    When the schema of the table is this, updating works:

        create table batchinfo(datapath text, analysistime text, reporttime text, lastcalib text, analystname text, reportname text, batchstate text, instrument text);

    But when I have auto_increment in there, it does not work:

        CREATE TABLE batchinfo (
          rowid int(11) NOT NULL AUTO_INCREMENT,
          datapath text,
          analysistime text,
          reporttime text,
          lastcalib text,
          analystname text,
          reportname text,
          batchstate text,
          instrument text,
          PRIMARY KEY (rowid)
        ) ENGINE=InnoDB AUTO_INCREMENT=67 DEFAULT CHARSET=latin1

    Has anyone experienced a problem like this, where updating does not work when there is an auto_increment field involved?

    Connection string:

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=pas;" & _
                       "Option=3"
        End Sub

    Also, here's the rs.Open:

        rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
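
    One way to sidestep recordset-based inserts entirely (a sketch, assuming the same oConn connection and the variables from the With block above; this avoids the keyset cursor having to re-fetch the row that the AUTO_INCREMENT key was just assigned to):

        Dim cmd As ADODB.Command
        Set cmd = New ADODB.Command
        cmd.ActiveConnection = oConn
        cmd.CommandText = _
            "INSERT INTO batchinfo (datapath, analysistime, reporttime, lastcalib," & _
            " analystname, reportname, batchstate, instrument)" & _
            " VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
        ' Parameters are bound positionally, matching the ? placeholders
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, dpath)
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, atime)
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, rtime)
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, lcalib)
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, aname)
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, rname)
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, "bstate")
        cmd.Parameters.Append cmd.CreateParameter(, adVarChar, adParamInput, 255, "NA")
        cmd.Execute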

  • Is this a good way to generically deserialize objects?

    - by Damien Wildfire
    I have a stream onto which serialized objects representing messages are dumped periodically. The objects are one of a very limited number of types, and other than the actual sequence of bytes that arrives, I have no way of knowing what type of message it is. I would like to simply try to deserialize it as an object of a particular type, and if an exception is thrown, try again with the next type. I have an interface that looks like this:

        public interface IMessageHandler<T> where T : class, IMessage
        {
            T Handle(string message);
        }

        // elsewhere:
        // (These are all xsd.exe-generated classes from an XML schema.)
        public class AppleMessage : IMessage { ... }
        public class BananaMessage : IMessage { ... }
        public class CoconutMessage : IMessage { ... }

    Then I wrote a GenericHandler<T> that looks like this:

        public class GenericHandler<T> : IMessageHandler<T> where T : class, IMessage
        {
            public class MessageHandler : IMessageHandler
            {
                T IMessageHandler.Handle(string message)
                {
                    T result = default(T);
                    try
                    {
                        // This utility method tries to deserialize the object with an
                        // XmlSerializer as if it were an object of type T.
                        result = Utils.SerializationHelper.Deserialize<T>(message);
                    }
                    catch (InvalidCastException e)
                    {
                        result = default(T);
                    }
                    return result;
                }
            }
        }

    Two questions:

        1. Using my GenericHandler<T> (or something similar to it), I'd now like to populate a collection with handlers that each handle a different type. Then I want to invoke each handler's Handle method on a particular message to see if it can be deserialized. If I get a null result, move on to the next handler; otherwise, the message has been deserialized. Can this be done?
        2. Is there a better way to deserialize data of unknown (but restricted) type?
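
    On the second question, a sketch of an alternative that avoids exception-driven control flow (assuming the messages are XML, as the xsd.exe classes suggest; the root element names here are assumptions and should match the schema's elements): peek at the root element name and dispatch to the right serializer.

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using System.Xml.Serialization;

        public static class MessageReader
        {
            // Root element name -> serializer for the matching generated type
            static readonly Dictionary<string, XmlSerializer> serializers =
                new Dictionary<string, XmlSerializer>
                {
                    { "AppleMessage",   new XmlSerializer(typeof(AppleMessage)) },
                    { "BananaMessage",  new XmlSerializer(typeof(BananaMessage)) },
                    { "CoconutMessage", new XmlSerializer(typeof(CoconutMessage)) },
                };

            public static IMessage Deserialize(string message)
            {
                using (var reader = XmlReader.Create(new StringReader(message)))
                {
                    reader.MoveToContent();            // skip the XML declaration etc.
                    XmlSerializer serializer;
                    if (!serializers.TryGetValue(reader.LocalName, out serializer))
                        return null;                   // unknown message type
                    return (IMessage)serializer.Deserialize(reader);
                }
            }
        }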

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance, and I would appreciate suggestions for optimizations. Below is a small summary of my application architecture.

    I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen.

    For example, assume an XML document is received by the service. It deserializes the XML, creates the book object and saves it to the DB, where a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the saved XML (yes, the XML is the constructor param).

    Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms vs. 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization.

    My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this:

        create table Parent (
          parent_id varchar2(20) not null primary key
        );

        create table Child (
          child_id number not null primary key,
          parent_id varchar2(20) not null,
          constraint fk_parent_id foreign key (parent_id)
            references Parent (parent_id)
        );

    This is a live database, and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed, and we really would like to change the value of parent_id for some records. For example, I have these records:

        Parent:

          parent_id
          ---------
          ABC123

        Child:

          child_id   parent_id
          --------   ---------
          1          ABC123
          2          ABC123

    And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then re-enabling the constraint. Is there a better, single-step way to update this content?
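
    One option worth considering (deferrable constraints are a standard Oracle feature, though swapping the constraint on a live system needs care): make the FK deferrable, so both updates can happen inside one transaction and the constraint is only checked at commit.

        ALTER TABLE Child DROP CONSTRAINT fk_parent_id;
        ALTER TABLE Child ADD CONSTRAINT fk_parent_id
          FOREIGN KEY (parent_id) REFERENCES Parent (parent_id)
          DEFERRABLE INITIALLY DEFERRED;

        -- The change then becomes a single transaction:
        UPDATE Parent SET parent_id = 'XYZ789' WHERE parent_id = 'ABC123';
        UPDATE Child  SET parent_id = 'XYZ789' WHERE parent_id = 'ABC123';
        COMMIT;  -- the FK is validated here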

  • Generate Spring bean definition from a Java object

    - by joeslice
    Let's suggest that I have a bean defined in Spring:

        <bean id="neatBean" class="com..." abstract="true">...</bean>

    Then we have many clients, each of which has a slightly different configuration for their 'neatBean'. The old way we would do it was to have a new file for each client (e.g., clientX_NeatFeature.xml) that contained a bunch of beans for this client (these are hand-edited and part of the code base):

        <bean id="clientXNeatBean" parent="neatBean">
          <property id="whatever" value="something"/>
        </bean>

    Now, I want to have a UI where we can edit and redefine a client's neatBean on the fly. My question is: given a neatBean, and a UI that can 'override' properties of this bean, what would be a straightforward way to serialize this to an XML file as we do [manually] today? For example, if the user set property whatever to be "17" for client Y, I'd want to generate:

        <bean id="clientYNeatBean" parent="neatBean">
          <property id="whatever" value="17"/>
        </bean>

    Note that moving this configuration to a different format (e.g., database, other-schema'd XML) is an option, but not really an answer to the question at hand.
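
    A sketch of one straightforward way to emit such a file (a hypothetical helper using only the JDK's DOM and Transformer APIs; the method name and Map-based input are illustrative). One detail worth noting: Spring's schema expects name= on property elements, not id= as in the hand-edited examples above.

        import java.io.File;
        import java.util.Map;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        // Writes one client's overrides as a Spring bean definition file
        public static void writeClientBean(String clientId, Map<String, String> overrides,
                                           File out) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element beans = doc.createElement("beans");
            doc.appendChild(beans);

            Element bean = doc.createElement("bean");
            bean.setAttribute("id", clientId + "NeatBean");
            bean.setAttribute("parent", "neatBean");
            beans.appendChild(bean);

            for (Map.Entry<String, String> e : overrides.entrySet()) {
                Element prop = doc.createElement("property");
                prop.setAttribute("name", e.getKey());
                prop.setAttribute("value", e.getValue());
                bean.appendChild(prop);
            }

            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(out));
        }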

  • Accessing elements of this XML

    - by csU
        <wsdl:definitions targetNamespace="http://www.webserviceX.NET/">
          <wsdl:types>
            <s:schema elementFormDefault="qualified" targetNamespace="http://www.webserviceX.NET/">
              <s:element name="ConversionRate">
                <s:complexType>
                  <s:sequence>
                    <s:element minOccurs="1" maxOccurs="1" name="FromCurrency" type="tns:Currency"/>
                    <s:element minOccurs="1" maxOccurs="1" name="ToCurrency" type="tns:Currency"/>
                  </s:sequence>
                </s:complexType>
              </s:element>
              <s:simpleType name="Currency">
                <s:restriction base="s:string">
                  <s:enumeration value="AFA"/>
                  <s:enumeration value="ALL"/>
                  <s:enumeration value="DZD"/>
                  <s:enumeration value="ARS"/>

    I am trying to get at all of the elements in the enumeration but can't seem to get it right. This is homework, so please no full solutions, just guidance if possible.

        $feed = simplexml_load_file('http://www.webservicex.net/CurrencyConvertor.asmx?WSDL');
        foreach ($feed->simpleType as $val) {
            $ns_s = $val->children('http://www.webserviceX.NET/');
            echo $ns_s->enumeration;
        }

    What am I doing wrong? Thanks
  • creating tables in ruby-on-rails 3 through migrations?

    - by fayer
    I'm trying to understand the process of creating tables in Ruby on Rails 3. I have read about migrations, so I am supposed to create tables by editing the files in:

        Database Migrations/migrate/20100611214419_create_posts
        Database Migrations/migrate/20100611214419_create_categories

    But they were generated by:

        rails generate model Post name:string description:text
        rails generate model Category name:string description:text

    Does this mean I have to use the "rails generate model" command every time I want to create a table? What if I create a migration file but want to add columns? Do I create another migration file for adding those, or do I edit the existing migration file directly? The guide told me to add a new one, but here is the part I don't understand: why would I add a new one? Because then the new state will depend on two migration files. In Symfony I just edit a schema.yml file directly; there are no migration files with versioning and so on. I'm new to RoR and want to get the picture of creating tables. Thanks
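
    For reference, a sketch of the add-a-column flow (names are illustrative): schema changes normally get their own migration rather than edits to an old one, because already-run migrations are tracked by version number and won't run again on databases that have applied them.

        rails generate migration add_author_to_posts author:string

        # which produces something like db/migrate/20100612000000_add_author_to_posts.rb:
        class AddAuthorToPosts < ActiveRecord::Migration
          def self.up
            add_column :posts, :author, :string
          end

          def self.down
            remove_column :posts, :author
          end
        end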

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets user records. The user records are those where the user is the src or a dst. The src is in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-----------------+------------------+------+-----+---------+----------------+
        | Field           | Type             | Null | Key | Default | Extra          |
        +-----------------+------------------+------+-----+---------+----------------+
        | id              | int(11)          | NO   | PRI | NULL    | auto_increment |
        | user_src_id     | int(11)          | NO   | MUL | NULL    |                |
        | private         | tinyint(1)       | NO   |     | NULL    |                |
        | content         | longtext         | NO   |     | NULL    |                |
        | date            | datetime         | NO   |     | NULL    |                |
        | last_update     | datetime         | NO   |     | NULL    |                |
        +-----------------+------------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And the following query to retrieve information for two users:

        mysql> explain SELECT *
               FROM usermuralentry AS a, usermuralentry_user AS b
               WHERE a.user_src_id IN ( 1, 2 )
                  OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) );

        +----+-------------+-------+------+------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+
        | id | select_type | table | type | possible_keys                                                                | key  | key_len | ref  | rows    | Extra                                          |
        +----+-------------+-------+------+------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+
        |  1 | SIMPLE      | b     | ALL  | usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7 | NULL | NULL    | NULL |  147188 |                                                |
        |  1 | SIMPLE      | a     | ALL  | PRIMARY                                                                      | NULL | NULL    | NULL | 1371289 | Range checked for each record (index map: 0x1) |
        +----+-------------+-------+------+------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+
        2 rows in set (0.00 sec)

    But it is taking A LOT of time... Some tips to optimize? Can the table schema be better in my application?
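
    A sketch of a common rewrite (assuming the existing indexes on user_src_id, usermuralentry_id and userinfo_id): MySQL generally can't use indexes across an OR that spans two tables, which is why EXPLAIN shows full scans of both. A UNION lets each branch use its own index, and also removes the accidental cross-join duplication the original comma-join produces:

        SELECT a.*
        FROM usermuralentry AS a
        WHERE a.user_src_id IN (1, 2)

        UNION

        SELECT a.*
        FROM usermuralentry AS a
        JOIN usermuralentry_user AS b ON b.usermuralentry_id = a.id
        WHERE b.userinfo_id IN (1, 2);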

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no lines are counted. Let's assume the following schema:

        create table views (
          date_event timestamp with time zone NOT NULL,
          event_id integer NOT NULL
        );

    Let's imagine the following content:

        2012-01-01 00:00:05    2
        2012-01-01 01:00:05    5
        2012-01-01 03:00:05    8
        2012-01-01 03:00:15   20

    I want to group by hour and count the number of lines. I wish I could retrieve the following:

        2012-01-01 00:00:00    1
        2012-01-01 01:00:00    1
        2012-01-01 02:00:00    0
        2012-01-01 03:00:00    2
        2012-01-01 04:00:00    0
        2012-01-01 05:00:00    0
        .
        .
        2012-01-07 23:00:00    0

    I mean that for each time-range slot, I count the number of lines in my table whose date corresponds; otherwise, I return a line with a count of zero. The following will definitely not work (it will yield only the slots where something was counted):

        SELECT extract(hour from date_event), count(*)
        FROM views
        WHERE date_event > '2012-01-01' AND date_event < '2012-01-07'
        GROUP BY extract(hour from date_event);

    Please note I might also need to group by minute, or by hour, or by day, or by month, or by year (multiple queries are possible, of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
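
    One standard PostgreSQL approach (a sketch; generate_series and date_trunc are built-ins in 9.x): build the full list of hourly slots, then LEFT JOIN the events onto it, so empty slots keep a zero. count(v.date_event) rather than count(*) is what makes the unmatched slots count as 0.

        SELECT slots.slot, count(v.date_event) AS n
        FROM generate_series(
               timestamptz '2012-01-01',
               timestamptz '2012-01-07',
               interval '1 hour'
             ) AS slots(slot)
        LEFT JOIN views v
               ON date_trunc('hour', v.date_event) = slots.slot
        GROUP BY slots.slot
        ORDER BY slots.slot;

    Swapping 'hour' for 'minute', 'day', 'month' or 'year' (and adjusting the series step) covers the other granularities.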

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi together. We ran into serious performance problems with our Oracle database, and we would like to try to migrate to a MySQL-based database (either MySQL directly or, more preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know whether all features of the new database match our needs. So, here is our situation:

        - The Oracle database consists of multiple tables with millions of rows each. During the day, there are literally thousands of statements, which we cannot stop for migration.
        - Every morning, new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel.
        - But, and here lies the challenge, for this to work we need an export from the Oracle database with a consistent state from one day. (We cannot export some tables on Monday and some others on Tuesday, etc.) This means that at least the export should finish in less than one day.

    Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables to CSV files might work, but I'm afraid it could take too long. So my question now is: what should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have any experience with such a large-scale migration? Thanks in advance, Cassy

    PS: Please don't suggest performance optimization techniques for Oracle; we already tried a lot :-)
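
    One technique that might help with the consistency requirement (a sketch only, assuming flashback query is available and undo retention is sized generously enough): export every table as of the same timestamp, so the CSV files form one snapshot even if the whole export runs for hours. The table name here is illustrative:

        SELECT *  -- spooled to CSV by the export script, one file per table
        FROM some_table
        AS OF TIMESTAMP TO_TIMESTAMP('2012-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');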

  • What is a good solution to log the deletion of a row in MySQL?

    - by hobodave
    Background

    I am currently logging the deletion of rows from my tickets table at the application level. When a user deletes a ticket, the following SQL is executed:

        INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message)
        VALUES (9, 4, 'WARN', NOW(),
                "TICKET: David A. deleted ticket #6 from Foo");

    Please do not offer schema suggestions for the alert_log table. Fields:

        user_id      - User id of the logged-in user performing the deletion
        priority     - Always 4
        priorityName - Always 'WARN'
        timestamp    - Always NOW()
        message      - Format: "[NAMESPACE]: [FullName] deleted ticket #[TicketId] from [CompanyName]"
                       NAMESPACE   - Always TICKET
                       FullName    - Full name of user identified by user_id above
                       TicketId    - Primary key ID of the ticket being deleted
                       CompanyName - Ticket has a Company via tickets.company_id

    Situation/Questions

    Obviously, this solution does not work if a ticket is deleted manually from the mysql command line client. However, now I need it to. The issues I'm having are as follows:

    Should I use a PROCEDURE, FUNCTION, or TRIGGER? Analysis:

        - TRIGGER - I don't think this will work because I can't pass parameters to it, and it would fire when my application deleted the row too.
        - PROCEDURE or FUNCTION - Not sure. Should I return the number of deleted rows? If so, that would require a FUNCTION, right?

    How should I account for the absence of a logged-in user? Possibilities:

        - Using either a PROC or FUNC, require the invoker to pass in a valid user_id
        - Require the user to pass in a string with the name
        - Use the CURRENT_USER - meh
        - Hard code the FullName to just be "Database Administrator"
        - Could the name be an optional parameter?

    I'm rather green when it comes to sprocs. Assuming I went with the PROC/FUNC approach, is it possible to outright restrict regular DELETE calls to this table, yet still allow users to call this PROC/FUNC to do the deletion for them? Ideally, the solution is usable by my application as well, so that my code is DRY.
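
    For the PROC route, a minimal sketch (the companies table name and its columns are assumptions, as is the NULL-name fallback; the restriction on direct DELETEs would come from privileges, i.e. granting EXECUTE on the procedure while revoking DELETE on tickets):

        DELIMITER //
        CREATE PROCEDURE delete_ticket(IN p_user_id INT, IN p_ticket_id INT,
                                       IN p_full_name VARCHAR(100))
        BEGIN
          -- Fall back to a fixed name when no application user is available
          IF p_full_name IS NULL THEN
            SET p_full_name = 'Database Administrator';
          END IF;

          INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message)
          SELECT p_user_id, 4, 'WARN', NOW(),
                 CONCAT('TICKET: ', p_full_name, ' deleted ticket #', p_ticket_id,
                        ' from ', c.name)
          FROM tickets t JOIN companies c ON c.id = t.company_id
          WHERE t.id = p_ticket_id;

          DELETE FROM tickets WHERE id = p_ticket_id;
          SELECT ROW_COUNT() AS deleted_rows;  -- a procedure can return this as a result set
        END //
        DELIMITER ;

    Note the result set answers the FUNCTION question: a procedure can report the deleted-row count without being a function.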

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:

        Metadata
        --------
        ResultID         (PK, int, not null)
        MappedFieldname  (char(50), not null)
        Fieldname        (PK, char(50), not null)
        Fieldvalue       (text, null)

    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each while data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%; in the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background goes, I'm a college student working part time at this company.

    My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this type of ad-hoc DBA task and similar ones?
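
    Whatever the indexing choice ends up being, the fragmentation itself can be addressed with routine rebuilds (a sketch in SQL Server syntax, which the schema notation suggests; the fill factor value is a guess meant to leave room for the non-sequential inserts):

        -- Rebuild all indexes on the table, leaving 30% free space per page so
        -- out-of-order inserts cause fewer page splits
        ALTER INDEX ALL ON Metadata REBUILD WITH (FILLFACTOR = 70);

        -- Check fragmentation afterwards
        SELECT avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(
               DB_ID(), OBJECT_ID('Metadata'), NULL, NULL, 'LIMITED');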
