Search Results

Search found 34305 results on 1373 pages for 'self referencing table'.

  • Rails. Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kinds of problems?

    To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I cannot use the seeds file, but rather have to create the user types in the migration (also considered bad practice). Here is some fictional code representing the situation:

        class CreateUserTypes < ActiveRecord::Migration
          def self.up
            create_table :user_types do |t|
              t.string :name, :null => false, :unique => true
            end
            # Create basic types (cannot put in seed, because of future migration dependency)
            UserType.create!(:name => "BASIC")
            UserType.create!(:name => "MODERATOR")
            UserType.create!(:name => "ADMINISTRATOR")
          end

          def self.down
            drop_table :user_types
          end
        end

        class AddTypeIdToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :type_id, :integer

            # Determine type via the admin flag
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }

            # Remove the admin flag
            remove_column :users, :admin

            # Add foreign key
            execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
          end

          def self.down
            # Re-add the admin flag
            add_column :users, :admin, :boolean, :default => false

            # Reset the admin flag (this is the problematic update code)
            admin = UserType.find_by_name("ADMINISTRATOR")
            execute "update users set admin=true where type_id=#{admin.id}"

            # Remove foreign key constraint
            execute "alter table users drop foreign key fk_user_type_id"

            # Drop the type_id column
            remove_column :users, :type_id
          end
        end

    As you can see there are two problematic parts. First, the row creation in the first migration, which is necessary if I want to be able to run all migrations in a row; then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?
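
    One commonly suggested way to keep this kind of data manipulation inside a migration without tying it to the application's model classes is to declare throwaway models in the migration itself, so replaying old migrations keeps working even after app/models changes. A rough sketch of that idea for the second migration (the nested UserType/User classes and the reset_column_information call are additions for illustration, not part of the original code):

        class AddTypeIdToUsers < ActiveRecord::Migration
          # Minimal, migration-local stand-ins for the real models
          class UserType < ActiveRecord::Base; end
          class User < ActiveRecord::Base; end

          def self.up
            add_column :users, :type_id, :integer
            User.reset_column_information   # pick up the new column before updating rows

            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }

            remove_column :users, :admin
          end
        end

    The seed-row problem can be handled the same way: keep the inserts in the first migration but run them through a migration-local UserType class, so they no longer depend on the application model either.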

    Read the article

  • Trying to insert a row using a stored procedure with a parameter bound to an expression

    - by Arvind Singh
    Environment: ASP.NET 3.5 (C# and VB), MS SQL Server 2005 Express.

    Tables:

        Table: tableUser
            ID (primary key)
            username

        Table: userSchedule
            ID (primary key)
            thecreator (foreign key = tableUser.ID)
            other fields

    I have created a procedure that accepts a username parameter, gets the user ID and inserts a row into userSchedule.

    Problem: Using the stored procedure with a DataList control to only fetch data from the database by passing the current username, the statement below works fine:

        protected void SqlDataSourceGetUserID_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.Parameters["@CurrentUserName"].Value = Context.User.Identity.Name;
        }

    But while inserting using a DetailsView it shows the error "Procedure or function OASNewSchedule has too many arguments specified." I did use:

        protected void SqlDataSourceCreateNewSchedule_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name;
        }

    DetailsView properties: auto-generated fields off, default mode insert. It shows all the fields, including ones that may not be expected by the procedure, like ID (primary key, not required by the procedure) and the CreatedBy (user id) field. So I tried removing those two fields from the DetailsView, and it shows the error "Cannot insert the value NULL into column 'CreatedBy', table 'D:\OAS\OAS\APP_DATA\ASPNETDB.MDF.dbo.OASTest'; column does not allow nulls. INSERT fails. The statement has been terminated." For some reason the parameter's value is not being set. Can anybody help me understand this?
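
    One thing worth checking: the @CreatedBy parameter is only populated in a Selecting handler, but a SqlDataSource raises Selecting for SELECT commands and Inserting for INSERTs. A minimal sketch of wiring the insert path instead, assuming the data source is the one named SqlDataSourceCreateNewSchedule and the procedure parameter really is @CreatedBy:

        protected void SqlDataSourceCreateNewSchedule_Inserting(object sender, SqlDataSourceCommandEventArgs e)
        {
            // Selecting never fires for the DetailsView's INSERT, so the
            // parameter has to be set here as well.
            e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name;
        }

    The handler would be attached in markup via OnInserting="SqlDataSourceCreateNewSchedule_Inserting" (or the equivalent in code-behind).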

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility tutorial. It was all going nicely, until... The tutorial uses a custom subclass of NSManagedObject, called 'Run'. Run.h looks like this:

        #import <Foundation/Foundation.h>
        #import <CoreData/CoreData.h>

        @interface Run : NSManagedObject {
            NSInteger processID;
        }

        @property (retain) NSDate *date;
        @property (retain) NSDate *primitiveDate;
        @property NSInteger processID;

        @end

    Now, in Run.m we have an accessor method for the processID variable:

        - (void)setProcessID:(int)newProcessID {
            [self willChangeValueForKey:@"processID"];
            processID = newProcessID;
            [self didChangeValueForKey:@"processID"];
        }

    In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSProcessInfo, in preparation for setting the processID of the run object.

        NSManagedObjectContext *moc = managedObjectContext();
        NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"];
        Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc];
        NSProcessInfo *processInfo = [NSProcessInfo processInfo];

    Next, we try to call the accessor method defined in Run.m to set the value of processID:

        [run setProcessID:[processInfo processIdentifier]];

    And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received!

    === MORE INFORMATION ===

    Following up on Jeremy's suggestions: the processID attribute in the model is set up like this:

        NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc] init];
        [idAttribute setName:@"processID"];
        [idAttribute setAttributeType:NSInteger32AttributeType];
        [idAttribute setOptional:NO];
        [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]];

    which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this:

        - (int)processID {
            [self willAccessValueForKey:@"processID"];
            NSInteger pid = processID;
            [self didAccessValueForKey:@"processID"];
            return pid;
        }

    and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!

    Read the article

  • Hotkey to toggle checkboxes does opposite

    - by Joel Harris
    In this jsFiddle, I have a table with checkboxes in the first column. The master checkbox in the table header functions as expected, toggling the state of all the checkboxes in the table when it is clicked. I have set up a hotkey for "shift-x" to toggle the master checkbox.

    The desired behavior when using the hotkey is:

        1. The master checkbox is toggled
        2. The child checkboxes each have their checked state set to match the master

    But what is actually happening is the opposite...

        1. The child checkboxes each have their checked state set to match the master
        2. The master checkbox is toggled

    Here is the relevant code:

        $(".master-select").click(function(){
            $(this).closest("table").find("tbody .row-select").prop('checked', this.checked);
        });

        function tickAllCheckboxes() {
            var master = $(".master-select").click();
        }

        //using jquery.hotkeys.js to assign hotkey
        $(document).bind('keydown', 'shift+x', tickAllCheckboxes);

    This results in the child checkboxes having the opposite checked state of the master checkbox. Why is that happening? A fix would be nice, but I'm really after an explanation so I can understand what is happening.
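
    A likely explanation: when the checkbox is clicked for real, the browser updates its checked state before the click handlers run; when the click is triggered programmatically with $(...).click(), jQuery runs the bound handlers first and only afterwards performs the default toggle. The handler therefore copies the master's old state to the children, and the master flips afterwards - exactly the reversed order described above. A small sketch of a hotkey handler that sidesteps the ordering by computing the target state itself (selectors taken from the question):

        function tickAllCheckboxes() {
            var master = $(".master-select");
            var newState = !master.prop('checked');              // state the master is about to take
            master.prop('checked', newState);                     // toggle the master explicitly
            master.closest("table").find("tbody .row-select")
                  .prop('checked', newState);                     // then propagate the same state
        }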

    Read the article

  • MVC : Checkboxes generated using JavaScript not appearing in FormCollection on postback

    - by Andy Evans
    I took over another project (written by one contractor, modified by another, and now it's not working) written using MVC/C#, where a view that has a table (see below) is dynamically populated using JSON/JavaScript - the first column of which is a checkbox.

    View (Spark view engine):

        <table id='component_list' name='component_list' cellpadding='0' border='0' cellspacing='0'>
            <thead>
                <tr>
                    <th>&nbsp;</th>
                    <th>Component</th>
                    <th>Component Type</th>
                    <th>Evenflo Part #</th>
                    <th>Supplier Part #</th>
                    <th>Supplier</th>
                    <th>Requirement</th>
                    <th>Location</th>
                    <th>Region</th>
                </tr>
            </thead>
            <tbody>
            </tbody>
        </table>

    When the page is rendered, I look at the source for the page and do not see the table data (I wouldn't expect to see this). However, when the form is posted back to the controller, the FormCollection is empty. Supposedly this had been working before the last contractor got their hands on it - which is another post altogether. My goal right now is getting the checkboxes into the FormCollection. Any suggestions would be greatly appreciated. Thanks,
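
    One thing to verify when debugging this: a browser only posts form controls that have a name attribute, are inside the <form> that is submitted, and (for checkboxes) are actually checked. If the JavaScript that builds the rows creates the checkboxes without names, nothing will ever reach the FormCollection. A rough sketch of row generation that would post, assuming jQuery and a components array from the JSON (the variable and field names here are invented):

        // Each generated checkbox needs a name and a value to appear in the POST body;
        // only the checked ones are submitted.
        $.each(components, function (i, c) {
            var row = $("<tr/>");
            row.append($("<td/>").append(
                $("<input/>", { type: "checkbox", name: "selectedComponents", value: c.Id })
            ));
            // ...append the remaining columns...
            $("#component_list tbody").append(row);
        });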

    Read the article

  • Android ExpandableListView From Database

    - by SterAllures
    I want to create a two-level ExpandableListView from a local database. At the group level I want to show the names from the Category table:

        category_id | name
        ---------------------------
        1           | NameOfCategory

    At the children level I want to show the names from the List table:

        list_id | name     | foreign_category_id
        ----------------------------------------
        1       | Listname | 1

    I've got a method in DatabaseDAO to get all the values from the table:

        public List<Category> getAllCategories() {
            database = dbHelper.getReadableDatabase();
            List<Category> categories = new ArrayList<Category>();
            Cursor cursor = database.query(SQLiteHelper.TABLE_CATEGORY, null, null, null, null, null, null);
            cursor.moveToFirst();
            while (!cursor.isAfterLast()) {
                Category category = cursorToCategory(cursor);
                categories.add(category);
                cursor.moveToNext();
            }
            cursor.close();
            return categories;
        }

    Now adding the names to the group level is easy; I do it the following way:

        ArrayList<String> groups;
        for (Category c : databaseDao.getAllCategories()) {
            groups.add(c.getName());
        }

    Now I want to add the children to the ExpandableListView in the following array: ArrayList<ArrayList<ArrayList<String>>> children;. How do I get the children under the correct group name? I think it has to do something with groups.indexOf(), but in the List table I only have a foreign category_id and not the actual name.
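
    One way to line the children up with the groups is to remember the category IDs in the same order as the group names, create an empty child list per group, and then place each list row into the slot whose category ID matches its foreign_category_id - no name lookup needed. A sketch of that idea (it assumes a Category.getId(), a DAO method getAllListItems() and a ListItem class with getName()/getCategoryId(), none of which appear in the question):

        ArrayList<String> groups = new ArrayList<String>();
        ArrayList<Integer> categoryIds = new ArrayList<Integer>();
        ArrayList<ArrayList<String>> children = new ArrayList<ArrayList<String>>();

        for (Category c : databaseDao.getAllCategories()) {
            groups.add(c.getName());
            categoryIds.add(c.getId());
            children.add(new ArrayList<String>());     // empty child list for this group
        }

        for (ListItem item : databaseDao.getAllListItems()) {
            // match on the foreign key, not on the name
            int groupIndex = categoryIds.indexOf(item.getCategoryId());
            if (groupIndex >= 0) {
                children.get(groupIndex).add(item.getName());
            }
        }

    The question's three-level ArrayList<ArrayList<ArrayList<String>>> works the same way; each child entry just becomes a small list of strings instead of a single string.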

    Read the article

  • Updating a local SQLite DB that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was: Syncing objects between two disparate systems, best approach. Anyway, to begin: because there is no RSS feed available, I'm screen scraping a webpage. It does a fetch, then goes through the page to scrape out all of the information that I'm interested in and dumps it into a SQLite database, so that I can query the information at my leisure without repeatedly fetching from the website.

    However, I'm also storing various metadata on the data itself in the SQLite DB, such as: have I looked at the data, is the data new/old, and a bookmark to a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading the said data).

    So right now my problem is trying to figure out how to update the local SQLite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

        1. Download the page itself
        2. Create a temporary table for the parsed data to go into
        3. Do a comparison between the official and the temporary table and copy updates and/or new information to the official table

    This process seems kind of complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering if there isn't a better approach, or if anyone has any suggestions on how to architect/structure such a system?

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table has transactions, and the other two relate to transaction categories. Basically it's financial data, so each transaction has a category (i.e. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this:

        Transactions Table:
        ________________________________
        | ID | Type | Amount | Category |
        ---------------------------------

    I also have two more tables relating a category to a category's parent. Basically, every Category entry in the Transactions table belongs to a parent category (i.e. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables:

        Category Children:
        ____________________________________________
        | ID | Parent Category ID | Child Category |
        --------------------------------------------

        Category Parent:
        ________________________
        | ID | Parent Category |
        ------------------------

    What I'm trying to do is query the database and have it return total spending by parent category. To count as "spending", the Type of the transaction must be "Debit". I tried the following statement:

        SELECT category_parents.parent_category, SUM(amount) AS totals
        FROM (transactions
        INNER JOIN category_children
                ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
                ON category_children.parent_category_id = category_parents._id
        WHERE trans_type = 'Debit'
        GROUP BY parent_category
        ORDER BY totals DESC

    but it gives me the following exception:

        12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException: no such column: category_children.parent_category_id: , while compiling: SELECT category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN category_children ON transactions.category='category_children.child_category') INNER JOIN category_parents ON category_children.parent_category_id=category_parents._id where trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (EXTRA CREDIT: I also need another statement to get spending by child category, given the parent category.)
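
    The exception is about category_children.parent_category_id, but another thing that stands out in the query is that the first join condition compares the category column to the string literal 'category_children.child_category' rather than to the column itself. A hedged sketch of the same query with the quotes removed and short aliases (the column names still need to match the real CREATE TABLE statements, which the error suggests may differ):

        SELECT cp.parent_category, SUM(t.amount) AS totals
        FROM transactions t
        INNER JOIN category_children cc
                ON t.category = cc.child_category          -- column-to-column, not a quoted literal
        INNER JOIN category_parents cp
                ON cc.parent_category_id = cp._id
        WHERE t.trans_type = 'Debit'
        GROUP BY cp.parent_category
        ORDER BY totals DESC;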

    Read the article

  • Hibernate bi-directional many-to-many mapping advice

    - by Rob
    Hi all, I wondered if anyone might be able to help me out. I am trying to work out what to google for (or any other ideas!!). Basically I have a bidirectional many-to-many mapping between a user entity and a club entity (via a join table called userClubs). I now want to include a column in userClubs that represents the role, so that when I call user.getClubs() I can also work out what level of access they have. Is there a clever way to do this using Hibernate, or do I need to rethink the database structure? Thank you for any help (or just for reading this far!!)

    The user.hbm.xml looks a bit like:

        <set name="clubs" table="userClubs" cascade="save-update">
            <key column="user_ID"/>
            <many-to-many column="activity_ID" class="com.ActivityGB.client.domain.Activity"/>
        </set>

    The activity.hbm.xml part:

        <set name="members" inverse="true" table="userClubs" cascade="save-update">
            <key column="activity_ID"/>
            <many-to-many column="user_ID" class="com.ActivityGB.client.domain.User"/>
        </set>

    The current userClubs table contains the fields:

        id | user_ID | activity_ID

    I would like to include in there:

        id | user_ID | activity_ID | role

    and be able to access the role on both sides...
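
    The usual way to hang an extra column such as role off a many-to-many in Hibernate is to promote the join table to an entity of its own and map a one-to-many to it from each side. A rough sketch of what the mapping file for that join entity could look like (the UserClub class, its property names and this file are invented for illustration, not part of the original code):

        <!-- UserClub.hbm.xml : one row per membership, carrying the role -->
        <class name="com.ActivityGB.client.domain.UserClub" table="userClubs">
            <id name="id" column="id">
                <generator class="native"/>
            </id>
            <many-to-one name="user" column="user_ID"
                         class="com.ActivityGB.client.domain.User" not-null="true"/>
            <many-to-one name="activity" column="activity_ID"
                         class="com.ActivityGB.client.domain.Activity" not-null="true"/>
            <property name="role" column="role"/>
        </class>

    User and Activity would then each map a <one-to-many> set of UserClub instead of the <many-to-many>, and user.getClubs() can stay as a convenience method that reads the clubs (and roles) through those membership objects.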

    Read the article

  • array or list into Oracle using cfprocparam

    - by Travis
    I have a list of values I want to insert into a table via a stored procedure. I figured I would pass an array to Oracle and loop through the array, but I don't see how to pass an array into Oracle. I'd pass a list, but I don't see how to work with the list to turn it into an array using PL/SQL (I'm fairly new to PL/SQL). Am I approaching this the wrong way? Using Oracle 9i and CF8. TIA!

    EDIT: Perhaps I'm thinking about this the wrong way? I'm sure I'm not doing anything new here... I figured I'd convert the list to an associative array and then loop the array, because Oracle doesn't seem to work well with lists (in my limited observation). I'm trying to add a product, then add records for the management team.

        -- product table
        productName = 'foo'
        productDescription = 'bar'
        ...
        ... etc

    The managementteam table just has the id of the product and the id of the users selected from a drop-down. The user IDs are passed in via a list like "1,3,6,20". How should I go about adding the records to the management team table?
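
    With Oracle 9i and a comma-delimited string coming from ColdFusion, one workable pattern is to pass the list as a single VARCHAR2 parameter (via cfprocparam with CF_SQL_VARCHAR) and split it inside the procedure with INSTR/SUBSTR. A rough sketch under those assumptions - the procedure, table and column names here are made up:

        CREATE OR REPLACE PROCEDURE add_mgmt_team (
            p_product_id IN NUMBER,
            p_user_list  IN VARCHAR2          -- e.g. '1,3,6,20'
        ) AS
            v_list VARCHAR2(4000) := p_user_list || ',';  -- trailing comma simplifies the loop
            v_pos  PLS_INTEGER;
            v_item VARCHAR2(100);
        BEGIN
            WHILE v_list IS NOT NULL LOOP
                v_pos  := INSTR(v_list, ',');
                v_item := SUBSTR(v_list, 1, v_pos - 1);
                v_list := SUBSTR(v_list, v_pos + 1);
                IF v_item IS NOT NULL THEN
                    INSERT INTO managementteam (product_id, user_id)
                    VALUES (p_product_id, TO_NUMBER(v_item));
                END IF;
            END LOOP;
        END;

    The product insert can run first in the same procedure (or a wrapping one) so the new product id is available for the loop.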

    Read the article

  • MySQL and trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say for some reason it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion].

    Table definition:

        CREATE TABLE table1(x int NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        IF (condition) THEN
            NEW.x = NULL;
        END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        IF (condition) THEN
            ROLLBACK;
        END IF;
        END;

    But, this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1 ROLLBACK;

    You are guaranteed that:

        - Your DB will always be MySQL
        - Table type will always be InnoDB
        - That NOT NULL column will always stay the way it is

    Question: Do you see anything objectionable in the 1st method?
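
    For context, an explicit ROLLBACK is not permitted inside a trigger, which is why only the first variant can work as written. If the server can ever be MySQL 5.5 or newer, SIGNAL gives a cleaner way to abort the insert than forcing the NOT NULL column to NULL; a sketch (the condition is a placeholder):

        DELIMITER //
        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
          IF (NEW.x < 0) THEN   -- placeholder condition
            SIGNAL SQLSTATE '45000'
              SET MESSAGE_TEXT = 'insert rejected by trigger t1';
          END IF;
        END//
        DELIMITER ;

    On 5.0/5.1 the usual workarounds are the NEW.x = NULL assignment shown above, or calling a nonexistent procedure so the statement errors out.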

    Read the article

  • regex to escape non-html tags' angle brackets

    - by suad
    Hi, I have HTML-based text (with HTML tags). I want to find words that occur within angle brackets and replace the brackets with &lt; and &gt;, including when angle brackets are used as math symbols, e.g.:

        String text = "Hello, <b> Whatever <br/> <table> <tr> <td width="300px"> 1<2 This is a <test> </td> </tr> </table>";

    I want this to become:

        Hello, <b> Whatever <br/> <table> <tr> <td width="300px"> 1&lt;2 This is a &lt;test&gt; </td> </tr> </table>

    THANKS in advance

    Read the article

  • Please help optimize a long-running query (left outer join, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is with the second subquery - it is not using the index I have created, and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily; however, an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table create statements below. Can anyone help me optimize this?

    By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (with the MAX function there which is not used in the result set), is making sure that only one record from the right part of the join is included in the result set.

    My table create statements are:

        CREATE TABLE `tdata` (
          `BN` varchar(15) DEFAULT NULL,
          `4000` varchar(3) DEFAULT NULL,
          `5800` varchar(3) DEFAULT NULL,
          ....
          KEY `BN` (`BN`),
          KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
          `BN` varchar(15) DEFAULT NULL,
          `FPE` datetime DEFAULT NULL,
          `Activity Description` text,
          ....
          KEY `BN` (`BN`),
          KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.), and each property type shares a majority of the attributes, but there are a few that are unique to that property type.

    My question: the shared attributes are in excess of 250 fields, and this seems like too many fields to have in a single table. My thought is that I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient.

    I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 with Entity Framework 5, and C#; data is passed to an ASP.NET web application via a WCF service. Thanks in advance.

    Read the article

  • Core Animation performance on iPhone

    - by nico
    I'm trying to do some animations using Core Animation on the iPhone. I'm using CABasicAnimation on CALayer. It's a straightforward animation from a random place at the top of the screen to the bottom of the screen at a random speed, and I have 30 elements doing the same animation continuously until another action happens. But the performance on the iPhone 3G is very sluggish when the animations start. The image is only 8k. Is this the right approach? How should I change it so it performs better?

        // image cached somewhere else.
        CGImageRef imageRef = [[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:name ofType:@"png"]] CGImage];

        - (void)animate:(NSTimer *)timer {
            int startX = round(random() % 320);
            float speed = 1 / round(random() % 100 + 2);

            CALayer *layer = [CALayer layer];
            layer.name = @"layer";
            layer.contents = imageRef; // cached image
            layer.frame = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));

            int width = layer.frame.size.width;
            int height = layer.frame.size.height;
            layer.frame = CGRectMake(startX, self.view.frame.origin.y, width, height);
            [effectLayer addSublayer:layer];

            CGPoint start = CGPointMake(startX, 0);
            CGPoint end = CGPointMake(startX, self.view.frame.size.height);
            float repeatCount = 1e100;

            CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
            animation.delegate = self;
            animation.fromValue = [NSValue valueWithCGPoint:start];
            animation.toValue = [NSValue valueWithCGPoint:end];
            animation.duration = speed;
            animation.repeatCount = repeatCount;
            animation.autoreverses = NO;
            animation.removedOnCompletion = YES;
            animation.fillMode = kCAFillModeForwards;
            [layer addAnimation:animation forKey:@"position"];
        }

    The animations are fired off using an NSTimer:

        animationTimer = [NSTimer timerWithTimeInterval:0.2 target:self selector:@selector(animate:) userInfo:nil repeats:YES];
        [[NSRunLoop currentRunLoop] addTimer:animationTimer forMode:NSDefaultRunLoopMode];

    Read the article

  • Adding space BETWEEN cells in TableCell()

    - by user279521
    My code is below. I want to add space between the cells in the Table of TableCell()s - something equivalent to:

        <td></td>&nbsp;&nbsp;<td></td>

    I am NOT looking for cellpadding or cellspacing, because they both add space between the left cell border and the left table border (wall); I want to add space between two CELLS, not between the table and the cell.

        protected void Page_Init(object sender, EventArgs e)
        {
            Table tb = new Table();
            tb.ID = "tb1";

            for (int i = 0; i < iCounter; i++)
            {
                TableRow tr = new TableRow();
                TextBox tbox = new TextBox();
                tbox.ID = i.ToString();
                TableCell tc = new TableCell();
                tc.Controls.Add(tbox);
                tr.Cells.Add(tc);
                tb.Rows.Add(tr);
            }

            pnlScheduler.Controls.Add(tb);
        }

    Any ideas?
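
    One way to mimic the <td>&nbsp;</td> idea directly is to insert a narrow spacer cell after each real cell; a minimal sketch of that inside the loop above (the 10px width is an arbitrary choice):

        TableCell tc = new TableCell();
        tc.Controls.Add(tbox);
        tr.Cells.Add(tc);

        // Spacer cell: plays the role of the <td>&nbsp;</td> between columns.
        TableCell spacer = new TableCell();
        spacer.Text = "&nbsp;";
        spacer.Width = Unit.Pixel(10);
        tr.Cells.Add(spacer);

    Per-cell CSS (e.g. tc.Style["padding-right"] = "10px";) is another option, since unlike the table-level cellpadding it only affects the cells it is applied to.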

    Read the article

  • Ajax heavy JS apps using excessive amounts of memory over time.

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approximately 40kb of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table because the data is almost always new. I am attaching a few events to the table, approximately 5 per line, with 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements inside the element it is overwriting. I then also delete the large HTML string variables once they are sent to the DOM, using delete my_var.

    I have checked for circular references and for attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet.

    To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000k in Chrome, and much more in Firefox or IE. Thanks!
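
    With roughly 150 handlers re-bound on every 15-second redraw, one pattern that often reduces both the leak surface and the rebinding cost is event delegation: bind a handful of handlers once on the container and let them service whatever rows are currently rendered, so the per-row elements carry no handlers at all. A sketch using the .delegate() API of that jQuery era (the selectors are placeholders):

        // Bound once at startup; keeps working across every $("#container").html(newTable) redraw.
        $("#container").delegate("td.status", "click", function () {
            var row = $(this).closest("tr");
            // ...handle the click using data already rendered into the row...
        });

    (.delegate() exists from jQuery 1.4.2; on 1.7+ the equivalent is .on() with a delegated selector.)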

    Read the article

  • Optimizing headers for UITableView?

    - by Pask
    I have an optimization problem with the headers of a table in plain style. If I use the standard view for the table (the classic gray with titles set by titleForHeaderInSection:) everything is OK and the scrolling is smooth and immediate. When, instead, I use this code to set my own view:

        - (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section {
            return [self headerPerTitolo:[titoliSezioni objectAtIndex:section]];
        }

        - (UIImageView *)headerPerTitolo:(NSString *)titolo {
            UIImageView *headerView = [[[UIImageView alloc] initWithFrame:CGRectMake(10.0, 0.0, 320.0, 44.0)] autorelease];
            headerView.image = [UIImage imageNamed:kNomeImmagineHeader];
            headerView.alpha = kAlphaSezioniTablePlain;

            UILabel *headerLabel = [[[UILabel alloc] initWithFrame:CGRectZero] autorelease];
            headerLabel.backgroundColor = [UIColor clearColor];
            headerLabel.opaque = NO;
            headerLabel.textColor = [UIColor whiteColor];
            headerLabel.font = [UIFont boldSystemFontOfSize:16];
            headerLabel.frame = CGRectMake(10.0, -11.0, 320.0, 44.0);
            headerLabel.textAlignment = UITextAlignmentLeft;
            headerLabel.text = titolo;

            [headerView addSubview:headerLabel];
            return headerView;
        }

    scrolling is jerky and not immediate (sliding a finger on the screen does not produce an immediate shift of the table). I do not know what causes this problem; maybe the fact that every time the method viewForHeaderInSection: is called, the code creates a new UIImageView. I have tried many ways to solve the problem, such as creating an array of all the necessary views: apart from more time spent loading at startup, the table still responds slowly. I've also attempted reducing the size of the UIImageView's image from about 66 KB to 4 KB: not only is there a deterioration in color quality (which distorts the original graphics a bit), but... the problem persists! Perhaps you have suggestions, or know of obscure techniques that would let me optimize this aspect of my application... I apologize for my English; I used Google for translation.
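
    Part of the cost is likely the re-creation of the image view and label on every call to viewForHeaderInSection:. A rough sketch of caching one header view per title in a dictionary, so each is built only once (the headerCache mutable dictionary ivar is an addition for illustration, not in the original code):

        // headerCache is an NSMutableDictionary ivar created in init/viewDidLoad (assumed).
        - (UIImageView *)headerPerTitolo:(NSString *)titolo {
            UIImageView *cached = [headerCache objectForKey:titolo];
            if (cached) return cached;

            UIImageView *headerView = [[[UIImageView alloc] initWithFrame:CGRectMake(10.0, 0.0, 320.0, 44.0)] autorelease];
            headerView.image = [UIImage imageNamed:kNomeImmagineHeader];   // imageNamed already caches the bitmap
            headerView.alpha = kAlphaSezioniTablePlain;

            UILabel *headerLabel = [[[UILabel alloc] initWithFrame:CGRectMake(10.0, -11.0, 320.0, 44.0)] autorelease];
            headerLabel.backgroundColor = [UIColor clearColor];
            headerLabel.textColor = [UIColor whiteColor];
            headerLabel.font = [UIFont boldSystemFontOfSize:16];
            headerLabel.text = titolo;
            [headerView addSubview:headerLabel];

            [headerCache setObject:headerView forKey:titolo];
            return headerView;
        }

    The alpha below 1.0 also forces blending while scrolling, so it may be worth testing with fully opaque headers to see how much of the jerkiness comes from compositing rather than allocation.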

    Read the article

  • Why are virtual methods considered early bound?

    - by AspOnMyNet
    One definition of binding is that it is the act of replacing function names with memory addresses.

    a) Thus I assume early binding means function calls are replaced with memory addresses during the compilation process, while with late binding this replacement happens during runtime?

    b) Why are virtual methods also considered early bound (thus the target method is found at compile time, and code is created that will call this method)? As far as I know, with virtual methods the call to the actual method is resolved only during runtime and not at compile time?!

    Thanks

    EDIT:

    1)

        A a = new A();
        a.M();

    As far as I know, it is not known at compile time where on the heap (thus at which memory address) instance a will be created during runtime. Now, with early binding the function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address if it doesn't know where on the heap object a will be created during runtime (here I'm assuming the address of method a.M will also be at the same memory location as a)?

    2) v-table calls are neither early nor late bound. Instead there's an offset into a table of function pointers. The offset is fixed at compile time, but which table the function pointer is chosen from depends on the runtime type of the object (the object contains a hidden pointer to its v-table), so the final function address is found at runtime. But assuming an object of type T is created via reflection (thus the app doesn't even know of the existence of type T), then how can an entry point for that type of object exist at compile time?

    Read the article

  • Lost with Hibernate - OneToMany resulting in the one being pulled back many times

    - by Andy
    I have this DB design:

        CREATE TABLE report (
            ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT,
            user MEDIUMINT NOT NULL,
            created TIMESTAMP NOT NULL,
            state INT NOT NULL,
            FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

        CREATE TABLE reportProperties (
            ID MEDIUMINT NOT NULL,
            k VARCHAR(128) NOT NULL,
            v TEXT NOT NULL,
            PRIMARY KEY( ID, k ),
            FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

    and this Hibernate mapping:

        @Table(name="report")
        @Entity(name="ReportEntity")
        public class ReportEntity extends Report {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name="ID")
            private Integer ID;

            @Column(name="user")
            private Integer user;

            @Column(name="created")
            private Timestamp created;

            @Column(name="state")
            private Integer state = ReportState.RUNNING.getLevel();

            @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER)
            @JoinColumns(
                @JoinColumn(name="ID", referencedColumnName="ID")
            )
            @MapKey(name="pk.key")
            private Map<String, ReportPropertyEntity> reportProperties = new HashMap<String, ReportPropertyEntity>();
        }

    and:

        @Table(name="reportProperties")
        @Entity(name="ReportPropertyEntity")
        public class ReportPropertyEntity extends ReportProperty {

            @Embeddable
            public static class ReportPropertyEntityPk implements Serializable {

                /**
                 * long#serialVersionUID
                 */
                private static final long serialVersionUID = 2545373078182672152L;

                @Column(name="ID")
                protected int ID;

                @Column(name="k")
                protected String key;
            }

            @EmbeddedId
            protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk();

            @Column(name="v")
            protected String value;
        }

    And I have inserted one Report and 4 Properties for that report. Now when I execute this:

        this.findByCriteria(
            Order.asc("created"),
            Restrictions.eq("user", user.getObject(UserField.ID))
        );

    I get back the report 4 times, instead of just once with a Map containing the 4 properties. I'm not great at Hibernate to be honest - I prefer straight SQL, but I must learn - and I can't see what is wrong here. Any suggestions?
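
    Duplicate root entities are a classic side effect of an eagerly fetched collection loaded with an outer join: the SQL result set has one row per property, and a Criteria query returns one root per row. If findByCriteria builds a Criteria underneath (an assumption here), a distinct-root-entity result transformer is a small thing to try:

        Criteria criteria = session.createCriteria(ReportEntity.class)
                .add(Restrictions.eq("user", userId))
                .addOrder(Order.asc("created"))
                // collapse the joined rows back into one ReportEntity per ID
                .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);
        List<ReportEntity> reports = criteria.list();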

    Read the article

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-word issues in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as follows. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way than this, so as to have one or more Files point to a single Record entry and one or more Records point to a single User. BTW, the File names are unique.

        Files table:
        id  name
        1   name1
        2   name2
        3   name3
        4   name4

        Records table:
        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

        Users table:
        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread the database out into different tables to be able to easily add new columns in the future.
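
    If each File is meant to point at one shared Record, and each Record at one shared User, then the belongs_to side has to sit on the "many" side of each pair, which is the reverse of the sample above. A sketch of that arrangement, assuming a record_id column on files and a user_id column on records (and keeping the question's caveat that File and Record shadow built-in names):

        class File < ActiveRecord::Base          # files table has a record_id column
          belongs_to :record
        end

        class Record < ActiveRecord::Base        # records table has a user_id column
          belongs_to :user
          has_many :files
        end

        class User < ActiveRecord::Base
          has_many :records
          has_many :files, :through => :records
        end

    With this shape, "ForDaisy1" exists once in the records table and both name1 and name4 simply reference its id, so neither record names nor users need to be repeated.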

    Read the article

  • Why DataColumn AllowDBNull is true even if the Oracle DB does not allow NULL

    - by matti
    Hi. I have column SomeId in table SomeLink. When I look with TOra or SQL Plus Worksheet, both state:

        TOra:
        Column name   Data type   Default   Null       Comment
        SOMEID        INTEGER     {null}    NOT NULL   {null}

        SQL Plus:
        SOMEID   NOT NULL   NUMBER(38)

    I have authored a method that's intended to give default values to all NOT NULL fields that don't have values:

        public static void GetDefaultValuesForNonNullColumns(DataRow row)
        {
            foreach (DataColumn col in row.Table.Columns)
            {
                if (Convert.IsDBNull(row[col]) && !col.AllowDBNull)
                {
                    if (ColumnIsNumeric(col.DataType))
                        row[col] = 0;
                    else if (col.DataType == typeof(DateTime))
                        row[col] = DateTime.Now;
                    else if (col.DataType == typeof(String))
                        row[col] = string.Empty;
                    else if (col.DataType == typeof(Char))
                        row[col] = ' ';
                    else
                        throw new Exception(string.Format("Unsupported column type: {0}", col.DataType));
                }
            }
        }

    When SOMEID is handled in the loop, its AllowDBNull is true. I really can't understand why. The table is created in the DataSet like this:

        _someLinkAdptr = _dbFactory.CreateDataAdapter();
        _someLinkAdptr.SelectCommand = _dbFactory.CreateCommand();
        _someLinkAdptr.SelectCommand.Connection = _cnctn;
        _someLinkAdptr.SelectCommand.CommandText = GetSomeLinkSelectTxtAndParams(_someLinkAdptr.SelectCommand, UndefinedValue.ToString(), UndefinedValue.ToString());

    The select command returns no rows. The idea is that I can then use a CommandBuilder to get the InsertCommand without building it myself. The row is added to the DataSet's table like this:

        private static void CreateDocLink(int anId, int anotherId)
        {
            DataRow row = _someDataSet.Tables["SomeLink"].NewRow();
            row["AnId"] = anId;
            row["AnotherId"] = anotherId;
            Utility.GetDefaultValuesForNonNullColumns(row);
            _someDataSet.Tables["SomeLink"].Rows.Add(row);
        }

    When the DataAdapter update runs against the Oracle DB I get:

        ORA-01400: cannot insert NULL into (SOMESCHEMA.SOMELINK.SOMEID)

    Cheers & BR - Matti
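
    A plain Fill from a SELECT only brings back column names and types; constraint details such as nullability are not copied onto the DataColumns unless the schema is requested explicitly. A sketch of the two usual ways to do that with the adapter above (whether the Oracle provider populates AllowDBNull this way is worth verifying):

        // Option 1: ask the adapter to add key/constraint information when filling.
        _someLinkAdptr.MissingSchemaAction = MissingSchemaAction.AddWithKey;

        // Option 2: pull the schema explicitly before (or instead of) the empty Fill.
        _someLinkAdptr.FillSchema(_someDataSet, SchemaType.Source, "SomeLink");

    With the schema in place, the !col.AllowDBNull check in GetDefaultValuesForNonNullColumns has a chance to see SOMEID as non-nullable and assign it a default before the insert.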

    Read the article

  • Optimizing TSQL code

    - by adopilot
    My job is to maintain an application which makes heavy use of SQL Server (MSSQL 2005). Until now the middle tier has stored TSQL code in XML and sent dynamic TSQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of them to stored procs. The question is the following: most of my queries have the same WHERE conditions against one table.

    Sample:

        SELECT .....
        FROM ....
        WHERE ....
          AND (a.vrsta_id = @vrsta_id OR @vrsta_id = 0)
          AND (a.podvrsta_id = @podvrsta_id OR @podvrsta_id = 0)
          AND (a.podgrupa_2 = @podgrupa2_id OR @podgrupa2_id = 0)
          AND (
                (a.id IN (SELECT art_id
                          FROM osobina_veze
                          WHERE podosobina_id IN (SELECT ado FROM dbo.fn_ado_param_int(@podosobina))
                          GROUP BY art_id
                          HAVING COUNT(art_id) = @podosobina_count))
                OR ('0' = @podosobina)
              )

    They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I:

        - make a table-valued function that I use in all queries, or
        - use #temp tables and simply inner join my query to them each time the proc executes, or
        - use a #temp table filled by a table-valued function, or
        - leave all queries with this large WHERE clause and hope the index does its job, or
        - use a WITH statement (CTE)?
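
    One way to share the repeated filter without a #temp table is an inline table-valued function that returns the matching keys, which each proc can then join to; unlike multi-statement functions, inline TVFs are expanded into the calling query's plan. A sketch under the column names shown above (the function name, the dbo.artikli table name standing in for the alias "a", and the parameter types are assumptions):

        CREATE FUNCTION dbo.fn_filtered_articles (
            @vrsta_id int, @podvrsta_id int, @podgrupa2_id int,
            @podosobina varchar(8000), @podosobina_count int
        )
        RETURNS TABLE
        AS
        RETURN (
            SELECT a.id
            FROM dbo.artikli a
            WHERE (a.vrsta_id = @vrsta_id OR @vrsta_id = 0)
              AND (a.podvrsta_id = @podvrsta_id OR @podvrsta_id = 0)
              AND (a.podgrupa_2 = @podgrupa2_id OR @podgrupa2_id = 0)
              AND ( @podosobina = '0'
                    OR a.id IN (SELECT art_id
                                FROM osobina_veze
                                WHERE podosobina_id IN (SELECT ado FROM dbo.fn_ado_param_int(@podosobina))
                                GROUP BY art_id
                                HAVING COUNT(art_id) = @podosobina_count) )
        );

    Each stored proc can then INNER JOIN dbo.fn_filtered_articles(...) to its own SELECT. The catch-all (@param = 0) pattern itself can still produce mediocre plans regardless of where it lives, so comparing execution plans before and after the move is worthwhile.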

    Read the article

  • Statusbar overlaps ViewController

    - by Stefan
    Hi, in my AppDelegate I use:

        ActivitiesViewController *acController = [[ActivitiesViewController alloc] initWithNibName:@"ActivitiesView" bundle:[NSBundle mainBundle]];
        UINavigationController *acNavController = [[UINavigationController alloc] initWithRootViewController:acController];
        [self.tabBarController setSelectedIndex:0];
        [self.tabBarController setSelectedViewController:acNavController];

    to switch the views in my TabBarController. The result is too close to the top of the window (overlapped by the status bar). How do I get my view into the correct position? Regards

    Read the article
