Search Results

Search found 31606 results on 1265 pages for 'generate table'.

Page 523/1265 | < Previous Page | 519 520 521 522 523 524 525 526 527 528 529 530  | Next Page >

  • Mixing A4 and A3 in one LaTeX file

    - by jorgusch
    Hello, I am finishing my thesis and have a large appendix. Some tables only look good if first set on A3 and then printed on A4 paper. Anyhow, working on all the files separately is fine, but I struggle to compile everything into one document. I use the geometry package and start the document with:

        \usepackage[a4paper,left=30mm,right=20mm,top=20mm, bottom=20mm]{geometry}

    For the appendix, I want to include the table and use:

        \newgeometry{a3paper,left=25mm,right=15mm, top=15mm, bottom=15mm}

    However, the command is completely ignored and I can read "a3paper,left=25mm,right=15mm, top=15mm, bottom=15mm" above the table. What did I miss? Is it even possible? If not, how do I get the page numbers right if I have to include it as a PDF (which works)? Thanks!

    Read the article

  • Enforcing a query in MySQL to use a specific index

    - by Hossein
    Hi, I have a large table consisting of only 3 columns (id INT, bookmarkID INT, tagID INT) with about 21 million records. I have two BTREE indexes, one on each of the bookmarkID and tagID columns. I am trying to run this query:

        SELECT bookmarkID, COUNT(bookmarkID) AS count
        FROM bookmark_tag_map
        GROUP BY tagID, bookmarkID
        HAVING tagID IN (-----"tagIDList"-----) AND count >= N

    which takes ages to return the results. I read somewhere that if I make an index with tagID and bookmarkID together, I will get a much faster result. I created the index after some time and tried the query again, but it seems that this query is not using the new index that I have made; I ran EXPLAIN and saw that this is actually true. My question now is: how can I force a query to use a specific index? Comments on other ways to make the query faster are also welcome. Thanks
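    A hedged sketch of the usual fix: MySQL's FORCE INDEX hint, with the tagID filter moved from HAVING into WHERE so the index can prune rows before grouping. The index name idx_tag_bookmark and the literal values are placeholders, not from the question:

        SELECT bookmarkID, COUNT(bookmarkID) AS count
        FROM bookmark_tag_map FORCE INDEX (idx_tag_bookmark)
        WHERE tagID IN (1, 2, 3)      -- placeholder for "tagIDList"
        GROUP BY tagID, bookmarkID
        HAVING count >= 5;            -- placeholder for N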

    Read the article

  • Bulk Insert takes 4x as long on first operation of the day

    - by patrick
    I do bulk inserts into a table with about 14 million rows, at five-minute increments, during a 7-hour period each day. These inserts take somewhere between 9 and 14 seconds. However, the first insert always takes about 40 seconds. Does anyone know what SQL Server 2005 would be doing differently on the first insert into a table for that day? From what I've read, I should probably use the SqlBulkCopy class instead of just using a bulk insert in a stored procedure. Is that the general consensus?
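    A hedged guess at the cause is a cold buffer cache and file autogrowth after the overnight idle period; two cheap experiments (database, file and table names below are illustrative, not from the question) are to pre-size the files and touch the table before the first load:

        -- pre-grow the log so the first load doesn't pay for autogrow
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 4096MB);
        -- pull the target table's pages into the buffer cache
        SELECT COUNT(*) FROM dbo.BulkTarget WITH (NOLOCK);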

    Read the article

  • How to tune ASP.NET CreateUserWizard?

    - by Max
    I have created an ASP.NET WebForms site on IIS 7.5. I want to create a step-by-step user registration. I want to store the basic and detailed information about registered users in a specially created database table (not in the aspnet_users table). I want to validate the email address first and then block the next registration step for any user whose email address already exists in the database. At the last registration step I want to present a summary form; all previous input and select fields should be duplicated in this form with the "disabled" attribute. Please tell me how to adjust the CreateUserWizard ASP.NET control and the web.config file to these needs.

    Read the article

  • Get forum page by PostID

    - by cem
    I can't figure out how this works: how does a forum get the page number - and the records for that page - from a post ID? I think the first option is to declare an index (int) column in the post table and increase or decrease it when adding and deleting posts, but what happens when I delete the first row and the table has one million records? Do you have any idea about this? By the way, I'm using NHibernate and SQL Server 2005. Thank you
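    A hedged sketch of the window-function approach available on SQL Server 2005, assuming a hypothetical Posts(PostID, TopicID, CreatedAt) table and 20 posts per page; no stored position column is needed, so deletes cost nothing:

        SELECT ((p.rn - 1) / 20) + 1 AS PageNumber
        FROM (
            SELECT PostID,
                   ROW_NUMBER() OVER (PARTITION BY TopicID
                                      ORDER BY CreatedAt, PostID) AS rn
            FROM Posts
        ) AS p
        WHERE p.PostID = @PostID;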

    Read the article

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we run the following pseudo-code SQL batch query every hour:

        INSERT INTO TemporaryTable
            (SELECT FROM HighlyContentiousTableInInnoDb
             WHERE allKindsOfComplexConditions are true)

    Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb, even though it was just reading from it, which was making some other very simple queries take ~25 seconds (that's how long that other query takes). Then I discovered that InnoDB tables in such a case are actually locked by a SELECT! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in the article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy? Then I could just copy the HighlyContentiousTable to another table and do the query there.
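    One hedged option, assuming MySQL 5.1+ with row-based binary logging (or binary logging disabled): under READ COMMITTED, INSERT ... SELECT does a non-locking read of the source rows instead of taking shared next-key locks.

        SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
        INSERT INTO TemporaryTable
        SELECT ... FROM HighlyContentiousTableInInnoDb
        WHERE allKindsOfComplexConditions;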

    Read the article

  • Storing website hierarchy in SQL Server 2008

    - by Mika Kolari
    I want to store a website page hierarchy in a table. What I would like to achieve, efficiently, is to 1) resolve the (last valid) item by path (e.g. "/blogs/programming/tags/asp.net,sql-server", "/blogs/programming/hello-world"), 2) get the ancestor items for a breadcrumb, and 3) edit an item without updating the whole tree of children, grandchildren etc. Because of the 3rd point I thought the table could be like:

        ITEM
        id  type       slug         title             parentId
        1   area       blogs        Blogs
        2   blog       programming  Programming blog  1
        3   tagsearch  tags                           2
        4   post       hello-world  Hello World!      2

    Could I use SQL Server's hierarchyid type somehow (especially for point 1, where "/blogs/programming/tags" is the last valid item)? Tree depth would usually be around 3-4. What would be the best way to achieve all this?
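    For point 2, a hedged sketch of the breadcrumb query against the ITEM table above, using a recursive CTE (works on SQL Server 2008 without hierarchyid; the starting slug is illustrative):

        WITH crumbs AS (
            SELECT id, slug, title, parentId
            FROM ITEM WHERE slug = 'hello-world'
            UNION ALL
            SELECT p.id, p.slug, p.title, p.parentId
            FROM ITEM p
            JOIN crumbs c ON p.id = c.parentId
        )
        SELECT id, slug, title FROM crumbs;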

    Read the article

  • How to build an interactive search engine web interface using Python

    - by asmaier
    I have built a static web interface for searching data from some tables in my PostgreSQL database. The query page consists of a simple text field for entering the search term; the results page presents the results as a simple HTML table. The server-side code for searching the PostgreSQL database and returning the results is written in Python using psycopg2. Now I would like to add some interactive "Ajax features" to my search engine. When entering the search term, I would like to see a list of possible search terms, like Google does it. On the results page, I would like to be able to sort the table showing the results. What would be the easiest/recommended way to implement these features for my search engine web site? Do I need a full-fledged web framework like Django for that?
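    Whatever framework serves it, the suggestion endpoint usually boils down to a prefix query; a hedged PostgreSQL sketch against a hypothetical search_terms(term) table:

        SELECT term
        FROM search_terms
        WHERE term ILIKE 'foo%'   -- 'foo' stands in for the user's partial input
        ORDER BY term
        LIMIT 10;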

    Read the article

  • NHibernate Mapping problem

    - by Bernard Larouche
    My database is being driven by my NHibernate mapping files. I have a Category class that looks like the following:

        public class Category
        {
            public Category() : this("") { }

            public Category(string name)
            {
                Name = name;
                SubCategories = new List<Category>();
                Products = new HashSet<Product>();
            }

            public virtual int ID { get; set; }
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
            public virtual Category Parent { get; set; }
            public virtual bool IsDefault { get; set; }
            public virtual ICollection<Category> SubCategories { get; set; }
            public virtual ICollection<Product> Products { get; set; }
        }

    and here is my mapping file:

        <property name="Name" column="Name" type="string" not-null="true"/>
        <property name="IsDefault" column="IsDefault" type="boolean" not-null="true" />
        <property name="Description" column="Description" type="string" not-null="true" />
        <many-to-one name="Parent" column="ParentID"></many-to-one>
        <bag name="SubCategories" inverse="true">
            <key column="ParentID"></key>
            <one-to-many class="Category"/>
        </bag>
        <set name="Products" table="Categories_Products">
            <key column="CategoryId"></key>
            <many-to-many column="ProductId" class="Product"></many-to-many>
        </set>

    When I try to create the database I get the following error:

        failed: The INSERT statement conflicted with the FOREIGN KEY SAME TABLE constraint "FK9AD976763BF05E2A". The conflict occurred in database "CoderForTraders", table "dbo.Categories", column 'CategoryId'. The statement has been terminated.

    I looked on the net for some answers but found none. Thanks for your help

    Read the article

  • Reading PowerPoint Effects with VBA

    - by OneNerd
    I have been working with VBA inside PowerPoint and have a grasp on most things. What I am struggling with is reading the effect/animation settings for each object. I seem to be able to get a lot of what I need through:

        Powerpoint.ActivePresentation.Slides(slide_id).TimeLine.MainSequence(seq_num)

    What confuses me is how to convert the numeric value of

        Powerpoint.ActivePresentation.Slides(slide_id).TimeLine.MainSequence(seq_num).EffectType

    to an effect (so I'm looking for a table mapping values to effects, or perhaps a CONST table). Also, how to read in all the different levels of effects (like entrance, emphasis, etc.) is really confusing, not to mention I cannot wrap my head around the timeline (which seems like it is not really a timeline). Can anyone point me to any good articles or documentation that discuss how to read the effects and animations properly and fully? Thanks.

    Read the article

  • JavaScript undefined variable

    - by djairo
    I have a problem with a JavaScript error: $("#slider") is undefined. How can I solve this problem?

        <script type="text/javascript">
        $(document).ready(function() {
            $("#slider").easySlider({
                controlsBefore: '<p id="controls">',
                controlsAfter:  '</p>',
                prevId:         'prevBtn',
                nextId:         'nextBtn'
            });
        });
        </script>

    This is my HTML:

        <div id='slider'>
            <table>
                <tr>
                    <td width='325'>hello</td>
                    <td width='325'>hello</td>
                </tr>
            </table>
        </div>

    Read the article

  • ASP.NET MVC - Bizarre problem - suddenly lost all LINQ to SQL data context objects

    - by MikeD
    I was making an edit to a long-existing project. Specifically, I added some fields to a table and had to delete the table from the LINQ to SQL designer and re-add it. I also had to do the same for a view. I made some other code changes and went to build. Now my project won't build because it can't resolve any of the data context objects (all tables and views) in my code. I don't know what I did or how this happened. I have many tables and views in the project's L2S data context, so I don't want to start over. Any suggestions on how to resolve this problem are greatly appreciated. Desperate! The error messages I am getting are the familiar:

        The type or namespace name 'equipment' could not be found (are you missing a using directive or an assembly reference?)

    Read the article

  • Problem with Sybase autogenerated IDs

    - by daedlus
    Hi, I have a table in which the PK column Id is of type bigint and is populated automatically, increasing by 1 (1, 2, 3... and so on). I notice that sometimes, all of a sudden, the IDs that are generated have a very big value. For example, the IDs are 1, 2, 3, 4, 5, 500000000000001, 500000000000002: there is a huge jump after 5, and IDs 6 and 7 were not used at all. I do perform delete operations on this table, but I am absolutely sure that the missing IDs were never used before. Why does this occur and how can I fix it? Many thanks for looking into this. My env: Sybase ASE 15.0.3, Linux
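    A hedged note on the likely mechanism: ASE pre-allocates blocks of identity values in memory, and an abnormal shutdown burns the unused block, producing exactly this kind of jump. If that is the cause here, the per-table identity_gap attribute caps the jump size (the table name and gap value are illustrative):

        sp_chgattribute 'my_table', 'identity_gap', 1000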

    Read the article

  • Microsoft Business Intelligence. Is what I am trying to do possible?

    - by Nai
    Hi guys, I have been charged with the task of analysing the log table of my company's website. This table contains each user's click path through the website for a given session. My company is looking to understand and spot trends based on the 'click paths' of our users, and in doing so identify groups of users that take a certain 'click path', based on age/geography and so on. As you can tell from the title, I am completely new to BI and its capabilities, so I was wondering: Are our objectives attainable? How should I go about doing this? I am currently reading books online as well as other e-books I have found. All signs seem to suggest this is possible via sequence clustering, although the exact implementation and tweaks involved are currently lost on me. Therefore, if anyone has first-hand experience of such an undertaking, it would be awesome if you could share it here. Cheers!

    Read the article

  • How do you get average of sums in SQL (multi-level aggregation)?

    - by paxdiablo
    I have a simplified table xx as follows:

        rdate  date
        rtime  time
        rid    integer
        rsub   integer
        rval   integer
        primary key on (rdate, rtime, rid, rsub)

    and I want to get the average (across all times) of the sums (across all ids) of the values. By way of a sample table, I have (with consecutive identical values blanked out for readability):

        rdate       rtime     rid  rsub  rval
        -------------------------------------
        2010-01-01  00.00.00  1    1     10
                                   2     20
                              2    1     30
                                   2     40
                    01.00.00  1    1     50
                                   2     60
                              2    1     70
                                   2     80
                    02.00.00  1    1     90
                                   2     100
        2010-01-02  00.00.00  1    1     999

    I can get the sums I want with:

        select rdate, rtime, rid, sum(rval) as rsum
        from xx
        where rdate = '2010-01-01'
        group by rdate, rtime, rid

    which gives me:

        rdate       rtime     rid  rsum
        -------------------------------
        2010-01-01  00.00.00  1    30   (10+20)
                              2    70   (30+40)
                    01.00.00  1    110  (50+60)
                              2    150  (70+80)
                    02.00.00  1    190  (90+100)

    as expected. Now what I want is the query that will also average those values across the rid dimension, giving me:

        rdate       rtime     ravgsum
        -----------------------------
        2010-01-01  00.00.00  50   ((30+70)/2)
                    01.00.00  130  ((110+150)/2)
                    02.00.00  190  ((190)/1)

    I'm using DB2 for z/OS but I'd prefer standard SQL if possible.
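    A minimal sketch in standard SQL: wrap the grouped sums in a derived table and average them per (rdate, rtime); this should run unchanged on DB2 for z/OS:

        select rdate, rtime, avg(rsum) as ravgsum
        from (
            select rdate, rtime, rid, sum(rval) as rsum
            from xx
            where rdate = '2010-01-01'
            group by rdate, rtime, rid
        ) as sums
        group by rdate, rtime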

    Read the article

  • SQLite ERROR no such column

    - by Richard
    Hi, does anyone here have some experience with this error? I only get this error if I use the WHERE clause. I use PHP PDO to get the results. And this is my simple table:

        $sql = "CREATE TABLE samenvatting (
            stem_id INTEGER PRIMARY KEY AUTOINCREMENT,
            poll_id TEXT,
            stem_waarde_id TEXT,
            totaal INTEGER
        )";
        $crud->rawQuery($sql);

        $poll_id = "somepoll";
        $records = $crud->rawSelect('SELECT * FROM samenvatting WHERE poll_id=´.$poll_id);

    PDO abstract class:

        public function conn()
        {
            isset($this->username);
            isset($this->password);
            if (!$this->db instanceof PDO) {
                $this->db = new PDO($this->dsn, $this->username, $this->password);
                $this->db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            }
        }

        public function rawSelect($sql)
        {
            $this->conn();
            return $this->db->query($sql);
        }

    Thanks, Richard
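    A hedged reading of the error: the closing quote around the value is an acute accent (´) rather than a real quote, so the value never arrives quoted, and SQLite parses it as a column name, hence "no such column". The statement the engine needs to receive is simply:

        SELECT * FROM samenvatting WHERE poll_id = 'somepoll';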

    Read the article

  • How to implement a multi-table relationship in MS SQL?

    - by Ethan
    I'm trying to design a database to use with an ASP.NET MVC application. Here is the scenario: there are three entities, and users can post their comments on each of these different entities. I just wonder how to use one table for Comments and link all the other entities to it. Obviously, the Comments table would need 3 references (foreign keys) to those tables, but only one of them can be filled for each row, and as you know foreign key columns are usually not nullable. Is there any better way than implementing three different tables for each entity's comments? Cheers Ethan
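    One common alternative, sketched here with illustrative names: a single Comments table keyed by an entity-type discriminator plus the entity's id. The trade-off is that the database cannot enforce the reference with a real foreign key:

        CREATE TABLE Comments (
            CommentId  INT IDENTITY PRIMARY KEY,
            EntityType TINYINT NOT NULL,   -- 1 = EntityA, 2 = EntityB, 3 = EntityC (hypothetical codes)
            EntityId   INT NOT NULL,
            Body       NVARCHAR(MAX) NOT NULL
        );

        -- all comments for entity type 2, id 42:
        SELECT Body FROM Comments WHERE EntityType = 2 AND EntityId = 42;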

    Read the article

  • SQL Server Multiple Joins Taxing CPU

    - by durilai
    I have a stored procedure on SQL Server 2005. It pulls from a table function and has two joins. When the query is run in a load test it pegs the CPU at 100% across all 16 cores! I have determined that removing either one of the joins makes the query run fine, but with both it taxes the CPU.

        SELECT SKey
        FROM dbo.tfnGetLatest(@ID) a
        LEFT JOIN [STAGING].dbo.RefSrvc b ON a.LID = b.ESIID
        LEFT JOIN [STAGING].dbo.RefSrvc c ON a.EID = c.ESIID

    Any help is appreciated; note the joins go against the same table in a different database on the same server.
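    A hedged first experiment: cap the query's parallelism to check whether a parallel plan against the table function is what saturates the cores (MAXDOP 4 is an arbitrary trial value, not a recommendation):

        SELECT SKey
        FROM dbo.tfnGetLatest(@ID) a
        LEFT JOIN [STAGING].dbo.RefSrvc b ON a.LID = b.ESIID
        LEFT JOIN [STAGING].dbo.RefSrvc c ON a.EID = c.ESIID
        OPTION (MAXDOP 4);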

    Read the article

  • CakePHP Accessing Dynamically Created Tables?

    - by Dave
    As part of a web application, users can upload files of data, which generates a new table in a dedicated MySQL database to store the data in. They can then manipulate this data in various ways. The next version of this app is being written in CakePHP, and at the moment I can't figure out how to dynamically assign these tables at runtime. I have the different database configs set up and can create the tables on data upload just fine, but once this is completed I cannot access the new table from the controller as part of the record CRUD actions for the data manipulation. I hoped that it would be along the lines of:

        function controllerAction() {
            $this->uses[] = 'newTable';
            $data = $this->newTable->find('all');
            //use data
        }

    But it returns the error:

        Undefined property: ReportsController::$newTable
        Fatal error: Call to a member function find() on a non-object in /app/controllers/reports_controller.php on line 60

    Can anyone help?

    Read the article

  • Multiple Tables or Multiple Schema

    - by Yan Cheng CHEOK
    http://stackoverflow.com/questions/1152405/postgresql-is-better-using-multiple-databases-with-1-schema-each-or-1-database I am new to the schema concept in PostgreSQL. For the above-mentioned scenario, I was wondering: Why don't we use a single database (with the default schema, named public)? Why don't we have a single table to store all users' rows, with the other tables that hold user-related information having foreign keys pointing to the user table? Can anyone provide me a real-world scenario in which a single database with multiple schemas would be extremely useful, and which can't be solved with a conventional single database and single schema?
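    One scenario where schemas earn their keep is multi-tenant isolation: every customer gets identical tables, kept apart by schema rather than by a tenant_id column on every row. A hedged PostgreSQL sketch (schema and table names illustrative):

        CREATE SCHEMA customer_a;
        CREATE TABLE customer_a.users (id serial PRIMARY KEY, name text);

        -- per-connection switch; unqualified names now resolve inside the tenant's schema
        SET search_path TO customer_a;
        SELECT * FROM users;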

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamating models, taking us to a whole new level of predictive modeling and analytical insights: combination models predicting likelihoods using multiple child models.

    The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods

    Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice.

    Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts).

    Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores

    As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing.

    One Offer To Predict Them All

    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level.

    We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet based models defined earlier.

    The Switcheroo

    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice-specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.

    Recording Events

    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);

        // force learn at this moment using the Internal Dock entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results

    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes.

    The faceted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice, we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion

    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and this with limited configuration and customization required.

    There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog, and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models, so that you can take some of these ideas and improve upon your own strategy.

    As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • Help with iPhone dev - beyond bounds error

    - by dusk
    I'm just learning iPhone development, so please forgive me for what is probably a beginner error. I've searched around and haven't found anything specific to my problem, so hopefully it is an easy fix. The book has examples of building a table from data hard-coded via an array. Unfortunately, it never really tells you how to get data from a URL, so I've had to look that up, and that is where my problems show up. When I debug in Xcode, it seems like the count is correct, so I don't understand why it is going out of bounds. This is the error message:

        2010-05-03 12:50:42.705 Simple Table[3310:20b] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (1) beyond bounds (0)'

    My URL returns the following string: first,second,third,fourth

    And here is the iPhone code, with the book's working example commented out:

        #import "Simple_TableViewController.h"

        @implementation Simple_TableViewController

        @synthesize listData;

        - (void)viewDidLoad {
            /*
            //working code from book
            NSArray *array = [[NSArray alloc] initWithObjects:@"Sleepy", @"Sneezy",
                @"Bashful", @"Bashful", @"Happy", @"Doc", @"Grumpy", @"Thorin",
                @"Dorin", @"Norin", @"Ori", @"Balin", @"Dwalin", @"Fili", @"Kili",
                @"Oin", @"Gloin", @"Bifur", @"Bofur", @"Bombur", nil];
            self.listData = array;
            [array release];
            */

            //code from interwebz that crashes
            NSString *urlstr = [[NSString alloc] initWithFormat:@"http://www.mysite.com/folder/iphone-test.php"];
            NSURL *url = [[NSURL alloc] initWithString:urlstr];
            NSString *ans = [NSString stringWithContentsOfURL:url];
            NSArray *listItems = [ans componentsSeparatedByString:@","];
            self.listData = listItems;
            [urlstr release];
            [url release];
            [ans release];
            [listItems release];

            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
            self.listData = nil;
            [super viewDidUnload];
        }

        - (void)dealloc {
            [listData release];
            [super dealloc];
        }

        #pragma mark -
        #pragma mark Table View Data Source Methods

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [self.listData count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *SimpleTableIdentifier = @"SimpleTableIdentifier";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:SimpleTableIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                         reuseIdentifier:SimpleTableIdentifier] autorelease];
            }
            NSUInteger row = [indexPath row];
            cell.textLabel.text = [listData objectAtIndex:row];
            return cell;
        }

        @end

    Read the article

  • SQL Query in Ruby: Only select the changes

    - by JJ Liu
    Suppose I have a table (PriceHistory) like this; every time I change anything in a row, I record the whole row again in the table:

        id | buy_price | sell_price | change_date
        1  | 2         | 2          | 2012-06-22
        2  | 3         | 2          | 2012-06-20
        3  | 2         | 6          | 2012-06-15
        4  | 5         | 5          | 2012-06-15
        5  | 5         | 7          | 2012-06-15
        6  | 4         | 8          | 2012-06-12

    I only care about changes to buy_price. Is there a way to select just rows 1, 2, 3 and 5? Here is the Ruby code I came up with, but it does not select only the changed rows:

        PriceHistory.select("id, buy_price, change_date").order("change_date DESC")

    Both Ruby and SQL answers are fine.
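    A hedged SQL sketch of the pairwise comparison, assuming the ids are consecutive with the newest row first (as in the sample) and a Rails-conventional table name of price_histories; row 6 drops out because it has no older neighbour to compare against:

        SELECT cur.id, cur.buy_price, cur.change_date
        FROM price_histories cur
        JOIN price_histories prev ON prev.id = cur.id + 1
        WHERE cur.buy_price <> prev.buy_price;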

    Read the article

  • TinyMCE: Double forecolor and backcolor buttons?

    - by petsson
    I'm having a problem using TinyMCE. In IE8, the forecolor and backcolor buttons, for some random reason, are displayed twice. See picture below. Source code (I add the forecolor and backcolor in theme_advanced_buttons2):

        tinyMCE.init({
            mode : "exact",
            elements : "<%= editArea.ClientID %>",
            custom_shortcuts : false,
            language : "en",
            relative_urls : false,
            convert_urls : false,
            forced_root_block : false,
            force_p_newlines : true,
            force_br_newlines : false,
            fix_nesting : true,
            plugins : "pagebreak,table",
            pagebreak_separator : '<div style="page-break-after:always;"></div>',
            theme : "advanced",
            skin : "o2k7",
            skin_variant : "blue",
            width : "540",
            height : "470",
            theme_advanced_toolbar_location : "top",
            theme_advanced_statusbar_location : "none",
            theme_advanced_font_sizes : "1,2,3,4,5,6,7",
            font_size_style_values : "0.6em,0.8em,1em,1.2em,1.5em,2em,3em",
            theme_advanced_buttons1 : "newdocument,|,copy,cut,paste,|,hr,pagebreak,|,undo,redo,|,code|,image,code",
            theme_advanced_buttons2 : "fontselect,fontsizeselect,|,bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,|,forecolor,backcolor", // <-- This gives me double forecolor and backcolor
            theme_advanced_buttons3 : "table,|,row_props,cell_props,|,col_before,col_after,row_before,row_after,|,split_cells,merge_cells,|,delete_col,delete_row,"
        });

    Read the article

  • SQL Server - Query Short-Circuiting?

    - by Sam Schutte
    Do T-SQL queries in SQL Server support short-circuiting? For instance, I have a situation with two databases where I'm comparing data between two tables to match and copy some info across. In one table, the "ID" field will always have leading zeros (such as "000000001234"), and in the other table, the ID field may or may not have leading zeros (it might be "000000001234" or "1234"). So my query to match the two is something like:

        select * from table1 where table1.ID LIKE '%1234'

    To speed things up, I'm thinking of adding an OR before the LIKE that just says table1.ID = table2.ID, to handle the case where both IDs have the padded zeros and are equal. Will doing so speed up the query by matching rows on the "=" and not evaluating the LIKE for every single row (will it short-circuit and skip the LIKE)?
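    A hedged caution and sketch: SQL Server makes no guarantee about the order in which the branches of an OR are evaluated, so the rewrite may or may not short-circuit. A more predictable approach is to normalize the un-padded IDs at join time (12 is assumed here as the fixed ID width):

        SELECT t1.*
        FROM table1 t1
        JOIN table2 t2
          ON t1.ID = RIGHT(REPLICATE('0', 12) + t2.ID, 12);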

    Read the article
