Search Results

Search found 10424 results on 417 pages for 'persisted column'.


  • Complete failure to compile when including CSS Friendly Adapters

    - by david
    Background: I am trying to use the CSS Friendly Adapters to override the default styling of the standard ASP.NET menu control used by an existing project. The existing project functions normally and compiles without incident. After adding the code for the CSS Friendly adapter, not only does it not compile, it never even really starts.

    The problem in detail: I am using the sample code from Scott on this page: http://weblogs.asp.net/scottgu/archive/2006/09/08/CSS-Control-Adapter-Toolkit-Update.aspx. The sample project compiles fine; it only fails within the existing project, and it fails without a line number or any other traceable information. It definitely appears to be related to the CSSMenuAdapter.browser file, which others online have cited as the cause of similar errors. I have tried adding and re-adding it, using it as a DLL, using it as a code file in App_Code, etc. I am working with AspDotNetStorefront in this case, although the issue is not unique to it, as I have found other references in software packages online. The trouble is, no one ever says what solved the issue. I am using Windows 7, VS2008 Express and SQL Express 2008 R2. The full error message is:

        Error 10  Exception of type 'System.OutOfMemoryException' was thrown.

    Notice that there is no file, line, or column information. I really need some help here; I have been working on this a long time. (This really should have the tag cssfriendlyadapter, but I could not create it.)

    Read the article

  • Nested ListBox in DataGrid? Inherit style?

    - by Chris
    I have a DataGrid with multiple columns. The data grid has a style that changes the foreground of the text on a row when the mouse is over it or the row has been selected, so the text color changes from black to white, for example. One of the columns in the data grid contains a ListBox. Is it possible for the items in the list box to take on the foreground of the data grid row when you mouse over or select that row? I don't want a list box style with its own mouse-over behaviour for the list items; I just want the foreground of the list items to follow the foreground of the data grid row automatically whenever the row is moused over or selected. So even if the user moves the mouse over a different column (one that doesn't contain the listbox), I would still want the foreground of the listbox to change. How can I go about doing this? A ValueConverter? Thanks. Chris
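
    One hedged approach, assuming this is the WPF DataGrid and that the row style sets Foreground on the DataGridRow itself: give the nested ListBox an ItemContainerStyle whose Foreground is bound up to the containing row, so the items repaint whenever the row's trigger fires. This is only a sketch; the method and control names are placeholders.

        // Assumed usings: System.Windows, System.Windows.Controls, System.Windows.Data
        void ApplyRowForegroundToListBox(ListBox nestedListBox)
        {
            var itemStyle = new Style(typeof(ListBoxItem));
            itemStyle.Setters.Add(new Setter(Control.ForegroundProperty,
                new Binding("Foreground")
                {
                    // Walk up the visual tree to the DataGridRow that owns this cell.
                    RelativeSource = new RelativeSource(RelativeSourceMode.FindAncestor, typeof(DataGridRow), 1)
                }));
            nestedListBox.ItemContainerStyle = itemStyle;
        }

    The same binding could live in XAML as the ListBox's ItemContainerStyle; the point is simply that the items inherit the row's current brush instead of defining their own mouse-over style.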

    Read the article

  • Dynamically add columns to a listbox

    - by mgroves
    I'm brand new to Windows Phone 7 development, and almost equally new to Silverlight. I have a ListBox with a DataTemplate, StackPanel, and TextBlocks like so:

        <ListBox Height="355" HorizontalAlignment="Left" Margin="6,291,0,0" Name="detailsList"
                 VerticalAlignment="Top" Width="474" Background="#36FFFFFF">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel Orientation="Horizontal">
                        <TextBlock Width="50" Text="{Binding Ticker}" />
                        <TextBlock Width="50" Text="{Binding Price}" />
                        <TextBlock Width="50" Text="{Binding GainLoss}" />
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    I have some C# code to populate it:

        var info = new List<StockInfo>();
        info.Add(new StockInfo { Ticker = "C", Price = "5.18", GainLoss = "10" });
        info.Add(new StockInfo { Ticker = "WEN", Price = "4.18", GainLoss = "12" });
        info.Add(new StockInfo { Ticker = "CBB", Price = "5.22", GainLoss = "210" });
        detailsList.ItemsSource = info;

    And that all works fine. My question is: how do I add or remove TextBlock "columns" in the ListBox dynamically, at runtime? Also, how do I put column headers on the list box?
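
    One hedged sketch of the dynamic part, assuming Silverlight/WP7, where a DataTemplate cannot be composed directly in code: build the template as a XAML string for whichever properties should be shown and load it with XamlReader. The helper name and the extra Volume property are invented for illustration.

        // Assumed usings: System.Collections.Generic, System.Text,
        //                 System.Windows, System.Windows.Controls, System.Windows.Markup
        static DataTemplate BuildRowTemplate(IEnumerable<string> propertyNames)
        {
            var sb = new StringBuilder();
            sb.Append("<DataTemplate xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\">");
            sb.Append("<StackPanel Orientation=\"Horizontal\">");
            foreach (string name in propertyNames)
            {
                // One TextBlock per "column", bound by property name.
                sb.Append("<TextBlock Width=\"50\" Text=\"{Binding ").Append(name).Append("}\" />");
            }
            sb.Append("</StackPanel></DataTemplate>");
            return (DataTemplate)XamlReader.Load(sb.ToString());
        }

        // Usage, after deciding at runtime which fields to show:
        // detailsList.ItemTemplate = BuildRowTemplate(new[] { "Ticker", "Price", "GainLoss", "Volume" });

    ListBox itself has no notion of column headers, so a header would have to be a separate horizontal StackPanel (or Grid) placed above the list with matching widths.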

    Read the article

  • [Cocoa] Binding CoreData Managed Object to NSTextFieldCell subclass

    - by ndg
    I have an NSTableView whose first column is set to contain a custom NSTextFieldCell. My custom NSTextFieldCell needs to allow the user to edit a "desc" property of my managed object, but also to display an "info" string that it contains (which is not editable). To achieve this, I followed this tutorial. In a nutshell, the tutorial suggests editing your managed object's generated subclass to create and pass a dictionary of its contents to your NSTableColumn via bindings. This works well for read-only NSCell implementations, but I'm looking to subclass NSTextFieldCell so the user can edit the "desc" property of my managed object. To do this, I followed one of the article's comments, which suggests subclassing NSFormatter to state explicitly which managed object property you would like the NSTextFieldCell to edit. Here's the suggested implementation:

        @implementation TRTableDescFormatter

        - (BOOL)getObjectValue:(id *)anObject forString:(NSString *)string errorDescription:(NSString **)error
        {
            if (anObject != nil) {
                *anObject = [NSDictionary dictionaryWithObject:string forKey:@"desc"];
                return YES;
            }
            return NO;
        }

        - (NSString *)stringForObjectValue:(id)anObject
        {
            if (![anObject isKindOfClass:[NSDictionary class]])
                return nil;
            return [anObject valueForKey:@"desc"];
        }

        - (NSAttributedString *)attributedStringForObjectValue:(id)anObject withDefaultAttributes:(NSDictionary *)attrs
        {
            if (![anObject isKindOfClass:[NSDictionary class]])
                return nil;
            NSAttributedString *anAttributedString = [[NSAttributedString alloc] initWithString:[anObject valueForKey:@"desc"]];
            return anAttributedString;
        }

        @end

    I assign the NSFormatter subclass to my cell in my NSTextFieldCell subclass, like so:

        - (void)awakeFromNib
        {
            TRTableDescFormatter *formatter = [[[TRTableDescFormatter alloc] init] autorelease];
            [self setFormatter:formatter];
        }

    This seems to work, but is extremely patchy. On occasion, clicking to edit a row will cause its value to nullify. On other occasions, the value you enter on one row will populate other rows in the table. I've been doing a lot of reading on this subject and would really like to get to the bottom of it. What's more frustrating is that my NSTextFieldCell renders exactly how I would like it to; this editing issue is my last obstacle! If anyone can help, that would be greatly appreciated.

    Read the article

  • How to prevent the LINQ to SQL designer from undoing my changes

    - by anonim.developer
    Dear all, thanks for your attention in advance. I've run into an issue with the LINQ to SQL designer in VS 2008 SP1 which has made me crazy. I use LINQ to SQL as my DAL. It seems LINQ to SQL speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all the entity classes generated by the LINQ to SQL designer inherit from:

        public abstract class Entity
        {
            public virtual Guid ID { get; protected set; }
        }

        public partial class User : monius.Data.Entity
        {
        }

    And the following is generated by the L2S designer (DataModel.designer.cs):

        [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "UniqueIdentifier NOT NULL",
                IsPrimaryKey = true, IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)]
        [DataMember(Order = 1)]
        public System.Guid ID
        {
            get
            {
                return this._ID;
            }
            set
            {
                if ((this._ID != value))
                {
                    this.OnIDChanging(value);
                    this.SendPropertyChanging();
                    this._ID = value;
                    this.SendPropertyChanged("ID");
                    this.OnIDChanged();
                }
            }
        }

    When I compile the code, VS warns me:

        Warning 1  'User.ID' hides inherited member 'Entity.ID'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword.

    The warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to

        [...] public override System.Guid ID { ... protected set ... }

    and then the code compiles with no error or warning and everyone is happy. But that is not the end of the story. As soon as I make a change to the entities in the diagram (dbml), or even just open the dbml file to view it, every change I made manually to the designer file vanishes and, poof, I have to redo it again. That is a painful job. Now I wonder if there is a way to force the L2S designer not to change portions of the auto-generated code. I'd appreciate it if someone could kindly help me with this issue.
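
    One commonly suggested direction, sketched here under the assumption that the Entity base class is yours to change: leave the generated ID property alone (hand edits to DataModel.designer.cs will always be regenerated) and express the shared contract through an interface instead of a base-class property that collides with the generated one. The interface name is made up.

        // The designer keeps generating "public System.Guid ID { get; set; }" on the other
        // half of the partial class, and that generated property already satisfies this interface.
        public interface IEntity
        {
            Guid ID { get; }
        }

        // The base class keeps shared behaviour but no longer redeclares ID,
        // so nothing in the generated file has to be edited by hand.
        public abstract class Entity
        {
            // common helpers, validation, etc.
        }

        public partial class User : Entity, IEntity
        {
        }

    Code that previously accepted an Entity and read entity.ID would accept an IEntity instead.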

    Read the article

  • register device at run time

    - by user177893
    In the App ID section of the Program Portal, locate the App ID you wish to use with the Apple Push Notification service. Only App IDs with a specific bundle ID can be used with APNs; you cannot use a "wild-card" application ID. You must see "Available" under the Apple Push Notification service column to register this App ID and configure a certificate for it.
    1. Click the 'Configure' link next to your desired App ID.
    2. In the Configure App ID page, check the Enable Push Notification Services box and click the Configure button. Clicking this button launches the APNs Assistant, which guides you through the next series of steps that create your App ID specific Client SSL certificate.
    3. Download the Client SSL certificate file to your download location. Navigate to that location and double-click the certificate file (which has a .cer extension) to install it in your keychain; double-clicking the file launches Keychain Access. When you are finished, click Done in the APNs Assistant. Make sure you install the certificate in your login keychain on the computer you are using for provider development. The APNs SSL certificate should be installed on your notification server.
    4. When you finish these steps you are returned to the Configure App ID page of the iPhone Dev Center portal. The certificate should be badged with a green circle and the label "Enabled".
    5. To complete the APNs set-up process, you will need to create a new provisioning profile containing your APNs-enabled App ID.
    Is it possible to do these steps through code?

    Read the article

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances:
    InstanceA - Production server
    InstanceB - Test server
    My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the schema relationship between the instances looks like this:
    InstanceA - Schema Version 1.5
    InstanceB - Schema Version 1.6 (new version being tested)
    An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To do this, I take database backups of InstanceA and apply (restore) them to InstanceB. My question is, how does the schema version affect the restore process? I know I can do this:
    Backup InstanceA - Schema Version 1.5
    Restore to InstanceB - Schema Version 1.5
    But can I do this?
    Backup InstanceA - Schema Version 1.5
    Restore to InstanceB - Schema Version 1.6 (new version being tested)
    If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 only by an altered stored proc, I imagine that this type of schema change shouldn't affect the restore. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by having a different table definition (say, an additional column), I imagine this would affect the restore. I hope I've made this clear enough. Thanks in advance for any input!

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:
    1. Kill the Cassandra process on one server of the cluster
    2. Start it again, wait for the commit log to be written to disk, and kill it again
    3. Make the modifications in the storage.xml file
    4. Rename or delete the files in the data directories according to the changes we made
    5. Start Cassandra
    6. Go to 1 with the next server on the list
    My questions are: Did I understand the process correctly? Is there any risk of data corruption? During the process there will be servers with different versions of the storage.xml file in the same cluster and the same keyspace; is that a problem? Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter or transform an existing column family into a super one; or do we need to change the name? Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • Core Data @sum aggregate

    - by nasim
    I am getting an exception when I try to use @sum on a column in an iPhone Core Data application. My two models are the following. Task model:

        @interface Task : NSManagedObject {
        }
        @property (nonatomic, retain) NSString * taskName;
        @property (nonatomic, retain) NSSet * completion;
        @end

        @interface Task (CoreDataGeneratedAccessors)
        - (void)addCompletionObject:(NSManagedObject *)value;
        - (void)removeCompletionObject:(NSManagedObject *)value;
        - (void)addCompletion:(NSSet *)value;
        - (void)removeCompletion:(NSSet *)value;
        @end

    Completion model:

        @interface Completion : NSManagedObject {
        }
        @property (nonatomic, retain) NSNumber * percentage;
        @property (nonatomic, retain) NSDate * time;
        @property (nonatomic, retain) Task * task;
        @end

    And here is the fetch:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:context];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"taskName" ascending:YES];
        request.sortDescriptors = [NSArray arrayWithObject:sortDescriptor];
        NSError *error;
        NSArray *results = [context executeFetchRequest:request error:&error];
        NSArray *parents = [results valueForKeyPath:@"taskName"];
        NSArray *children = [results valueForKeyPath:@"completion.@sum.percentage"];
        NSLog(@"%@ %@", parents, children);
        [request release];
        [sortDescriptor release];

    The exception is thrown at the fourth line from the bottom. The thrown exception is:

        *** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30
        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
            reason: '*** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30'

    I would very much appreciate any kind of help. Thanks.

    Read the article

  • CakePHP - hasMany not fetching?

    - by Paolo Bergantino
    Maybe I am just having a slow day, but for the life of me I can't figure out why this is happening. I haven't done CakePHP in a while and I am trying to use the 1.3 version, but this doesn't seem to be working... I have two models:

    area.php

        <?php
        class Area extends AppModel {
            var $name = 'Area';
            var $useTable = 'OR_AREA';
            var $primaryKey = 'A_ID';
            var $belongsTo = array(
                'Building' => array(
                    'className' => 'Building',
                    'foreignKey' => 'FK_B_ID',
                ),
                'Facility' => array(
                    'className' => 'Facility',
                    'foreignKey' => 'FK_F_ID',
                ),
                'System' => array(
                    'className' => 'System',
                    'foreignKey' => 'FK_S_ID',
                )
            );
        }
        ?>

    building.php

        <?php
        class Building extends AppModel {
            var $name = 'Building';
            var $useTable = 'OR_BLDG';
            var $primaryKey = 'B_ID';
            var $hasMany = array(
                'Area' => array(
                    'className' => 'Area',
                    'foreignKey' => 'FK_B_ID',
                )
            );
        }
        ?>

    OR_AREA has a column titled FK_B_ID that refers to B_ID. If I run something like:

        $this->Building->find('all', array('recursive' => 2));

    I get empty [Area] arrays for all the Buildings, even though there are plenty of Areas in the OR_AREA table that are associated with buildings. Not only that, the SQL query log doesn't even show that CakePHP attempted to fetch anything other than all the records in OR_BLDG. All the more puzzling, if I do:

        $this->Area->find('all');

    I get all the Areas, and the [Building] arrays are populated where appropriate. What am I missing?

    Read the article

  • MySQL: Complex Join Statement involving two tables and a third correlation table

    - by Stephen
    I have two tables that were built for two disparate systems. I have records in one table (called "leads") that represent customers, and records in another table (called "manager") that are the exact same customers, but "manager" uses different fields (for example, "leads" contains an email address, and "manager" contains two fields for two different emails, either of which might be the email from "leads"). So, I've created a correlation table that contains the lead_id and manager_id; currently this correlation table is empty. I'm trying to query the "leads" table to give me records that match either "manager" email field against the single "leads" email field, while at the same time ignoring leads that have already been added to the correlation table (this way I can see how many matching leads have not yet been correlated). Here's my current, invalid SQL attempt:

        SELECT leads.id, manager.id
        FROM leads, manager
        LEFT OUTER JOIN correlation ON correlation.lead_id = leads.id
        WHERE correlation.id IS NULL
          AND leads.project != "someproject"
          AND (manager.orig_email = leads.email OR manager.dest_email = leads.email)
          AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59'
        ORDER BY leads.created ASC;

    I get the error: Unknown column 'leads.id' in 'on clause'. Before you wonder: there are records in the "leads" table where leads.project != "someproject" and leads.created falls between those dates; I've included those additional parameters for completeness.

    Read the article

  • How to extract a 2x2 submatrix from a bigger matrix

    - by ZaZu
    Hello, I am a very basic user and do not know much about the commands used in C, so please bear with me... I can't use very complicated code. I have some knowledge of the stdio.h and ctype.h libraries, but that's about it. I have a matrix in a txt file and I want to load the matrix based on my input of the number of rows and columns. For example, I have a 5 by 5 matrix in the file and I want to extract a specific 2 by 2 submatrix; how can I do that? I created a nested loop using:

        FILE *sample;
        sample = fopen("randomfile.txt", "r");
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                fscanf(sample, "%f", &matrix[i][j]);
            }
            fscanf(sample, "\n", &matrix[i][j]);
        }
        fclose(sample);

    Sadly the code does not work. If I have this matrix:

        5.00 4.00 5.00 6.00
        5.00 4.00 3.00 25.00
        5.00 3.00 4.00 23.00
        5.00 2.00 352.00 6.00

    and input 3 for rows and 3 for columns, I get:

        5.00 4.00 5.00
        6.00 5.00 4.00
        3.00 25.00 5.00

    Not only is this not a 2 by 2 submatrix, but even if I wanted the first 3 rows and first 3 columns, it is not printing them correctly... I need to start at row 3 and column 3, then take the 2 by 2 submatrix, so I should have ended up with:

        4.00 23.00
        352.00 6.00

    I heard that I can use fgets and sscanf to accomplish this. Here is my trial code:

        fgets(garbage, 1, fin);
        sscanf(garbage, "\n");

    But this doesn't work either :( What am I doing wrong? Please help. Thanks!

    Read the article

  • Issue in parsing the GridViewRows in a Telerik RadGridView

    - by cricketmovies
    Hi, I would like to do something similar to what we do in ASP.NET, where we loop through all the rows in a GridView and assign a particular value to a particular cell in the row whose TaskId matches the current Id. This has to happen in the Tick handler of a DispatcherTimer, since I have a Start Timer button column for every row in the grid. When a particular row's Start Timer button is clicked, I have to start its timer and display the time in a cell of that row; similarly, there can be multiple timers running in parallel. For this I need to be able to check the task Id of each task and keep updating the cell values with the updated time for all of the tasks whose timer has been started.

        TimeSpan TimeRemaining = somevalue;
        string CurrentTaskId = "100";
        foreach (GridViewRow row in RadGridView1.Rows) // Here I tried RadGridView1.ChildrenOfType() as well, but it is null
        {
            if ((row.DataContext as Task).TaskId == CurrentTaskId)
                row.Cells[2].Content = a.TaskTimeRemaining.ToString();
        }

    Can someone please let me know how to get this functionality using the Telerik RadGridView? Cheers, Syed.
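
    A hedged alternative that avoids walking the visual rows (a sketch, not Telerik-specific API guidance): keep the remaining time on the Task object itself, raise INotifyPropertyChanged from the DispatcherTimer tick, and let the column bound to TaskTimeRemaining refresh its own cells. The class shape and the TaskCountdown helper below are assumptions for illustration.

        // Assumed usings: System, System.Collections.Generic, System.ComponentModel
        public class Task : INotifyPropertyChanged
        {
            public string TaskId { get; set; }

            private TimeSpan timeRemaining;
            public TimeSpan TaskTimeRemaining
            {
                get { return timeRemaining; }
                set
                {
                    timeRemaining = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("TaskTimeRemaining"));
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

        public class TaskCountdown
        {
            private readonly List<Task> runningTasks = new List<Task>();

            public void Start(Task task) { runningTasks.Add(task); }

            // Call this from the DispatcherTimer's Tick handler; bound cells refresh themselves,
            // so no per-row cell lookup is needed no matter how many timers are running.
            public void Tick()
            {
                foreach (Task t in runningTasks)
                    t.TaskTimeRemaining = t.TaskTimeRemaining.Subtract(TimeSpan.FromSeconds(1));
            }
        }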

    Read the article

  • question about MySQL database migration

    - by WilliamLou
    Hi there: I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. Of course, the migration I mean here involves changes to some database tables, for example: adding some new columns to several tables, adding some new tables, etc. Now, the only method I can think of is to use a PHP or Python script (the two languages I know) to connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column should have the default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Is there any tool, or a better method, than writing a script myself? Here, I don't need to worry about multithreaded writing problems, etc.; the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!

    Read the article

  • Combining aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats(). I want to create a domain-specific query language (DSQL) on top of my DB, to facilitate using a domain language to query it. The 'language' consists of boolean expressions (or, more specifically, SQL-like criteria) which I then 'translate' back into pure (ANSI) SQL and send to the underlying DB. The following lines are examples of what the language statements will look like, and hopefully they will help clarify the concept:

        Example 1
        DQL statement:  foobar('yellow') between 1 and 3 and fredstats('weight') > 42
        Translation:    fetch all rows in the underlying table where the computed value of aggregate function foobar() is between 1 and 3 AND the computed value of aggregate function fredstats() is greater than 42

        Example 2
        DQL statement:  fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42
        Translation:    fetch all rows where the specified criteria match

        Example 3
        DQL statement:  foobar('green') / foobar('red') <> 42
        Translation:    fetch all rows where the specified criteria match

        Example 4
        DQL statement:  foobar('green') - foobar('red') >= 42
        Translation:    fetch all rows where the specified criteria match

    Given the following information: the table against which the queries above are executed is called 'tbl'; table 'tbl' has the structure (id int, name varchar(32), weight float); and the result set returns only tbl.id, tbl.name and the names of the aggregate functions as columns, so, for example, the foobar() aggregate-function column will be called foobar in the result set. The first DQL query will therefore return a result set with the columns: id, name, foobar, fredstats.

    Given the above, my questions are:
    1. What would be the underlying SQL required for Example 1?
    2. What would be the underlying SQL required for Example 3?
    3. Given an algebraic equation comprising aggregate functions, is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)?

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.
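
    A possible translation pattern, sketched under assumptions the question leaves open (namely that each result row corresponds to a GROUP BY id, name group and that foobar(), fredstats(), etc. are user-defined aggregates): predicates containing no aggregate go into WHERE, while any comparison involving an aggregate moves into HAVING, with the aggregates repeated in the select list so they appear as columns. A minimal C# emitter illustrating this for Example 1 might look like the following; the method and variable names are illustrative only.

        // Sketch only: emit ANSI SQL for a DSQL predicate made of aggregate comparisons.
        // Assumes the DSQL expression references only aggregates, so it can be copied
        // verbatim into the HAVING clause.
        static string TranslateToSql(string[] selectedAggregates, string dsqlPredicate)
        {
            string aggColumns = string.Join(", ", selectedAggregates);
            return "SELECT t.id, t.name, " + aggColumns + " " +
                   "FROM tbl t " +
                   "GROUP BY t.id, t.name " +
                   "HAVING " + dsqlPredicate;
        }

        // Example 1:
        // TranslateToSql(
        //     new[] { "foobar('yellow') AS foobar", "fredstats('weight') AS fredstats" },
        //     "foobar('yellow') BETWEEN 1 AND 3 AND fredstats('weight') > 42");

    Example 3 would follow the same shape, with the ratio expression appearing in the HAVING clause.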

    Read the article

  • rails: has_many :through validation?

    - by ramonrails
    Rails 2.1.0 (cannot upgrade for now due to several constraints). I am trying to achieve this; any hints?
    - A project has many users through a join model
    - A user has many projects through a join model
    - The Admin class inherits from the User class; it also has some Admin-specific stuff
    - Admin-like inheritance for Supervisor and Operator
    - A project has one Admin, one Supervisor and many Operators
    Now I want to:
    1. submit data for the project, admin, supervisor and operator in a single project form
    2. validate them all and show errors on the project form
    The models:

        Project has_many :projects_users ; has_many :users, :through => :projects_users
        User has_many :projects_users ; has_many :projects, :through => :projects_users
        ProjectsUser = :id integer, :user_id :integer, :project_id :integer, :user_type :string
        ProjectUser belongs_to :project, belongs_to :user
        Admin < User # User has 'type:string' column for STI
        Supervisor < User
        Operator < User

    Is the approach correct? Any and all suggestions are welcome.

    Read the article

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried plain inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that rules are ignored. (It was also having difficulties with the column order and date format; it said that '1984-07-1' was not a valid integer, which is true, but a bit unexpected.) Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement (
          id bigserial NOT NULL,
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying
        )
        WITH (
          OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
        ON INSERT TO climate.measurement
        WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1
        DO INSTEAD
        INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
        VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data in any format. I am looking for something that won't take four days. I originally had the data in MySQL (and still do), but I am hoping to get a performance increase by switching to PostgreSQL, and I am eager to use its PL/R extensions for stats. I was also thinking about using http://pgbulkload.projects.postgresql.org/. Any help, tips, or guidance would be greatly appreciated. Thank you!

    Read the article

  • Show last 4 table entries mysql php

    - by user272899
    I have a movie database, kind of like a blog, and I want to display the 4 most recently created entries. I have a timestamp column in my table called 'dateadded'. Using the code below, how would I display only the 4 most recent entries from the table?

        <?php
        // connect to database
        mysql_connect($mysql_hostname, $mysql_user, $mysql_password);
        @mysql_select_db($mysql_database) or die("<b>Unable to connect to specified database</b>");

        // query database
        $query = "SELECT * FROM movielist";
        $result = mysql_query($query) or die('Error, insert query failed');

        $row = 0;
        $numrows = mysql_num_rows($result);
        while ($row < $numrows) {
            $id = mysql_result($result, $row, "id");
            $imgurl = mysql_result($result, $row, "imgurl");
            $imdburl = mysql_result($result, $row, "imdburl");
        ?>
        <div class="moviebox rounded"><a href="http://<?php echo $domain; ?>/viewmovie?movieid=<?php echo $id; ?>" rel="facebox">
            <img src="<?php echo $imgurl; ?>" />
            <form method="get" action="">
                <input type="text" name="link" class="link" style="display:none" value="http://us.imdb.com/Title?<?php echo $imdburl; ?>"/>
            </form>
        </a></div>
        <?php
            $row++;
        }
        ?>

    Read the article

  • Drill through table does not show correct count when used with a dimension having parent child hierarchy

    - by Arun Singhal
    Hi all, I have a dimension with a parent-child hierarchy, as shown in the code block below. The issue I am facing is that if I have a filter on the parent-child dimension, the drill-through table does not show the filtered data; instead it shows all the data for that dimension. Here is an example.

        <Dimension type="StandardDimension" name="page_type_d" caption="Page Type">
          <Hierarchy name="page_type_h" hasAll="true" allMemberName="all_page_types"
                     allMemberCaption="All Page Types" primaryKey="id">
            <Table name="npg_page_type_view" alias="pt">
            </Table>
            <Level name="Page Type" column="id" nameColumn="display_name" parentColumn="parent_id"
                   nullParentValue="0" type="Integer" uniqueMembers="true" levelType="Regular"
                   hideMemberIf="Never" caption="Page Type">
              <Closure parentColumn="parent_id" childColumn="page_type_id">
                <Table name="dim_page_types_closure">
                </Table>
              </Closure>
            </Level>
          </Hierarchy>
        </Dimension>

    Now suppose I have 4 rows in the npg_page_type_view table:

        id   display_name   parent_id
        19   HTML           100
        20   PDF            100
        21   XML            0
        100  Total          0

    And suppose that in my fact table I have the following records:

        id   count
        19   2
        20   3
        21   1

    The following is my analysis view:

        Total (HTML and PDF) - 5
        HTML - 2
        PDF - 3
        XML - 1

    Now if I add a filter (say, Total) to this analysis view using the OLAP cube, my analysis view shows:

        Total (HTML and PDF) - 5

    Up to this point everything works fine. But if I click on the 5 (to view the drill-through table), it shows me data for all page types, i.e. HTML, PDF and XML, whereas per the filter it should show only HTML and PDF. Is this an existing issue, or am I doing something wrong here? Please help me.

    Read the article

  • DataGrid: dynamic DataTemplate for dynamic DataGridTemplateColumn

    - by Lukas Cenovsky
    I want to show data in a DataGrid where the data is a collection of

        public class Thing
        {
            public string Foo { get; set; }
            public string Bar { get; set; }
            public List<Candidate> Candidates { get; set; }
        }

        public class Candidate
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            ...
        }

    and the number of candidates in the Candidates list varies at runtime. The desired grid layout looks like this:

        Foo | Bar | Candidate 1 | Candidate 2 | ... | Candidate N

    I'd like to have a DataTemplate for each Candidate, as I plan to change it at runtime: the user can choose what information about a candidate is displayed in the different columns (Candidate is just an example; my real object is different). That means I also want to change the column templates at runtime, although this can be achieved with one big template and collapsing parts of it. I know two ways to achieve my goals (both quite similar):
    1. Use the AutoGeneratingColumn event and create the Candidates columns
    2. Add the Columns manually
    In both cases I need to load the DataTemplate from a string with XamlReader, and before that I have to edit the string to change the binding to the wanted Candidate. Is there a better way to create a DataGrid with an unknown number of DataGridTemplateColumns?
    Note: This question is based on dynamic datatemplate with valueconverter
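
    For what it's worth, a minimal sketch of option 2 (manual columns, WPF assumed): the per-candidate binding can be expressed with an indexed path such as Candidates[0].LastName, so the template string only needs the index substituted rather than being edited more heavily. Names like candidateCount and grid are placeholders.

        // Assumed usings: System.Windows, System.Windows.Controls, System.Windows.Markup
        void AddCandidateColumns(DataGrid grid, int candidateCount)
        {
            for (int i = 0; i < candidateCount; i++)
            {
                string xaml =
                    "<DataTemplate xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\">" +
                    "  <TextBlock Text=\"{Binding Candidates[" + i + "].LastName}\" />" +
                    "</DataTemplate>";

                // One DataGridTemplateColumn per candidate slot; the cell template is parsed from the string above.
                grid.Columns.Add(new DataGridTemplateColumn
                {
                    Header = "Candidate " + (i + 1),
                    CellTemplate = (DataTemplate)XamlReader.Parse(xaml)
                });
            }
        }

    Swapping what each column shows at runtime then amounts to regenerating the template string and reassigning CellTemplate.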

    Read the article

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking whether the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem, and I would like to do this for fun. OK, now that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of:
    - Break the zip code into 2 parts: the first 2 digits and the last 3 digits. Make a giant if-else statement that first checks the first 2 digits, then checks ranges within the last 3 digits. Or, convert the zips into hex and see if I can do the same thing using smaller groups.
    - Find out whether, within the range of all valid zip codes, there are more valid zip codes than invalid ones, and write the above code targeting the smaller group.
    - Break the hash up into separate files and load them via Ajax as the user types the zip code. So perhaps break it into 2 parts: one file for the first 2 digits, a second for the last 3.
    Lastly, I plan to generate the JavaScript files with another program, not by hand.
    Edit: Performance matters here. I do want to use this, if it doesn't suck. Performance means JavaScript code execution time plus download time.
    Edit 2: JavaScript-only solutions please. I don't have access to the application server; plus, that would make this into a whole other problem =)

    Read the article

  • Google App Engine Needs Index Error

    - by Andrew Johnson
    I am currently getting a "needs index" error on my App Engine app: http://www.gaiagps.com/wiki/home. I believe this index should have been created automatically by my index.yaml file (see below). Googling a bit, I think I just need to wait for my index to be built. Is this correct, or do I need to do something manually? Is there some sort of index-building queue? My tables are very, very small right now.
    EDIT: I added the line "indexes:" to my index.yaml, and now App Engine reports the index is building, so I think this is fixed. It's weird that this file was wrong considering I've never touched it.

        indexes:
        # AUTOGENERATED
        # This index.yaml is automatically updated whenever the dev_appserver
        # detects that a new type of query is run. If you want to manage the
        # index.yaml file manually, remove the above marker line (the line
        # saying "# AUTOGENERATED"). If you want to manage some indexes
        # manually, move them above the marker line. The index.yaml file is
        # automatically uploaded to the admin console when you next deploy
        # your application using appcfg.py.

        - kind: Revision
          properties:
          - name: name
          - name: created

    The app works on my dev server, but not in production. However, on my dev console, I have noticed this error (EDIT: this error is gone now that I added "indexes:" to the index.yaml file above):

        ERROR 2009-10-18 04:46:51,908 dev_appserver_index.py:176] Error parsing /gaiagps.com/index.yaml:
        'NoneType' object is not callable in "<string>", line 13, column 3:
            - kind: Revision
              ^

    Read the article

  • convert MsSql stored procedure to MySql

    - by karthik
    I need to convert the following MS SQL stored procedure to MySQL. I am new to MySQL; help needed.

        CREATE PROC InsertGenerator
        (@tableName varchar(100)) as

        --Declare a cursor to retrieve column specific information
        --for the specified table
        DECLARE cursCol CURSOR FAST_FORWARD FOR
        SELECT column_name,data_type FROM information_schema.columns
        WHERE table_name = @tableName
        OPEN cursCol

        DECLARE @string nvarchar(3000)     --for storing the first half of INSERT statement
        DECLARE @stringData nvarchar(3000) --for storing the data (VALUES) related statement
        DECLARE @dataType nvarchar(1000)   --data types returned for respective columns

        SET @string='INSERT '+@tableName+'('
        SET @stringData=''

        DECLARE @colName nvarchar(50)

        FETCH NEXT FROM cursCol INTO @colName,@dataType

        IF @@fetch_status<0
        begin
            print 'Table '+@tableName+' not found, processing skipped.'
            close curscol
            deallocate curscol
            return
        END

        WHILE @@FETCH_STATUS=0
        BEGIN
            IF @dataType in ('varchar','char','nchar','nvarchar')
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+'
            END
            ELSE if @dataType in ('text','ntext') --if the datatype is text or something else
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
            END
            ELSE IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
            BEGIN
                SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
            END
            ELSE IF @dataType='datetime'
            BEGIN
                SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
            END
            ELSE IF @dataType='image'
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
            END
            ELSE --presuming the data type is int,bit,numeric,decimal
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
            END

            SET @string=@string+@colName+','

            FETCH NEXT FROM cursCol INTO @colName,@dataType
        END

    Read the article

  • Association and model data saving problem

    - by Zhlobopotam
    Developing with CakePHP 1.3 (latest from GitHub). There are two models bound with hasAndBelongsToMany: documents and tags. In other words, a document can have many tags. I've added a new document submission form where the user can enter a list of tags separated by commas (a new tag will be added if it does not already exist). I looked at the CakePHP Bakery 2.0 source code on GitHub and found the solution, but something seems to be wrong:

        class Document extends AppModel {
            public $hasAndBelongsToMany = array('Tag');

            public function beforeSave($options = array()) {
                if (isset($this->data[$this->alias]['tags']) && !empty($this->data[$this->alias]['tags'])) {
                    $tagIds = $this->Tag->saveDocTags($this->data[$this->alias]['tags']);
                    unset($this->data[$this->alias]['tags']);
                    $this->data[$this->Tag->alias][$this->Tag->alias] = $tagIds;
                }
                return true;
            }
        }

        class Tag extends AppModel {
            public $hasAndBelongsToMany = array('Document');

            public function saveDocTags($commalist = '') {
                if ($commalist == '') return null;
                $tags = explode(',', $commalist);
                if (empty($tags)) return null;

                $existing = $this->find('all', array(
                    'conditions' => array('title' => $tags)
                ));
                $return = Set::extract($existing, '/Tag/id');
                if (sizeof($existing) == sizeof($tags)) {
                    return $return;
                }
                $existing = Set::extract($existing, '/Tag/title');
                foreach ($tags as $tag) {
                    if (!in_array($tag, $existing)) {
                        $this->create(array('title' => $tag));
                        $this->save();
                        $return[] = $this->id;
                    }
                }
                return $return;
            }
        }

    So, creating new tags works well, but the Document model can't save the association data and reports:

        SQL Error: 1054: Unknown column 'Array' in 'field list'
        Query: INSERT INTO documents (title, content, shortnfo, date, status) VALUES ('Document with tags', '', '', Array, 1)

    Any ideas how to solve this problem?

    Read the article

  • oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy';

    This returns the single result 'Testerson, Testy', 'N'. However, if I use the query:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy'
        and has_garnishment = 'Y';

    this returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (note: I removed most of the select clause for clarity, keeping only the parts of interest; in the full query all joined tables are required):

        SELECT e.fullname empname
             , NVL2(ded.has_garn, 'Y', 'N') has_garnishment
        FROM timecard tc
           , orderdetail od
           , orderassign oa
           , employee e
           , employee3 e3
           , customer10 c10
           , order_misc om
           , (SELECT COUNT(*) has_garn, v_ssn
              FROM deductions
              WHERE yymmdd_stop = 0
                 OR (LENGTH(yymmdd_stop) = 7 AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') sysdate)
              GROUP BY v_ssn
             ) ded
        WHERE oa.lrn(+) = tc.lrn_order
          AND om.lrn(+) = od.lrn
          AND od.orderno = oa.orderno
          AND e.ssn = tc.ssn
          AND c10.custno = tc.custno
          AND e.lrn = e3.lrn
          AND e.ssn = ded.v_ssn(+)

    One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual field on the deductions table. I am not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks.

    Read the article
