Search Results

Search found 7127 results on 286 pages for 'calculated columns'.

Page 89 of 286

  • Zend Table Relationship Modeling with Composite Key

    - by emeraldjava
    I have a table with a composite primary key using four columns.

      mysql> describe leaguesummary;
      +---------------------+------------------+------+-----+---------+----------------+
      | Field               | Type             | Null | Key | Default | Extra          |
      +---------------------+------------------+------+-----+---------+----------------+
      | leagueid            | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
      | leaguetype          | enum('I','T')    | NO   | PRI | NULL    |                |
      | leagueparticipantid | int(10) unsigned | NO   | PRI | NULL    |                |
      | leaguestandard      | int(10) unsigned | NO   |     | NULL    |                |
      | leaguedivision      | varchar(5)       | NO   | PRI | NULL    |                |
      | leagueposition      | int(10) unsigned | NO   |     | NULL    |                |
      +---------------------+------------------+------+-----+---------+----------------+

    I have the league object modelled as so (all plain enough mappings):

      <?php
      class Model_DbTable_League extends Zend_Db_Table_Abstract
      {
          protected $_name = 'league';
          protected $_primary = 'id';
          protected $_dependentTables = array('Model_DbTable_LeagueSummary');

    And I've started like this on the new model class. I've mapped a simple reference map which returns all rows linked to the league id.

      // http://files.zend.com/help/Zend-Framework/zend.db.table.relationships.html
      // http://naneau.nl/2007/04/21/a-zend-framework-tutorial-part-one/
      class Model_DbTable_LeagueSummary extends Zend_Db_Table_Abstract
      {
          protected $_name = "leaguesummary";
          protected $_primary = array('leagueid', 'leaguetype', 'leagueparticipantid', 'leaguedivision');
          protected $_referenceMap = array(
              'Summary' => array(
                  'columns'       => array('leagueid'),
                  'refTableClass' => 'Model_DbTable_League',
                  'refColumns'    => array('id')
              ),
              .....
          );
      }
      ?>

    The simple case works when called from my controller:

      public function listAction()
      {
          // action body
          $leagueTable = new Model_DbTable_League();
          $this->view->leagues = $leagueTable->getLeagues();
          $league = $leagueTable->getLeague(6); // works
          $summary = $league->findDependentRowset('Model_DbTable_LeagueSummary', 'Summary');
          Zend_Debug::dump($summary, "", true);

    I'm not sure how I can define extra _referenceMap keys which will take extra constraint key values. I would like to be able to define a set called 'MenA' in which the type and division values are hardcoded and the league id is taken from the initial rowset:

      'MenA' => array(
          'columns'       => array('leagueid', 'leaguetype', 'leaguedivision'),
          'refTableClass' => 'Model_DbTable_League',
          'refColumns'    => array("id", "I", "A")
      )

    Is this style of mapping possible, i.e. hardcoding the values into the 'refColumns'? The second crazy idea I had was to pass the variable values in as part of the third param of the findDependentRowset() method:

      $menA = $league->findDependentRowset('Model_DbTable_LeagueSummary', 'MenA', array("I", "A"));

    Any suggestions on how I might use the Zend DB Table Relationship mapping correctly to do this would be appreciated. I'm not interested in the plain, old and ugly $db->select(a,b,c)->where(..) style solution.
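
    One avenue worth checking, rather than hardcoding values into 'refColumns' (which the reference map does not support): newer Zend Framework 1.x releases let findDependentRowset() take a Zend_Db_Table_Select as an optional third argument, so the extra constraints can ride along with the plain 'Summary' rule. A rough sketch, assuming that signature is available in the ZF version in use:

      // Hedged sketch: reuse the declared 'Summary' rule (which supplies the
      // leagueid join) and narrow it with a select on the dependent table.
      $summaryTable = new Model_DbTable_LeagueSummary();
      $select = $summaryTable->select()
                             ->where('leaguetype = ?', 'I')
                             ->where('leaguedivision = ?', 'A');

      $menA = $league->findDependentRowset($summaryTable, 'Summary', $select);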

    Read the article

  • Replication - User defined table type not propagating to subscriber

    - by Aamod Thakur
    I created a user-defined table type named tvp_Shipment with two columns (id and name), generated a snapshot, and the user-defined table type was properly propagated to all the subscribers. I was using this TVP in a stored procedure and everything worked fine.

    Then I wanted to add one more column, created_date, to this table-valued parameter. I dropped the stored procedure (from replication too), dropped and recreated the user-defined table type with 3 columns, then recreated the stored procedure and enabled it for publication. When I generate a new snapshot, the changes in the user-defined table type are not propagated to the subscriber: the newly added column was not added to the subscription. The error messages:

      The schema script 'usp_InsertAirSa95c0e23_218.sch' could not be propagated to the subscriber.
      (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
      Get help: http://help/MSSQL_REPL-2147201001

      Invalid column name 'created_date'.
      (Source: MSSQLServer, Error number: 207)
      Get help: http://help/207

    Read the article

  • Populate DataGridViewComboBoxColumn at runtime

    - by ghiboz
    Hi all! I have this problem: I have a DataGridView that reads its data from a db, and I wish, for an integer column, to use a combobox to choose some values. I changed the column to the DataGridViewComboBoxColumn type and then, in the init of the form, do this:

      DataTable dt = new DataTable("dtControlType");
      dt.Columns.Add("f_Id");
      dt.Columns.Add("f_Desc");
      dt.Rows.Add(0, "none");
      dt.Rows.Add(1, "value 1");
      dt.Rows.Add(2, "value 2");
      dt.Rows.Add(3, "value 3");

      pControlType.DataSource = dt;
      pControlType.DataPropertyName = "pControlType";
      pControlType.DisplayMember = "f_Desc";
      pControlType.ValueMember = "f_Id";

    but when the program starts (after this code) this message appears:
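
    Whatever the exact message, a likely culprit, guessing from the snippet: DataTable.Columns.Add(string) creates string-typed columns, while the bound pControlType field holds integers, so the combo cell cannot match its value against the ValueMember and raises a DataError. A minimal sketch of the usual fix, declaring the column types explicitly (assuming the bound field really is an int):

      DataTable dt = new DataTable("dtControlType");
      dt.Columns.Add("f_Id", typeof(int));      // was string by default
      dt.Columns.Add("f_Desc", typeof(string));
      dt.Rows.Add(0, "none");
      dt.Rows.Add(1, "value 1");
      dt.Rows.Add(2, "value 2");
      dt.Rows.Add(3, "value 3");

      pControlType.DataSource = dt;
      pControlType.DataPropertyName = "pControlType";
      pControlType.DisplayMember = "f_Desc";
      pControlType.ValueMember = "f_Id";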

    Read the article

  • Counting the most tagged tag with MySQL

    - by Jack W-H
    Hi folks. My problem is that I'm trying to count which tag has been used most in a table of user-submitted code, but the problem is with the database structure. The current query I'm using is this:

      SELECT tag1, COUNT(tag1) AS counttag
      FROM code
      GROUP BY tag1
      ORDER BY counttag DESC
      LIMIT 1

    This is fine, except it only counts the most frequent occurrence of tag1 - and my database has 5 tags per post, so there are columns tag1, tag2, tag3, tag4, tag5. How do I get the highest-occurring tag value from all 5 columns in one query? Jack
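
    One way to do it in a single statement - a sketch that stacks the five tag columns with UNION ALL before counting (table and column names follow the question):

      SELECT tag, COUNT(*) AS counttag
      FROM (
          SELECT tag1 AS tag FROM code
          UNION ALL SELECT tag2 FROM code
          UNION ALL SELECT tag3 FROM code
          UNION ALL SELECT tag4 FROM code
          UNION ALL SELECT tag5 FROM code
      ) AS all_tags
      WHERE tag IS NOT NULL AND tag <> ''
      GROUP BY tag
      ORDER BY counttag DESC
      LIMIT 1;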

    Read the article

  • Migrate a Django project from MySQL to Oracle

    - by pablo
    Hi, I have a Django 1.1 project that works with a legacy MySQL db. I'm trying to migrate this project to Oracle (XE and 11g). We have two options for the migration: use SQL Developer to create a migration SQL script, or use Django fixtures. The schema created with the SQL script from SQL Developer doesn't match the schema created from syncdb. For example, Django expects TIMESTAMP columns while SQL Developer creates DATE columns. Using syncdb with Django fixtures could be great, but when trying to load the MySQL fixtures into Oracle, after using syncdb, I'm getting:

      IntegrityError: ORA-00001: unique constraint (USER.SYS_C004253) violated

    How can I find what part creates the integrity error? Thanks
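
    To see which table and column that system-generated constraint name actually covers, the Oracle data dictionary can be queried directly (run as the owning schema user; this is a generic lookup, not specific to Django):

      SELECT c.constraint_name, c.constraint_type, c.table_name, cc.column_name
      FROM   user_constraints c
      JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
      WHERE  c.constraint_name = 'SYS_C004253';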

    Read the article

  • CSS float column extending over footer

    - by JD
    Hi, I am having a problem with my CSS whereby the right-hand column in a 2-column layout extends beyond the footer. I have tried playing with the clear: both; property but I cannot get it to work. The second column has the id column2; both columns use the class column. The footer html has the id footerWrapper. Both columns and the footer are div tags. My CSS (abridged):

      .column {
          width: 49%;
      }

      #column2 {
          width: 50%;
          position: absolute;
          top: 0px;
          margin-left: 50%;
          float: left;
      }

      #footerWrapper {
          background-color: #333333;
          border-top: 2px #FF6600 solid;
          color: #666;
      }
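
    A guess at the cause: position: absolute takes #column2 out of the normal flow, so the footer ignores its height and clear: both has nothing to clear against. A minimal sketch of one fix - float both columns instead of absolutely positioning the second, and clear the footer (selectors follow the question; the exact widths and margin are placeholders):

      .column {
          float: left;
          width: 49%;
      }

      #column2 {
          margin-left: 1%;   /* no position: absolute, no top */
      }

      #footerWrapper {
          clear: both;
          background-color: #333333;
          border-top: 2px #FF6600 solid;
          color: #666;
      }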

    Read the article

  • How to copy newly added document with metadata to another document library?

    - by James
    I need to copy the item that the user just added (for example, myresume.doc or financial.xls) together with its metadata (the doc lib obtains the columns from a content type, the content type obtains the columns from site columns) into a folder called "NativeFile". Every doc library has this folder. I know ItemAdded can be used, but then I heard ItemAdded fires before the user has a chance to complete the metadata for the item they just added. What are my options? (New to SharePoint, so some sample code would greatly help, or some good link on a similar issue.) SharePoint 2007 - ItemAdded, ItemAdding, ItemUpdating or ItemUpdated....

    Read the article

  • SQL Query Help Part 2 - Add filter to joined tables and get max value from filter

    - by Seth
    I asked this question on SO. However, I wish to extend it further. I would like to find the max value of the 'Reading' column only where the 'state' is of value 'XX', for example. So if I join the two tables, how do I get the row with the max(Reading) value from the result set? E.g.:

      SELECT s.*, g1.*
      FROM Schools AS s
      JOIN Grades AS g1 ON g1.id_schools = s.id
      WHERE s.state = 'SA'
      -- how do I get the row with the max(Reading) column from this result set?

    The table details are:

      Table1 = Schools
      Columns: id (PK), state (nvarchar(100)), schoolname

      Table2 = Grades
      Columns: id (PK), id_schools (FK), Year, Reading, Writing...
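
    If only the single highest-Reading row is needed from the filtered result set, one sketch (T-SQL syntax is assumed here, since the column types look like SQL Server; on MySQL swap TOP 1 for LIMIT 1):

      SELECT TOP 1 s.*, g1.*
      FROM Schools AS s
      JOIN Grades  AS g1 ON g1.id_schools = s.id
      WHERE s.state = 'SA'
      ORDER BY g1.Reading DESC;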

    Read the article

  • PL/SQL bulk collect into associative array with sparse key

    - by Dan
    I want to execute a SQL query inside PL/SQL and populate the results into an associative array, where one of the columns in the SQL becomes the key in the associative array. For example, say I have a table Person with columns

      PERSON_ID    INTEGER PRIMARY KEY
      PERSON_NAME  VARCHAR2(50)

    ...and values like:

      PERSON_ID | PERSON_NAME
      ------------------------
              6 | Alice
             15 | Bob
           1234 | Carol

    I want to bulk collect this table into a TABLE OF VARCHAR2(50) INDEX BY INTEGER such that the key 6 in this associative array has the value Alice and so on. Can this be done in PL/SQL? If so, how?
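
    BULK COLLECT always fills indexes 1..n, so there is no single-statement form that keys the collection by a column value; a common pattern (a sketch, reusing the question's names) is to bulk collect into a densely indexed collection first and then re-key it in a loop:

      DECLARE
          TYPE name_by_id_t IS TABLE OF person.person_name%TYPE INDEX BY PLS_INTEGER;
          names_by_id  name_by_id_t;

          TYPE person_tab_t IS TABLE OF person%ROWTYPE;
          l_people     person_tab_t;
      BEGIN
          SELECT * BULK COLLECT INTO l_people FROM person;

          -- re-key on PERSON_ID, producing the sparse associative array
          FOR i IN 1 .. l_people.COUNT LOOP
              names_by_id(l_people(i).person_id) := l_people(i).person_name;
          END LOOP;
      END;
      /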

    Read the article

  • Extjs - Getting more from the server

    - by fatnjazzy
    Hi, is it possible to get every piece of data from the server? For example, I want to get the column items from the server via Ajax/proxy by sending a JSON string. Thanks

      var grid = new Ext.grid.GridPanel({
          store: store,
          columns: [
              {id: 'company', header: 'Company', width: 160, sortable: true, dataIndex: 'company'},
              {header: 'Price', width: 75, sortable: true, renderer: 'usMoney', dataIndex: 'price'},
              {header: 'Change', width: 75, sortable: true, renderer: change, dataIndex: 'change'},
              {header: '% Change', width: 75, sortable: true, renderer: pctChange, dataIndex: 'pctChange'},
              {header: 'Last Updated', width: 85, sortable: true, renderer: Ext.util.Format.dateRenderer('m/d/Y'), dataIndex: 'lastChange'}
          ],
          stripeRows: true,
          autoExpandColumn: 'company',
          height: 350,
          width: 600,
          title: 'Array Grid',
          stateful: true,
          stateId: 'grid'
      });

    Read the article

  • Building a QueryExpression where name field is either A or B

    - by Mike
    I'm trying to build a Dynamics CRM 4 query so that I can get calendar events that are named either "Event A" or "Event B". A QueryByAttribute doesn't seem to do the job, as I cannot specify a condition where the field called "event_name" = "Event A" or "event_name" = "Event B". When using the QueryExpression, I've found the FilterExpression applies to the referencing entity. I don't know if the FilterExpression can be used on the referenced entity at all. The example below is something like what I want to achieve, though this would return an empty result set as it will go looking in the entity called "my_event_response" for a "name" attribute. It's starting to look like I will need to run several queries to get this, but this is less efficient than if I can submit it all at once.

      ColumnSet columns = new ColumnSet();
      columns.Attributes = new string[] { "event_name", "eventid", "startdate", "city" };

      ConditionExpression eventname1 = new ConditionExpression();
      eventname1.AttributeName = "event_name";
      eventname1.Operator = ConditionOperator.Equal;
      eventname1.Values = new string[] { "Event A" };

      ConditionExpression eventname2 = new ConditionExpression();
      eventname2.AttributeName = "event_name";
      eventname2.Operator = ConditionOperator.Equal;
      eventname2.Values = new string[] { "Event B" };

      FilterExpression filter = new FilterExpression();
      filter.FilterOperator = LogicalOperator.Or;
      filter.Conditions = new ConditionExpression[] { eventname1, eventname2 };

      LinkEntity link = new LinkEntity();
      link.LinkCriteria = filter;
      link.LinkFromEntityName = "my_event";
      link.LinkFromAttributeName = "eventid";
      link.LinkToEntityName = "my_event_response";
      link.LinkToAttributeName = "eventid";

      QueryExpression query = new QueryExpression();
      query.ColumnSet = columns;
      query.EntityName = EntityName.mbs_event.ToString();
      query.LinkEntities = new LinkEntity[] { link };

      RetrieveMultipleRequest request = new RetrieveMultipleRequest();
      request.Query = query;
      return (RetrieveMultipleResponse)crmService.Execute(request);

    I'd appreciate some advice on how to get the data I need.
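
    One thing worth trying, sketched against the CRM 4 SDK types already used above: since event_name lives on the event entity itself (the entity the QueryExpression is built for), the name filter can sit on query.Criteria rather than on the link's LinkCriteria, and a single In condition replaces the two OR'd Equal conditions. Entity and attribute names below simply follow the question:

      ConditionExpression nameCondition = new ConditionExpression();
      nameCondition.AttributeName = "event_name";
      nameCondition.Operator = ConditionOperator.In;
      nameCondition.Values = new string[] { "Event A", "Event B" };

      FilterExpression criteria = new FilterExpression();
      criteria.Conditions = new ConditionExpression[] { nameCondition };

      QueryExpression query = new QueryExpression();
      query.ColumnSet = columns;
      query.EntityName = EntityName.mbs_event.ToString();
      query.Criteria = criteria;                         // filter on the main entity
      query.LinkEntities = new LinkEntity[] { link };    // keep the link for the responses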

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project (I'm working on a bug tracking application). I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches:

    - All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (e.g. non-nullable value types shouldn't allow null).
    - All of the nvarchar and varbinary columns it created have a default length of 255. I would prefer to have them on max instead of that.

    Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some others, will it correctly work with them?)

    EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be called "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?

    Read the article

  • Feature activate via UI but does not show in the Libraries and Lists

    - by Justin Cullen
    I am really hating SharePoint, as there is hardly any good/concrete documentation. I developed a custom list "MainCatalog" with a few columns (not site columns), created the feature and elements with MOSS Feature Builder at site collection level (so scope="Site"), installed it via stsadm, and activated it via the UI: went to the site collection website, Site Settings > Site Collection Features (and saw my custom list "MainCatalog") and was able to activate it. Then I went to "mySiteCollection > Site Settings > Site Libraries and Lists", but my list is not showing there. It does show under "mySiteCollection > Create > Custom Lists" as "MainCatalog" - I guess it's showing there as a template. But my intention is to deploy this list from development to the test environment. EXTREMELY STRESSED. I AM ON THIS FOR THE LAST 8 DAYS.....

    Read the article

  • ASP.NET Repeater and DataBinder.Eval

    - by Fernando
    I've got an <asp:Repeater> in my webpage, which is bound to a programmatically created dataset. The purpose of this repeater is to create an index from A-Z which, when clicked, refreshes the information on the page. The repeater has a link button like so:

      <asp:LinkButton ID="indexLetter"
          Text='<%# DataBinder.Eval(Container.DataItem, "letter") %>'
          runat="server"
          CssClass='<%# DataBinder.Eval(Container.DataItem, "cssclass") %>'
          Enabled='<%# DataBinder.Eval(Container.DataItem, "enabled") %>'></asp:LinkButton>

    The dataset is created the following way:

      protected DataSet getIndex(String index)
      {
          DataSet ds = new DataSet();
          ds.Tables.Add("index");
          ds.Tables["index"].Columns.Add("letter");
          ds.Tables["index"].Columns.Add("cssclass");
          ds.Tables["index"].Columns.Add("enabled");

          char alphaStart = Char.Parse("A");
          char alphaEnd = Char.Parse("Z");
          for (char i = alphaStart; i <= alphaEnd; i++)
          {
              String cssclass = "", enabled = "true";
              if (index == i.ToString())
              {
                  cssclass = "selected";
                  enabled = "false";
              }
              ds.Tables["index"].Rows.Add(new Object[3] { i.ToString(), cssclass, enabled });
          }
          return ds;
      }

    However, when I run the page, a "Specified cast is not valid" exception is thrown at Text='<%# DataBinder.Eval(Container.DataItem, "letter") %>'. I have no idea why; I have tried manually casting to String with (String), I've tried a ToString() method, I've tried everything. Also, if in the debugger I add a watch for DataBinder.Eval(Container.DataItem, "letter"), the value it returns is "A", which according to me should be fine for the Text property.

    EDIT: Here is the exception:

      System.InvalidCastException was unhandled by user code
        Message="Specified cast is not valid."
        Source="App_Web_cmu9mtyc"
        StackTrace:
          at ASP.savecondition_aspx._DataBinding_control7(Object sender, EventArgs e) in e:\Documents and Settings\Fernando\My Documents\Visual Studio 2008\Projects\mediTrack\mediTrack\saveCondition.aspx:line 45
          at System.Web.UI.Control.OnDataBinding(EventArgs e)
          at System.Web.UI.Control.DataBind(Boolean raiseOnDataBinding)
          at System.Web.UI.Control.DataBind()
          at System.Web.UI.Control.DataBindChildren()
        InnerException:

    Any advice will be greatly appreciated, thank you.

    EDIT 2: Fixed! The problem was not in the Text or CSS tags, but in the Enabled tag: I had to cast it to a Boolean value. The problem was that the exception was reported at the Text tag - I don't know why.
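
    For reference, a minimal sketch of the fix described in EDIT 2: the "enabled" column holds the strings "true"/"false", but LinkButton.Enabled needs a bool, so either convert at the binding site or type the column as bool when building the DataTable:

      <asp:LinkButton ID="indexLetter" runat="server"
          Text='<%# DataBinder.Eval(Container.DataItem, "letter") %>'
          CssClass='<%# DataBinder.Eval(Container.DataItem, "cssclass") %>'
          Enabled='<%# Convert.ToBoolean(DataBinder.Eval(Container.DataItem, "enabled")) %>'>
      </asp:LinkButton>

    Alternatively, in getIndex(), declare the column as ds.Tables["index"].Columns.Add("enabled", typeof(bool)); and add real booleans instead of the strings "true" and "false".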

    Read the article

  • wpf style converter : "Convert" called by every datagrid column using it

    - by Sonic Soul
    I created a converter and assigned it to a style, then assigned that style to the columns I want affected. As rows are added, and while stepping through the debugger, I noticed that the converter's Convert method gets called once per column (each time it is used). Is there a way to optimize it better, so that it gets called only once and all columns using it get the same value?

      <Style x:Key="ConditionalColorStyle" TargetType="{x:Type DataGridCell}" BasedOn="{StaticResource CellStyle}">
          <Setter Property="Foreground">
              <Setter.Value>
                  <Binding>
                      <Binding.Converter>
                          <local:ConditionalColorConverter />
                      </Binding.Converter>
                  </Binding>
              </Setter.Value>
          </Setter>
      </Style>

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? For our next release I am trying to come up with a performant way to show the placed orders for the logged-on customer. Of course paging is always a good technique to use when a lot of data is available, but I would like to see an answer without any paging techniques.

    Here's the story: a customer places an order, which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace of statuses, and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple order statuses stored in OrderStatusHistory. In my test scenario I am using a customer who has 100+ orders, each with about 5 records in the OrderStatusHistory table. I would for now like to see all orders on one page, not using paging, where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory - the record with the highest Id for the given OrderId).

    There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried:

    - Doing an Include() when getting Orders, but this still results in multiple queries launched against the database. Each order triggers an extra query to the database to get all order statuses in the history table. So all statuses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database.
    - Having 2 computed columns on the database, LastStatus and LastStatusInformation, and a regular Linq query which gets those columns, which are available through the entity model. The problem with this approach is the fact that those computed columns are determined using a scalar function, which cannot be changed without removing the formula from the computed column, etc.

    In the end I am very familiar with SQL and stored procedures, but since the rest of the data layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this:

      WITH cte (RN, OrderId, [Status], Information) AS
      (
          SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC),
                 OrderId, [Status], Information
          FROM OrderStatus
      )
      SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.*
      FROM [Order] o
      INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1
      WHERE CustomerId = @CustomerId
      ORDER BY 1 DESC;

    which returns all orders for the customer with the status information provided by the common table expression. Does anyone know a good approach using Entity Framework?
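
    A LINQ-to-Entities sketch of the same shape as the CTE (assuming EF4 or later, where FirstOrDefault() is allowed inside a projection; the property names follow the post and may differ from the real model). It stays in one round trip because the "last status" lookup is translated into a correlated subquery rather than lazy-loaded per order:

      var orders =
          (from o in context.Orders
           where o.CustomerId == customerId
           let last = o.OrderStatusHistory
                       .OrderByDescending(h => h.Id)
                       .FirstOrDefault()
           orderby o.Id descending
           select new
           {
               Order = o,
               LastStatus = last.Status,
               LastStatusInformation = last.Information
           }).ToList();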

    Read the article

  • How do I select differing rows in two MySQL tables with the same structure?

    - by chiborg
    I have two tables, A and B, that have the same structure (about 30+ fields). Is there a short, elegant way to join these tables and only select rows where one or more columns differ? I could certainly write some script that creates the query with all the column names, but maybe there is an SQL-only solution. To put it another way: is there a short substitute for this?

      SELECT * FROM table_a a
      JOIN table_b b ON a.pkey = b.pkey
      WHERE a.col1 != b.col1
         OR a.col2 != b.col2
         OR a.col3 != b.col3
      -- .. repeat for 30 columns
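
    One column-name-free sketch that can fit this case: NATURAL JOIN matches on every identically named column, so a row only survives the join when all columns agree, and the keys that fail to join are exactly the rows that differ. Caveat: NULLs never compare equal, so rows with NULLs in otherwise-matching columns show up as differences.

      SELECT a.*
      FROM table_a a
      WHERE a.pkey NOT IN (
          SELECT a2.pkey
          FROM table_a a2
          NATURAL JOIN table_b b
      );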

    Read the article

  • Reportviewer - Print report - matrix report expanding horizontally to multiple pages, avoid repeating row header

    - by abc0077
    I am generating a matrix report dynamically using ReportViewer version 9.0 in Visual Studio 2010. If I export, save and print, it works normally: if there are more columns, the report expands horizontally across multiple pages. If I use the Print option available in the toolbar of the report viewer, the columns are moved to the second page, which is correct, but the row headers (product 1, product 2) should not be repeated on the second page. On the second page the report should have all details except the product information. I am also interested to know whether this is simply how this version of the ReportViewer behaves.

    Read the article

  • SSIS - Can I get the column schema for a flat file source from a database?

    - by Steve Clement
    We receive a nightly data export from a vendor in the form of about 10 tab-delimited flat files without column headers. In addition, the vendor provides us with the SQL scripts for the database tables so that we can import the files into our system. Unfortunately, the vendor recently changed the schema for the flat files. Each file has upwards of 150 columns, and having to go through the DB schema and adjust column types on a Flat File Data Source in SSIS is extremely time consuming, not to mention a royal pain. Since I know the file data layout from the database schema, is there any way I can dynamically pull that into a Flat File source to set the columns correctly? Or am I just stuck with manually setting everything?

    Read the article

  • replacing data.frame element-wise operations with data.table (that used rowname)

    - by Harold
    So let's say I have the following data.frames:

      df1 <- data.frame(y = 1:10, z = rnorm(10), row.names = letters[1:10])
      df2 <- data.frame(y = c(rep(2, 5), rep(5, 5)), z = rnorm(10), row.names = letters[1:10])

    And perhaps the "equivalent" data.tables:

      dt1 <- data.table(x = rownames(df1), df1, key = 'x')
      dt2 <- data.table(x = rownames(df2), df2, key = 'x')

    If I want to do element-wise operations between df1 and df2, they look something like:

      dfRes <- df1 / df2

    And rownames() is preserved:

      R> head(dfRes)
          y          z
      a 0.5  3.1405463
      b 1.0  1.2925200
      c 1.5  1.4137930
      d 2.0 -0.5532855
      e 2.5 -0.0998303
      f 1.2 -1.6236294

    My poor understanding of data.table says the same operation should look like this:

      dtRes <- dt1[, !'x', with = F] / dt2[, !'x', with = F]
      dtRes[, x := dt1[,x,]]
      setkey(dtRes, x)

    (setkey optional.) Is there a more data.table-esque way of doing this? As a slightly related aside, more generally, I would have other columns such as factors in each data.table and I would like to omit those columns while doing the element-wise operations, but still have them in the result. Does this make sense? Thanks!
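
    One data.table-flavoured sketch (it assumes a data.table version that supports the i. prefix for the joined table's columns; both tables are already keyed on x, so the join keeps rows aligned by rowname):

      # y and z exist in both tables, so i.y / i.z reach dt2's columns
      dtRes <- dt1[dt2, list(x = x, y = y / i.y, z = z / i.z)]
      setkey(dtRes, x)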

    Read the article

  • sql server 2005 - return single row when 2 records in right table

    - by Peanut
    Hi, I have two related SQL Server tables, TableA and TableB:

      TableA - Columns:
        TableA_ID  INT
        VALUE      VARCHAR(100)

      TableB - Columns:
        TableB_ID  INT
        TableA_ID  INT
        VALUE      VARCHAR(100)

    For every single record in TableA there are always 2 records in TableB; therefore TableA has a one-to-many relationship with TableB. How could I write a single SQL statement to join these tables and return a single row for each row in TableA that includes: a column for the VALUE column in the first related row in TableB, and a column for the VALUE column in the second related row in TableB? Thanks.
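
    A sketch of one way to fold the two TableB rows onto a single output row (SQL Server 2005 syntax; "first" and "second" are taken here to mean lowest and highest TableB_ID):

      WITH numbered AS
      (
          SELECT TableA_ID, VALUE,
                 ROW_NUMBER() OVER (PARTITION BY TableA_ID ORDER BY TableB_ID) AS rn
          FROM TableB
      )
      SELECT a.TableA_ID,
             a.VALUE                                  AS TableA_Value,
             MAX(CASE WHEN n.rn = 1 THEN n.VALUE END) AS FirstB_Value,
             MAX(CASE WHEN n.rn = 2 THEN n.VALUE END) AS SecondB_Value
      FROM TableA a
      JOIN numbered n ON n.TableA_ID = a.TableA_ID
      GROUP BY a.TableA_ID, a.VALUE;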

    Read the article

  • How do I correctly use two Not Exists statements in a where clause using Access SQL VBA?

    - by Bryan
    I have 3 tables: NotHeard, analyzed, analyzed2. In each of these tables I have two columns named UnitID and Address. What I'm trying to do right now is to select all of the records for the columns UnitID and Address from NotHeard that don't appear in either analyzed or analyzed2. The SQL statement I created was as follows:

      SELECT UnitID, Address INTO [NotHeardByEither]
      FROM [NotHeard]
      WHERE NOT EXISTS (
                SELECT analyzed.UnitID FROM analyzed
                WHERE [NotHeard].UnitID = analyzed.UnitID)
         OR NOT EXISTS (
                SELECT analyzed2.UnitID FROM analyzed2
                WHERE [NotHeard].UnitID = analyzed2.UnitID)
      GROUP BY UnitID, Address

    I thought this would work, since I've used the single NOT EXISTS subquery line before and it has worked just fine for me in the past. The above query, however, returns the same data that is in the NotHeard table, whereas if I take out the OR NOT EXISTS part it works correctly. Any ideas as to what I'm doing wrong or how to do what I'm wanting to do?
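
    A likely explanation, with a corrected sketch: OR keeps any row that is missing from at least one of the two tables, which for most data is nearly every row of NotHeard; "doesn't appear in either" translates to AND between the two NOT EXISTS tests:

      SELECT UnitID, Address INTO [NotHeardByEither]
      FROM [NotHeard]
      WHERE NOT EXISTS (
                SELECT analyzed.UnitID FROM analyzed
                WHERE [NotHeard].UnitID = analyzed.UnitID)
        AND NOT EXISTS (
                SELECT analyzed2.UnitID FROM analyzed2
                WHERE [NotHeard].UnitID = analyzed2.UnitID)
      GROUP BY UnitID, Address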

    Read the article

  • Determining Excel spreadsheet format before Data Flow Task

    - by Josh Larsen
    I'm working on an SSIS package which uses a foreach loop to iterate through Excel files in a directory and a data flow task to import them. The issue I'm having is that the project manager I'm working with doesn't think the users will always follow the structure. So if a file is in the folder and the package tries to import it but the spreadsheet is missing columns or has extra columns, it generates an error, of course. Even though I have the task set to not fail the package, the package does indeed fail and then the other files aren't imported. So, I'm wondering what is the easiest way to either determine that the spreadsheet is incorrectly formatted, or stop the error from failing the package execution? After taking said step I would just use a file copy task to move the file to a "Failure" folder, then continue on processing the spreadsheets.

    Read the article

  • How to pass record id into hyperlink in radgrid?

    - by mongoose_za
    I have a RadGrid and am rendering a hyperlink column. I want to pass the id of the record into the URL for the hyperlink. How can I do this? I have this:

      <Columns>
          <telerik:GridTemplateColumn AllowFiltering="false" HeaderText="Edit" UniqueName="Edit">
              <ItemTemplate>
                  <asp:HyperLink ID="HyperLink1" runat="server" Target="_blank"
                      NavigateUrl="~/Edit.aspx?Id=need_to_bind_id_here">Edit Details</asp:HyperLink>
              </ItemTemplate>
          </telerik:GridTemplateColumn>
      </Columns>

    There is an ID column which is generated too.

    Read the article

  • how to have separate keys per record in mongo_mapper + Rails

    - by Vitaly Kushner
    When I'm adding a record in MongoDB I can specify whatever keys I want and it will store them in the db. The problem is that it will remember those keys for the next time I insert another record. So, for example, if I do the following:

      Product.create :foo => 123

    and then

      Product.create :bar => 456

    I get a :foo => nil field in the 2nd record. This is definitely not a limitation of MongoDB itself, since if I restart the Rails console and create yet another record with a different set of columns, it will not add the columns from the first 2 records. So it seems like MongoMapper remembers all the keys used and inserts them all into all records, even if values are not provided. The question is obviously: how do I disable this crazy attribute explosion? Basically I want only the 'permanent' keys that I specify in the model to be in every record, but all the 'extra' attributes to be specified per record and not mess up the subsequent records.

    Read the article
