Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • iPhone: TableView inside UIScrollView shows vacillating scrollbars

    - by karim
    Hi, I have added some table and other vies in a scrollview. scrolling in tables are working fine. But in the parent scrollview, when scrolling, a vacillating vertical scrollbars are shown, sometimes it even come to the middle of the screen. sometime show at the left side of the screen. and not limited to the vertical scrollbar region. When I set tje showsVerticalScrollIndicator = NO, it is not shown. But do you know why the scrollbar is moving around. DashboardView is a subclass of UIScrollView. dashboard=[[DashboardView alloc] initWithFrame:fullScreenRect]; dashboard.contentSize = CGSizeMake(320,700); // must do! dashboard.showsVerticalScrollIndicator = YES; dashboard.bounces = YES; self.view = dashboard; @implementation DashboardView (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { // Initialization code } return self; } (void)drawRect:(CGRect)rect { // Drawing code } (void) layoutSubviews{ NSArray *views = self.subviews; [UIView beginAnimations:@"CollapseExpand" context:nil]; [UIView setAnimationDuration:0.5]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationCurve:UIViewAnimationCurveEaseIn]; UIView *view = [views objectAtIndex: 0]; CGRect rect = view.frame; for (int i = 1; i < [views count]; i++ ) { view = [views objectAtIndex:i]; view.frame = CGRectMake(rect.origin.x, rect.origin.y + rect.size.height, view.frame.size.width, view.frame.size.height); rect = view.frame; } [UIView commitAnimations]; }

    Read the article

  • How do I write this JPQL query?

    - by Nitesh Panchal
    Hello, Say i have 5 tables, tblBlogs tblBlogPosts tblBlogPostComment tblUser tblBlogMember BlogId BlogPostsId BlogPostCommentId UserId BlogMemberId BlogTitle BlogId CommentText FirstName UserId PostTitle BlogPostsId BlogId BlogMemberId Now i want to retrieve only those blogs and posts for which blogMember has actually commented. So in short, how do i write this plain old sql :- Select b.BlogTitle, bp.PostTitle, bpc.CommentText from tblBlogs b Inner join tblBlogPosts bp on b.BlogId = bp.BlogId Inner Join tblBlogPostComment bpc on bp.BlogPostsId = bpc.BlogPostsId Inner Join tblBlogMember bm On bpc.BlogMemberId = bm.BlogMemberId Where bm.UserId = 1; As you can see, everything is Inner join, so only that row will be retrieved for which the user has commented on some post of some blog. So, suppose he has joined 3 blogs whose ids are 1,2,3 (The blogs which user has joined are in tblBlogMembers) but the user has only commented in blog 2 (of say BlogPostId = 1). So that row will be retrieved and 1,3 won't as it is Inner Join. How do i write this kind of query in jpql? In jpql, we can only write simple queries like say :- Select bm.blogId from tblBlogMember Where bm.UserId = objUser; Where objUser is supplied using :- em.find(User.class,1); Thus once we get all blogs(Here blogId represents a blog object) which user has joined, we can loop through and do all fancy things. But i don't want to fall in this looping business and write all this things in my java code. Instead, i want to leave that for database engine to do. So, how do i write the above plain sql into jpql? and what type of object the jpql query will return? because i am only selecting few fields from all table. In which class should i typecast the result to? I think i posted my requirement correctly, if i am not clear please let me know. Thanks in advance :).

    Read the article

  • Reading, Writing, and Editing BLOBs through DataTables and DataRows

    - by Soham
    Consider this piece of code: DataSet ds = new DataSet(); SQLiteDataAdapter Da = new SQLiteDataAdapter(Command); Da.Fill(ds); DataTable dt = ds.Tables[0]; bool PositionExists; if (dt.Rows.Count > 0) { PositionExists = true; } else { PositionExists = false; } if (PositionExists) { //dt.Rows[0].Field<>("Date") } Here the "Date" field is a BLOB. My question is, a. Will reading through the DataAdapter create any problems later on, when I am working with BLOBS? More, specifically, will it read the BLOB properly? b. This was the read part.Now when I am writing the BLOB to the DB,its a queue actually. I.e I am trying to store a queue in MySQLite using a BLOB. Will Conn.ExecuteNonQuery() serve my purpose? c. When I am reading the BLOB back from the DB, can I edit it as the original datatype, it used to be in C# environment i.e { Queue - BLOB - ? } {C# -MySQL - C# } So in this context, Date field was a queue, I wrote it back as a BLOB, when reading it back, can I access it(and edit) as a queue? Thank You. Soham

    Read the article

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application that I am gonna talk about in the next few lines: XYZ is a data masking workbench eclipse RCP application: You give it a source table column, and a target table column, it would apply a trasformation (encryption/shuffling/etc) and copy the row data from source table to target table. Now, when I mask n tables at a time, n threads are launched by this app. Here is the issue: I have run into a production issue on first roll out of the above said app. Unfortunately, I don't have any logs to get to the root. However, I tried to run this app in test region and do a stress test. When I collected .hprof files and ran 'em through an analyzer (yourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement was retaining heap. The analysis also tells me that one of my classes is holding a reference to this preparedstatement object and thereby, n threads have n such objects. T4CPreparedStatement seemed to have character arrays: lastBoundChars and bindChars each of size char[300000]. So, I researched a bit (google!), obtained ojdbc6.jar and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages array size of lastBoundChars and bindChars. So, my questions here are: Have you ever run into an issue like this? Do you know the significance of lastBoundChars / bindChars? I am new to profiling, so do you think I am not doing it correct? (I also ran the hprofs through MAT - and this was the main identified issue - so, I don't really think I could be wrong?) I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.

    Read the article

  • Ruby or Rails reporting tools?

    - by Anushank
    I am looking for a report generation tool in ruby or rails which allows the user to define a template, then fetch data into the created template. I have been looking through "The Ruby Box: reporting section." There are two reporting tools I have looked at: Thin Reports: It is really good. You can create your own report template with the template editor. Then you can produce PDF reports using thinreports gems. ODF Report: You can create a template ODF file using Open Office and MS Word, and you can use that template to generate the report. Both of these solutions lack the ability to draw charts. Does anyone know of similar reporting tools that can draw charts within a given report? I have tried the RTF Ruby Library. It works, but shares the limitation that it cannot draw charts and graphs. The minimum requirements are: Able to create customizable templates. (e.g. design layout, set font size, color, embed images etc.) Able to draw tables and charts. Template could be in Docx or excel or xml or any other common file format. Report output report must be in Docx or RTF format. Thanks

    Read the article

  • How do I get syncdb db_table and app_label to play nicely together

    - by Chris Heisel
    I've got a model that looks something like this: class HeiselFoo(models.Model): title = models.CharField(max_length=250) class Meta: """ Meta """ app_label = "Foos" db_table = u"medley_heiselfoo_heiselfoo" And whenever I run my test suite, I get an error because Django isn't creating the tables for that model. It appears to be an interaction between app_label and db_table -- as the test suite runs normally if db_table is set, but app_label isn't. Here's a link to the full source code: http://github.com/cmheisel/heiselfoo Here's the traceback from the test suite: E ====================================================================== ERROR: test_truth (heiselfoo.tests.HeiselFooTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/heiselfoo/tests.py", line 10, in test_truth f.save() File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/base.py", line 434, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/base.py", line 527, in save_base result = manager._insert(values, return_id=update_pk, using=using) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/manager.py", line 195, in _insert return insert_query(self.model, values, **kwargs) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/query.py", line 1479, in insert_query return query.get_compiler(using=using).execute_sql(return_id) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 783, in execute_sql cursor = super(SQLInsertCompiler, self).execute_sql(None) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 727, in execute_sql cursor.execute(sql, params) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 200, in execute return Database.Cursor.execute(self, query, params) DatabaseError: no such table: medley_heiselfoo_heiselfoo ---------------------------------------------------------------------- Ran 1 test in 0.004s FAILED (errors=1) Creating test database 'default'... No fixtures found. medley_heiselfoo_heiselfoo Destroying test database 'default'...

    Read the article

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working for a Swing Project and the technologies used are netbeans with Toplink essentials, mysql. The Problem I'm facing is the entity object dosn't get updated after insertions take place while calling a getter collection of the foreign key property. Ex: I have 2 tables Table1,Table2. I have sno column, id column as a primary key in Table1 & is Foreign Key in Table2. Through find method I just get the particular sno object(existed in table 1) set some values persisted to table2 & committed the transaction. When I select the same sno object through find method & gets its collection from Table2 through the getTable2Collection() of the bean(as it is already created in bean by toplink essential) I'm unable to get the latest added record except that all other records of it are displayed. After I close the application & opening it then the new record gets reflected while calling the same sno through the above process. I came to know that this is a kind of lazy fetching and there should be some way of fetch policy to be changed to make the entity object get updated with the changes. So Please help me in this regard. Regards, Chandu

    Read the article

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple datamodel that represents a tree structure: The RootEntity is the root of such a tree, it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity again can contain children of type ContainerEntity and of type AtomEntity but can not contain children of type RootEntity. Children are referenced in a well known order. The DB model for this is below. My problem now is that when I delete a RootEntity I want all children to be deleted recursively. I have create foreign key with CASCADE DELETE and two delete triggers for this. But it is not deleting everything, it always leaves some items in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables. Seemling beginning with the recursionlevel of 3. CREATE TABLE RootEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE ContainerEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE AtomEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE RootEntity_Children ( ParentId UNIQUEIDENTIFIER NOT NULL, OrderIndex INT NOT NULL, ChildContainerEntityId UNIQUEIDENTIFIER NULL, ChildAtomEntityId UNIQUEIDENTIFIER NULL, ChildIsContainerEntity BIT NOT NULL, CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex), -- foreign key to parent RootEntity CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId) REFERENCES RootEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) ContainerEntity CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) AtomEntity CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId) REFERENCES AtomEntity (Id) ON DELETE CASCADE, ); CREATE TABLE ContainerEntity_Children ( ParentId UNIQUEIDENTIFIER NOT NULL, OrderIndex INT NOT NULL, ChildContainerEntityId UNIQUEIDENTIFIER NULL, ChildAtomEntityId UNIQUEIDENTIFIER NULL, ChildIsContainerEntity BIT NOT NULL, CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex), -- foreign key to parent ContainerEntity CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) ContainerEntity CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) AtomEntity CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId) REFERENCES AtomEntity (Id) ON DELETE CASCADE, ); CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted) DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted) GO CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted) DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted) GO

    Read the article

  • SSIS Lookup with Lookup Component vs. Script Component

    - by Nev_Rahd
    Hello, I need to load Dimensions from EDW Tables (which does maintain historical records) and is of type Key-Value-Parameter. My scenario is ok if got a record in EDW as below Key1 Key2 Code Value EffectiveDate EndDate CurrentFlag 100 555 01 AAA 2010-01-01 11.00.00 9999-12-31 Y 100 555 02 BBB 2010-01-01 11.00.00 9999-12-31 Y This need to be loaded into DM by pivoting it as key1 and key2 combinations makes Natural key for DM SK NK 01 02 EffectiveDate EndDate CurrentFlag 1 100-555 AAA BBB 2010-01-01 11.00.00 9999-12-31 Y My ssis package does this all good pivoting... looking up the incoming NK in DIM.. if new will insert .. else with further lookup with effective date and determine if the incoming for same natural key got any new (change) in attribute.. if so updates the current record byy setting its end date and insert the new one with new attribute value and pulling the recent records values for other attributes. My problem is if the same natural key comes twice with same attribute in single extract my first lookup which on natural key .. will let both records pass and try to insert.. where its fails. If i get distinct records on NK the second is not picked and need to run package again. So my question how can i configure lookup or alernative way to handle this scenario when same NK comes twice in single extract, would be able to insert first record if not exists in Dim table and for second one should be able to updated with the changes with reference to one inserted above. Not sure this makes sense what am trying to explain. Will attached the screenshot once back to work desk (on monday). Thanks

    Read the article

  • How to force ADO.NET to use only the System.String data type in the reader's TableSchema

    - by Keith Sirmons
    Howdy, I am using an OleDbConnection to query an Excel 2007 Spreadsheet. I want force the OleDbDataReader to use only string as the column datatype. The system is looking at the first 8 rows of data and inferring the data type to be Double. The problem is that on row 9 I have a string in that column and the OleDbDataReader is returning a Null value since it could not be cast to a Double. I have used these connection strings: Provider=Microsoft.ACE.OLEDB.12.0;Data Source="ExcelFile.xlsx";Persist Security Info=False;Extended Properties="Excel 12.0;IMEX=1;HDR=No" Provider=Microsoft.Jet.OLEDB.4.0;Data Source="ExcelFile.xlsx";Persist Security Info=False;Extended Properties="Excel 8.0;HDR=No;IMEX=1" Looking at the reader.GetSchemaTable().Rows[7].ItemArray[5], it's dataType is Double. Row 7 in this schema correlates with the specific column in Excel I am having issues with. ItemArray[5] is its DataType column Is it possible to create a custom TableSchema for the reader so when accessing the ExcelFiles, I can treat all cells as text instead of letting the system attempt to infer the datatype? I found some good info at this page: Tips for reading Excel spreadsheets using ADO.NET The main quirk about the ADO.NET interface is how datatypes are handled. (You'll notice I've been carefully avoiding the question of which datatypes are returned when reading the spreadsheet.) Are you ready for this? ADO.NET scans the first 8 rows of data, and based on that guesses the datatype for each column. Then it attempts to coerce all data from that column to that datatype, returning NULL whenever the coercion fails! Thank you, Keith Here is a reduced version of my code: using (OleDbConnection connection = new OleDbConnection(BuildConnectionString(dataMapper).ToString())) { connection.Open(); using (OleDbCommand cmd = new OleDbCommand()) { cmd.Connection = connection; cmd.CommandText = SELECT * from [Sheet1$]; using (OleDbDataReader reader = cmd.ExecuteReader()) { using (DataTable dataTable = new DataTable("TestTable")) { dataTable.Load(reader); base.SourceDataSet.Tables.Add(dataTable); } } } }

    Read the article

  • Parsing SOAP XML in Oracle

    - by user258587
    Hi I am new to Oracle and I am working on something that needs to parse a SOAP request and save the address to DB Tables. I am using the XML parser in Oracle (XMLType) with XPath but am struggling since I can't figure out the way to parse the SOAP request because it has multiple namespaces. Could anyone give me an example? Thanks in advance!!! edit It would be a typical SOAP request similar to the one below. <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soap="http://soap.service.****.com"> <soapenv:Header /> <soapenv:Body> <soap:UpdateElem> <soap:request> <soap:att1>123456789</soap:att1> <soap:att2 xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" /> <soap:att3>L</soap:att3> ..... </soap:request> </soap:UpdateElem> </soapenv:Body> </soapenv:Envelope> I need to retrieve parameters att1, att2... and save them in to a DB table.

    Read the article

  • Is it possible to reuse subqueries?

    - by Gothmog
    Hello, I'm having some problems trying to perform a query. I have two tables, one with elements information, and another one with records related with the elements of the first table. The idea is to get in the same row the element information plus several records information. Structure could be explain like this: table [ id, name ] [1, '1'], [2, '2'] table2 [ id, type, value ] [1, 1, '2009-12-02'] [1, 2, '2010-01-03'] [1, 4, '2010-01-03'] [2, 1, '2010-01-02'] [2, 2, '2010-01-02'] [2, 2, '2010-01-03'] [2, 3, '2010-01-07'] [2, 4, '2010-01-07'] And this is want I would like to achieve: result [id, name, Column1, Column2, Column3, Column4] [1, '1', '2009-12-02', '2010-01-03', , '2010-01-03'] [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07'] The following query gets the proper result, but it seems to me extremely inefficient, having to iterate table2 for each column. Would be possible in anyway to do a subquery and reuse it? SELECT a.id, a.name, (select min(value) from table2 t where t.id = subquery.id and t.type = 1 group by t.type) as Column1, (select min(value) from table2 t where t.id = subquery.id and t.type = 2 group by t.type) as Column2, (select min(value) from table2 t where t.id = subquery.id and t.type = 3 group by t.type) as Column3, (select min(value) from table2 t where t.id = subquery.id and t.type = 4 group by t.type) as Column4 FROM (SELECT distinct id FROM table2 t WHERE (t.type in (1, 2, 3, 4)) AND t.value between '2010-01-01' and '2010-01-07') as subquery LEFT JOIN table a ON a.id = subquery.id

    Read the article

  • NHibernate, collections and composite ids

    - by Ciaran
    Hi, banging my head here and thought that some one out there might be able to help. Have Tables below. Bucket( bucketId smallint (PK) name varchar(50) ) BucketUser( UserId varchar(10) (PK) bucketId smallint (PK) ) The composite key is not the problem thats ok I know how to get around this but I want my bucket class to contanin a IList of BucketUser. I read the online reference and thought that I had cracked it but havent. The two mappings are below -- bucket -- <id name="BucketId" column="BucketId" type="Int16" unsaved-value="0"> <generator class="native"/> </id> <property column="BucketName" type="String" name="BucketName"/> <bag name="Users" table="BucketUser" inverse="true" generic="true" lazy="true"> <key> <column name="BucketId" sql-type="smallint"/> <column name="UserId" sql-type="varchar"/> </key> <one-to-many class="Bucket,Impact.Dice.Core" not-found="ignore"/> </bag> -- bucketUser --

    Read the article

  • VSDBCMD deployment for additions to third party databases

    - by Sam
    We have some custom objects (stored procedures etc) in an SQL Server 2005 database belonging to an ERP system. The custom objects are in different schemas to the ERP objects. We're using Database Edition .dbproj projects and vsdbcmd deployment for all our custom application databases and would like to similarly manage our custom objects in the ERP database. It's not clear how this can be done without either: Importing all ERP objects (~4000 tables) into the .dbproj and manually keeping them in sync with ERP development. Visual Studio fell over the only time I tried importing these, so I've no idea whether it can actually handle a project of this size. Somehow excluding the ERP schemas (there are two) from the diff process to ensure they don't get dropped by vsdbcmd. I haven't found any documentation which suggests this is possible. I'm aware of the IgnoreDefaultSchema setting, but there are two schemas I need to ignore and I'm not comfortable with the 'default schema' approach - deployment by different users could be disasterous. Has anyone managed to successfully use .dbproj & vsdbcmd for custom additions to a third party database? If not, how do you manage SQL source control & deployment?

    Read the article

  • How to write this typical MySQL query (how to use a subquery column in the main query)

    - by I Like PHP
    I HAVE TWO TABLES shown below table_joining id join_id(PK) transfer_id(FK) unit_id transfer_date joining_date 1 j_1 t_1 u_1 2010-06-05 2010-03-05 2 j_2 t_2 u_3 2010-05-10 2010-03-10 3 j_3 t_3 u_6 2010-04-10 2010-01-01 4 j_5 NULL u_3 NULL 2010-06-05 5 j_6 NULL u_4 NULL 2010-05-05 table_transfer id transfer_id(PK) pastUnitId futureUnitId effective_transfer_date 1 t_1 u_3 u_1 2010-06-05 2 t_2 u_6 u_1 2010-05-10 3 t_3 u_5 u_3 2010-04-10 now i want to know total employee detalis( using join_id) which are currently working on unit u_3 . means i want only join_id j_1 (has transfered but effective_transfer_date is future date, right now in u_3) j_2 ( tansfered and right now in `u_3` bcoz effective_transfer_date has been passed) j_6 ( right now in `u_3` and never transfered) what i need to take care of below steps( as far as i know ) <1> first need to check from table_joining whether transfer_id is NULL or not <2> if transfer_id= is NULL then see unit_id=u_3 where joining_date <=CURDATE() ( means that person already joined u_3) <3> if transfer_id is NOT NULL then go to table_transfer using transfer_id (foreign key reference) <4> now see the effective_transfer_date regrading that transfer_id whether effective_transfer_date<=CURDATE() <5> if transfer date has been passed(means transfer has been done) then return futureUnitID otherwise return pastUnitID i used two separate query but don't know how to join those query?? for step <1 ans <2 SELECT unit_id FROM table_joining WHERE joining_date<=CURDATE() AND transfer_id IS NULL AND unit_id='u_3' for step<5 SELECT IF(effective_transfer_date <= CURDATE(),futureUnitId,pastUnitId) AS currentUnitID FROM table_transfer // here how do we select only those rows which have currentUnitID='u_3' ?? please guide me the process?? i m just confused with JOINS. i think using LEFT JOIN can return the data i need, but i m not getting how to implement ...please help me. Thanks for helping me alwayz

    Read the article

  • Templates vs. coded HTML

    - by Alan Harris-Reid
    I have a web-app consisting of some html forms for maintaining some tables (SQlite, with CherryPy for web-server stuff). First I did it entirely 'the Python way', and generated html strings via. code, with common headers, footers, etc. defined as functions in a separate module. I also like the idea of templates, so I tried Jinja2, which I find quite developer-friendly. In the beginning I thought templates were the way to go, but that was when pages were simple. Once .css and .js files were introduced (not necessarily in the same folder as the .html files), and an ever-increasing number of {{...}} variables and {%...%} commands were introduced, things started getting messy at design-time, even though they looked great at run-time. Things got even more difficult when I needed additional javascript in the or sections. As far as I can see, the main advantages of using templates are: Non-dynamic elements of page can easily be viewed in browser during design. Except for {} placeholders, html is kept separate from python code. If your company has a web-page designer, they can still design without knowing Python. while some disadvantages are: {{}} delimiters visible when viewed at design-time in browser Associated .css and .js files have to be in same folder to see effects in browser at design-time. Data, variables, lists, etc., must be prepared in advanced and either declared globally or passed as parameters to render() function. So - when to use 'hard-coded' HTML, and when to use templates? I am not sure of the best way to go, so I would be interested to hear other developers' views. TIA, Alan

    Read the article

  • Linq to SQL DataContext cannot set load options after results have been returned

    - by David Liddle
    I have two tables A and B with a one-to-many relationship respectively. On some pages I would like to get a list of A objects only. On other pages I would like to load A with objects in B attached. This can be handled by setting the load options DataLoadOptions options = new DataLoadOptions(); options.LoadWith<A>(a => a.B); dataContext.LoadOptions = options; The trouble occurs when I first of all view all A's with load options, then go to edit a single A (do not use load options), and after edit return to the previous page. I understand why the error is occurring but not sure how to best get round this problem. I would like the DataContext to be loaded up per request. I thought I was achieving this by using StructureMap to load up my DataContext on a per request basis. This is all part of an n-tier application where my Controllers call Services which in turn call Repositories. ForRequestedType<MyDataContext>() .CacheBy(InstanceScope.PerRequest) .TheDefault.Is.Object(new MyDataContext()); ForRequestedType<IAService>() .TheDefault.Is.OfConcreteType<AService>(); ForRequestedType<IARepository>() .TheDefault.Is.OfConcreteType<ARepository>(); Here is a brief outline of my Repository public class ARepository : IARepository { private MyDataContext db; public ARepository(MyDataContext context) { db = context; } public void SetLoadOptions(DataLoadOptions options) { db.LoadOptions = options; } public IQueryable<A> Get() { return from a in db.A select a; } So my ServiceLayer, on View All, sets the load options and then gets all A's. On editing A my ServiceLayer should spin up a new DataContext and just fetch a list of A's. When sql profiling, I can see that when I go to the Edit page it is requesting A with B objects.

    Read the article

  • Using NHibernate's HQL to make a query with multiple inner joins

    - by Abu Dhabi
    The problem here consists of translating a statement written in LINQ to SQL syntax into the equivalent for NHibernate. The LINQ to SQL code looks like so: var whatevervar = from threads in context.THREADs join threadposts in context.THREADPOSTs on threads.thread_id equals threadposts.thread_id join posts1 in context.POSTs on threadposts.post_id equals posts1.post_id join users in context.USERs on posts1.user_id equals users.user_id orderby posts1.post_time where threads.thread_id == int.Parse(id) select new { threads.thread_topic, posts1.post_time, users.user_display_name, users.user_signature, users.user_avatar, posts1.post_body, posts1.post_topic }; It's essentially trying to grab a list of posts within a given forum thread. The best I've been able to come up with (with the help of the helpful users of this site) for NHibernate is: var whatevervar = session.CreateQuery("select t.Thread_topic, p.Post_time, " + "u.User_display_name, u.User_signature, " + "u.User_avatar, p.Post_body, p.Post_topic " + "from THREADPOST tp " + "inner join tp.Thread_ as t " + "inner join tp.Post_ as p " + "inner join p.User_ as u " + "where tp.Thread_ = :what") .SetParameter<THREAD>("what", threadid) .SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO))) .List<MyDTO>(); But that doesn't parse well, complaining that the aliases for the joined tables are null references. MyDTO is a custom type for the output: public class MyDTO { public string thread_topic { get; set; } public DateTime post_time { get; set; } public string user_display_name { get; set; } public string user_signature { get; set; } public string user_avatar { get; set; } public string post_topic { get; set; } public string post_body { get; set; } } I'm out of ideas, and while doing this by direct SQL query is possible, I'd like to do it properly, without defeating the purpose of using an ORM. Thanks in advance!

    Read the article

  • [NSFetchedResultsController sections] returns nil?

    - by Chris
    Hi Everyone, I am trying to resolve this for days at this stage and I'm hoping you can help. I have two ViewControllers which query two different tables from the same database using Core Data. The first ViewController is opened with the app and displays fine. The second is called from within the first ViewController, using a pretty standard fetch setup: - (NSFetchedResultsController *)fetchedClients { // Set up the fetched results controller if needed. if (fetchedClients == nil) { // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Clients" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // Edit the sort key as appropriate. NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"clientsName" ascending:YES]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; // Edit the section name key path and cache name if appropriate. // nil for section name key path means "no sections". NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedClients = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; } return fetchedClients; } When I call [self.fetchedClients sections], I get a nil (0x0) return. I have examined the database using an external application to ensure data exists in the "Clients" table. Can anyone think of a reason why [self.fetchedClients sections] would return nil? Many thanks for any help you can provide. Regards, Chris

    Read the article

  • VSTS load test datasource issues

    - by ashish.s
    Hello, I have a simple test using vsts load test that is using datasource. The connection string for the source is as follows <connectionStrings> <add name="MyExcelConn" connectionString="Driver={Microsoft Excel Driver (*.xls)};Dsn=Excel Files;dbq=loginusers.xls;defaultdir=.;driverid=790;maxbuffersize=4096;pagetimeout=20;ReadOnly=False" providerName="System.Data.Odbc" /> </connectionStrings> the datasource configuration is as follows and i am getting following error estError TestError 1,000 The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details: ERROR [42000] [Microsoft][ODBC Excel Driver] Cannot update. Database or object is read-only. ERROR [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed ERROR [42000] [Microsoft][ODBC Excel Driver] Cannot update. Database or object is read-only. I wrote a test, just to check if i could create an odbc connection would work and that works the test is as follows [TestMethod] public void TestExcelFile() { string connString = ConfigurationManager.ConnectionStrings["MyExcelConn"].ConnectionString; using (OdbcConnection con = new OdbcConnection(connString)) { con.Open(); System.Data.Odbc.OdbcCommand objCmd = new OdbcCommand("SELECT * FROM [loginusers$]"); objCmd.Connection = con; OdbcDataAdapter adapter = new OdbcDataAdapter(objCmd); DataSet ds = new DataSet(); adapter.Fill(ds); Assert.IsTrue(ds.Tables[0].Rows.Count > 1); } } any ideas ?

    Read the article

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL: internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode) { ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord() { ProcessCode = processCode, SubId = workflowCreated.SubId, EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead. }; this.Database.Add<ProcessLoggingRecord>(processLoggingRecord); } This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger: ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger] ON [Master].[ProcessLogging] INSTEAD OF INSERT AS BEGIN SET NOCOUNT ON; SET IDENTITY_INSERT [Master].[ProcessLogging] ON; INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser) SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted SET IDENTITY_INSERT [Master].[ProcessLogging] OFF; END Without getting into all of the variations I've tried, this last attempt produces this error: InvalidOperationException Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert. I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs? Note: I do not want to use C#'s DateTime.Now for two reasons: 1. One of these inserts is generated by SQL Server itself (from another stored procedure) 2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.

    Read the article

  • Help me with DB design

    - by eugeneK
    Hi, i'm developing text ads system. Some small clone of Google Ads. Here is diagram with common tables. Basically make it short, advertiser can have up to 10 variant of same campaign with different text variations, can geo-target his ads and unique impressions count only for IP that haven't been on certain site for more than 24 hours. Pretty simple but the question is what i lack in here from your experience because later it would much harder to fix design flaws and some of you probably done something alike also many SQL gurus in here so maybe i did over normalized DB or did not normalized as needed ? Second question is. My end goal is to get ads for user from ie. Germany that haven't seen same ad on same site for 24 hours as long as ads fit country of user. Each impression is count same as each click if there is one. I need to get 5 "random" ads based on IP, Country and higher CPC (pay per click). How can i achieve this with current design or maybe to design database the way it would be easy to get ads and show stats for advirtisers... thanks for any help...

    Read the article

  • Help with editing data in Entity Framework

    - by Levelbit
    Title of this question is simple because there is no an easy explanation for what I'm trying to ask you. I hope you'll understand in the end :). I have to tables in my database: Company and Location (relationship:one to many) and corresponding entity sets. In my wpf application I have some datagrid which I want to fill with locations and to be able to edit every row in separate window as some form of details view (so I don't want to edit my data in datagrid). I did this by accessing Location entity from selected row and creating a new Location entity and then I copy properties from original entity to newly created. Something like cloning the object. After editing if I press OK changed data is copied to original object back, and if I press Cancel nothing happens. Of course, you probably thinking I could use NoTracking option and AttachAsModified method as was mentioned as solution in some earlier questions(see:http://stackoverflow.com/questions/803022/changing-entities-in-the-entityframework) by Alex James, but lets say I had some problems about that and I have my own reasons for doing this. Finally, because navigation property(Company) of my location entity is assigned to newly created location object(during cloning) from some reason in object context additional object as copy of object I want to edit from datagrid is created without my will(similar problem :http://blogs.msdn.com/alexj/archive/2009/12/02/tip-47-how-fix-up-can-make-it-hard-to-change-relationships.aspx). So, when I do ObjectContext.SaveChanges it inserts additional row of data into my database table Location, the same as the one i wanted to edit. I read about sth like this, but I don't quite understand why is that and how to block or override this. I hope I was clear as I could. Please, solutions or some other ideas.

    Read the article

  • MySQL latin1 Turkish data and Delphi 2010 UTF-8

    - by sabri.arslan
    Hello, I have tables collating latin1_general_ci and have turkish character values. And i can use this data on delphi 7+zeos with no problem. but i want to upgrade my delphi to 2010 version but zeos too slow as i saw. so i want to use odbc+ado or dbexpress solution. dbexpress solution works fine , display my data as entered and write as entered table without any change to column charset. but dbexpress has problems as i saw. for example when i select * from table which has column types as varchar,decimal,int,tinyint,text give av errors on xp systems. vista and 7 does not give any error and work fine(not fully tested). ado solution(dbgo) works fine but its not show my data as entered.its want everything be utf. but i don't want to convert my data to utf before test everything. how can i see my data as entered and write client side utf and store latin1(as zeos or dbexpress do). i was tried many other options. eg. mysql side collation and charset parameters. sorry for my bad english. i hope someone understand me. thanks.

    Read the article

  • How to keep track of a private messaging system using MongoDB?

    - by luckytaxi
    Take facebook's private messaging system where you have to keep track of sender and receiver along w/ the message content. If I were using MySQL I would have multiple tables, but with MongoDB I'll try to avoid all that. I'm trying to come up with a "good" schema that can scale and is easy to maintain. If I were using mysql, I would have a separate table to reference the user and and message. See below ... profiles table user_id first_name last_name message table message_id message_body time_stamp user_message_ref table user_id (FK) message_id (FK) is_sender (boolean) With the schema listed above, I can query for any messages that "Bob" may have regardless if he's the recipient or sender. Now how to turn that into a schema that works with MongoDB. I'm thinking I'll have a separate collection to hold the messages. Problem is, how can I differentiate between the sender and the recipient? If Bob logs in, what do I query against? Depending on whether Bob initiated the email, I don't want to have to query against "sender" and "receiver" just to see if the message belongs to the user.

    Read the article
