Search Results

Search found 8603 results on 345 pages for 'altering tables'.

  • Linq to SQL DataContext cannot set load options after results have been returned

    - by David Liddle
    I have two tables, A and B, with a one-to-many relationship. On some pages I would like to get a list of A objects only. On other pages I would like to load A with its B objects attached. This can be handled by setting the load options:

        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<A>(a => a.B);
        dataContext.LoadOptions = options;

    The trouble occurs when I first view all A's with load options, then go to edit a single A (which does not use load options), and after the edit return to the previous page. I understand why the error is occurring, but I'm not sure how best to get around the problem. I would like the DataContext to be loaded up per request, and I thought I was achieving this by using StructureMap to load up my DataContext on a per-request basis. This is all part of an n-tier application where my Controllers call Services which in turn call Repositories.

        ForRequestedType<MyDataContext>()
            .CacheBy(InstanceScope.PerRequest)
            .TheDefault.Is.Object(new MyDataContext());

        ForRequestedType<IAService>()
            .TheDefault.Is.OfConcreteType<AService>();

        ForRequestedType<IARepository>()
            .TheDefault.Is.OfConcreteType<ARepository>();

    Here is a brief outline of my Repository:

        public class ARepository : IARepository
        {
            private MyDataContext db;

            public ARepository(MyDataContext context)
            {
                db = context;
            }

            public void SetLoadOptions(DataLoadOptions options)
            {
                db.LoadOptions = options;
            }

            public IQueryable<A> Get()
            {
                return from a in db.A select a;
            }
        }

    So my service layer, on View All, sets the load options and then gets all A's. On editing an A, my service layer should spin up a new DataContext and just fetch a list of A's. When SQL profiling, I can see that when I go to the Edit page it is still requesting A with B objects.
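    One detail worth flagging, though it is not from the original post: `.TheDefault.Is.Object(new MyDataContext())` hands StructureMap a single pre-built instance, so the same DataContext (with its already-set LoadOptions) can be served to every request regardless of the PerRequest cache setting. A minimal sketch of a per-request registration, assuming StructureMap 2.5's ConstructedBy overload is available:

        ForRequestedType<MyDataContext>()
            .CacheBy(InstanceScope.PerRequest)
            // Build a fresh context for each request instead of reusing one instance,
            // so load options set on one page cannot leak into the next request.
            .TheDefault.Is.ConstructedBy(() => new MyDataContext());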

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL:

        internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode)
        {
            ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord()
            {
                ProcessCode = processCode,
                SubId = workflowCreated.SubId,
                EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead.
            };

            this.Database.Add<ProcessLoggingRecord>(processLoggingRecord);
        }

    This is the table. EventTime is what I want to have as GETDATE(); I don't want the column to be null. And here is the trigger:

        ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
        ON [Master].[ProcessLogging]
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET IDENTITY_INSERT [Master].[ProcessLogging] ON;

            INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser)
            SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser
            FROM inserted

            SET IDENTITY_INSERT [Master].[ProcessLogging] OFF;
        END

    Without getting into all of the variations I've tried, this last attempt produces this error:

        InvalidOperationException
        Member AutoSync failure. For members to be AutoSynced after insert, the type must either
        have an auto-generated identity, or a key that is not modified by the database after insert.

    I could remove EventTime from my entity, but I don't want to do that. If it were gone, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs?

    Note: I do not want to use C#'s DateTime.Now, for two reasons:

    1. One of these inserts is generated by SQL Server itself (from another stored procedure).
    2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
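    A common alternative, sketched here rather than taken from the thread: drop the trigger, give EventTime a DEFAULT GETDATE() constraint on the table, and mark the Linq-to-SQL member as database-generated so the value is synced back after the insert. The mapping below is an assumption about the entity, not the poster's actual code:

        // EventTime is omitted from the INSERT and read back afterwards; the
        // database supplies the value (e.g. via a DEFAULT GETDATE() constraint).
        [Column(AutoSync = AutoSync.OnInsert, IsDbGenerated = true, DbType = "DateTime NOT NULL")]
        public DateTime EventTime { get; set; }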

  • How to keep track of a private messaging system using MongoDB?

    - by luckytaxi
    Take Facebook's private messaging system, where you have to keep track of sender and receiver along with the message content. If I were using MySQL I would have multiple tables, but with MongoDB I'll try to avoid all that. I'm trying to come up with a "good" schema that can scale and is easy to maintain. If I were using MySQL, I would have a separate table to reference the user and the message. See below:

        profiles table
            user_id
            first_name
            last_name

        message table
            message_id
            message_body
            time_stamp

        user_message_ref table
            user_id (FK)
            message_id (FK)
            is_sender (boolean)

    With the schema listed above, I can query for any messages that "Bob" may have, regardless of whether he's the recipient or sender. Now, how do I turn that into a schema that works with MongoDB? I'm thinking I'll have a separate collection to hold the messages. The problem is: how can I differentiate between the sender and the recipient? If Bob logs in, what do I query against? Depending on whether Bob initiated the message, I don't want to have to query against both "sender" and "receiver" just to see if the message belongs to the user.
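    One common document shape, offered as a sketch rather than from the thread: store both parties on each message plus a participants array, so a single indexed field answers "all of Bob's messages" whether he sent or received. The class and collection names are hypothetical, and this assumes the official MongoDB .NET driver:

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        public class Message
        {
            public ObjectId Id { get; set; }
            public int SenderId { get; set; }
            public int RecipientId { get; set; }
            public int[] Participants { get; set; }  // always [SenderId, RecipientId]
            public string Body { get; set; }
            public DateTime SentAt { get; set; }
        }

        // messages is an IMongoCollection<Message>; one indexed query covers both
        // directions, and SenderId still records who initiated the exchange.
        var bobsMessages = messages.Find(
            Builders<Message>.Filter.AnyEq(m => m.Participants, bobId)).ToList();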

  • vsts load test datasource issues

    - by ashish.s
    Hello, I have a simple VSTS load test that uses a datasource. The connection string for the source is as follows:

        <connectionStrings>
            <add name="MyExcelConn"
                 connectionString="Driver={Microsoft Excel Driver (*.xls)};Dsn=Excel Files;dbq=loginusers.xls;defaultdir=.;driverid=790;maxbuffersize=4096;pagetimeout=20;ReadOnly=False"
                 providerName="System.Data.Odbc" />
        </connectionStrings>

    The datasource is configured accordingly, and I am getting the following error:

        TestError  1,000  The unit test adapter failed to connect to the data source or to read the data.
        For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests"
        (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library.
        Error details: ERROR [42000] [Microsoft][ODBC Excel Driver] Cannot update. Database or object is read-only.
        ERROR [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed
        ERROR [42000] [Microsoft][ODBC Excel Driver] Cannot update. Database or object is read-only.

    I wrote a test just to check whether I could create an ODBC connection, and that works:

        [TestMethod]
        public void TestExcelFile()
        {
            string connString = ConfigurationManager.ConnectionStrings["MyExcelConn"].ConnectionString;
            using (OdbcConnection con = new OdbcConnection(connString))
            {
                con.Open();
                System.Data.Odbc.OdbcCommand objCmd = new OdbcCommand("SELECT * FROM [loginusers$]");
                objCmd.Connection = con;
                OdbcDataAdapter adapter = new OdbcDataAdapter(objCmd);
                DataSet ds = new DataSet();
                adapter.Fill(ds);
                Assert.IsTrue(ds.Tables[0].Rows.Count > 1);
            }
        }

    Any ideas?

  • Excel Reader ASP.NET

    - by user304429
    I declared a DataGrid in an ASP.NET View and I'd like to generate some C# code to populate said DataGrid with an Excel spreadsheet (.xlsx). Here's the code I have:

        <asp:DataGrid id="DataGrid1" runat="server"/>

        <script language="C#" runat="server">
        protected void Page_Load(object sender, EventArgs e)
        {
            string connString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\FileName.xlsx;Extended Properties=""Excel 12.0;HDR=YES;""";

            // Create the connection object
            OleDbConnection oledbConn = new OleDbConnection(connString);
            try
            {
                // Open connection
                oledbConn.Open();

                // Create OleDbCommand object and select data from worksheet Sheet1
                OleDbCommand cmd = new OleDbCommand("SELECT * FROM [sheetname$]", oledbConn);

                // Create new OleDbDataAdapter
                OleDbDataAdapter oleda = new OleDbDataAdapter();
                oleda.SelectCommand = cmd;

                // Create a DataSet which will hold the data extracted from the worksheet.
                DataSet ds = new DataSet();

                // Fill the DataSet from the data extracted from the worksheet.
                oleda.Fill(ds, "Something");

                // Bind the data to the GridView
                DataGrid1.DataSource = ds.Tables[0].DefaultView;
                DataGrid1.DataBind();
            }
            catch { }
            finally
            {
                // Close connection
                oledbConn.Close();
            }
        }
        </script>

    When I run the website, nothing really happens. What gives?
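    Two things stand out, though neither is from the original post: the empty catch block hides whatever exception is thrown, and the Jet 4.0 OLE DB provider cannot read .xlsx workbooks; the Office 2007 format needs the ACE 12.0 provider. A hedged sketch of both fixes:

        // ACE 12.0 reads .xlsx; Jet 4.0 only handles the older .xls format.
        string connString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=c:\FileName.xlsx;" +
                            @"Extended Properties=""Excel 12.0 Xml;HDR=YES;""";
        try
        {
            // ... same open / fill / bind code as above ...
        }
        catch (Exception ex)
        {
            // Surface the failure instead of swallowing it silently.
            System.Diagnostics.Trace.WriteLine(ex);
            throw;
        }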

  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments. These environments are MS Win 2008 R2 servers with SQL 2008 SP1 databases; IIS and SQL are on different machines. Now, on the production environment, which is an MS Win 2003 x64 web farm plus a geoclustered SQL 2008, it does NOT work. All I get is the exception:

        System.IndexOutOfRangeException: Index was outside the bounds of the array.
        at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey& k)
        at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues)
        at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
        at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
        at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
        at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
        at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID) Services\ExposuresSrv.cs

    My question is: what is going on? They have precisely the same DBML, the database has exactly the same structure (when I take the DB from prod to test and mount it, everything works just great), and the binaries on the web server are the same. I seriously do not know what to do. Has anyone found that LINQ works on one environment and not on another? I am really lost here and really hope you can help me. :)

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application that I'm going to talk about in the next few lines: XYZ is a data-masking workbench Eclipse RCP application. You give it a source table column and a target table column; it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. When I mask n tables at a time, n threads are launched by this app.

    Here is the issue: I have run into a production issue on the first rollout of the above-mentioned app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run this app in the test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared statement object, and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays, lastBoundChars and bindChars, each of size char[300000].

    So, I researched a bit (Google!), obtained ojdbc6.jar, and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So, my questions here are:

    1. Have you ever run into an issue like this?
    2. Do you know the significance of lastBoundChars / bindChars?
    3. I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT, and this was the main identified issue, so I don't really think I could be wrong.)

    I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681

    I appreciate your suggestions / advice.

  • NHibernate, collections and composite-id

    - by Ciaran
    Hi, banging my head here and thought that someone out there might be able to help. I have the tables below:

        Bucket (
            bucketId smallint (PK)
            name varchar(50)
        )

        BucketUser (
            UserId varchar(10) (PK)
            bucketId smallint (PK)
        )

    The composite key is not the problem; that's OK, I know how to get around it. But I want my Bucket class to contain an IList of BucketUser. I read the online reference and thought that I had cracked it, but I haven't. The two mappings are below.

    -- bucket --

        <id name="BucketId" column="BucketId" type="Int16" unsaved-value="0">
            <generator class="native"/>
        </id>
        <property column="BucketName" type="String" name="BucketName"/>
        <bag name="Users" table="BucketUser" inverse="true" generic="true" lazy="true">
            <key>
                <column name="BucketId" sql-type="smallint"/>
                <column name="UserId" sql-type="varchar"/>
            </key>
            <one-to-many class="Bucket,Impact.Dice.Core" not-found="ignore"/>
        </bag>

    -- bucketUser --

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?
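    The standard shape for this (not from the thread) is relational division: keep only rows whose value_id is in the requested set, group by person, and require the distinct match count to equal the number of requested pairs; in SQL that is GROUP BY person_id HAVING COUNT(DISTINCT value_id) = N, and the same idea carries over to JPQL. A minimal, self-contained LINQ-to-objects illustration of the idea, using made-up types rather than the poster's JPA entities:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RelationalDivisionDemo
        {
            static void Main()
            {
                // PersonAttributes reduced to the two columns that matter here.
                var rows = new[]
                {
                    new { PersonId = 1, ValueId = 1 }, new { PersonId = 1, ValueId = 3 },
                    new { PersonId = 2, ValueId = 2 }, new { PersonId = 2, ValueId = 4 },
                    new { PersonId = 3, ValueId = 1 }, new { PersonId = 3, ValueId = 4 },
                };

                var wanted = new HashSet<int> { 2, 4 }; // Female + green eyes

                var matches = rows
                    .Where(r => wanted.Contains(r.ValueId))
                    .GroupBy(r => r.PersonId)
                    .Where(g => g.Select(r => r.ValueId).Distinct().Count() == wanted.Count)
                    .Select(g => g.Key);

                Console.WriteLine(string.Join(", ", matches)); // prints 2 (Jane Roe)
            }
        }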

  • Mysql latin1 turkish data and delphi 2010 utf8

    - by sabri.arslan
    Hello, I have tables collated latin1_general_ci that hold Turkish character values, and I can use this data with Delphi 7 + Zeos with no problem. I want to upgrade my Delphi to the 2010 version, but Zeos is too slow as far as I can tell, so I want to use an ODBC+ADO or dbExpress solution.

    The dbExpress solution works fine: it displays my data as entered and writes it as entered, without any change to the column charset. But dbExpress has problems, as I saw. For example, SELECT * FROM a table which has columns of types varchar, decimal, int, tinyint, and text gives AV (access violation) errors on XP systems; Vista and 7 do not give any error and work fine (not fully tested).

    The ADO solution (dbGo) works fine, but it does not show my data as entered; it wants everything to be UTF. I don't want to convert my data to UTF before testing everything. How can I see my data as entered, writing UTF on the client side but storing latin1 (as Zeos or dbExpress do)? I have tried many other options, e.g. MySQL-side collation and charset parameters.

    Sorry for my bad English; I hope someone understands me. Thanks.

  • Sharepoint BDC Error: The title property of entity tblStaff is set to an invalid value

    - by Christopher Rathermel
    I am just starting to create our Business Data Catalog(s) for our practice management system and I am running into an issue with our staff table.

    Background: I am using the Business Data Catalog Definition Editor to create my ADF, and I am using the RevertToSelf authentication mode. I have tried a few other tables and they seem to work just fine so far; the only issue is with the staff table. If I remove all the columns for the staff entity except the ID and a few columns for the name, it actually works, so it has a problem with one of the columns in tblStaff. I receive this error even when I set up an ADF with just this one entity, so with no associations.

    When attempting to view the record

        http://servername/ssp/admin/Content/tblstaff.aspx?StaffID={0}

    with {0} replaced with an actual staff ID, I get the following error:

        The title property of entity tblStaff is set to an invalid value.

    Things I have tried: I noticed that I do have a column in my staff table called "Title" and removed it from the ADF, with no luck; same error. I also tried using BDC Meta Man to create my ADF and got the same error.

    Any ideas? Chris

  • Hibernate : Opinions in Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where:

    - there is more than one business-value-column combination that becomes a unique PK
    - the composite PK values get duplicated across the detail tables
    - you cannot change the business values inside that composite PK

    I know Hibernate can support both types of PK, but I'm left wondering after previous chats with experienced colleagues, where they said that composite PKs are easier to deal with when doing complex SQL queries and stored procedure processes. They went on to say that using surrogate keys will complicate things when joining, and that under several conditions it's impossible to do some things when using surrogate keys. Although, I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it. Maybe I'll put more details next time.

    I'm currently working on a project and want to try out surrogate keys, since they don't get duplicated across tables and the business-column values can change. And when some business-value combination must be unique, I can use something like:

        @Table(name="MY_TABLE", uniqueConstraints={
            @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique
        })

    But I'm still in doubt because of the previous discussion about composite keys. Could you share your experiences in this matter? Thank you!

  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like schoolwork, but it isn't; at best it is self-imposed schoolwork. I encourage any teachers to take it as an example if they wish.

    "First past the post" elections are single-round, meaning that whoever gets the most votes wins; there are no second rounds. Suppose a table for an election:

        CREATE TABLE ElectionResults (
            DistrictHnd   INTEGER NOT NULL,
            PartyHnd      INTEGER NOT NULL,
            CandidateName VARCHAR2(100) NOT NULL,
            TotalVotes    INTEGER NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (which lists all the electoral districts) and PartyHnd points to a Party table (which lists all the political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context.

    The question: what SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winners (max votes) in each district? This does not suppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite and MySQL. If you can devise a better schema (or an easier one), that is acceptable too.

    Criteria: simplicity, portability to other databases.
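    One portable answer, sketched here rather than quoted from the thread: compute each district's maximum in a derived table and join back to recover the winning rows. Ties come back as multiple winners per district; tie-breaking is left open. Shown as a C# constant to match the rest of this page, but the SQL runs as-is on SQLite and MySQL:

        // Per-district maximum, then join back for the winners' details.
        const string WinnersQuery = @"
            SELECT r.DistrictHnd, r.PartyHnd, r.CandidateName, r.TotalVotes
            FROM ElectionResults r
            JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS MaxVotes
                  FROM ElectionResults
                  GROUP BY DistrictHnd) m
              ON m.DistrictHnd = r.DistrictHnd
             AND m.MaxVotes = r.TotalVotes;";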

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional audit trail setup: an audit trail table that is a copy of the "real" table (all of the fields of the original table, plus date/time, user ID, and DML action fields), populated by triggers, which is all manual work.

    The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool?

    The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, can this data be queried just like data from a normal table?

    EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.
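    For orientation (not from the thread): enabling CDC is two stored-procedure calls, and the captured changes are queryable like ordinary rows through generated table-valued functions. The table name below is hypothetical; this assumes SQL Server 2008 Enterprise:

        // Hedged sketch: enable CDC for the database and one table.
        const string EnableCdc = @"
            EXEC sys.sp_cdc_enable_db;
            EXEC sys.sp_cdc_enable_table
                 @source_schema = N'dbo',
                 @source_name   = N'MyTable',
                 @role_name     = NULL;";

        // Changes then accumulate in cdc.dbo_MyTable_CT and can be read with
        // cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all').

    One caveat worth knowing: CDC records what changed and when (by LSN), but not which user made the change, so the "who did it" column of a traditional trail still needs separate handling; captured data is also cleaned up by a retention job by default, which matters for a permanent trail.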

  • asp.net Membership: Extending Role membership?

    - by mark smith
    Hi there, I have been taking a look at ASP.NET membership and it seems to provide everything that I need, but I need some kind of custom role functionality. Currently I can add a user to a role, great. But I also need to be able to add permissions to roles, i.e.:

        Role: Editor
        Permissions: Can View Editor Menu, Can Write to Editors Table, Can Delete Entries in Editors Table

    Currently it doesn't support this. The idea behind it is to create an admin option in my program to create a role and then assign permissions to that role, to say "allow the user to view a certain part of the application", "allow the user to open a menu item", and so on.

    Any ideas how I would implement something like this? I presume a custom role provider, but I was wondering whether some kind of framework extension exists already, without rolling my own, or whether anybody knows a good tutorial on how to tackle this issue. I am quite happy with what the ASP.NET SQL provider has created in terms of tables etc., but I think I need to extend it by adding another table called RolesPermissions and then, I presume, adding some kind of enumeration into the table for each valid permission?

    Thanks in advance
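    Not from the thread, but the usual shape is exactly the RolesPermissions table described above plus a small helper that answers "does this user hold this permission through any of their roles". A hedged sketch; the enum values and table lookup are hypothetical, while Roles.GetRolesForUser is the built-in membership API:

        using System.Linq;
        using System.Web.Security;

        public enum Permission { ViewEditorMenu, WriteEditorsTable, DeleteEditorEntries }

        public static class PermissionCheck
        {
            // True if any role assigned to the user is granted the permission.
            public static bool HasPermission(string userName, Permission permission)
            {
                string[] roles = Roles.GetRolesForUser(userName); // built-in provider call
                return roles.Any(role => RoleGrants(role, permission));
            }

            // Stub for the custom lookup against the RolesPermissions table, e.g.:
            // SELECT 1 FROM RolesPermissions WHERE RoleName = @role AND PermissionId = @permission
            private static bool RoleGrants(string role, Permission permission)
            {
                throw new System.NotImplementedException();
            }
        }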

  • GWT Html Layout Conventions

    - by brad
    I've just started working with GWT and I'm already recognizing the extraordinary power that it possesses. I'm coming from a frontend world, so Java is a big learning curve, but I think that will actually help me build a properly laid-out app (HTML-wise) instead of just relying on the default GWT panels, which often end up using tables for layout, or superfluous, absolutely positioned divs.

    The biggest thing slowing me down right now, however, is deciding how to properly lay out the design of my site. I've got a pretty standard 2-column header/footer site (fixed width) that I want to design, but I'm not a fan of all the extra divs/styling etc. that come with the DockLayoutPanel, for instance.

    I'm thinking that I should just write my own Layout widget extending Composite that has HTMLPanels for the general site layout (I think... I still haven't fully figured that out yet, e.g. how do I add IDs to these panel divs: "#header", "#nav", etc.), and then I can add other widgets into this layout. But the other option I'm seeing is that I could write a Layout class extending UiBinder and have straight-up divs in the ui.xml file.

    I'm just wondering: what is the preferred method for site layout with GWT? This isn't going to be re-used in the sense of other widgets; it will be used once, and my controls etc. will be placed inside.

    Any tips or tricks are greatly appreciated! And if I've completely missed the boat on how to do this, let me know.

  • Help me with DB design

    - by eugeneK
    Hi, I'm developing a text ads system, a small clone of Google Ads. Here is a diagram with the common tables.

    To make it short: an advertiser can have up to 10 variants of the same campaign with different text variations, can geo-target his ads, and a unique impression is counted only for an IP that hasn't been on a given site within the last 24 hours. Pretty simple, but the question is: what am I lacking here, from your experience? Later it would be much harder to fix design flaws, and since many SQL gurus are in here, maybe I over-normalized the DB, or did not normalize it as needed?

    Second question: my end goal is to get ads for a user from, say, Germany who hasn't seen the same ad on the same site for 24 hours, as long as the ads fit the user's country. Each impression is counted, as is each click, if there is one. I need to get 5 "random" ads based on IP, country, and higher CPC (pay per click). How can I achieve this with the current design, or how should I design the database so that it would be easy to get ads and show stats for advertisers?

    Thanks for any help.
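    The ad-selection question usually reduces to a single query once an impression log exists. The sketch below is an assumption about the schema (Ads and Impressions are hypothetical table names, since the diagram is not shown), in MySQL-flavored SQL: filter by country, exclude ads this IP has seen on this site in the last 24 hours, then prefer higher CPC:

        // Hypothetical schema; NOT EXISTS enforces the 24-hour uniqueness rule.
        const string PickAdsSql = @"
            SELECT a.ad_id, a.ad_text, a.cpc
            FROM Ads a
            WHERE a.country_code = @country
              AND NOT EXISTS (SELECT 1
                              FROM Impressions i
                              WHERE i.ad_id = a.ad_id
                                AND i.ip = @ip
                                AND i.site_id = @site
                                AND i.seen_at > NOW() - INTERVAL 24 HOUR)
            ORDER BY a.cpc DESC, RAND()
            LIMIT 5;";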

  • ActiveRecord table inheritance using set_table_name

    - by Jinyoung Kim
    Hi, I'm using ActiveRecord in Ruby on Rails. I have a table named documents (Document class) and I want to have another table data_documents (DataDocument class) which is effectively the same except for having a different table name. In other words, I want two tables with the same behavior except for the table name.

        class DataDocument < Document
          #set_table_name "data_documents"
          self.table_name = "data_documents"
        end

    My solution was to use class inheritance as above, yet this resulted in an inconsistent SQL statement for the create operation, where both the 'documents' table and the 'data_documents' table appear. Can you figure out why, and how I can make it work?

        >> DataDocument.create(:did=>"dd")
        ActiveRecord::StatementInvalid: Mysql::Error: Unknown column 'data_documents.did' in 'where clause': SELECT `documents`.id FROM `documents` WHERE (`data_documents`.`did` = BINARY 'dd') LIMIT 1
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:320:in `execute'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:595:in `select'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:62:in `select_all'

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem, how I tried to solve it, and my question on how to improve it at the end.

    I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want.

    The data I want here is 3 models that represent 3 pre-existing tables in the database, where each subsequent model depends on the previous one. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that each have a thread-local connection to the database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; that took 5 minutes. Writing the SQL statements took me a couple of hours, though.

    So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my second question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object), but it does not show the interpolated values.

    I would appreciate any guidance on how to do this faster. Thanks!

  • Understanding evaluation of expressions containing '++' and '->' operators in C.

    - by Leif Ericson
    Consider this example:

        struct
        {
            int num;
        } s, *ps;

        s.num = 0;
        ps = &s;
        ++ps->num;
        printf("%d", s.num); /* Prints 1 */

    It prints 1. I understand that this is because, according to operator precedence, -> is higher than ++, so the value ps->num (which is 0) is fetched first and then the ++ operator operates on it, incrementing it to 1.

        struct
        {
            int num;
        } s, *ps;

        s.num = 0;
        ps = &s;
        ps++->num;
        printf("%d", s.num); /* Prints 0 */

    In this example I get 0, and I don't understand why; the explanation of the first example should apply here as well. But it seems that this expression is evaluated as follows: first the ++ operator operates, and it operates on ps, so it increments it to the next struct. Only then does -> operate, and it does nothing, because it just fetches the num field of that next struct and does nothing with it. But this contradicts the precedence of operators, which says that -> has higher precedence than ++. Can someone explain this behavior?

    Edit: After reading two answers which refer to a C++ precedence table indicating that the prefix ++/-- operators have lower precedence than ->, I did some googling and came up with this link, which states that this rule also applies to C itself. It fits exactly and fully explains this behavior, but I must add that the table in this link contradicts the table in my own copy of K&R ANSI C. So if you have suggestions as to which source is correct, I would like to know. Thanks.

  • [NSFetchedResultsController sections] returns nil?

    - by Chris
    Hi everyone, I have been trying to resolve this for days at this stage and I'm hoping you can help. I have two view controllers which query two different tables from the same database using Core Data. The first view controller is opened with the app and displays fine. The second is called from within the first view controller, using a pretty standard fetch setup:

        - (NSFetchedResultsController *)fetchedClients
        {
            // Set up the fetched results controller if needed.
            if (fetchedClients == nil)
            {
                // Create the fetch request for the entity.
                NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];

                // Edit the entity name as appropriate.
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Clients" inManagedObjectContext:managedObjectContext];
                [fetchRequest setEntity:entity];

                // Edit the sort key as appropriate.
                NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"clientsName" ascending:YES];
                NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
                [fetchRequest setSortDescriptors:sortDescriptors];

                // Edit the section name key path and cache name if appropriate.
                // nil for section name key path means "no sections".
                NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"];
                aFetchedResultsController.delegate = self;
                self.fetchedClients = aFetchedResultsController;

                [aFetchedResultsController release];
                [fetchRequest release];
                [sortDescriptor release];
                [sortDescriptors release];
            }

            return fetchedClients;
        }

    When I call [self.fetchedClients sections], I get a nil (0x0) return. I have examined the database using an external application to ensure data exists in the "Clients" table. Can anyone think of a reason why [self.fetchedClients sections] would return nil?

    Many thanks for any help you can provide. Regards, Chris

  • Duplicate column name by JPA with @ElementCollection and @Inheritance

    - by gerry
    I've created the following scenario:

        @javax.persistence.Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public class MyEntity implements Serializable {

            @Id
            @GeneratedValue
            protected Long id;
            ...

            @ElementCollection
            @CollectionTable(name = "ENTITY_PARAMS")
            @MapKeyColumn(name = "ENTITY_KEY")
            @Column(name = "ENTITY_VALUE")
            protected Map<String, String> parameters;
            ...
        }

    As well as:

        @javax.persistence.Entity
        public class Sensor extends MyEntity {

            @Id
            @GeneratedValue
            protected Long id;
            ...
            // so here "protected Map<String, String> parameters;" is inherited !!!!
            ...
        }

    Running this example, no tables are created and I get the following message:

        WARNUNG: Got SQLException executing statement "CREATE TABLE ENTITY_PARAMS (Entity_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255), ENTITY_KEY VARCHAR(255), Sensor_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255))": com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column name 'ENTITY_VALUE'

    I also tried overriding the attributes on the Sensor class...

        @AttributeOverrides({
            @AttributeOverride(name = "ENTITY_KEY", column = @Column(name = "SENSOR_KEY")),
            @AttributeOverride(name = "ENTITY_VALUE", column = @Column(name = "SENSOR_VALUE"))
        })

    ... but I get the same error. Can anybody help me?

  • MySQL & PHP - select/option lists and showing data to users that still allows me to generate queries

    - by Andrew Heath
    Sorry for the unclear title; an example will clear things up:

        TABLE: Scenario_victories
        ID  scenid   timestamp            userid  side  playdate
        1   RtBr001  2010-03-15 17:13:36  7       1     2010-03-10
        2   RtBr001  2010-03-15 17:13:36  7       1     2010-03-10
        3   RtBr001  2010-03-15 17:13:51  7       2     2010-03-10

    ID and timestamp are auto-inserted by the database when the other 4 fields are added. The first thing to note is that a user can record multiple playings of the same scenario (scenid) on the same date (playdate), possibly with the same outcome (side = winner); hence the need for the unique ID, and timestamps for good measure.

    Now, on their user page, I'm displaying their recorded play history in a <select><option>... list form with 2 buttons at the end: Delete Record and Go to Scenario. My script takes the scenid and, after hitting a few other tables, returns with something more user-friendly like:

        (playdate)  (from scenid)            (from side)
        #########################################################
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Soviet Union won     #
        #########################################################
        [Delete Record] [Go To Scenario]

    in HTML:

        <select name="history" size=3>
            <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
            <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
            <option>2010-03-10 Road to Berlin #1 -- Soviet Union won</option>
        </select>

    Now, if you were to highlight the first record and click Go to Scenario, there is enough information there for me to parse it and produce the exact scenario you want to see. However, if you were to select Delete Record, there is not: I have the playdate, and I can parse the scenid and side from what's listed, but in this example all three records would give the same result. I appear to have painted myself into a corner. Does anyone have a suggestion as to how I can get some unique identifying data (ID and/or timestamp) to ride along on this form without showing it to the user? PHP only please, I must be NoScript compliant!

  • Strange error when filling a data adapter.

    - by Tim C
    I am receiving the following error in my code (C#, .NET 3.5, VS2008) when I try to connect to an Excel sheet and fill an OleDbDataAdapter with the results of a query. First the error:

        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    And here is the code, which is honestly pretty simple:

        var excelFileName = string.Format("c:/Metadata_Tool.xlsm");
        var connectionString = string.Format(
            "Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; Extended Properties=Excel 12.0;HDR=YES;",
            excelFileName);

        var adapter = new OleDbDataAdapter("Select * FROM [Video Tagging XML]", connectionString);
        var ds = new DataSet();
        adapter.Fill(ds, "VTX");

        DataTable data = ds.Tables["VTX"];
        foreach (DataRow myRow in data.Rows)
        {
            foreach (DataColumn myColumn in data.Columns)
            {
                Console.Write("\t{0}", myRow[myColumn]);
            }
            Console.WriteLine();
        }
        Console.ReadLine();

    I get the error on the line adapter.Fill(ds, "VTX");. I did find a Microsoft forum post saying to turn on JIT optimization in VS2008 from the Tools/Options/Debug/General menu, but this did not seem to help. Any help would be greatly appreciated, thanks!
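    One detail worth checking, though it is not from the original post: when Extended Properties carries more than one setting, the whole section must itself be quoted inside the connection string; unquoted, HDR=YES is parsed as a separate, invalid top-level keyword. Opening the connection explicitly also makes provider problems (for example a 64-bit process without a 64-bit ACE provider installed) surface at Open() rather than inside Fill(). A hedged variant:

        // Quote the Extended Properties block so HDR stays inside it;
        // "Excel 12.0 Macro" is the property for macro-enabled .xlsm workbooks.
        var connectionString = string.Format(
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source={0};" +
            "Extended Properties=\"Excel 12.0 Macro;HDR=YES\";", excelFileName);

        using (var connection = new OleDbConnection(connectionString))
        {
            connection.Open(); // bitness/provider issues show up here
            var adapter = new OleDbDataAdapter("SELECT * FROM [Video Tagging XML]", connection);
            var ds = new DataSet();
            adapter.Fill(ds, "VTX");
        }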

  • Full text index requires dropping and recreating - why?

    - by Amjid Qureshi
    Hi all, so I've got a web app running on .NET 3.5 connected to a SQL 2005 box. We do scheduled releases every 2 weeks. About 14 tables out of 250 are full-text indexed. After not every release, but a few too many, the indexes crap out. They seem to have data in them, but when we try to search them from the front end or SQL enterprise we get timeouts/hangs.

    We have a script that disables the indexes, drops them, deletes the catalog and then recreates everything. This fixes the problem 99 times out of 100, and that one other time, we run the script again and it all works. We have also tried just rebuilding the full-text index, but that doesn't fix the issue.

    My question is: why do we have to do this? What can we do to sort the index out? Here is a bit of the script:

        IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
            ALTER FULLTEXT INDEX ON [dbo].[Address] DISABLE
        GO
        IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
            DROP FULLTEXT INDEX ON [dbo].[Address]
        GO
        IF EXISTS (SELECT * FROM sysfulltextcatalogs ftc WHERE ftc.name = N'DbName.FullTextCatalog')
            DROP FULLTEXT CATALOG [DbName.FullTextCatalog]
        GO

        -- may need this line if we get an error
        BACKUP LOG SMS2 WITH TRUNCATE_ONLY

        CREATE FULLTEXT CATALOG [DbName.FullTextCatalog]
            ON FILEGROUP [FullTextCatalogs]
            IN PATH N'F:\Data'
            AS DEFAULT
            AUTHORIZATION [dbo]

        CREATE FULLTEXT INDEX ON [Address](CommonPlace LANGUAGE 'ENGLISH')
            KEY INDEX PK_Address
            ON [DbName.FullTextCatalog]
            WITH CHANGE_TRACKING AUTO
