Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 538/838

  • Rails Fixtures Don't Seem to Support Transitive Associations

    - by Rick Wayne
    So I've got 3 models in my app, connected by HABTM relations: class Member < ActiveRecord::Base has_and_belongs_to_many :groups end class Group < ActiveRecord::Base has_and_belongs_to_many :software_instances end class SoftwareInstance < ActiveRecord::Base end And I have the appropriate join tables, and use foxy fixtures to load them: -- members.yml -- rick: name: Rick groups: cals -- groups.yml -- cals: name: CALS software_instances: delphi -- software_instances.yml -- delphi: name: No, this is not a joke, someone in our group is still maintaining Delphi apps... And I do a rake db:fixtures:load, and examine the tables. Yep, the join tables groups_members and groups_software_instances have appropriate IDs in, big numbers generated by hashing the fixture names. Yay! And if I run script/console, I can look at rick's groups and see that cals is in there, or cals's software_instances and delphi shows up. (Likewise delphi knows that it's in the group cals.) Yay! But if I look at rick.groups[0].software_instances...nada. []. And, oddly enough, rick.groups[0].id is something like 30 or 10, not 398398439. I've tried swapping around where the association is defined in the fixtures, e.g. -- members.yml -- rick: name: rick -- groups.yml -- cals: name: CALS members: rick No error, but same result. (Later: Confirmed, same behavior on MySQL. In fact I get the same thing if I take the foxification out and use ERB to directly create the join tables, so: -- license_groups_software_instances -- cals_delphi: group_id: <%= Fixtures.identify 'cals' %> software_instance_id: <%= Fixtures.identify 'delphi' %> Before I chuck it all, reject foxy fixtures and all who sail in her and just hand-code IDs...anybody got a suggestion as to why this is happening? I'm currently working in Rails 2.3.2 with sqlite3, this occurs on both OS X and Ubuntu (and I think with MySQL too). TIA, my compadres.

    Read the article

  • Strange - Clicking Update button doesn’t cause a postback due to <!-- tag

    - by AspOnMyNet
    If I define the following template inside a DetailsView, then upon clicking an Update or Insert button, the page is posted back to the server: <EditItemTemplate> <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox> <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate" Type="Date" Operator="DataTypeCheck" Display="Dynamic" >*</asp:CompareValidator> </EditItemTemplate> If I remove the CompareValidator control from the above code by simply deleting it, the page still gets posted back. But if instead I remove the CompareValidator control by enclosing it within <!-- --> tags, then for some reason clicking an Update or Insert button doesn’t cause a postback...instead nothing happens: <EditItemTemplate> <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox> <!-- <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate" Type="Date" Operator="DataTypeCheck" Display="Dynamic" >*</asp:CompareValidator> --> </EditItemTemplate> Any idea why the page doesn't get posted back? Thanks

    Read the article

  • Conversion failed when converting datetime from character string. Linq To SQL & OpenXML

    - by chobo2
    Hi, I've been following this tutorial on how to do a LINQ to SQL batch insert: http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx However, I have a datetime field in my database and I keep getting this error. System.Data.SqlClient.SqlException was unhandled Message="Conversion failed when converting datetime from character string." Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=16 LineNumber=7 Number=241 Procedure="spTEST_InsertXMLTEST_TEST" Server="" State=1 StackTrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) I am not sure why, since when I take the datetime from the generated XML file and copy it into SQL Server 2005 manually, it has no problem converting it. This is my SP: CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO UserTable(CreateDate) SELECT XMLProdTable.CreateDate FROM OPENXML(@hDoc, 'ArrayOfUserTable/UserTable', 2) WITH ( CreateDate datetime ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code: using (TestDataContext db = new TestDataContext()) { UserTable[] testRecords = new UserTable[1]; for (int count = 0; count < 1; count++) { UserTable testRecord = new UserTable() { CreateDate = DateTime.Now }; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(UserTable[])); serializer.Serialize(sWriter, testRecords); db.spTEST_InsertXMLTEST_TEST(sBuilder.ToString()); } Rendered XML doc: <?xml version="1.0" encoding="utf-16"?> <ArrayOfUserTable xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <UserTable> <CreateDate>2010-05-19T19:35:54.9339251-07:00</CreateDate> </UserTable> </ArrayOfUserTable>
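
    One thing worth checking (an assumption on my part, not something confirmed above): the serialized value 2010-05-19T19:35:54.9339251-07:00 carries a seven-digit fraction and a UTC offset, and the old DATETIME type will usually refuse to convert that form, while a trimmed yyyy-MM-ddTHH:mm:ss.fff literal converts fine. A quick illustration of the trimming (shown in Python rather than the poster's C#, purely to make the target shape concrete):

        # Illustration only: reduce the offset-bearing, 7-digit-fraction value to
        # the millisecond form that SQL Server 2005's DATETIME accepts.
        from datetime import datetime

        raw = "2010-05-19T19:35:54.9339251-07:00"
        parsed = datetime.strptime(raw[:26], "%Y-%m-%dT%H:%M:%S.%f")  # ignore the offset
        trimmed = parsed.strftime("%Y-%m-%dT%H:%M:%S.") + "%03d" % (parsed.microsecond // 1000)
        print(trimmed)  # 2010-05-19T19:35:54.933

    In the C# above, the equivalent would be emitting CreateDate as a string formatted with "yyyy-MM-ddTHH:mm:ss.fff" instead of letting XmlSerializer write the full round-trip form.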

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the join order with STRAIGHT_JOIN so the optimizer would not reorder the tables; PostgreSQL offers no such keyword. Here is the explain: Here is the query: SELECT avg(d.amount) AS amount, y.year FROM station s, station_district sd, year_ref y, month_ref m, daily d LEFT JOIN city c ON c.id = 10663 WHERE -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 AND -- Ignore stations outside the given elevations -- s.elevation BETWEEN 0 AND 2000 AND sd.id = s.station_district_id AND -- Gather all known years for that station ... -- y.station_district_id = sd.id AND -- The data before 1900 is shaky; insufficient after 2009. -- y.year BETWEEN 1980 AND 2000 AND -- Filtered by all known months ... -- m.year_ref_id = y.id AND m.month = 12 AND -- Whittled down by category ... -- m.category_id = '001' AND -- Into the valid daily climate data. -- m.id = d.month_ref_id AND d.daily_flag_id <> 'M' GROUP BY y.year It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!
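
    For what it's worth, one common way to pin the plan in PostgreSQL (a sketch, not verified against this schema) is to rewrite the query with explicit JOINs leading from city and then lower the planner's join_collapse_limit for the session so the written order is kept. Roughly, from Python with psycopg2 (connection details are placeholders):

        # Sketch: make PostgreSQL keep the textual join order for this session.
        import psycopg2

        conn = psycopg2.connect("dbname=climate user=postgres")  # hypothetical DSN
        cur = conn.cursor()

        # With join_collapse_limit = 1 the planner does not reorder explicit JOINs,
        # so a query rewritten as FROM city c JOIN station s ON ... JOIN daily d ON ...
        # is planned starting from city rather than the 300-million-row daily table.
        cur.execute("SET join_collapse_limit = 1")
        # ...then execute the rewritten query on this cursor and inspect EXPLAIN.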

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never updated otherwise or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) with easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them make a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
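
    For reference, the read path described above is tiny; a minimal sketch (table and column names are taken from the quoted query, the MySQLdb connection details and the body column are hypothetical):

        # Minimal sketch of "read document N from save M to the end" against MySQL.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="docs")

        def read_from_save(document_id, first_save_id):
            """Return the document contents from save #first_save_id to the end."""
            cur = conn.cursor()
            cur.execute(
                "SELECT body FROM saves "
                "WHERE document_id = %s AND save_id >= %s "
                "ORDER BY save_id",
                (document_id, first_save_id),
            )
            # Concatenate the incremental saves in order, as described above.
            return "".join(row[0] for row in cur.fetchall())

        print(read_from_save(14, 53))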

    Read the article

  • Using NSURLRequest to pass key-value pairs to PHP script with POST

    - by Ralf Mclaren
    Hi, I'm fairly new to objective-c, and am looking to pass a number of key-value pairs to a PHP script using POST. I'm using the following code but the data just doesn't seem to be getting posted through. I tried sending stuff through using NSData as well, but neither seem to be working. NSDictionary* data = [NSDictionary dictionaryWithObjectsAndKeys: @"bob", @"sender", @"aaron", @"rcpt", @"hi there", @"message", nil]; NSURL *url = [NSURL URLWithString:@"http://myserver.com/script.php"]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url]; [request setHTTPMethod:@"POST"]; [request setHTTPBody:[NSData dataWithBytes:data length:[data count]]]; NSURLResponse *response; NSError *err; NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&err]; NSLog(@"responseData: %@", content); This is getting sent to this simple script to perform a db insert: <?php $sender = $_POST['sender']; $rcpt = $_POST['rcpt']; $message = $_POST['message']; //script variables include ("vars.php"); $con = mysql_connect($host, $user, $pass); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("mydb", $con); mysql_query("INSERT INTO php_test (SENDER, RCPT, MESSAGE) VALUES ($sender, $rcpt, $message)"); echo "complete" ?> Any ideas?
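
    Since the PHP side reads $_POST, the request body has to arrive as ordinary application/x-www-form-urlencoded data with a matching Content-Type header; dataWithBytes:length: on an NSDictionary does not produce that. Just to illustrate the wire format the script expects (shown in Python only because it is compact, not because it is the poster's language):

        # Illustration of the x-www-form-urlencoded body the PHP script expects.
        from urllib.parse import urlencode

        body = urlencode({"sender": "bob", "rcpt": "aaron", "message": "hi there"})
        print(body)  # sender=bob&rcpt=aaron&message=hi+there

    The Objective-C request would need its HTTPBody set to a string of that shape encoded as UTF-8 (ideally with Content-Type application/x-www-form-urlencoded), rather than raw bytes taken from the NSDictionary object itself.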

    Read the article

  • Informix, NHibernate, TransactionScope interaction difficulties

    - by John Prideaux
    I have a small program that is trying to wrap an NHibernate insert into an Informix database in a TransactionScope object using the Informix .NET Provider. I am getting the error specified below. The code without the TransactionScope object works -- including when the insert is wrapped in an NHibernate session transaction. Any ideas on what the problem is? BTW, without the EnterpriseServicesInterop, the Informix .NET Provider will not participate in a TransactionScope transaction (verified without NHibernate involved). Code Snippet: public static void TestTScope() { Employee johnp = new Employee { name = "John Prideaux" }; using (TransactionScope tscope = new TransactionScope( TransactionScopeOption.Required, new TransactionOptions() { Timeout = new TimeSpan(0, 1, 0), IsolationLevel = IsolationLevel.ReadCommitted }, EnterpriseServicesInteropOption.Full)) { using (ISession session = OpenSession()) { session.Save(johnp); Console.WriteLine("Saved John to the database"); } } Console.WriteLine("Transaction should be rolled back"); } static ISession OpenSession() { if (factory == null) { Configuration c = new Configuration(); c.AddAssembly(Assembly.GetCallingAssembly()); factory = c.BuildSessionFactory(); } return factory.OpenSession(); } static ISessionFactory factory; Stack Trace: NHibernate.ADOException was unhandled Message="Could not close IBM.Data.Informix.IfxConnection connection" Source="NHibernate" StackTrace: at NHibernate.Connection.ConnectionProvider.CloseConnection(IDbConnection conn) at NHibernate.Connection.DriverConnectionProvider.CloseConnection(IDbConnection conn) at NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Release() at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) at NHibernate.Cfg.Configuration.BuildSessionFactory() at HelloNHibernate.Employee.OpenSession() in D:\Development\ScratchProject\HelloNHibernate\Employee.cs:line 73 at HelloNHibernate.Employee.TestTScope() in D:\Development\ScratchProject\HelloNHibernate\Employee.cs:line 53 at HelloNHibernate.Program.Main(String[] args) in D:\Development\ScratchProject\HelloNHibernate\Program.cs:line 19 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: IBM.Data.Informix.IfxException Message="ERROR - no error information available" Source="IBM.Data.Informix" ErrorCode=-2147467259 StackTrace: at IBM.Data.Informix.IfxConnection.HandleError(IntPtr hHandle, SQL_HANDLE hType, RETCODE retcode) at IBM.Data.Informix.IfxConnection.DisposeClose() at IBM.Data.Informix.IfxConnection.Close() at NHibernate.Connection.ConnectionProvider.CloseConnection(IDbConnection conn) InnerException:

    Read the article

  • (1 2 3 . #<void>)- heapsort

    - by superguay
    Hello everybody: I tried to implement a "pairing heap" with all the regular operations (merge, delete-min etc.), then I've been requested to write a function that would sort a list using my newly constructed heap implementation. Unfortunately it seems that something goes wrong... Here's the relevant code: (define (heap-merge h1 h2) (cond ((heap-empty? h1) h2) ((heap-empty? h2) h1) (else (let ((min1 (heap-get-min h1)) (min2 (heap-get-min h2))) (if ((heap-get-less h1) min1 min2) (make-pairing-heap (heap-get-less h1) min1 (cons h2 (heap-get-subheaps h1))) (make-pairing-heap (heap-get-less h1) min2 (cons h1 (heap-get-subheaps h2)))))))) (define (heap-insert element h) (heap-merge (make-pairing-heap (heap-get-less h) element '()) h)) (define (heap-delete-min h) (define (merge-in-pairs less? subheaps) (cond ((null? subheaps) (make-heap less?)) ((null? (cdr subheaps)) (car subheaps)) (else (heap-merge (heap-merge (car subheaps) (cadr subheaps)) (merge-in-pairs less? (cddr subheaps)))))) (if (heap-empty? h) (error "expected pairing-heap for first argument, got an empty heap ") (merge-in-pairs (heap-get-less h) (heap-get-subheaps h)))) (define (heapsort l less?) (let aux ((h (accumulate heap-insert (make-heap less?) l))) (if (not (heap-empty? h)) (cons (heap-get-min h) (aux (heap-delete-min h)))))) And these are some selectors that may help you to understand the code: (define (make-pairing-heap less? min subheaps) (cons less? (cons min subheaps))) (define (make-heap less?) (cons less? '())) (define (heap-get-less h) (car h)) (define (heap-empty? h) (if (null? (cdr h)) #t #f)) Now let's get to the problem: When I run 'heapsort' it returns the sorted list with "void", as you can see: (heapsort (list 1 2 3) <) gives (1 2 3 . #<void>)... How can I fix it? Regards, Superguay

    Read the article

  • Default class for SQLAlchemy single table inheritance

    - by eclaird
    I've set up a single table inheritance, but I need a "default" class to use when an unknown polymorphic identity is encountered. The database is not in my control and so the data can be pretty much anything. A working example setup: import sqlalchemy as sa from sqlalchemy import orm engine = sa.create_engine('sqlite://') metadata = sa.MetaData(bind=engine) table = sa.Table('example_types', metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('type', sa.Integer), ) metadata.create_all() class BaseType(object): pass class TypeA(BaseType): pass class TypeB(BaseType): pass base_mapper = orm.mapper(BaseType, table, polymorphic_on=table.c.type, polymorphic_identity=None, ) orm.mapper(TypeA, inherits=base_mapper, polymorphic_identity='A', ) orm.mapper(TypeB, inherits=base_mapper, polymorphic_identity='B', ) Session = orm.sessionmaker(autocommit=False, autoflush=False) session = Session() Now, if I insert a new unmapped identity... engine.execute('INSERT INTO EXAMPLE_TYPES (TYPE) VALUES (\'C\')') session.query(BaseType).first() ...things break. Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".../SQLAlchemy-0.6.5-py2.6.egg/sqlalchemy/orm/query.py", line 1619, in first ret = list(self[0:1]) File ".../SQLAlchemy-0.6.5-py2.6.egg/sqlalchemy/orm/query.py", line 1528, in __getitem__ return list(res) File ".../SQLAlchemy-0.6.5-py2.6.egg/sqlalchemy/orm/query.py", line 1797, in instances rows = [process[0](row, None) for row in fetch] File ".../SQLAlchemy-0.6.5-py2.6.egg/sqlalchemy/orm/mapper.py", line 2179, in _instance _instance = polymorphic_instances[discriminator] File ".../SQLAlchemy-0.6.5-py2.6.egg/sqlalchemy/util.py", line 83, in __missing__ self[key] = val = self.creator(key) File ".../SQLAlchemy-0.6.5-py2.6.egg/sqlalchemy/orm/mapper.py", line 2341, in configure_subclass_mapper discriminator) AssertionError: No such polymorphic_identity u'C' is defined What I expected: >>> result = session.query(BaseType).first() >>> result <BaseType object at 0x1c8db70> >>> result.type u'C' I think this used to work with some older version of SQLAlchemy, but I haven't been keeping up with the development lately. Any pointers on how to accomplish this?

    Read the article

  • Linq to SQL not inserting data onto the DB

    - by Jesus Rodriguez
    Hello! I have some weird behaviour here; I've looked around the internet and SO and haven't found an answer. I have to admit this is my first time actually using a database: I know how to work with SQL, but I've never used it in an application before. Anyway, my app isn't inserting data, and I created a very simple project to test it, with no solution yet. I have an example database in SQL Server with Id - int (identity primary key) and Name - nchar(10) (not null). The table is called "Person", simple as pie. I have this: static void Main(string[] args) { var db = new ExampleDBDataContext {Log = Console.Out}; var jesus = new Person {Name = "Jesus"}; db.Persons.InsertOnSubmit(jesus); db.SubmitChanges(); var query = from person in db.Persons select person; foreach (var p in query) { Console.WriteLine(p.Name); } } As you can see, nothing strange. It shows Jesus in the console. But if you look at the table data, there is no data; the table is empty. If I comment out the object creation and insertion, the foreach doesn't print a thing (which is normal, since there is no data in the database). The weird thing is that when I created a row in the database manually, its Id was 2, not 1 (was LINQ really talking to the database even though it didn't create the row?). Here is the log: INSERT INTO [dbo].Person VALUES (@p0) SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value] -- @p0: Input NChar (Size = 10; Prec = 0; Scale = 0) [Jesus] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926 SELECT [t0].[Id], [t0].[Name] FROM [dbo].[Person] AS [t0] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926 I am really confused; all the blogs and books use this kind of snippet to insert an element into a database. Thank you for helping.

    Read the article

  • Django forms, inheritance and order of form fields

    - by Hannson
    I'm using Django forms in my website and would like to control the order of the fields. Here's how I define my forms: class edit_form(forms.Form): summary = forms.CharField() description = forms.CharField(widget=forms.Textarea) class create_form(edit_form): name = forms.CharField() The name is immutable and should only be listed when the entity is created. I use inheritance to add consistency and DRY principles. What happens (which is not erroneous, in fact totally expected) is that the name field is listed last in the view/html, but I'd like the name field to be on top of summary and description. I do realize that I could easily fix it by copying summary and description into create_form and lose the inheritance, but I'd like to know if this is possible. Why? Imagine you've got 100 fields in edit_form and have to add 10 fields on the top in create_form - copying and maintaining the two forms wouldn't look so sexy then. (This is not my case; I'm just making up an example.) So, how can I override this behavior? Edit: Apparently there's no proper way to do this without going through nasty hacks (fiddling with the .fields attribute). The .fields attribute is a SortedDict (one of Django's internal datastructures) which doesn't provide any way to reorder key:value pairs. It does however provide a way to insert items at a given index, but that would move the items from the class members and into the constructor. This method would work, but make the code less readable. The only other way I see fit is to modify the framework itself, which is less than optimal in most situations. In short the code would become something like this: class edit_form(forms.Form): summary = forms.CharField() description = forms.CharField(widget=forms.Textarea) class create_form(edit_form): def __init__(self,*args,**kwargs): forms.Form.__init__(self,*args,**kwargs) self.fields.insert(0,'name',forms.CharField()) That shut me up :)
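
    In the Django versions this question targets, form.fields is a SortedDict that also exposes a keyOrder list, so a slightly less intrusive variant of the constructor trick (still a sketch for old, pre-1.7 Django; newer releases use field_order and an OrderedDict instead) keeps the class-level field definition and only reorders the keys:

        # Sketch for old (pre-1.7) Django, where form.fields has a keyOrder list.
        from django import forms

        class create_form(edit_form):  # edit_form as defined above
            name = forms.CharField()

            def __init__(self, *args, **kwargs):
                super(create_form, self).__init__(*args, **kwargs)
                order = self.fields.keyOrder
                # Move 'name' to the front without redefining the inherited fields.
                order.insert(0, order.pop(order.index('name')))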

    Read the article

  • sp_addlinkedserver using trigger

    - by Nanda
    I have the following trigger, which causes an error when it runs: CREATE TRIGGER ... ON ... FOR INSERT, UPDATE AS IF UPDATE(STATUS) BEGIN DECLARE @newPrice VARCHAR(50) DECLARE @FILENAME VARCHAR(50) DECLARE @server VARCHAR(50) DECLARE @provider VARCHAR(50) DECLARE @datasrc VARCHAR(50) DECLARE @location VARCHAR(50) DECLARE @provstr VARCHAR(50) DECLARE @catalog VARCHAR(50) DECLARE @DBNAME VARCHAR(50) SET @server=xx SET @provider=xx SET @datasrc=xx SET @provstr='DRIVER={SQL Server};SERVER=xxxxxxxx;UID=xx;PWD=xx;' SET @DBNAME='[xx]' SET @newPrice = (SELECT STATUS FROM Inserted) SET @FILENAME = (SELECT INPUT_XML_FILE_NAME FROM Inserted) IF @newPrice = 'FAIL' BEGIN EXEC master.dbo.sp_addlinkedserver @server, '', @provider, @datasrc, @provstr EXEC master.dbo.sp_addlinkedsrvlogin @server, 'true' INSERT INTO [@server].[@DBNAME].[dbo].[maildetails] ( 'to', 'cc', 'from', 'subject', 'body', 'status', 'Attachment', 'APPLICATION', 'ID', 'Timestamp', 'AttachmentName' ) VALUES ( 'P23741', '', '', 'XMLFAILED', @FILENAME, '4', '', '8', '', GETDATE(), '' ) EXEC sp_dropserver @server END END The error is: Msg 15002, Level 16, State 1, Procedure sp_MSaddserver_internal, Line 28 The procedure 'sys.sp_addlinkedserver' cannot be executed within a transaction. Msg 15002, Level 16, State 1, Procedure sp_addlinkedsrvlogin, Line 17 The procedure 'sys.sp_addlinkedsrvlogin' cannot be executed within a transaction. Msg 15002, Level 16, State 1, Procedure sp_dropserver, Line 12 The procedure 'sys.sp_dropserver' cannot be executed within a transaction. How can I prevent this error from occurring?

    Read the article

  • Error C2451: Illegal conditional expression of type 'UnaryOp<E1, Op>' in ostream - visual studio 9

    - by Steven Hill
    I am getting a repeated error with VS 9. The code compiles under GNU C++, but I want to debug with the VS IDE. Any idea what could be causing this error? Error 13 error C2451: conditional expression of type 'UnaryOp<E1, Op>' is illegal \Microsoft Visual Studio 9.0\VC\include\ostream 512 //unary constraint template <class E1, class Op> class UnaryOp : public Constraint { public: const E1& e1; UnaryOp(const E1& _e1); bool Satisfiable() const; Bool SatisfiableAux() const; void Print (std::ostream& os) const; UnaryOp* clone () const; //operator bool () const { return true; } }; template <class E1, class Op> std::ostream& operator<<(std::ostream& os, const UnaryOp<E1, Op>& unop); UnaryOp code that uses ostream: template <class E1, class Op> INLINE void UnaryOp<E1, Op>::Print (std::ostream& os) const { os << *this; } template <class E1, class Op> INLINE std::ostream& operator<<(std::ostream& os, const UnaryOp<E1, Op>& unop) { return os << Op::name << unop.e1; } ostream line with error: _Myt& __CLR_OR_THIS_CALL put(_Elem _Ch) { // insert a character ios_base::iostate _State = ios_base::goodbit; const sentry _Ok(*this); 512 if (!_Ok) _State |= ios_base::badbit; else { // state okay, insert character _TRY_IO_BEGIN

    Read the article

  • Unable to generate a temporary class (result=1). error CS0030 - C#

    - by ltech
    Running XSD.exe on my xml to generate C# class. All works well except on this property public DocumentATTRIBUTES[][] Document { get { return this.documentField; } set { this.documentField = value; } } I want to try and use CollectionBase, and this was my attempt public DocumentATTRIBUTESCollection Document { get { return this.documentField; } set { this.documentField = value; } } /// <remarks/> [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)] public partial class DocumentATTRIBUTES { private string _author; private string _maxVersions; private string _summary; /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string author { get { return _author; } set { _author = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string max_versions { get { return _maxVersions; } set { _maxVersions = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string summary { get { return _summary; } set { _summary = value; } } } public class DocumentAttributeCollection : System.Collections.CollectionBase { public DocumentAttributeCollection() : base() { } public DocumentATTRIBUTES this[int index] { get { return (DocumentATTRIBUTES)this.InnerList[index]; } } public void Insert(int index, DocumentATTRIBUTES value) { this.InnerList.Insert(index, value); } public int Add(DocumentATTRIBUTES value) { return (this.InnerList.Add(value)); } } However when I try to serialize my object using XmlSerializer serializer = new XmlSerializer(typeof(DocumentMetaData)); I get the error: {"Unable to generate a temporary class (result=1).\r\nerror CS0030: Cannot convert type 'DocumentATTRIBUTES' to 'DocumentAttributeCollection'\r\nerror CS1502: The best overloaded method match for 'DocumentAttributeCollection.Add(DocumentATTRIBUTES)' has some invalid arguments\r\nerror CS1503: Argument '1': cannot convert from 'DocumentAttributeCollection' to 'DocumentATTRIBUTES'\r\n"}

    Read the article

  • Establishing Upper / Lower Bound in T-SQL Procedure

    - by Code Sherpa
    Hi. I am trying to establish an upper / lower bound in my stored procedure below and am having some problems at the end (I am getting no results, whereas without the temp table inner join I get the expected results). I need some help where I am trying to join the columns in my temp table #PageIndexForUsers to the rest of my join statement; I am mucking something up with this statement: INNER JOIN #PageIndexForUsers ON ( dbo.aspnet_Users.UserId = #PageIndexForUsers.UserId AND #PageIndexForUsers.IndexId >= @PageLowerBound AND #PageIndexForUsers.IndexId <= @PageUpperBound ) I could use feedback at this point - and any advice on how to improve my procedure's logic (if you see anything else that needs improvement) is also appreciated. Thanks in advance... ALTER PROCEDURE dbo.wb_Membership_GetAllUsers @ApplicationName nvarchar(256), @sortOrderId smallint = 0, @PageIndex int, @PageSize int AS BEGIN DECLARE @ApplicationId uniqueidentifier SELECT @ApplicationId = NULL SELECT @ApplicationId = ApplicationId FROM dbo.aspnet_Applications WHERE LOWER(@ApplicationName) = LoweredApplicationName IF (@ApplicationId IS NULL) RETURN 0 -- Set the page bounds DECLARE @PageLowerBound int DECLARE @PageUpperBound int DECLARE @TotalRecords int SET @PageLowerBound = @PageSize * @PageIndex SET @PageUpperBound = @PageSize - 1 + @PageLowerBound BEGIN TRY -- Create a temp table TO store the select results CREATE TABLE #PageIndexForUsers ( IndexId int IDENTITY (0, 1) NOT NULL, UserId uniqueidentifier ) -- Insert into our temp table INSERT INTO #PageIndexForUsers (UserId) SELECT u.UserId FROM dbo.aspnet_Membership m, dbo.aspnet_Users u WHERE u.ApplicationId = @ApplicationId AND u.UserId = m.UserId ORDER BY u.UserName SELECT @TotalRecords = @@ROWCOUNT SELECT dbo.wb_Profiles.profileid, dbo.wb_ProfileData.firstname, dbo.wb_ProfileData.lastname, dbo.wb_Email.emailaddress, dbo.wb_Email.isconfirmed, dbo.wb_Email.emaildomain, dbo.wb_Address.streetname, dbo.wb_Address.cityorprovince, dbo.wb_Address.state, dbo.wb_Address.postalorzip, dbo.wb_Address.country, dbo.wb_ProfileAddress.addresstype,dbo.wb_ProfileData.birthday, dbo.wb_ProfileData.gender, dbo.wb_Session.sessionid, dbo.wb_Session.lastactivitydate, dbo.aspnet_Membership.userid, dbo.aspnet_Membership.password, dbo.aspnet_Membership.passwordquestion, dbo.aspnet_Membership.passwordanswer, dbo.aspnet_Membership.createdate FROM dbo.wb_Profiles INNER JOIN dbo.wb_ProfileAddress ON ( dbo.wb_Profiles.profileid = dbo.wb_ProfileAddress.profileid AND dbo.wb_ProfileAddress.addresstype = 'home' ) INNER JOIN dbo.wb_Address ON dbo.wb_ProfileAddress.addressid = dbo.wb_Address.addressid INNER JOIN dbo.wb_ProfileData ON dbo.wb_Profiles.profileid = dbo.wb_ProfileData.profileid INNER JOIN dbo.wb_Email ON ( dbo.wb_Profiles.profileid = dbo.wb_Email.profileid AND dbo.wb_Email.isprimary = 1 ) INNER JOIN dbo.wb_Session ON dbo.wb_Profiles.profileid = dbo.wb_Session.profileid INNER JOIN dbo.aspnet_Membership ON dbo.wb_Profiles.userid = dbo.aspnet_Membership.userid INNER JOIN dbo.aspnet_Users ON dbo.aspnet_Membership.UserId = dbo.aspnet_Users.UserId INNER JOIN dbo.aspnet_Applications ON dbo.aspnet_Users.ApplicationId = dbo.aspnet_Applications.ApplicationId INNER JOIN #PageIndexForUsers ON ( dbo.aspnet_Users.UserId = #PageIndexForUsers.UserId AND #PageIndexForUsers.IndexId >= @PageLowerBound AND #PageIndexForUsers.IndexId <= @PageUpperBound ) ORDER BY CASE @sortOrderId WHEN 1 THEN dbo.wb_ProfileData.lastname WHEN 2 THEN dbo.wb_Profiles.username WHEN 3 THEN dbo.wb_Address.postalorzip WHEN 4 THEN dbo.wb_Address.state END END TRY BEGIN CATCH IF @@TRANCOUNT > 0 ROLLBACK TRAN EXEC wb_ErrorHandler RETURN 55555 END CATCH RETURN @TotalRecords END GO
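
    As a quick sanity check of the zero-based paging arithmetic above (plain Python, just to make the math concrete; note the temp table's IDENTITY (0, 1) also starts at zero, so the two line up):

        # PageLowerBound = PageSize * PageIndex; PageUpperBound = PageSize - 1 + PageLowerBound
        def page_bounds(page_index, page_size):
            lower = page_size * page_index
            upper = page_size - 1 + lower
            return lower, upper

        print(page_bounds(0, 25))  # (0, 24)  -> IndexId 0..24 on the first page
        print(page_bounds(2, 25))  # (50, 74) -> IndexId 50..74 on the third page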

    Read the article

  • Django 1.4.1 error loading MySQLdb module when attempting 'python manage.py shell'

    - by Paul
    I am trying to set up MySQL, and can't seem to be able to enter the Django manage.py shell interpreter. Getting the output below: rrdhcp-10-32-44-126:django pavelfage$ python manage.py shell Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/shell.py", line 45, in handle_noargs from django.db.models.loading import get_models File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 40, in <module> backend = load_backend(connection.settings_dict['ENGINE']) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 34, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 24, in load_backend return import_module('.base', backend_name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 16, in <module> raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named _mysql Any suggestions really appreciated.
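
    The last frame says the _mysql C extension that MySQLdb depends on cannot be imported, so the driver install is broken or missing for the interpreter that runs manage.py rather than anything being wrong in the Django project itself. A sketch of the usual checks (package name, credentials and settings values below are placeholders to verify for your setup):

        # Run this in the same Python that runs manage.py; if the import fails,
        # (re)install the driver first - for Python 2 the package is usually
        # MySQL-python, e.g. `pip install MySQL-python`.
        import MySQLdb
        print(MySQLdb.__version__)

        # settings.py for Django 1.4 would then point at the MySQL backend roughly like:
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.mysql",
                "NAME": "mydb",
                "USER": "dbuser",
                "PASSWORD": "secret",
                "HOST": "localhost",
                "PORT": "3306",
            }
        }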

    Read the article

  • Get id when inserting new row using TableAdapter.Update on a file based database

    - by phq
    I have a database table with one field, called ID, being an auto-increment integer. Using a TableAdapter I can read and modify existing rows as well as create new ones. However, if I try to modify a newly inserted row I get a DBConcurrencyException: OleDbConnection conn = new OleDbConnection(@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Shift.mdb;Persist Security Info=True"); ShiftDataSetTableAdapters.ShiftTableAdapter shiftTA = new ShiftDataSetTableAdapters.ShiftTableAdapter(); shiftTA.Connection = conn; ShiftDataSet.ShiftDataTable table = new ShiftDataSet.ShiftDataTable(); ShiftDataSet.ShiftRow row = table.NewShiftRow(); row.Name = "life"; table.Rows.Add(row); shiftTA.Update(row); // row.ID == -1 row.Name = "answer"; // <-- all fine up to here shiftTA.Update(row); // DBConcurrencyException: 0 rows affected Separate question: is there a static version of the NewShiftRow() method I can use so that I don't have to create the table every time I want to insert a new row? I guess the problem in the code comes from row.ID, which is still -1 after the first Update() call. The insert is successful, and in the database the row has a valid value of ID. How can I get that ID so that I can continue with the second Update call? Update: It looks like this could have been done automatically using this setting. However, according to the answer on MSDN social, OLE DB drivers do not support this feature. Not sure where to go from here - use something other than OLE DB? Update: Tried SQL Compact but discovered that it has the same limitation; it does not support multiple statements. Final question: is there any simple (single-file-based) database that would allow you to get the values of an inserted row?

    Read the article

  • Record/Playback with AudioQueue on iPhone

    - by Biranchi
    Hi, I am currently using Audio Queues on the iPhone to record and play back audio. What I would like to be able to do is record some audio, allow the user to pause the record queue, and seek back and forward through the audio to select a position from which they can start recording again. I have got over the seeking issue by making the playback AudioQueueBuffer sizes small enough so that the play audio queue callback happens at a rate that allows the user to use a slider control to hear the audio as they adjust the slider back and forth. I think I can achieve the recording at a new position by setting the inStartingPacket parameter of the AudioFileWritePackets function that I call from the audio recording queue callback. The trouble is that this only writes audio over the previously recorded audio. The file length obviously doesn't change, so if the user were to go backwards and record less audio than before, the old audio still remains after the end of the newly recorded audio. Is there a way I can get the AudioFile to truncate at the point the user starts to insert the new audio? Is there some other way I can remove the old audio starting at the insert position, or is there a better way of going about this task? Thanks

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a range start date, a range end date, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic to auto-adjust everything in Java? Or should that part be in PL/SQL, in a stored procedure call? This is something that we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101 Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert: = Example 1 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (20, 50, 'A') Gives (0, 10, 'X') **(20, 50, 'A') **(51, 100, 'Z') (200, 500, 'Y') = Example 2 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (40, 80, 'A') Gives (0, 10, 'X') **(30, 39, 'Z') **(40, 80, 'A') **(81, 100, 'Z') (200, 500, 'Y') = Example 3 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (50, 120, 'A') Gives (0, 10, 'X') **(30, 49, 'Z') **(50, 120, 'A') (200, 500, 'Y') = Example 4 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (20, 120, 'A') Gives (0, 10, 'X') **(20, 120, 'A') (200, 500, 'Y') The algorithm is as follows: given range = g; input range = i; output range set = o if i.start <= g.start if i.end >= g.end o_1 = i else o_1 = i o_2 = (i.end + 1, g.end) else if i.end >= g.end o_1 = (g.start, i.start - 1) o_2 = i else o_1 = (g.start, i.start - 1) o_2 = i o_3 = (i.end + 1, g.end)
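
    A plain-Python sketch of that splitting rule (inclusive integer endpoints, matching the examples) can help unit-test the logic before deciding whether it ends up in a trigger or in the Java layer:

        # Sketch of the overlap-adjustment rule described above.
        # Ranges are (start, end, value) tuples with inclusive endpoints.
        def adjust(existing, new):
            """Split one existing range around a new range; return the surviving pieces."""
            g_start, g_end, g_val = existing
            i_start, i_end, _ = new
            if i_end < g_start or i_start > g_end:      # no overlap, keep as-is
                return [existing]
            pieces = []
            if g_start < i_start:                        # left remainder
                pieces.append((g_start, i_start - 1, g_val))
            if g_end > i_end:                            # right remainder
                pieces.append((i_end + 1, g_end, g_val))
            return pieces

        def insert_range(table, new):
            """Apply the new range to every stored range, then add the new one."""
            adjusted = []
            for row in table:
                adjusted.extend(adjust(row, new))
            adjusted.append(new)
            return sorted(adjusted)

        table = [(0, 10, 'X'), (30, 100, 'Z'), (200, 500, 'Y')]
        print(insert_range(table, (40, 80, 'A')))
        # [(0, 10, 'X'), (30, 39, 'Z'), (40, 80, 'A'), (81, 100, 'Z'), (200, 500, 'Y')]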

    Read the article

  • Android SMS API

    - by Schildmeijer
    I know that the SMS content provider is not part of the public API (at least not documented), but if I understand correctly it's still possible to use many of the SMS features as long as you know how to use the API(?). E.g. it's pretty straightforward to insert an SMS into your inbox: ContentValues values = new ContentValues(); values.put("address", "+457014921911"); contentResolver.insert(Uri.parse("content://sms"), values); Unfortunately this does not trigger the standard "new-SMS-in-your-inbox" notification. Is it possible to trigger this manually? Edit: AFAIK the standard mail application (Messaging) in Android is listening for incoming SMSes using the android.permission.RECEIVE_SMS permission. And then, when a new SMS has arrived, a status bar notification is inserted with a "special" notification id. So one solution to my problem (stated above) could be to find and send the correct broadcast intent; something like a "NEW SMS HAS ARRIVED" intent. Edit: Downloaded a third-party messaging application (chompsms) from the Android Market. This application satisfies my needs better. When I execute the code above, chompsms notices the new SMS and shows the standard status bar notification. So I would say that the standard Android Messaging application is not detecting the SMS properly? Or am I wrong?

    Read the article

  • Declaring namespace schema with prefix in XSD/XML

    - by user1493537
    I am new to XML and I have a couple of questions about prefix. I need to "add the root schema element and insert the declaration for the XML schema namespace using the xc prefix. Set the default namespace and target of the schema to the URI test.com/test1" I am doing: <xc:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://test.com/test1" targetNamespace="http://test.com/test1"> </xc:schema> Is this correct? The next one is: "insert the root schema element, declaring the XML schema namespace with the xc prefix. Declare the library namespace using the lib prefix and the contributors namespace using the cont prefix. Set the default namespace and the schema target to URI test.com/test2" The library URI is http://test.com/library and contributor URI is test.com/contributor I am doing: <xc:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:lib="http://test.com/library" xmlns:clist="http://test.com/contributor" targetNamespace="http://test.com/test2"> </xc:schema> Does this look right? I am confused with prefix and all. Thanks for the help.

    Read the article

  • Intern working for Indian NGO - Help with PHP 4, advising staff

    - by Kevin Burke
    Hello, For the past three months I've been working for an Indian NGO (http://sevamandir.org), doing some volunteer work in the field but also trying to improve their website, which needs a ton of work. Recently I've been trying to fix the "subscribe to newsletter" button, which is broken. I used filter_var to filter the email input, but when I tried to test this out I got an error. Then I learned that the web host is still using php version 4.3.2 and register_globals is turned on. I've mentioned that they should upgrade their web host before (they are paying around $50 per year for Rediff Web Hosting, complete with 100MB storage and 1 MySQL database). That would add a lot of complexity for the IT staff of 3, who would have to update everyone's email information (I assume? this is a 250-person organization), and have me find a new web host and teach them about it. The staff isn't that sophisticated about web usage - the head guy still uses IE6, and the website's laid out in tables (they use Dreamweaver WYSIWYG to lay out pages). So I've got two options - use regular expressions to filter the email, which I'm not that skilled at doing (and would be more vulnerable to exploitation after I leave), turn off register globals and then try to teach the staff what I'm doing, or try to get them to upgrade their versions of PHP and MySQL and/or change web host. I'd appreciate some advice. Thanks for your help, Kevin

    Read the article

  • InsertOnSubmit - NullReferenceException

    - by Jackie Chou
    I have 2 models. AccountEntity: [Table(Name = "Account")] public class AccountEntity { [Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int id { get; set; } [Column(CanBeNull = false, Name = "email")] public string email { get; set; } [Column(CanBeNull = false, Name = "pwd")] public string pwd { get; set; } [Column(CanBeNull = false, Name = "row_guid")] public Guid guid { get; set; } private EntitySet<DetailsEntity> details_id { get; set; } [Association(Storage = "details_id", OtherKey = "id", ThisKey = "id")] public ICollection<DetailsEntity> detailsCollection { get; set; } } DetailsEntity: [Table(Name = "Details")] public class DetailsEntity { public DetailsEntity(AccountEntity a) { this.Account = a; } [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "int")] public int id { get; set; } private EntityRef<AccountEntity> _account = new EntityRef<AccountEntity>(); [Association(IsForeignKey = true, Storage = "_account", ThisKey = "id")] public AccountEntity Account { get; set; } } Main: using (Database db = new Database()) { AccountEntity a = new AccountEntity(); a.email = "hahaha"; a.pwd = "13212312"; a.guid = Guid.NewGuid(); db.Account.InsertOnSubmit(a); db.SubmitChanges(); } They have the relationship AccountEntity <- DetailsEntity (1-n). When I try to insert a record, a NullReferenceException is thrown, apparently caused by the EntitySet being null. Please help me get the insert working.

    Read the article

  • LINQ to SQL - Tracking New / Dirty Objects

    - by Joseph Sturtevant
    Is there a way to determine if a LINQ object has not yet been inserted in the database (new) or has been changed since the last update (dirty)? I plan on binding my UI to LINQ objects (using WPF) and need it to behave differently depending whether or not the object is already in the database. MyDataContext context = new MyDataContext(); MyObject obj; if (new Random().NextDouble() > .5) obj = new MyObject(); else obj = context.MyObjects.First(); // How can I distinguish these two cases? The only simple solution I can think of is to set the primary key of new records to a negative value (my PKs are an identity field and will therefore be set to a positive integer on INSERT). This will only work for detecting new records. It also requires identity PKs, and requires control of the code creating the new object. Is there a better way to do this? It seems like LINQ must be internally tracking the status of these objects so that it can know what to do on context.SubmitChanges(). Is there some way to access that "object status"? Clarification Apparently my initial question was confusing. I'm not looking for a way to insert or update records. I'm looking for a way, given any LINQ object, to determine if that object has not been inserted (new) or has been changed since its last update (dirty).

    Read the article

  • REST application, Transactions, Cache drop

    - by Julian Davchev
    Hi, I am building a REST API in PHP with a memcache layer on top for caching all resources. From what I have read, it works out best when documents are as simple as possible, mainly because that keeps the cache-drop sequences simple. So if there are 'building' and 'room' entities, the 'room' document would only carry the id of the 'building' rather than its whole data; on the API client side I would then merge the data as needed. The problem comes when I need to update/insert (in most cases more than one table). If I update one resource but the second update fails, the database is left inconsistent. I see several solutions: 1. Implement REST transactions, which I find wrong and complex, as the idea is to stay stateless and simple. 2. On update/insert actions, pass more complex data (not single entities) so I can force transactions at the API level. But then it becomes awkward that the PUT document structure no longer matches the GET document structure, and again it somehow makes the drop sequences complex. Any pointers are more than welcome. Cheers,
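
    For the read side, the client-side merge mentioned in the first paragraph can stay very small; a sketch (endpoint paths and field names are invented for illustration, using Python's requests rather than the API's PHP):

        # Sketch of merging a simple 'room' document with its 'building' on the client.
        import requests

        API = "https://api.example.com"  # hypothetical base URL

        def get_room_with_building(room_id):
            room = requests.get("%s/rooms/%s" % (API, room_id)).json()
            # The room document only carries building_id, so fetch (or hit the cache
            # for) the building document and merge it in on the client side.
            building = requests.get("%s/buildings/%s" % (API, room["building_id"])).json()
            room["building"] = building
            return room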

    Read the article
