Search Results

Search found 7884 results on 316 pages for 'ben record'.


  • How to get id of new record after model.save

    - by tonymarschall
    I have a model with the following db structure:

        mysql> describe units;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name       | varchar(128) | NO   |     | NULL    |                |
        | created_at | datetime     | NO   |     | NULL    |                |
        | updated_at | datetime     | NO   |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+
        7 rows in set (0.00 sec)

    After creating a new record and saving, I cannot get the id of the record.

        1.9.3p194 :001 > unit = Unit.new(:name => 'test')
         => #<Unit id: nil, name: "test", created_at: nil, updated_at: nil>
        1.9.3p194 :002 > unit.save
          (0.2ms)  BEGIN
          SQL (0.3ms)  INSERT INTO `units` (`created_at`, `name`, `updated_at`) VALUES ('2012-08-31 23:48:12', 'test', '2012-08-31 23:48:12')
          (144.6ms)  COMMIT
         => true
        1.9.3p194 :003 > unit.inspect
         => "#<Unit id: nil, name: \"test\", created_at: \"2012-08-31 23:48:12\", updated_at: \"2012-08-31 23:48:12\">"

        # unit.rb
        class Unit < ActiveRecord::Base
          attr_accessible :name
        end

        # migration
        class CreateUnits < ActiveRecord::Migration
          def change
            create_table :units do |t|
              t.string :name, :null => false
              t.timestamps
            end
          end
        end

    Tried this with other models and got the same result (no id). Data is definitely saved and I can get it with Unit.last. Another try with Foo, where Foo.id is also nil:

        # /var/www# rails g model Foo name:string
              invoke  active_record
              create    db/migrate/20120904030554_create_foos.rb
              create    app/models/foo.rb
              invoke    test_unit
              create      test/unit/foo_test.rb
              create      test/fixtures/foos.yml
        # /var/www# rake db:migrate
        == CreateFoos: migrating =====================================================
        -- create_table(:foos)
           -> 0.3451s
        == CreateFoos: migrated (0.3452s) ============================================
        # /var/www# rails c
        Loading development environment (Rails 3.2.8)
        1.9.3p194 :001 > foo = Foo.new(:name => 'bar')
         => #<Foo id: nil, name: "bar", created_at: nil, updated_at: nil>
        1.9.3p194 :002 > foo.save
          (0.2ms)  BEGIN
          SQL (0.4ms)  INSERT INTO `foos` (`created_at`, `name`, `updated_at`) VALUES ('2012-09-04 03:06:26', 'bar', '2012-09-04 03:06:26')
          (103.2ms)  COMMIT
         => true
        1.9.3p194 :003 > foo.inspect
         => "#<Foo id: nil, name: \"bar\", created_at: \"2012-09-04 03:06:26\", updated_at: \"2012-09-04 03:06:26\">"
        1.9.3p194 :004 > Foo.last
          Foo Load (0.5ms)  SELECT `foos`.* FROM `foos` ORDER BY `foos`.`id` DESC LIMIT 1
         => #<Foo id: 1, name: "bar", created_at: "2012-09-04 03:06:26", updated_at: "2012-09-04 03:06:26">

    Read the article

  • Automatically update or delete record(s) after x time in ColdFusion

    - by Nich
    Hi, I've searched all over the net for this. Hope that someone's got something. How would a record in a database be updated or deleted automatically after x time in ColdFusion? I know how to do it manually by writing an SQL statement that performs an action on all records older than x time, based on the timestamp. How would this be done automatically? Kind Regards, Nich
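    One common pattern is to let a scheduled job run that time-based SQL statement for you, for example a ColdFusion scheduled task (cfschedule) that requests a page executing something like the sketch below. The table name records, the created_at column and the status column are assumptions for illustration only:

        -- Hypothetical table/column names; run this on a schedule (e.g. from a cfschedule task).
        -- SQL Server syntax shown; in MySQL use created_at < DATE_SUB(NOW(), INTERVAL 24 HOUR).
        UPDATE records
        SET status = 'expired'
        WHERE status <> 'expired'
          AND created_at < DATEADD(HOUR, -24, GETDATE());

        -- Or remove the stale rows outright:
        DELETE FROM records
        WHERE created_at < DATEADD(HOUR, -24, GETDATE());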

    Read the article

  • retrieving the record which has only one value

    - by ashwani66476
    Hello All, please suggest a query which retrieves only those records whose value appears in a single row of the table. For example, table1:

        name  age
        aaa   20
        bbb   10
        ccc   20
        ddd   30

    If I run "select distinct age from table1", the result will be:

        age
        20
        10
        30

    But I need a query which gives a result like:

        name  age
        bbb   10
        ddd   30

    Thanks....
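    A sketch of one way to get this result, using the table1 name and columns from the question: keep only the ages that occur exactly once, then join back to pick up the matching names.

        SELECT t.name, t.age
        FROM table1 t
        JOIN (
            SELECT age
            FROM table1
            GROUP BY age
            HAVING COUNT(*) = 1      -- ages that appear in only one row
        ) singles ON singles.age = t.age;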

    Read the article

  • Use subform record set as domain argument in DAvg()

    - by harto
    Is it possible to use a subform's 'current' record set as the domain argument to DAvg() (etc.)? Basically, I have a subform that displays a subset of records from a query. I would like to run DAvg() over this subset. This is how I've gotten around it:

        =DAvg([FieldToAvg], [SubformQuery], "ChildField=Forms.MasterForm.MasterField And FieldToAvg > 0")

    but what I actually want is something like:

        =DAvg([FieldToAvg], [SubformCurrentlyDisplayedData], "FieldToAvg > 0")

    Is this possible in Access 2007?
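    Since domain aggregates want a table or saved query as their domain, one workaround (a sketch, not necessarily the only option) is to compute the average directly over the same filtered subset the subform shows, using the query, field and form names taken from the question. Saving this as a named query would also let DAvg() use it as a domain with only the FieldToAvg > 0 criterion.

        -- Form reference written as in the question; Access also accepts [Forms]![MasterForm]![MasterField].
        SELECT Avg(FieldToAvg) AS AvgOfFieldToAvg
        FROM SubformQuery
        WHERE ChildField = Forms.MasterForm.MasterField
          AND FieldToAvg > 0;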

    Read the article

  • Selenium IDE and telling it to record actions

    - by sprog
    I am trying to make a little application that allows recording of actions within a Flash or Silverlight application, in such a manner that you can compile your interactive application in test mode and then click on elements, which pass the action on to Selenium IDE, which then adds the command to the test case. I am curious whether this is even possible and how I can achieve it in Firefox.

    Read the article

  • Silverlight MVVM add record from user control

    - by strattonn
    I have a User Control for searching container numbers. If the user enters a container number that's new to the system, then I want to tell the VM "I have a new record to add". The MVVM approach avoids using events to communicate with the VM, as they create code-behind. Should I create a Dependency Property to trigger the VM? I don't think I've seen other controls with a "NewRecord" property. Any thoughts?

    Read the article

  • Best way to update record X when Y is inserted

    - by Saif Bechan
    I have a huge table that is mainly used for backup and administrative purposes. The only record that matters is the last inserted one. Ordering by insert time on every hit is just too slow. I want to keep a separate table with the last inserted id. In PHP I currently insert, get the last inserted id, and update the other table. Is there a more efficient way to do this?
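    If the goal is to avoid the extra round trip from PHP, one sketch (MySQL syntax, with hypothetical table names big_table and last_insert) is an AFTER INSERT trigger that keeps the pointer table up to date on the server:

        -- Assumes last_insert was seeded with a single row: INSERT INTO last_insert (id, last_id) VALUES (1, 0);
        DELIMITER //
        CREATE TRIGGER trg_track_last_insert
        AFTER INSERT ON big_table
        FOR EACH ROW
        BEGIN
            UPDATE last_insert SET last_id = NEW.id WHERE id = 1;
        END//
        DELIMITER ;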

    Read the article

  • Select a record with highest amount by joining two tables

    - by user2516394
    I've got 2 tables, Sales & Purchase. The Sales table has fields SaleId, Rate, Quantity, Date, CompanyId, UserID; the Purchase table has fields PurchaseId, Rate, Quantity, Date, CompanyId, UserID. I want to select the record from either table that has the highest Rate*Quantity. This is what I have so far:

        SELECT SalesId Or PurchaseId
        FROM Sales, Purchase
        WHERE Sales.UserId = Purchase.UserId
          AND Sales.CompanyId = Purchase.CompanyId
          AND Sales.Date = Current date AND Purchase.Date = Current date
          AND Sales.UserId = 1 AND Purchase.UserId = 1
          AND Sales.CompanyId = 1 AND Purchase.CompanyId = 1
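    A sketch of one way to do this: put both tables into one derived table with UNION ALL, tag each row with its source, and take the row with the largest Rate*Quantity. Table and column names come from the question; the user/company/date filters are kept as in the attempted query, and SQL Server syntax is assumed:

        SELECT TOP 1 Source, Id, Amount       -- on MySQL/PostgreSQL drop TOP 1 and add LIMIT 1
        FROM (
            SELECT 'Sale' AS Source, SaleId AS Id, Rate * Quantity AS Amount
            FROM Sales
            WHERE UserID = 1 AND CompanyId = 1 AND [Date] = CAST(GETDATE() AS date)
            UNION ALL
            SELECT 'Purchase' AS Source, PurchaseId AS Id, Rate * Quantity AS Amount
            FROM Purchase
            WHERE UserID = 1 AND CompanyId = 1 AND [Date] = CAST(GETDATE() AS date)
        ) AS combined
        ORDER BY Amount DESC;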

    Read the article

  • Why can't I record 16khz sampling audio using my laptop?

    - by KayKay
    I want to know why my laptop can't record audio at a 16 kHz sampling rate. The sampling rates available on my laptop are all higher than 16 kHz, e.g. 44 kHz, 48 kHz, 192 kHz, and so on... I need to record 16 kHz audio using my laptop. The sound card in my laptop is a Conexant 20671 SmartAudio HD. Although I can record at 16 kHz in Sound Forge 8.0, I doubt whether the recorded audio is really sampled at 16 kHz: because the sound card can't record at 16 kHz, I think there may be some problems in the recording process. Could you give me any hint as to why the sound card can't record at 16 kHz, and any method to identify whether the audio recorded by Sound Forge 8.0 is really 16 kHz? Thanks.

    Read the article

  • What is the name for a DNS record starting with @? [closed]

    - by dunxd
    Possible Duplicate: What's the meaning of '@' in a DNS zone file? I know that DNS records starting with * are called wildcard records. What is the name for a DNS record starting with @ (the at symbol)? This is a record for the root domain (e.g. just example.com, not www.example.com). I want to find out more, but searching for "@ record dns" in Google doesn't return any useful results. What is the correct terminology for this type of record, and where might I find it described in more detail? RFC 1035 describes the use of @ in a DNS record, but doesn't go as far as giving it a name.

    Read the article

  • How can I copy a SQL record which has related records in other tables to the same database?

    - by DerekVS
    Hi. I created a function in C# which allows me to copy a record and its related children to a new record and new related children in the same database. (This is for an application that allows the use of previous work as a template for new work.) Anyway, it works great... Here's a description of how it accomplishes the copy:

    It populates a two-column memory-based look-up table with the current primary key of each record. Next, as it individually creates each new copy record, it updates the look-up table with the Identity PK of the new record [retrieved from SCOPE_IDENTITY()]. Now, when it copies over any related children, it can look up the new parent PK to set the FK on the new record.

    In testing, it only took a minute to copy a relational structure on a local instance of SQL Server 2005 Express Edition. Unfortunately it is proving to be horribly slow in production! My users are dealing with 60,000+ records per parent record over the LAN to our SQL Server! While my copy function still works, each of those records represents an individual SQL UPDATE command, and it loads the SQL Server at about 17% CPU, up from its normal 2% idle. I just finished testing a 50,000 record copy and it took almost 20 minutes!

    Is there a way to duplicate this functionality in SQL queries or stored procedures to make the SQL Server do all of the copy work instead of blasting it over the LAN from each client? (We're running Microsoft SQL Server 2005 Standard Edition.) Thanks! -Derek
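    A set-based, server-side sketch of the same copy (hypothetical Parent/Child tables and columns; SQL Server 2005 syntax): copy the parent once, capture its new identity with SCOPE_IDENTITY(), then copy every child in a single INSERT ... SELECT so the 60,000+ rows never cross the LAN:

        -- Hypothetical schema: Parent(ParentId int IDENTITY, Name) and
        --                      Child(ChildId int IDENTITY, ParentId int, Value)
        CREATE PROCEDURE dbo.CopyParentWithChildren
            @SourceParentId int
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @NewParentId int;

            BEGIN TRANSACTION;

            -- Copy the parent row.
            INSERT INTO dbo.Parent (Name)
            SELECT Name FROM dbo.Parent WHERE ParentId = @SourceParentId;

            SET @NewParentId = SCOPE_IDENTITY();

            -- Copy all related children in one statement, re-pointing the FK.
            INSERT INTO dbo.Child (ParentId, Value)
            SELECT @NewParentId, Value FROM dbo.Child WHERE ParentId = @SourceParentId;

            COMMIT TRANSACTION;

            SELECT @NewParentId AS NewParentId;
        END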

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and requires the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems.

    Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently…

    In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

        public bool CommercialPrices {
            get { return this.Retrieve(p => p.CommercialPrices); }
            set { this.Store(p => p.CommercialPrices, value); }
        }

    This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need.

    This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

        public double Price {
            get { return Retrieve(r => r.Price); }
            set { Store(r => r.Price, value); }
        }

    This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data in both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required.

    This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

        public double Price {
            get { return Record.Price; }
            set { Record.Price = value; }
        }

    As your users browse the site, it will get faster and faster as Select N+1 issues will optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you do already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system, you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries.

    There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example:

        public string[] SearchedFields {
            get {
                return (Retrieve<string>("SearchedFields") ?? "")
                    .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
            }
            set { Store("SearchedFields", String.Join(", ", value)); }
        }

    The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store.

    You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up-to-date. It’s not very complicated, but definitely something to keep in mind.

    Here is what a product record looks like in Nwazet.Commerce for example, alongside the same data in the infoset (shown as screenshots in the original post). The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id. Here is the detailed XML document for this product:

        <Data>
          <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage="" AllowBackOrder="false"
                       Weight="0.2" Size="" ShippingCost="null" IsDigital="false" />
          <ProductAttributesPart Attributes="" />
          <AutoroutePart DisplayAlias="camera-box" />
          <TitlePart Title="Nwazet Pi Camera Box" />
          <BodyPart Text="[...]" />
          <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" />
        </Data>

    The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record. In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering.

    Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably. So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources.
Expect this code to make its way into the 1.8 version of Orchard when that’s available.
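    The inspection workflow described above (read the part's record table first, then pull the matching content item row) can also be done straight from SQL. A sketch, assuming the infoset XML lives in a Data column on the content item record tables and using a hypothetical Nwazet product record table name:

        -- 1. Find the item id in the (easier to read) part record table.
        SELECT Id, Sku, Price
        FROM Nwazet_Commerce_ProductPartRecord   -- hypothetical record table name
        WHERE Sku = 'pi-camera-box';

        -- 2. Fetch the infoset document for that id (versionable types live in
        --    Orchard_Framework_ContentItemVersionRecord instead).
        SELECT Data
        FROM Orchard_Framework_ContentItemRecord
        WHERE Id = 42;   -- id returned by the first query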

    Read the article

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response on the following question relating to site migration and SEO impact. Here's some background on how my domain name and site is currently configured.

    My domain name provider has the following settings:
    - host name @ is an A NAME record and points to IP address x.x.x.x
    - host name www is an A NAME record and points to IP address x.x.x.x
    - sub-domain host name new.example.com is an A NAME record and points to IP address x.x.x.x

    My hosting provider has the following settings:
    - host record @ is an A NAME record and points to IP address x.x.x.x, folder home/public_html/old
    - host record www is a C NAME record and points to example.com
    - sub-domain host record new.example.com points to home/public_html/new

    I want to:
    - point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com
    - retire the content hosted under folder home/public_html/old
    - retire the sub-domain host record new.example.com

    I believe the easiest method of doing this is removing the sub-domain host record new.example.com, and changing the following line in the .htaccess file in home/public_html from:

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/old/

    to:

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/new/

    But I don't understand how this will impact my SERP - ideally, I'd like it to remain the same. Research on this topic resulted in the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?

    Read the article

  • Advantage database throws an exception when attempting to delete a record with a LIKE statement

    - by ChrisR
    The code below shows that a record is deleted when the sql statement is:

        select * from test where qty between 50 and 59

    but the sql statement:

        select * from test where partno like 'PART/005%'

    throws the exception:

        Advantage.Data.Provider.AdsException: Error 5072: Action requires read-write access to the table

    How can you reliably delete a record with a where clause applied? Note: I'm using Advantage Database v9.10.1.9, VS2008, .Net Framework 3.5 and WinXP 32 bit.

        using System.IO;
        using Advantage.Data.Provider;
        using AdvantageClientEngine;
        using NUnit.Framework;

        namespace NetworkEidetics.Core.Tests.Dbf {
            [TestFixture]
            public class AdvantageDatabaseTests {
                private const string DefaultConnectionString = @"data source={0};ServerType=local;TableType=ADS_CDX;LockMode=COMPATIBLE;TrimTrailingSpaces=TRUE;ShowDeleted=FALSE";
                private const string TestFilesDirectory = "./TestFiles";

                [SetUp]
                public void Setup() {
                    const string createSql = @"CREATE TABLE [{0}] (ITEM_NO char(4), PARTNO char(20), QTY numeric(6,0), QUOTE numeric(12,4)) ";
                    const string insertSql = @"INSERT INTO [{0}] (ITEM_NO, PARTNO, QTY, QUOTE) VALUES('{1}', '{2}', {3}, {4})";
                    const string filename = "test.dbf";
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString)) {
                        connection.Open();
                        using (var transaction = connection.BeginTransaction()) {
                            using (var command = connection.CreateCommand()) {
                                command.CommandText = string.Format(createSql, filename);
                                command.Transaction = transaction;
                                command.ExecuteNonQuery();
                            }
                            transaction.Commit();
                        }
                        using (var transaction = connection.BeginTransaction()) {
                            for (var i = 0; i < 1000; ++i) {
                                using (var command = connection.CreateCommand()) {
                                    var itemNo = string.Format("{0}", i);
                                    var partNumber = string.Format("PART/{0:d4}", i);
                                    var quantity = i;
                                    var quote = i * 10;
                                    command.CommandText = string.Format(insertSql, filename, itemNo, partNumber, quantity, quote);
                                    command.Transaction = transaction;
                                    command.ExecuteNonQuery();
                                }
                            }
                            transaction.Commit();
                        }
                        connection.Close();
                    }
                }

                [TearDown]
                public void TearDown() {
                    File.Delete("./TestFiles/test.dbf");
                }

                [Test]
                public void CanDeleteRecord() {
                    const string sqlStatement = @"select * from test";
                    Assert.AreEqual(1000, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(999, GetRecordCount(sqlStatement));
                }

                [Test]
                public void CanDeleteRecordBetween() {
                    const string sqlStatement = @"select * from test where qty between 50 and 59";
                    Assert.AreEqual(10, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(9, GetRecordCount(sqlStatement));
                }

                [Test]
                public void CanDeleteRecordWithLike() {
                    const string sqlStatement = @"select * from test where partno like 'PART/005%'";
                    Assert.AreEqual(10, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(9, GetRecordCount(sqlStatement));
                }

                public int GetRecordCount(string sqlStatement) {
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString)) {
                        connection.Open();
                        using (var command = connection.CreateCommand()) {
                            command.CommandText = sqlStatement;
                            var reader = command.ExecuteExtendedReader();
                            return reader.GetRecordCount(AdsExtendedReader.FilterOption.RespectFilters);
                        }
                    }
                }

                public void DeleteRecord(string sqlStatement, int rowIndex) {
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString)) {
                        connection.Open();
                        using (var command = connection.CreateCommand()) {
                            command.CommandText = sqlStatement;
                            var reader = command.ExecuteExtendedReader();
                            reader.GotoBOF();
                            reader.Read();
                            if (rowIndex != 0) {
                                ACE.AdsSkip(reader.AdsActiveHandle, rowIndex);
                            }
                            reader.DeleteRecord();
                        }
                        connection.Close();
                    }
                }
            }
        }
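    If the cursor-style delete keeps hitting the read-write access error, one alternative (a sketch using the table and filter from the question; whether it fits depends on what the test is meant to exercise) is to let the SQL engine delete the rows directly instead of positioning an extended reader on them:

        -- Delete the matching rows in one statement rather than through the reader.
        DELETE FROM test
        WHERE partno LIKE 'PART/005%';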

    Read the article

  • DNS Help: Move domain, not mailserver

    - by Preserved
    I'm in the middle of launching a new website for an already-in-use domain. The domain has a complicated email system, so we'd like to move that over to the new server a bit later on. Currently the domain DNS is managed by the current webhost. I plan on moving the DNS management back to Network Solutions, then pointing the A record to the new website's IP. However, currently the DNS has the MX record the same as the A record. When Network Solutions is managing the DNS and I point the A record to the new IP, then the MX record can't be the A record.

    Right now:
    - A record: mydomain.com points to IP address 198.198.198.198
    - MX record: mydomain.com points to IP address 198.198.198.198

    What I want:
    - A record: mydomain.com points to the IP address of the new server
    - MX record: somehow points to the current existing mailserver

    Does this even make sense?

    Read the article
