Search Results

Search found 5233 results on 210 pages for 'records'.


  • Using fixtures with factory_girl

    - by deb
    When building the following factory, ResumeType.first returns nil and I get an error:

        Factory.define :user do |f|
          f.sequence(:name) { |n| "foo#{n}" }
          f.resume_type_id { ResumeType.first.id }
        end

    ResumeType records are loaded via fixtures. I checked in the console and the entries are there; the table is not empty. I've found a similar example on the factory_girl mailing list, and it's supposed to work. What am I missing? Do I have to somehow tell factory_girl to set up the fixtures before running the tests?
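    If the fixtures really are not loaded in the environment where the factory runs, one hedged workaround is to let the factory create the ResumeType itself when none exists; this sketch assumes ResumeType has a name column, which may not match the real schema:

        # Fallback factory for the referenced record (column name is an assumption).
        Factory.define :resume_type do |f|
          f.name "Standard"
        end

        Factory.define :user do |f|
          f.sequence(:name) { |n| "foo#{n}" }
          # Use the fixture row if it exists, otherwise build one on the fly.
          f.resume_type_id { (ResumeType.first || Factory(:resume_type)).id }
        end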

    Read the article

  • MySQL query help needed

    - by Me-and-Coding
    Hi, I have two tables, category and hotels, where category.id should be equal to hotels.catid. How do I select 3 rows from each category in the hotels table? I have this query:

        SELECT h.*
        FROM hotels h
        INNER JOIN category c ON h.catid = c.id
        ORDER BY h.catid, h.hid

    This selects all records, but I want three rows per category, so in all it should return 9 rows (3 rows for each category). If this cannot be done in MySQL, PHP code suggestions are also welcome. Thanks.
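    One common approach in MySQL (before window functions) is a correlated subquery that keeps a row only if fewer than three rows in the same category sort before it; a sketch assuming hid defines the order within each category:

        -- Keep a hotel row only when fewer than 3 rows of its category have a smaller hid.
        SELECT h.*
        FROM hotels h
        WHERE (SELECT COUNT(*)
               FROM hotels h2
               WHERE h2.catid = h.catid
                 AND h2.hid < h.hid) < 3
        ORDER BY h.catid, h.hid;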

    Read the article

  • How to convert numbers to an alphanumeric system with PHP

    - by Patrick
    I'm not sure what this is called, which is why I'm having trouble searching for it. What I'm looking to do is take numbers and convert them to some alphanumeric base, so that a number like 5000 wouldn't read as 5000 but as something like G4u. The idea is to save space and also not make it obvious how many records there are in a given system. I'm using PHP, so if something like this is built into PHP, even better, but even a name for this method would be helpful at this point. Again, sorry for not being clearer; I'm just not sure what this is called.
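    PHP has base_convert() built in, which handles bases up to 36 (digits 0-9 plus a-z); a minimal sketch (for a mixed-case alphabet like the "G4u" example you would need a custom base-62 encoder):

        <?php
        // Encode a record id in base 36 and decode it again.
        $encoded = base_convert(5000, 10, 36);         // "3uw"
        $decoded = (int) base_convert($encoded, 36, 10); // 5000
        echo "$encoded / $decoded\n";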

    Read the article

  • Can the Sequence of RecordSets in a Multiple RecordSet ADO.Net resultset be determined, controlled?

    - by Shiva
    I am using code similar to this Support/KB article to return multiple recordsets to my C# program, but I don't want the C# code to be dependent on the physical sequence of the recordsets returned in order to do its job. So my question is: is there a way to determine which set of records from a multiple-recordset resultset I am currently processing? I know I could probably work this out indirectly by looking for a unique column name or something per resultset, but I think/hope there is a better way. P.S. I am using Visual Studio 2008 Pro and SQL Server 2008 Express Edition.
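    One hedged workaround is to make each SELECT in the batch identify itself with a literal marker column, so the C# side can dispatch by name instead of by position; a sketch with made-up names:

        using System.Data.SqlClient;

        // Sketch: identify each result set by a marker column instead of by its position.
        // Assumes every SELECT in the batch starts with a literal, e.g.
        //   SELECT 'Orders' AS ResultSetName, o.* FROM Orders o
        static void ReadAllSets(SqlCommand command)
        {
            using (SqlDataReader reader = command.ExecuteReader())
            {
                do
                {
                    int marker = reader.GetOrdinal("ResultSetName");
                    while (reader.Read())
                    {
                        string setName = reader.GetString(marker);
                        // dispatch on setName ("Orders", "Customers", ...) rather than on order
                    }
                } while (reader.NextResult());
            }
        }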

    Read the article

  • How to have partial incremental synchronizations based on a GUID?

    - by Gonçalo Veiga
    I need to synchronize an SQL Server database to Oracle through an Oracle Transparent Gateway. The synchronization is performed in batches, so I need to get the next set of data from the point where I left off. The problem I'm having is that the only field I have in the source to help me is a GUID. If it were a number I could just order by it, keep the last one processed, and restart the process by fetching the records greater than my recorded number. This won't work with a GUID. Any ideas?
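    Since GUIDs carry no useful ordering, one hedged option, assuming you are allowed to alter the SQL Server source table, is to add a sequential column and batch on that instead; the table and column names below are placeholders:

        -- Add a monotonically increasing column to the source (placeholder names).
        ALTER TABLE SourceTable ADD seq BIGINT IDENTITY(1,1);

        -- Each batch: take the rows after the last sequence value you recorded.
        SELECT TOP (1000) *
        FROM SourceTable
        WHERE seq > @LastProcessedSeq
        ORDER BY seq;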

    Read the article

  • Why do I have a page hit for 404.php after each legitimate pagehit?

    - by Nathan Long
    I'm working with an intranet system that, on each page, checks the user's cookie, verifies that they can see the current page based on database permissions, and records a page hit that includes their id and the page URL. I just noticed that in the pagehits table, I see an entry for 404.php (my custom 404 page specified in the Apache config) one second after each legitimate page hit. Is this probably my fault, or does it have something to do with how Apache decides to load the 404 page? I'm using Apache 2.2.14 (Win32) and PHP 5.3.2.

    Read the article

  • In what circumstances can large pages produce a speedup?

    - by timday
    Modern x86 CPUs can support larger page sizes than the legacy 4K (i.e. 2 MB or 4 MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to huge pages has resulted in a performance improvement. Has anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.

    Read the article

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling and found this article, which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist, without using a pair of queries (i.e., one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if the value is already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks.
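    With a UNIQUE key in place, MySQL can express this in a single statement; a sketch with invented column names:

        -- Assumes a unique index on the natural key, e.g.:
        -- ALTER TABLE mytable ADD UNIQUE KEY uq_code (code);

        -- Silently skip rows that would violate the unique key:
        INSERT IGNORE INTO mytable (code, name) VALUES ('ABC123', 'Example');

        -- Or update the existing row instead of failing:
        INSERT INTO mytable (code, name) VALUES ('ABC123', 'Example')
        ON DUPLICATE KEY UPDATE name = VALUES(name);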

    Read the article

  • Creating a virtual, data-driven section of a Wordpress-powered site

    - by lgomez
    Hello all, I want to create a plugin for WordPress that automatically serves pages containing data pulled from a provider's API. The API returns one or more records, and I simply want the plugin to intercept the request, call the API with parameters pulled from the request URI, and display the data using a template that site owners can either upload to the server or copy and paste into the plugin's admin settings. For example, I may want one of my WordPress installations to show products pulled from such an API under the URL "example.com/products". The plugin would catch that request, extract the variables from the URL, call the API, and render the template with the returned results. I'd like to avoid requiring edits to the .htaccess file like some caching plugins do; some of the admins of these sites won't know how to do that or simply won't have access to the .htaccess file. Thanks!
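    A hedged sketch of how a plugin can do this with WordPress's rewrite API (no .htaccess edits needed when pretty permalinks are already enabled); the query var and the two helper functions are placeholders:

        <?php
        // Map example.com/products/{id} onto a custom query var.
        add_action('init', function () {
            add_rewrite_rule('^products/([^/]+)/?$', 'index.php?my_product_id=$matches[1]', 'top');
            // Remember to flush rewrite rules once, e.g. on plugin activation.
        });

        add_filter('query_vars', function ($vars) {
            $vars[] = 'my_product_id';
            return $vars;
        });

        add_action('template_redirect', function () {
            $id = get_query_var('my_product_id');
            if ($id) {
                // call_provider_api() and render_product_template() are hypothetical helpers.
                $record = call_provider_api($id);
                render_product_template($record);
                exit;
            }
        });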

    Read the article

  • Can't call method in model table class using Doctrine with Zend Framework

    - by Jeremy Hicks
    I'm using Doctrine with Zend Framework. For my model, I'm using a base class, the regular class (which extends the base class), and a table class. In my table class, I've created a method which does a query for records with a specific value for one of the fields in my model. When I try and call this method from my controller, I get an error message saying, "Message: Unknown method Doctrine_Table::getCreditPurchases". Is there something else I need to do to call functions in my table class? Here is my code:

        class Model_CreditTable extends Doctrine_Table
        {
            /**
             * Returns an instance of this class.
             *
             * @return object Model_CreditTable
             */
            public static function getInstance()
            {
                return Doctrine_Core::getTable('Model_Credit');
            }

            public function getCreditPurchases($id)
            {
                $q = $this->createQuery('c')
                    ->where('c.buyer_id = ?', $id);
                return $q->fetchArray();
            }
        }

        // And then in my controller method I have...
        $this->view->credits = Doctrine_Core::getTable('Model_Credit')->getCreditPurchases($ns->id);

    Read the article

  • Why is this postgresql query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query is extremely slow:

        SELECT DISTINCT p.*
        FROM points p, areas a, contacts c
        WHERE (p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
           AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889)
          AND p.resource_type = 'Contact'
          AND c.user_id = 6

    The points table has fewer than 2000 records, but the query takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clauses concerning resource_type and user_id makes no difference. The latitude and longitude fields are both formatted as number(15,10) -- I need the precision for some calculations. There are many, many other queries in this project where points are compared, with no execution time problems. What's going on?
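    The FROM clause lists areas and contacts without any join condition to points, so the planner builds a cross product before filtering. A hedged sketch of the likely fix, where p.resource_id is an assumed linking column (use whatever actually relates points to contacts, and drop areas if it is unused):

        SELECT DISTINCT p.*
        FROM points p
        JOIN contacts c ON c.id = p.resource_id      -- assumed link column
        WHERE p.latitude  > 43.6511659465 AND p.latitude  < 43.6711659465
          AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
          AND p.resource_type = 'Contact'
          AND c.user_id = 6;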

    Read the article

  • Read a file with 2048 bytes

    - by Suresh S
    Guys, I have a file which has only one line. The file has no special encoding; it is a simple text file with a single line. Within every 2048-byte block there are records of 151 bytes each (13 * 151 = 1963 bytes of records plus 85 bytes of empty space), and similarly for the next 2048 bytes. What is the best file I/O to use? I am thinking of reading 2048 bytes from the file and storing them in an array:

        while (offset < fileLength
               && (numRead = in.read(recordChunks, offset, alength)) >= 0) {
        }

    How can I get the read statement to return only 2048 bytes at a time? I am getting an IndexOutOfBoundsException.
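    A hedged Java sketch that reads exactly one 2048-byte block per iteration with readFully() and slices it into 151-byte records; note that the offset argument to read() is an index into the destination array, not a file position, which is the usual cause of the IndexOutOfBoundsException:

        import java.io.*;

        // Sketch: consume the file in fixed 2048-byte blocks, 13 records of 151 bytes
        // per block, ignoring the trailing 85 bytes of padding.
        public class BlockReader {
            public static void main(String[] args) throws IOException {
                DataInputStream in = new DataInputStream(
                        new BufferedInputStream(new FileInputStream(args[0])));
                byte[] block = new byte[2048];
                try {
                    long remaining = new File(args[0]).length();
                    while (remaining >= block.length) {
                        in.readFully(block);                 // always fills all 2048 bytes
                        for (int r = 0; r < 13; r++) {
                            String record = new String(block, r * 151, 151, "US-ASCII");
                            // process the record here
                        }
                        remaining -= block.length;
                    }
                } finally {
                    in.close();
                }
            }
        }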

    Read the article

  • App Engine: how would you... snapshotting entities

    - by Andrew B.
    Let's say you have two kinds, Message and Contact, related by a db.ListProperty of keys on Message. A user creates a message, adds some contacts as recipients, and emails the message. Later, the user deletes one of the contact entities that was a recipient of the message. Our application should delete the appropriate Contact entity, but we want to preserve the original recipient list for the message that was sent, for the user's records. In essence, we want a snapshot of the message entity at the time it was sent. If we naively delete the contact entity, though, we lose snapshot integrity; if not, we are left with an invalid key. How would you handle this situation, either in controller logic or model changes?

        class User(db.Model):
            email = db.EmailProperty(required=True)

        class Contact(db.Model):
            email = db.EmailProperty(required=True)
            user = db.ReferenceProperty(User, collection_name='contacts')

        class Message(db.Model):
            recipients = db.ListProperty(db.Key)  # contacts
            sender = db.ReferenceProperty(User, collection_name='messages')
            body = db.TextProperty()
            is_emailed = db.BooleanProperty(default=False)
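    One common pattern (hedged) is to copy the data the history needs onto the Message at send time, so deleting a Contact later cannot disturb it; in this sketch the recipient_emails property is an assumed addition to the model above:

        class Message(db.Model):
            recipients = db.ListProperty(db.Key)          # live Contact references
            recipient_emails = db.StringListProperty()    # frozen snapshot at send time
            sender = db.ReferenceProperty(User, collection_name='messages')
            body = db.TextProperty()
            is_emailed = db.BooleanProperty(default=False)

        def send_message(message):
            # Copy whatever must survive contact deletion, then mark as sent.
            contacts = db.get(message.recipients)
            message.recipient_emails = [c.email for c in contacts if c is not None]
            message.is_emailed = True
            message.put()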

    Read the article

  • Obtain patterns in one file from another using ack or awk or better way than grep?

    - by Rock
    Is there a way to look up the patterns listed in one file against another file using ack, like the -f option in grep? I see there is a -f option in ack, but it's different from grep's -f. Perhaps an example will give you a better idea. Suppose I have file1:

        a
        c
        e

    And file2:

        a 1
        b 2
        c 3
        d 4
        e 5

    And I want to pull all the lines of file2 whose key appears in file1, to give:

        a 1
        c 3
        e 5

    Can ack do this? Otherwise, is there a better way to handle the job (such as awk or a hash), because I have millions of records in both files and really need an efficient way to do this? Thanks!
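    awk handles this in a single pass over each file by loading file1 into a hash and testing file2's first field against it; a sketch assuming exact whole-field matches rather than regex patterns:

        # Load the keys from file1 into an array, then print every file2 line
        # whose first field is one of those keys.
        awk 'NR == FNR { keys[$1]; next } $1 in keys' file1 file2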

    Read the article

  • Paging recordsets from the SQL Server side

    - by Jonno
    I've been banging my head against this one for a while. I want to pull 1,000 records from a SQL database and page them 100 at a time. In classic ASP (which I'm moving from) this was dead easy to do with ADODB, but with VB using ADO.NET I can't find a single way that doesn't involve stored procs (which I want to avoid for now). It seems really wasteful to pull all 1,000 and page them programmatically. Edit: It's SQL Server 2005 / .NET 4.0 / Visual Studio 2010. Edit 2: Just to reiterate, I have googled extensively and don't want to use stored procedures. There are many ways to get paged data, but everything I see involves paging the data in the program rather than on the server.
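    On SQL Server 2005 the usual server-side option without a stored procedure is ROW_NUMBER() in a parameterized ad-hoc query (runnable from a plain SqlCommand); the table and column names below are made up:

        -- Return page @PageNumber (1-based) of @PageSize rows, ordered by OrderDate.
        SELECT *
        FROM (
            SELECT o.*,
                   ROW_NUMBER() OVER (ORDER BY o.OrderDate DESC) AS RowNum
            FROM Orders o
        ) AS numbered
        WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                         AND @PageNumber * @PageSize
        ORDER BY RowNum;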

    Read the article

  • JTable Design Guide in Swing Application

    - by zwang
    I have a really hard layout problem in my Swing application. Generally speaking, I have a JTable and a JLabel to display, with the label right below the JTable. When the combined height of these two components doesn't exceed a threshold, the table and label should display as normal. Their height grows as the number of records in the table increases. When the combined height exceeds the threshold, I want a scrollbar to appear on the JTable and the overall height of the two components to stay at the threshold. Is this design possible? I have a draft that illustrates my UI requirement, but I don't know how to post it in this forum. Any reply is appreciated. Best regards, Zheng.
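    A hedged sketch: put the table in a JScrollPane and cap its preferred viewport height at the threshold, so the scroll bar only appears once the rows would exceed it (the threshold value and the surrounding layout are assumptions):

        import java.awt.Dimension;
        import javax.swing.*;

        // Cap the table's visible height at thresholdPx; add the returned component
        // to the panel, with the JLabel placed below it.
        static JComponent capTableHeight(JTable table, int thresholdPx) {
            int naturalHeight = table.getRowHeight() * table.getRowCount();
            table.setPreferredScrollableViewportSize(new Dimension(
                    table.getPreferredSize().width,
                    Math.min(naturalHeight, thresholdPx)));
            return new JScrollPane(table);
        }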

    Read the article

  • Baffled by PHP escaping of double-quotes in HTML forms

    - by rjray
    I have a simple PHP script I use to front-end an SQLite database. It's nothing fancy or complex. But I have noticed from looking at the records in the database that anything I enter in a form field with double quotes comes across in the form processing as though I'd escaped the quotes with a backslash. So when I entered a record with the title

        British Light Utility Car 10HP "Tilly"

    what shows up in the database is

        British Light Utility Car 10HP \"Tilly\"

    I don't know where these are coming from, and what's worse, even using the following preg_replace doesn't seem to remove them:

        $name = preg_replace('/\\"/', '"', $_REQUEST['kits_name']);

    If I dump out $name, it still bears the unwanted \ characters.
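    Those backslashes are the classic symptom of magic_quotes_gpc, which older PHP setups (before 5.4) apply to all GET/POST/COOKIE data before the script sees it; a hedged sketch that undoes it at the top of the form handler:

        <?php
        // Undo magic quotes if the host has them enabled (PHP < 5.4).
        if (get_magic_quotes_gpc()) {
            $name = stripslashes($_REQUEST['kits_name']);
        } else {
            $name = $_REQUEST['kits_name'];
        }
        // Bind $name as a query parameter (or escape it) when writing to SQLite;
        // do not add slashes by hand.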

    Read the article

  • ASP.NET MVC DataAnnotations with a different table

    - by mazhar kaunain baig
    I have a lang table which is referenced as a foreign key in the link table. A link can be in 3 languages, meaning 3 rows are written to the link table every time I enter a record. I am using jQuery tabs to enter the records in the 3 languages. What architecture should I follow for validation with DataAnnotations attributes? I am using LINQ to SQL with Visual Studio 2010, and I will be creating a link class with MetadataType, so how will I handle, for example, the link name attribute 3 times?

    Read the article

  • Perl: Fastest way to get directory (and subdirs) size on unix - using stat() at the moment

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

        sub getDirSize {
            my $dirSize = 0;
            my @dirContent = <*>;
            my $sizeOfFilesInDir = 0;
            foreach my $dirContent (@dirContent) {
                if (-f $dirContent) {
                    my $size = (stat($dirContent))[7];
                    $dirSize += $size;
                }
                elsif (-d $dirContent) {
                    $dirSize += getDirSize($dirContent);
                }
            }
            return $dirSize;
        }

    The script runs for more than one hour and I want to make it faster. I tried the shell du command, but the output of du (converted to bytes) is not accurate, and it is also quite time consuming. I am working on HP-UX 11i v1.
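    A hedged alternative is the core File::Find module, which walks the tree once without the per-directory glob; it still stats every file, so the gain may be modest, but it removes the manual recursion:

        use strict;
        use warnings;
        use File::Find;

        # Sum the sizes of all plain files under one or more directories.
        sub get_dir_size {
            my @dirs  = @_;
            my $total = 0;
            find(sub { $total += -s $_ if -f $_ }, @dirs);
            return $total;
        }

        print get_dir_size('/some/parent/dir'), "\n";   # path is a placeholder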

    Read the article

  • Group and count in Rails

    - by alamodey
    I have this bit of code and I get an empty object:

        @results = PollRoles.find(
          :all,
          :select     => 'option_id, count(*) count',
          :group      => 'option_id',
          :conditions => ["poll_id = ?", @poll.id])

    Is this the correct way of writing the query? I want a collection of records, each with an option id and the number of times that option id is found in the PollRoles model. EDIT: This is how I'm iterating through the results:

        <% @results.each do |result| %>
          <% @option = Option.find_by_id(result.option_id) %>
          <%= @option.question %>
          <%= result.count %>
        <% end %>
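    In Rails 2.x, count with :group returns an ordered hash keyed by the grouped column, which sidesteps the custom :select entirely; a hedged sketch using the names from the question:

        # Returns an ordered hash keyed by option_id, e.g. { 3 => 5, 7 => 2 }.
        @results = PollRoles.count(
          :group      => :option_id,
          :conditions => ["poll_id = ?", @poll.id])

        # In the view, each entry is an [option_id, count] pair:
        # @results.each { |option_id, count| ... }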

    Read the article

  • efficiently trimming postgresql tables

    - by agilefall
    I have about 10 tables with over 2 million records and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

        1. create a temp table for each large table and populate it with the newer data
        2. truncate the original tables
        3. copy the tmp data back to the original tables using: insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.

    Read the article

  • LINQ in SQLite for Windows store app does not have 'ThenBy' to order by multiple columns

    - by user1131657
    I have a Windows 8 Store application and I'm using the latest version of SQLite for my database. I want to return some records from the database ordered by more than one column, but the SQLite LINQ provider doesn't seem to support the ThenBy statement. My LINQ statement is below:

        from i in connection.Table<MyTable>()
        where i.Type == type
        orderby i.Usage_Counter // ThenBy i.ID
        select i);

    So how do I sort by multiple columns in SQLite without doing another LINQ statement?
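    If the sqlite-net LINQ provider in use does not translate ThenBy, one workaround is to let SQLite do the filtering and finish the ordering in memory with LINQ to Objects, which does support ThenBy; a hedged sketch (fine when the filtered result set is small):

        // Filter in SQLite, then order the materialised list in memory.
        var items = connection.Table<MyTable>()
                              .Where(i => i.Type == type)
                              .ToList()                      // LINQ to Objects from here on
                              .OrderBy(i => i.Usage_Counter)
                              .ThenBy(i => i.ID);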

    Read the article

  • Get count from IQueryable<T> in LINQ to SQL?

    - by Pandiya Chendur
    The following code doesn't seem to get the correct count:

        var materials = consRepository.FindAllMaterials().AsQueryable();
        int count = materials.Count();

    Is this the way to do it? Here is my repository method which fetches the records:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            where m.Is_Deleted == 0
                            select new MaterialsObj()
                            {
                                Id = Convert.ToInt64(m.Mat_id),
                                Mat_Name = m.Mat_Name,
                                Mes_Name = Mt.Name,
                            };
            return materials;
        }

    Read the article

  • Looking for suggestions on the architecture of a multithreaded app

    - by Dimitri
    Hello everyone. I am looking to develop a multithreaded application that will run in an unconditional loop and process a high volume of data; high volume here means 2000+ records per minute. Processing involves data retrieval, calculations, and data updates. I need the application to perform with virtually no backlog, meaning I need to be able to finish all 2000 points in one minute or even faster. Our current implementation is a multithreaded application that is spawned multiple times (from 10 to 20 instances), and we are noticing that it's not handling the data as expected; I even suspect the instances compete with each other for processor time and, if not slowing each other down, are certainly not benefiting each other. I would like to know the best approach: have a single instance running but maximize the threads that can run simultaneously, or is there some other way I don't know about? I'm open to suggestions. Thank you in advance.
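    The question does not name a platform, so purely as an illustration here is a hedged Java sketch of the single-process shape described: one bounded queue feeding a fixed-size worker pool, so the threads cooperate instead of separate processes competing for the CPU:

        import java.util.concurrent.*;

        // Sketch: a single process with one bounded queue feeding a fixed worker pool,
        // instead of many competing copies of the whole application.
        public class Pipeline {
            static final BlockingQueue<String> QUEUE = new ArrayBlockingQueue<String>(10000);

            public static void main(String[] args) {
                int workers = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                for (int i = 0; i < workers; i++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                while (true) {
                                    String record = QUEUE.take();   // blocks until work arrives
                                    process(record);                // retrieve, calculate, update
                                }
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
                }
                // Producer: whatever fetches the ~2000 records per minute calls QUEUE.put(...).
            }

            static void process(String record) { /* placeholder for the real work */ }
        }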

    Read the article

  • Understanding CGI and SQL security from the ground up

    - by Steve
    This question is for learning purposes. Suppose I am writing a simple SQL admin console using CGI and Python. At http://something.com/admin, this admin console should allow me to modify a SQL database (i.e., create and modify tables, and create and modify records) using an ordinary form. In the least secure case, anybody can access http://something.com/admin and modify the database. You can password protect http://something.com/admin. But once you start using the admin console, information is still transmitted in plain text. So then you use HTTPS to secure the transmitted data. Questions: To describe to a learner, how would you incrementally add security to the least secure environment in order to make it most secure? How would you modify/augment my three (possibly erroneous) steps above? What basic tools in Python make your steps possible? Optional: Now that I understand the process, how do sophisticated libraries and frameworks inherently achieve this level of security?
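    Alongside the password and HTTPS steps, the query layer itself needs protection from SQL injection, which in Python means parameterized queries rather than string formatting; a minimal sketch with the standard sqlite3 module (table and column names are placeholders):

        import sqlite3

        def update_record(db_path, record_id, new_value):
            conn = sqlite3.connect(db_path)
            try:
                # Placeholders let the driver quote values; never interpolate user input.
                conn.execute("UPDATE items SET name = ? WHERE id = ?",
                             (new_value, record_id))
                conn.commit()
            finally:
                conn.close()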

    Read the article
