Search Results

Search found 6399 results on 256 pages for 'record'.

Page 225/256 | < Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >

  • SQL with HAVING and temp table not working in Rails

    - by chrisrbailey
    I can't get the following SQL query to work quite right in Rails. It runs, but it fails to apply the "HAVING row_number = 1" part, so I'm getting all the records instead of just the first record from each group. A quick description of the query: it finds hotel deals matching various criteria, prioritizing paid deals, and then picks the one with the highest dealrank. So, if there are paid deal(s), it'll take the highest one of those (by dealrank) first; if there are no paid deals, it takes the highest-dealrank unpaid deal for each hotel. Using MAX(dealrank) or something similar does not work as a way to pick off the first row of each hotel group, which is why I have the enclosing temptable and the creation of the row_number column. Here's the query:

        SELECT *,
               @num := if(@hid = hotel_id, @num + 1, 1) AS row_number,
               @hid := hotel_id AS dummy
        FROM (
            SELECT hotel_deals.*, affiliates.cpc,
                   (CASE WHEN affiliates.cpc > 0 THEN 1 ELSE 0 END) AS paid
            FROM hotel_deals
            INNER JOIN hotels ON hotels.id = hotel_deals.hotel_id
            LEFT OUTER JOIN affiliates ON affiliates.id = hotel_deals.affiliate_id
            WHERE ((hotel_deals.percent_savings = 0) AND (hotel_deals.booking_deadline = ?))
            GROUP BY hotel_deals.hotel_id, paid DESC, hotel_deals.dealrank ASC
        ) temptable
        HAVING row_number = 1

    I'm currently using Rails' find_by_sql to do this, although I've also tried putting it into a regular find using the :select, :from, and :having parts (but :having won't get used unless you have a :group as well). If there is a different way to write this query, that'd be good to know too. I am using Rails 2.3.5, MySQL 5.0.x.
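
    The user-variable trick above is one way to emulate "first row per group" on MySQL 5.0; on engines with window functions (MySQL 8+, SQLite 3.25+, PostgreSQL) the same selection can be written with ROW_NUMBER(). A runnable sketch of that alternative using Python's sqlite3 and a made-up, much-simplified hotel_deals table (columns reduced to what the ranking needs; this illustrates the pattern, not a drop-in replacement for the Rails query):

        import sqlite3  # needs an SQLite build with window functions (3.25+)

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE hotel_deals (id INTEGER, hotel_id INTEGER, dealrank INTEGER, paid INTEGER);
            INSERT INTO hotel_deals VALUES
                (1, 100, 5, 0), (2, 100, 9, 1), (3, 100, 7, 1),  -- hotel 100 has paid deals
                (4, 200, 4, 0), (5, 200, 8, 0);                  -- hotel 200 has none
        """)

        # Rank each hotel's deals (paid first, then highest dealrank) and keep rank 1 per hotel.
        rows = conn.execute("""
            SELECT id, hotel_id, dealrank, paid
            FROM (
                SELECT *,
                       ROW_NUMBER() OVER (PARTITION BY hotel_id
                                          ORDER BY paid DESC, dealrank DESC) AS rn
                FROM hotel_deals
            ) AS ranked
            WHERE rn = 1
        """).fetchall()

        print(rows)  # the best paid deal for hotel 100, the best unpaid deal for hotel 200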

    Read the article

  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and wanted to grab Apache log records between two certain dates. Assume that a date is formatted as such: 22/Dec/2009:00:19 (day/month/year:hour:minute). Currently, I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 221220090019, making a date comparison trivial... but running a regex on each record for large files, say, one containing a quarter million records, is extremely costly. Is there any other method not involving regex substitution? Thanks in advance. Edit: here's the function doing the conversion/comparison:

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t);
            split(t, a, "[/:]");
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]);
            a[2] = sprintf("%02d", (RSTART + 2) / 3);
            s = a[3] a[2] a[1] a[4] a[5];
            return s >= from && s <= to;
        }

    "from" and "to" are the intervals in the aforementioned format, and "t" is the raw Apache log date/time field (e.g. [22/Dec/2009:00:19:36).
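
    Since the Apache timestamp has a fixed layout, the month abbreviation can be mapped through a small lookup table and everything else taken by position, with no regex at all; the same idea carries over to AWK using substr() and index() instead of sub(), split() and match(). A sketch of the idea in Python rather than AWK, purely for illustration; the slice offsets assume the raw field looks exactly like [22/Dec/2009:00:19:36, and the sortable string is built year-first so plain string comparison orders correctly:

        MONTHS = {m: "%02d" % i for i, m in enumerate(
            ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
             "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"], start=1)}

        def to_sortable(t):
            # t looks like "[22/Dec/2009:00:19:36" -- fixed width, so slice by position
            day, mon, year = t[1:3], t[4:7], t[8:12]
            hour, minute = t[13:15], t[16:18]
            return year + MONTHS[mon] + day + hour + minute   # "200912220019"

        def date_in_range(t, frm, to):
            s = to_sortable(t)
            return frm <= s <= to

        print(date_in_range("[22/Dec/2009:00:19:36", "200912210000", "200912230000"))  # True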

    Read the article

  • Accessing an Access DB from Outlook via VBA

    - by camastanta
    Hi. The situation: in Outlook I get a message from a server. The content of the message needs to be put into an Access DB, but there may not exist another message with the same date. So I need to check the DB for a message with the same date and time; if one exists it needs to be replaced, and otherwise the message needs to be added to the database. The database contains a list of current positions of the vehicles on the road. The problem: I'm having trouble comparing a date/time with a date/time in an Access DB via VBA. The query I use returns no records, but there is a record in the database. This is the query I use:

        adoRS.Open "SELECT * FROM currentpositions WHERE ((currentpositions.[dateLT])=" & "#" & date_from_message & "#" & ")", _
                   adoConn, adOpenStatic, adLockOptimistic

    Second, I need to know what the result of that query is. How can I determine the number of records that my query gives me? Thanks camastanta

    Read the article

  • How to make ActiveRecord work with legacy partitioned/sharded databases/tables?

    - by Utensil
    Thanks for your time first... After all my searching on Google, GitHub and here, I only got more confused by the big words (partition/shard/federate), so I figure I have to describe the specific problem I've met and ask around. My company's databases deal with massive numbers of users and orders, so we split databases and tables in various ways; some are described below:

        way        | database and table name | sharded by (or should it be called "partitioned by"?)
        YZ.X       | db_YZ.tb_X              | last three digits of the order serial number
        YYYYMMDD   | db_YYYYMMDD.tb          | date
        YYYYMM.DD  | db_YYYYMM.tb_DD         | date too

    The basic concept is that databases and tables are separated according to a field (not necessarily the primary key), and there are too many databases and too many tables, so writing or magically generating one database.yml config for each database and one model for each table isn't possible, or at least not the best solution. I looked into drnic's magic solutions, and DataFabric, and even the source code of ActiveRecord. Maybe I could use ERB to generate database.yml and do the database connection in an around filter, and maybe I could use named_scope to dynamically decide the table name for find, but update/create operations are bound to "self.class.quoted_table_name", so I couldn't easily get my problem solved that way. I could even generate one model for each table, since the amount is up to 30 at most — but this is just not DRY! What I need is a clean solution like the following DSL:

        class Order < ActiveRecord::Base
          shard_by :order_serialno do |key|
            [get_db_config_by(key), # because some or all of the databases might share the same machine
                                    # in a regular way, or can be configured by a hash of regex, or a const
             get_db_name_by(key),
             get_tb_name_by(key),
            ]
          end
        end

    Can anybody enlighten me? Any help would be greatly appreciated~~~~
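
    A sketch of the routing idea behind such a shard_by block, in Python purely for illustration (the question itself is about Rails/ActiveRecord): a small pure function maps the shard key to a (database, table) pair for each of the three naming schemes above. The split of the last three serial digits into a two-digit database suffix and a one-digit table suffix is an assumption read off the db_YZ.tb_X pattern; the connection and ORM plumbing is the part a library still has to provide.

        from datetime import date

        def route_by_serial(serial: str):
            """Scheme 1: shard by the last three digits of the order serial number."""
            suffix = serial[-3:]               # e.g. "...467" -> "467"
            yz, x = suffix[:2], suffix[2:]     # assumed split, per the db_YZ.tb_X naming
            return "db_%s" % yz, "tb_%s" % x

        def route_by_day(d: date):
            """Scheme 2: one database per day, a single table."""
            return "db_{:%Y%m%d}".format(d), "tb"

        def route_by_month_day(d: date):
            """Scheme 3: one database per month, one table per day of month."""
            return "db_{:%Y%m}".format(d), "tb_{:%d}".format(d)

        print(route_by_serial("20100412000467"))      # ('db_46', 'tb_7')
        print(route_by_day(date(2010, 4, 12)))        # ('db_20100412', 'tb')
        print(route_by_month_day(date(2010, 4, 12)))  # ('db_201004', 'tb_12')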

    Read the article

  • MySQL count and sum from two different tables

    - by Agent_x
    Hi all, I have a problem with some queries in PHP and MySQL. I have two different tables with one field in common:

        table1
        id | hits | num_g | cats | usr_id | active
        1  | 10   | 11    | 1    | 53     | 1
        2  | 13   | 16    | 3    | 53     | 1
        1  | 10   | 22    | 1    | 22     | 1
        1  | 10   | 21    | 3    | 22     | 1
        1  | 2    | 6     | 2    | 11     | 1
        1  | 11   | 1     | 1    | 11     | 1

        table2
        id | usr_id | points
        1  | 53     | 300

    Now I use this statement to sum just the totals from table1 (every id counts +1 too):

        SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) AS tot_h
        FROM table1
        WHERE usr_id != '0'
        GROUP BY usr_id ASC
        LIMIT 0 , 15

    and I get the total for each usr_id:

        usr_id | tot_h
        53     | 50
        22     | 63
        11     | 20

    Up to here everything is OK. Now I have a second table with extra points (table2), and I try this:

        SELECT usr_id, COUNT( id ) + SUM( num_g + hits )
               + (SELECT points FROM table2 WHERE usr_id != '0') AS tot_h
        FROM table1
        WHERE usr_id != '0'
        GROUP BY usr_id ASC
        LIMIT 0 , 15

    but it seems to add the 300 extra points to all users:

        usr_id | tot_h
        53     | 350
        22     | 363
        11     | 320

    How can I get the totals like the first try, but plus the second table, in one statement? Right now I have just one entry in the second table, but there can be more. Thanks for all the help.

    ===============================================================================

    Hi Thomas, thanks for your reply. I think it's in the right direction, but I'm getting weird results, like:

        usr_id | tot_h
        22     | NULL   <== I think the NULL is because that usr_id has no row in table2
        53     | 1033

    It's like the second user is getting all the values. Then I tried this one:

        SELECT table1.usr_id,
               COUNT( table1.id ) + SUM( table1.num_g + table1.hits + table2.points ) AS tot_h
        FROM table1
        LEFT JOIN table2 ON table2.usr_id = table1.usr_id
        WHERE table1.usr_id != '0' AND table2.usr_id = table1.usr_id
        GROUP BY table1.usr_id ASC

    Same result: I just get the sum of all values and not the total per user. I need something like this:

        usr_id | tot_h
        53     | 53    <==== plus 300 points in table2
        22     | 56    <==== plus 100 points in table2

        /////////the result i need ////////////
        usr_id | tot_h
        53     | 353   <==== plus 300 points in table2
        22     | 156   <==== plus 100 points in table2

    I think the structure needs to be something like this (pseudo statements ;)): from table1, count all ids to get the number of records for each usr_id, then sum hits + num_g; and from table2, select the extra points where the usr_id is the same as in table1, and get the result:

        usr_id | tot_h
        53     | 353
        22     | 156
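
    The usual way to combine a per-user aggregate with optional extra points is to LEFT JOIN table2 pre-aggregated per usr_id and COALESCE missing rows to 0, so users with no extra points don't come back NULL and the points aren't multiplied by the number of table1 rows. A runnable sketch of that pattern using Python's sqlite3 and the sample data from the question (the 100-point row for user 22 is assumed, since the question only mentions it hypothetically):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE table1 (id INT, hits INT, num_g INT, cats INT, usr_id INT, active INT);
            INSERT INTO table1 VALUES
                (1,10,11,1,53,1),(2,13,16,3,53,1),
                (1,10,22,1,22,1),(1,10,21,3,22,1),
                (1,2,6,2,11,1),(1,11,1,1,11,1);
            CREATE TABLE table2 (id INT, usr_id INT, points INT);
            INSERT INTO table2 VALUES (1,53,300),(2,22,100);  -- the (22,100) row is assumed
        """)

        query = """
            SELECT t1.usr_id,
                   COUNT(t1.id) + SUM(t1.num_g + t1.hits) + COALESCE(p.extra, 0) AS tot_h
            FROM table1 AS t1
            LEFT JOIN (SELECT usr_id, SUM(points) AS extra
                       FROM table2 GROUP BY usr_id) AS p
                   ON p.usr_id = t1.usr_id
            WHERE t1.usr_id != 0
            GROUP BY t1.usr_id, p.extra
            ORDER BY t1.usr_id DESC
        """
        for usr_id, tot_h in conn.execute(query):
            print(usr_id, tot_h)  # users 53 and 22 get their table2 points added once; user 11 gets none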

    Read the article

  • optimistic and pessimistic locks

    - by billmce
    Working on my first PHP/CodeIgniter project, and I've scoured the 'net for information on locking access to editing data and haven't found very much. I expect it to be a fairly regular occurrence for 2 users to attempt to edit the same form simultaneously. My experience (in the stateful world of BBx, filePro, and other RAD apps) is that the data being edited is locked using a pessimistic lock: one user has access to the edit form at a time, and the second user basically has to wait for the first to finish. I understand this can be done using Ajax sending XMLHttpRequests to maintain a 'lock' database. The PHP world, lacking state, seems to prefer optimistic locking. If I understand it correctly, it works like this: both users get to access the data and they each record a 'before changes' version of the data. Before saving their changes, the data is once again retrieved and compared to the 'before changes' version. If the two versions are identical then the user's changes are written. If they are different, the user is shown what has changed since he/she started editing and some mechanism is added to resolve the differences, or the user is shown a 'Sorry, try again' message. I'm interested in any experience people here have had with implementing both pessimistic and optimistic locking. If there are any libraries, tools, or 'how-to's available, I'd appreciate a link. Thanks
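
    As background for the optimistic side: the comparison doesn't have to re-fetch and diff the whole row. A common variant keeps an integer version (or updated-at timestamp) on the row and makes the UPDATE itself conditional on the version the user originally read, so the database does the check atomically; if the affected row count is zero, somebody else got there first and the user gets the 'Sorry, try again' path. A minimal sketch of that mechanism in Python with sqlite3 (table and column names invented for illustration; the same conditional UPDATE can be issued from PHP/CodeIgniter as a plain query):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
        conn.execute("INSERT INTO articles VALUES (1, 'original text', 1)")

        def save_edit(article_id, read_version, new_body):
            """Write the edit only if nobody saved a newer version since the form was loaded."""
            cur = conn.execute(
                "UPDATE articles SET body = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_body, article_id, read_version))
            conn.commit()
            return cur.rowcount == 1   # False -> stale edit, show a merge / 'try again' screen

        print(save_edit(1, 1, "edit from user A"))   # True: version matched, row updated
        print(save_edit(1, 1, "edit from user B"))   # False: user B read version 1, but it is now 2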

    Read the article

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server 5.0.75 on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high, while the memory usage stays low, and the application server is no longer responsive since it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is, where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either of those, or also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks! EDIT: I just discovered this in my slow queries log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
            (SELECT external_id FROM contact_relations
             WHERE external_table = 'contacts'
               AND contact_id IN
                   (SELECT contact_id FROM contacts
                    WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                           OR city like '%%butan%%%' OR email1 like '%%butan%%%')
                      AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question: if I have a contact table containing:

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example "actor" should bring up "Factory".

    Read the article

  • Determining the chances of an event occurring when it hasn't occurred yet

    - by sanity
    A user visits my website at time t, and they may or may not click on a particular link I care about, if they do I record the fact that they clicked the link, and also the duration since t that they clicked it, call this d. I need an algorithm that allows me to create a class like this: class ClickProbabilityEstimate { public void reportImpression(long id); public void reportClick(long id); public double estimateClickProbability(long id); } Every impression gets a unique id, and this is used when reporting a click to indicate which impression the click belongs to. I need an algorithm that will return a probability, based on how much time has past since an impression was reported, that the impression will receive a click, based on how long previous clicks required. Clearly one would expect that this probability will decrease over time if there is still no click. If necessary, we can set an upper-bound, beyond which we consider the click probability to be 0 (eg. if its been an hour since the impression occurred, we can be pretty sure there won't be a click). The algorithm should be both space and time efficient, and hopefully make as few assumptions as possible, while being elegant. Ease of implementation would also be nice. Any ideas?
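
    One space-efficient possibility is to keep just the overall click-through rate and a sorted list of historical click delays, then apply Bayes' rule: with p the prior probability that an impression is ever clicked and S(d) the fraction of observed clicks that took longer than d, P(click | no click after d) = p*S(d) / (p*S(d) + 1 - p), which falls toward 0 as d grows. A sketch in Python (method names mirror the interface in the question; still-open impressions are handled naively and censoring is ignored, so p is a slight underestimate):

        import time
        from bisect import bisect_right, insort

        class ClickProbabilityEstimate:
            def __init__(self):
                self.impression_times = {}  # id -> time the impression was reported
                self.click_delays = []      # sorted delays (seconds) of impressions that got clicked
                self.impressions = 0
                self.clicks = 0

            def report_impression(self, id_, now=None):
                self.impression_times[id_] = now if now is not None else time.time()
                self.impressions += 1

            def report_click(self, id_, now=None):
                now = now if now is not None else time.time()
                insort(self.click_delays, now - self.impression_times.pop(id_))
                self.clicks += 1

            def estimate_click_probability(self, id_, now=None):
                now = now if now is not None else time.time()
                d = now - self.impression_times[id_]
                p = self.clicks / self.impressions            # prior P(click)
                if not self.click_delays:
                    return p
                # S(d): fraction of past clicks that arrived later than d
                s = 1.0 - bisect_right(self.click_delays, d) / len(self.click_delays)
                return (p * s) / (p * s + (1.0 - p))          # Bayes' rule

        # toy usage: four impressions at t=0, clicks after 5s and 30s
        est = ClickProbabilityEstimate()
        for i in range(1, 5):
            est.report_impression(i, now=0)
        est.report_click(1, now=5)
        est.report_click(2, now=30)
        print(round(est.estimate_click_probability(3, now=10), 3))    # some chance left: one past click took 30s
        print(round(est.estimate_click_probability(4, now=3600), 3))  # ~0 after an hour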

    Read the article

  • Rationale in selecting Hash Key type

    - by Amrish
    Guys, I have a data structure which has 25 distinct integer keys and a value. I have a list of these objects (say 50000) and I intend to use a hash table to store/retrieve them. I am planning to take one of these approaches:

    1. Create an integer hash from these 25 integer keys and store it in the hash table. (Yeah! I have some means to handle collisions.)
    2. Make a string concatenation of the individual keys and use it as the hash key for the hash table. For example, if the key values are 1,2,4,6,7 then the hash key would be "12467".

    Assuming that I have a total of 50000 records, each with 25 distinct keys and a value, will my second approach be overkill when it comes to the cost of the string comparisons it needs to do to retrieve and insert a record? Some more information: each bucket in the hash table is a balanced binary tree, and I am using the Boost library's hash_combine method to create the hash from the 25 keys.
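
    A rough sketch of the trade-off in Python, for illustration only: folding the 25 integers with a hash_combine-style step (loosely modeled on Boost's classic 0x9e3779b9 mix) never builds or compares the concatenated strings, and approach 2 is also ambiguous unless the keys are delimited, since 1,2,4,6,7 and 12,4,6,7 both concatenate to "12467":

        def hash_combine(seed: int, value: int) -> int:
            # loosely modeled on Boost's classic hash_combine, masked to 32 bits
            seed ^= (hash(value) + 0x9E3779B9 + (seed << 6) + (seed >> 2)) & 0xFFFFFFFF
            return seed & 0xFFFFFFFF

        def int_key(keys):
            # approach 1: fold the 25 integers into a single hash value
            seed = 0
            for k in keys:
                seed = hash_combine(seed, k)
            return seed

        def string_key(keys):
            # approach 2: concatenate the keys -- ambiguous without a delimiter
            return "".join(str(k) for k in keys)

        keys_a = (1, 2, 4, 6, 7)
        keys_b = (12, 4, 6, 7)
        print(int_key(keys_a) == int_key(keys_b))        # almost certainly False
        print(string_key(keys_a) == string_key(keys_b))  # True: both collapse to "12467"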

    Read the article

  • setMessage for Zend_Validate_EmailAddress doesn't work

    - by iSenne
    Hello everybody. I have a form and I want to set my custom errors in it. I am using Zend, and I have the following code... //Create validators $formMustBeEmail = new Zend_Validate_EmailAddress(); $formMustBeEmail->setMessage(array( Zend_Validate_EmailAddress::INVALID => "1. Invalid type given, value should be a string", Zend_Validate_EmailAddress::INVALID_FORMAT => "2. '%value%' is no valid email address in the basic format local-part@hostname", Zend_Validate_EmailAddress::INVALID_HOSTNAME => "3. '%hostname%' is no valid hostname for email address '%value%'", Zend_Validate_EmailAddress::INVALID_MX_RECORD => "4. '%hostname%' does not appear to have a valid MX record for the email address '%value%'", Zend_Validate_EmailAddress::INVALID_SEGMENT => "5. '%hostname%' is not in a routable network segment. The email address '%value%' should not be resolved from public network.", Zend_Validate_EmailAddress::DOT_ATOM => "6. '%localPart%' can not be matched against dot-atom format", Zend_Validate_EmailAddress::QUOTED_STRING => "7. '%localPart%' can not be matched against quoted-string format", Zend_Validate_EmailAddress::INVALID_LOCAL_PART => "8. '%localPart%' is no valid local part for email address '%value%'", Zend_Validate_EmailAddress::LENGTH_EXCEEDED => "9. '%value%' exceeds the allowed length", Then I make the form... $this->addElement('text', 'email'); $emailElement = $this->getElement('email'); $emailElement ->setLabel('Emailadres') ->setOrder(1) ->setRequired(true) ->addValidator($formMustBeTest) ->addValidator($formMustBeEmail) ->addFilter(new Zend_Filter_StripTags()); But it doesn't work. I still get the normal errors made by Zend. Can anyone see what I am doing wrong? Tnx in advanced...

    Read the article

  • AudioRecord - empty buffer

    - by Arxas
    I'm trying to record some audio using the AudioRecord class. Here is my code:

        int audioSource = AudioSource.MIC;
        int sampleRateInHz = 44100;
        int channelConfig = AudioFormat.CHANNEL_IN_MONO;
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
        int bufferSizeInShorts = 44100;
        int bufferSizeInBytes = 2 * bufferSizeInShorts;
        short Data[] = new short[bufferSizeInShorts];
        Thread recordingThread;
        AudioRecord audioRecorder = new AudioRecord(audioSource, sampleRateInHz,
                channelConfig, audioFormat, bufferSizeInBytes);

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
        }

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            getMenuInflater().inflate(R.menu.activity_main, menu);
            return true;
        }

        public void startRecording(View arg0) {
            audioRecorder.startRecording();
            recordingThread = new Thread(new Runnable() {
                public void run() {
                    while (Data[bufferSizeInShorts - 1] == 0)
                        audioRecorder.read(Data, 0, bufferSizeInShorts);
                }
            });
            audioRecorder.stop();
        }

    Unfortunately my short array is empty after the recording is over. May I kindly ask you to help me figure out what's wrong?

    Read the article

  • Python modules, classes, functions documentation through Sphinx

    - by user343934
    Hi everyone, I am trying to document my small project with Sphinx, which I'm only recently getting familiar with. I read some tutorials and the Sphinx documentation but couldn't make it work. Setup and configuration are OK; I just have problems using Sphinx in a technical way. My table of contents should look like this:

        Overview
            Contents
        Configuration
            Contents
        System Requirements
            Contents
        How to use
            Contents
        Modules
            Index
            Display
        Help
            Contents

    Moreover, my focus is on Modules with docstrings. The details of the modules are:

        Directory: c:/wamp/www/project/
            Index.py
                class HtmlTemplate:
                    def header():
                    def body():
                    def form():
                    def footer():
                    __init_main:
            display.py
                class MainDisplay:
                    def execute():
                    def display():
                    def tree():
                    __init_main:

    My documentation directory:

        c:/users/abc/Desktop/Documentation/doc/
            _build
            _static
            _templates
            conf.py
            index.rst

    I have added the modules directory to the system environment and edited index.rst with the following:

        Welcome to Seq-alignment's documentation!
        Contents:

        .. toctree::
           :maxdepth: 2

        .. automodule:: index.py
        .. autoclass:: HtmlTemplate
           :members: Header,Body,Form,Footer,CloseHtml

        .. automodule:: display.py
        .. autoclass:: MainDisplay
           :members: execute,display,tree

        Indices and tables

        * :ref:`genindex`
        * :ref:`modindex`
        * :ref:`search`

    When I make the HTML file and view it, I don't get Modules in the table of contents; there is just a "show record" entry, and when I click it I just get an "index.txt" version in another window. I need your suggestions. Thanks
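
    Two things usually matter for autodoc here: the directory containing Index.py and display.py has to be importable at the time Sphinx runs, and automodule takes a module name without the .py extension (e.g. .. automodule:: display, not display.py), with the members pulled from the docstrings. A sketch of the usual conf.py addition, assuming the project path from the question:

        # conf.py -- make the project importable for sphinx.ext.autodoc
        import os
        import sys

        # path taken from the question; point it at wherever Index.py and display.py live
        sys.path.insert(0, os.path.abspath('c:/wamp/www/project'))

        extensions = ['sphinx.ext.autodoc']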

    Read the article

  • How to obtain value of auto_increment field in Access linked to MySQL?

    - by Cruachan
    I'm trying to modify an existing Access application to use MySQL as the database via ODBC, with the minimal amount of recoding. The current code will often insert a new record using DAO and then obtain the ID by using LastModified. This doesn't work with MySQL. Instead I'm trying to use the approach of SELECT * FROM tbl_name WHERE auto_col IS NULL, suggested for Access in the MySQL documentation. However, if I set up a sample table consisting of just an id and a text data field and execute this:

        CurrentDb.Execute ("INSERT INTO tbl_scratch (DATA) VALUES ('X')")
        Set rst = CurrentDb.OpenRecordset("SELECT id FROM tbl_scratch WHERE id IS NULL")
        myid = rst!id

    the id is returned as Null. However, if I execute

        INSERT INTO tbl_scratch (DATA) VALUES ('X');
        SELECT id FROM tbl_scratch WHERE id IS NULL;

    in a direct MySQL editor, then the id is returned correctly, so my database and approach are fine but my implementation inside Access must be incorrect. Frustratingly, the MySQL documentation gives that SQL statement to retrieve the id as an example that works in Access (as it states LAST_INSERT_ID() doesn't), but gives no further details. How might I fix this?

    Read the article

  • SQL Server: String Manipulation, Unpivoting

    - by OMG Ponies
    I have a column called body, which contains body content for our CMS. The data looks like: ...{cloak:id=1.1.1}...{cloak}...{cloak:id=1.1.2}...{cloak}...{cloak:id=1.1.3}...{cloak}... A moderately tweaked for readability example: ## h5. A formal process for approving and testing all external network connections and changes to the firewall and router configurations? {toggle-cloak:id=1.1.1}{tree-plus-icon} *Compliance:* {color:red}{*}Partial{*}{color} (?) {cloak:id=1.1.1} || Date: | 2010-03-15 || || Owner: | Brian || || Researched by: | || || Narrative: | Jira tickets are normally used to approve and track network changes\\ || || Artifacts: | Jira.bccampus.ca\\ || || Recommendation: | Need to update policy that no Jira = no change\\ || || Proposed Remedy(ies): | || || Approved Remedy(ies): | || || Date: | || || Reviewed by: | || || Remarks/comments: | || {cloak}## h5. Current network diagrams with all connections to cardholder data, including any wireless networks? {toggle-cloak:id=1.1.2}{tree-plus-icon} *Compliance:* {color:red}{*}TBD{*}{color} (?) {cloak:id=1.1.2} I'd like to get the cloak values out in the following format: requirement_num ----------------- 1.1.1 1.1.2 1.1.3 I'm looking at using UNIONs - does anyone have a better recommendation? Forgot to mention: I can't use regex, because CLR isn't enabled on the database. The numbers aren't sequencial. The current record jumps from 1.1.6 to 1.2.1

    Read the article

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.

    Read the article

  • Passing models between controllers in MVC4

    - by wtfsven
    I'm in the middle of my first foray into web development with MVC4, and I've run into a problem that I just know is one of the core issues in web development in general, but can't seem to wrap my mind around it. So I have two models: Employee and Company. Employee has a Company, therefore /Employee/Create contains a field that is either a dropdown list for selecting an existing Company, or an ActionLink to /Company/Create if none exists. But here's the issue: If a user has to create a Company, I need to preserve the existing Employee model across both Views--over to /Company/Create, then back to /Employee/Create. So my first approach was to pass the Employee model to /Company/Create with a @Html.ActionLink("Add...", "Create", "Organization", Model , null), but then when I get to Company's Create(Employee emp), all of emp's values are null. The second problem is that if I try to come back to /Employee/Create, I'd be passing back an Employee, which would resolve to the POST Action. Basically, the whole idea is to save the state of the /Employee/Create page, jump over to /Company/Create to create a Company record, then back to /Employee/Create to finish creating the Employee. I know there is a well understood solution to this because , but I can't seem to phase it well enough to get a good Google result out of it. I've though about using Session, but it seems like doing so would be a bit of a kludge. I'd really like an elegant solution to this.

    Read the article

  • C#/.NET: Find out whether a server exists, query DNS for SRV records

    - by TomTom
    Writing a client/server tool, I am tasked with finding a server to connect to. I would love to make things as easy as possible for the user. As such, my idea is to:

    1. Check whether specific servers (coded by name) exist (like "mail.xxx" for a mail server, for example - my example is not a mail server).
    2. Otherwise query for DNS SRV records, allowing the admin to configure a server location for a specific service (that the client connects to).

    The result is that the user may have to enter only a domain name, possibly not even that (using the registered standard domain of the computer in a LAN environment). Anyone have ideas how:

    1. To find out whether a server exists and answers (i.e. is online) in the fastest way? TCP can take a long time if the server is not there. A UDP-style ping sounds like a good idea to me. PING itself may be unavailable.
    2. To best ask, from within .NET, for an SRV record in a specific (the default) domain?
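
    The question is about .NET, but the two steps can be sketched language-neutrally; below is an illustration in Python. The SRV lookup assumes the third-party dnspython package (dns.resolver.resolve in dnspython 2.x, dns.resolver.query in older releases), since the standard library has no SRV resolver, and the liveness probe is just a TCP connect with a short timeout, which reports "exists and listens" quickly even when the host is down. The service name and domain below are hypothetical:

        import socket
        import dns.resolver   # third-party "dnspython" package, assumed installed

        def lookup_srv(service, proto, domain):
            """Return (target, port) candidates from the _service._proto.domain SRV record."""
            answers = dns.resolver.resolve("_%s._%s.%s" % (service, proto, domain), "SRV")
            # lower priority wins; higher weight is preferred within a priority
            ranked = sorted(answers, key=lambda r: (r.priority, -r.weight))
            return [(str(r.target).rstrip("."), r.port) for r in ranked]

        def is_reachable(host, port, timeout=0.5):
            """Cheap liveness probe: can we complete a TCP handshake quickly?"""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # hypothetical service and domain, purely for illustration
        for host, port in lookup_srv("myservice", "tcp", "example.com"):
            if is_reachable(host, port):
                print("connect to", host, port)
                break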

    Read the article

  • SELECT set of most recent id, amount FROM table, where id occurs many times

    - by Jon Cram
    I have a table recording the amount of data transferred by a given service on a given date. One record is entered daily for a given service. I'd like to be able to retrieve the most recent amount for a set of services. Example data set:

        serviceId | amount | date
        -------------------------------
        1         | 8      | 2010-04-12
        2         | 11     | 2010-04-12
        2         | 14     | 2010-04-11
        3         | 9      | 2010-04-11
        1         | 6      | 2010-04-10
        2         | 5      | 2010-04-10
        3         | 22     | 2010-04-10
        4         | 17     | 2010-04-19

    Desired response (service ids 1, 2, 3):

        serviceId | amount | date
        -------------------------------
        1         | 8      | 2010-04-12
        2         | 11     | 2010-04-12
        3         | 9      | 2010-04-11

    Desired response (service ids 2, 4):

        serviceId | amount | date
        -------------------------------
        2         | 11     | 2010-04-12
        4         | 17     | 2010-04-19

    This retrieves the equivalent of running the following once per serviceId:

        SELECT serviceId, amount, date
        FROM table
        WHERE serviceId = <given serviceId>
        ORDER BY date DESC
        LIMIT 0,1

    I understand how I can retrieve the data I want in X queries. I'm interested to see how I can retrieve the same data using either a single query or at the very least less than X queries. I'm very interested to see what might be the most efficient approach. The table currently contains 28809 records. I appreciate that there are other questions that cover selecting the most recent set of records. I have examined three such questions but have been unable to apply the solutions to my problem.
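
    The classic single-query shape for this is to join the table against its own per-service maximum date. A runnable sketch with Python's sqlite3 and the sample rows above (the table name "transfers" is made up, since "table" is a reserved word):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE transfers (serviceId INT, amount INT, date TEXT)")
        conn.executemany("INSERT INTO transfers VALUES (?, ?, ?)", [
            (1, 8, "2010-04-12"), (2, 11, "2010-04-12"), (2, 14, "2010-04-11"),
            (3, 9, "2010-04-11"), (1, 6, "2010-04-10"), (2, 5, "2010-04-10"),
            (3, 22, "2010-04-10"), (4, 17, "2010-04-19"),
        ])

        query = """
            SELECT t.serviceId, t.amount, t.date
            FROM transfers AS t
            JOIN (SELECT serviceId, MAX(date) AS maxdate
                  FROM transfers
                  WHERE serviceId IN (1, 2, 3)        -- the requested set of services
                  GROUP BY serviceId) AS latest
              ON latest.serviceId = t.serviceId AND latest.maxdate = t.date
            ORDER BY t.serviceId
        """
        for row in conn.execute(query):
            print(row)   # (1, 8, '2010-04-12'), (2, 11, '2010-04-12'), (3, 9, '2010-04-11')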

    Read the article

  • Obfuscating ids in Rails app

    - by fphilipe
    I'm trying to obfuscate all the ids that leave the server, i.e., ids appearing in URLs and in the HTML output. I've written a simple Base62 lib that has the methods encode and decode. Defining—or better—overwriting the id method of an ActiveRecord to return the encoded version of the id and adjusting the controller to load the resource with the decoded params[:id] gives me the desired result. The ids now are base62 encoded in the urls and the response displays the correct resource. Now I started to notice that subresources defined through has_many relationships aren't loading. e.g. I have a record called User that has_many Posts. Now User.find(1).posts is empty although there are posts with user_id = 1. My explanation is that ActiveRecord must be comparing the user_id of Post with the method id of User—which I've overwritten—instead of comparing with self[:id]. So basically this renders my approach useless. What I would like to have is something like defining obfuscates_id in the model and that the rest would be taken care of, i.e., doing all the encoding/decoding at the appropriate locations and preventing ids to be returned by the server. Is there any gem available or does somebody have a hint how to accomplish this? I bet I'm not the first trying this.
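
    Two hedged notes rather than a definitive answer: the broken has_many lookups are consistent with ActiveRecord comparing against the overridden id, so a gentler hook is to leave id alone, override to_param (which only feeds URL generation) with the encoded value, and decode params[:id] back in the controllers. The codec itself is small either way; here is a sketch of a base62 round-trip in Python (the app in question is Ruby, this only illustrates the encode/decode logic):

        import string

        ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 characters
        BASE = len(ALPHABET)

        def encode(n: int) -> str:
            if n == 0:
                return ALPHABET[0]
            out = []
            while n > 0:
                n, rem = divmod(n, BASE)
                out.append(ALPHABET[rem])
            return "".join(reversed(out))

        def decode(s: str) -> int:
            n = 0
            for ch in s:
                n = n * BASE + ALPHABET.index(ch)
            return n

        for i in (0, 1, 61, 62, 123456789):
            assert decode(encode(i)) == i
        print(encode(123456789))   # '8m0Kx'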

    Read the article

  • How do I avoid a race condition in my Rails app?

    - by Cathal
    Hi, I have a really simple Rails application that allows users to register their attendance on a set of courses. The ActiveRecord models are as follows: class Course < ActiveRecord::Base has_many :scheduled_runs ... end class ScheduledRun < ActiveRecord::Base belongs_to :course has_many :attendances has_many :attendees, :through => :attendances ... end class Attendance < ActiveRecord::Base belongs_to :user belongs_to :scheduled_run, :counter_cache => true ... end class User < ActiveRecord::Base has_many :attendances has_many :registered_courses, :through => :attendances, :source => :scheduled_run end A ScheduledRun instance has a finite number of places available, and once the limit is reached, no more attendances can be accepted. def full? attendances_count == capacity end attendances_count is a counter cache column holding the number of attendance associations created for a particular ScheduledRun record. My problem is that I don't fully know the correct way to ensure that a race condition doesn't occur when 1 or more people attempt to register for the last available place on a course at the same time. My Attendance controller looks like this: class AttendancesController < ApplicationController before_filter :load_scheduled_run before_filter :load_user, :only => :create def new @user = User.new end def create unless @user.valid? render :action => 'new' end @attendance = @user.attendances.build(:scheduled_run_id => params[:scheduled_run_id]) if @attendance.save flash[:notice] = "Successfully created attendance." redirect_to root_url else render :action => 'new' end end protected def load_scheduled_run @run = ScheduledRun.find(params[:scheduled_run_id]) end def load_user @user = User.create_new_or_load_existing(params[:user]) end end As you can see, it doesn't take into account where the ScheduledRun instance has already reached capacity. Any help on this would be greatly appreciated.
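
    The usual guards for this are either pessimistic locking (SELECT ... FOR UPDATE on the ScheduledRun row inside a transaction, so only one request runs the full? check at a time) or collapsing the capacity check and the counter increment into one atomic UPDATE, so two simultaneous requests cannot both claim the last place. A sketch of the atomic-update idea in Python with sqlite3 (schema cut down to the essentials; this shows the pattern, not the Rails code):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE scheduled_runs (id INTEGER PRIMARY KEY, capacity INT, attendances_count INT);
            CREATE TABLE attendances (user_id INT, scheduled_run_id INT);
            INSERT INTO scheduled_runs VALUES (1, 2, 0);   -- a run with only 2 places
        """)

        def register(user_id, run_id):
            """Claim a place and create the attendance atomically; fail cleanly if the run is full."""
            with conn:   # one transaction
                cur = conn.execute(
                    "UPDATE scheduled_runs SET attendances_count = attendances_count + 1 "
                    "WHERE id = ? AND attendances_count < capacity",
                    (run_id,))
                if cur.rowcount == 0:
                    return False          # no free place was left -- nothing was changed
                conn.execute("INSERT INTO attendances VALUES (?, ?)", (user_id, run_id))
                return True

        print([register(u, 1) for u in (101, 102, 103)])   # [True, True, False] -- third user is turned away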

    Read the article

  • Inheritance of TCollectionItem

    - by JamesB
    I'm planning to have a collection of items stored in a TCollection. Each item will derive from TBaseItem, which in turn derives from TCollectionItem; with this in mind, the collection will return a TBaseItem when an item is requested. Now, each TBaseItem will have a Calculate function. In TBaseItem this will just return an internal variable, but in each of the derivations of TBaseItem the Calculate function requires a different set of parameters. The collection will have a Calculate All function which iterates through the collection items and calls each Calculate function; obviously it would need to pass the correct parameters to each function. I can think of three ways of doing this:

    1. Create a virtual/abstract method for each calculate function in the base class and override it in the derived class. This would mean no type casting was required when using the object, but it would also mean I have to create lots of virtual methods and have a large if...else statement detecting the type and calling the correct "calculate" method. It also means that calling the calculate method is prone to error, as you would have to know when writing the code which one to call for which type with the correct parameters to avoid an Error/EAbstractError.

    2. Create a record structure with all the possible parameters in it and use this as the parameter for the "calculate" function. This has the added benefit that it can also be passed to the "calculate all" function, as it can contain all the parameters required, and it avoids a potentially very long parameter list.

    3. Just type cast the TBaseItem to access the correct calculate method. This would tidy up TBaseItem quite a lot compared to the first method.

    What would be the best way to handle this collection?

    Read the article

  • How does Access 2007's moveNext/moveFirst/, etc., feature work?

    - by Chris M
    I'm not an Access expert, but am an SQL expert. I inherited an Access front-end referencing a SQL 2005 database that worked OK for about 5000 records, but is failing miserably for 800k records... Behind the scenes in the SQL profiler & activity manager I see some kind of Access query like: SELECT "MS1"."id" FROM "dbo"."customer" "MS1" ORDER BY "MS1"."id" The MS prefix doesn't appear in any Access code I can see. I'm suspicious of the built-in Access navigation code: DoCmd.GoToRecord , , acNext The GoToRecord has AcRecord constant, which includes things like acFirst, acLast, acNext, acPrevious and acGoTo. What does it mean in a database context to move to the "next" record? This particular table uses an identity column as the PK, so is it internally grabbing all the IDs and then moving to the one that is the next highest??? If so, how would it work if a table was comprised of three different fields for the PK? Or am I on the wrong track, and something else in Access is calling that statement? Unfortunately I see a ton of prepared statements in the profiler. THanks!

    Read the article

  • How to add logic to Views? Ruby on Rails

    - by Gotjosh
    Right now I'm building a project management app in Rails. Here is some background info: I have 2 models, User and Client. Clients and Users have a one-to-one relationship (Client has_one, User belongs_to, which means the foreign key is in the users table). What I'm trying to do is: once you add a client, you can add credentials (add a user) to that client. To do so, all the clients are displayed with a link next to each client's name, meaning you can create credentials for that client. What I can't figure out how to do is: if that client already has credentials in the database (meaning there's a record in the users table with that client's id), then don't display the link. Here's what I thought would work:

        <% for client in @client %>
          <h5>
            <h4><%= client.id %></h4>
            <a href="/clients/<%= client.id %>"><%= client.name %></a>
            <% for user in @user %>
              <% if user.client_id = client.id %>
                <a href="/clients/<%= client.id %>/user/new">Credentials</a>
              <% end %>
            <% end %>
          </h5>
        <% end %>

    And here's the controller:

        def index
          @client = Client.find_all_by_admin(0)
          @user = User.find(:all)
        end

    But instead it just prints the link as many times as there are records in the users table. Any help?

    Read the article

  • Why do the usable sizes differ

    - by Raigex
    Again, I don't really know how to phrase the question, so I will explain. I have a video recorder application. I open my camera with:

        cameraRecorder = Camera.open(1); // (this is the front facing camera)

    and get the camera parameters and all supported preview sizes:

        Camera.Parameters tmpParams = cameraRecorder.getParameters();
        List<Camera.Size> tmpList = tmpParams.getSupportedPreviewSizes();

    One of the preview sizes on the Galaxy Tab 10.1 running ICS (4.0.4) is 800x600, but when I try to set the video size on my MediaRecorder:

        mediaRecorder.setVideoSize(800, 600);

    I get this error:

        12-19 17:27:55.035: E/CameraSource(110): Video dimension (800x600) is unsupported
        12-19 17:27:55.035: E/StagefrightRecorder(110): cameraSource do not init
        12-19 17:27:55.035: E/StagefrightRecorder(110): setupCameraSource failed. (-19)
        12-19 17:27:55.035: E/StagefrightRecorder(110): setupMediaSource is failed. (-19)
        12-19 17:27:55.035: E/StagefrightRecorder(110): setupMPEG4Recording is failed. (-19)
        12-19 17:27:55.035: E/MediaRecorder(30119): start failed: -19

    Does anyone know why this discrepancy might exist? (I know one of the supported record sizes is 1280x720, but that is too big for me.)

    Read the article

  • Why do we need Audit Columns in Database Tables?

    - by Software Enthusiastic
    Hi, I have seen many database designs having the following audit columns on all tables:

        Created By
        Create DateTime
        Updated By
        Updated DateTime

    From one perspective, I see tables in the following way:

        Entity tables: good candidates for audit columns.
        Reference tables: audit columns may or may not be required. In some cases last-update information is not needed at all because the record is never going to be modified.
        Reference data tables (like country names, entity states, etc.): audit columns may not be required because this information is created only at system installation time and is never going to change.

    I have seen many designers blindly put all audit columns on all tables. Is this practice good, and if yes, what could be the reason? I just want to know, because to me it seems illogical; it is difficult for me to figure out why they design their DB this way. I am not saying they are wrong or right, I just want to know the WHY. You can also suggest if there is an alternative auditing pattern or solution available... Thanks and Regards

    Read the article
