Search Results

Search found 347 results on 14 pages for 'timestamps'.

Page 8 of 14

  • Rails uniqueness constraint and matching db unique index for null column

    - by Dave
    I have the following in my migration file:

        def self.up
          create_table :payment_agreements do |t|
            t.boolean :automatic, :default => true, :null => false
            t.string :payment_trigger_on_order
            t.references :supplier
            t.references :seller
            t.references :product
            t.timestamps
          end
        end

    I want to ensure that if a product_id is specified it is unique, but I also want to allow null, so I have the following in my model:

        validates :product_id, :uniqueness => true, :allow_nil => true

    Works great, but I should then add an index to the migration file:

        add_index :payment_agreements, :product_id, :unique => true

    Obviously this will throw an exception when two null values are inserted for product_id. I could just simply omit the index in the migration, but then there's the chance that I'll get two PaymentAgreements with the same product_id, as shown here: Concurrency and integrity. My question is: what is the best/most common way to deal with this problem?
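
    One common resolution, sketched below on the assumption that the database is MySQL, PostgreSQL or SQLite (all of which treat NULLs as distinct in a unique index, so a plain unique index already matches :allow_nil => true in the model); the migration class name is made up:

        # NULLs are not counted as duplicates, so many rows may leave product_id empty
        # while any non-NULL product_id stays unique at the database level.
        class AddUniqueIndexToPaymentAgreements < ActiveRecord::Migration
          def self.up
            add_index :payment_agreements, :product_id, :unique => true
          end

          def self.down
            remove_index :payment_agreements, :product_id
          end
        end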

  • Using Yii framework, what would be the right way to process an attribute before displaying it using CHtml

    - by Karolis
    Say I have this line of code in the view:

        <?php echo CHtml::activeTextField($model,'start_time'); ?>

    start_time is a UNIX timestamp. When just displaying it in a view I can apply functions like date() on it. Where should I apply the formatting when I'm displaying it in a form using the line of code above? (This case of timestamps/dates might be special, but I'm also interested in how one would go about it if the value weren't a date and I just wanted to work with the "value in the database" versus different representations of that value in different views.) Thanks =)
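
    A minimal sketch of one common Yii 1.x pattern (the model class name and the date format are assumptions): convert in the model's lifecycle hooks, so forms see a readable value while the database keeps the raw UNIX timestamp:

        class Appointment extends CActiveRecord   // hypothetical model class
        {
            // after loading: expose a human-readable start_time to views and forms
            protected function afterFind()
            {
                parent::afterFind();
                $this->start_time = date('Y-m-d H:i', (int) $this->start_time);
            }

            // before saving: turn whatever the form posted back into a timestamp
            protected function beforeSave()
            {
                $this->start_time = strtotime($this->start_time);
                return parent::beforeSave();
            }
        }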

  • MongoMapper - undefined method `keys'

    - by nimnull
    I'm trying to create a Document instance with params passed from the POST-submitted form. My MongoMapper document looks like:

        class Good
          include MongoMapper::Document

          key :title, String
          key :cost, Float
          key :description, String
          timestamps!

          many :attributes

          validates_presence_of :title, :cost
        end

    And the create action:

        def create
          @good = Good.new(params[:good])
          if @good.save
            redirect_to @good
          else
            render :new
          end
        end

    params[:good] contains all valid document attributes: {"good"=>{"cost"=>"2.30", "title"=>"Test good", "description"=>"Test description"}}. But I get a strange error from Rails:

        undefined method `keys' for ["title", "Test good"]:Array

    My gem list (local gems): actionmailer (2.3.8), actionpack (2.3.8), activerecord (2.3.8), activeresource (2.3.8), activesupport (2.3.8), authlogic (2.1.4), bson (1.0), bson_ext (1.0), compass (0.10.1), default_value_for (0.1.0), haml (3.0.6), jnunemaker-validatable (1.8.4), mongo (1.0), mongo_ext (0.19.3), mongo_mapper (0.7.6), plucky (0.1.1), rack (1.1.0), rails (2.3.8), rake (0.8.7), rubygems-update (1.3.7). Any suggestions on how to fix this error?
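
    One likely culprit, offered as a guess rather than a confirmed diagnosis: the association `many :attributes` shadows MongoMapper's own attributes/attributes= accessors, which Good.new(params[:good]) relies on to assign the hash. Renaming the association (the new name below is made up) avoids the clash:

        class Good
          include MongoMapper::Document

          key :title,       String
          key :cost,        Float
          key :description, String
          timestamps!

          many :product_attributes   # renamed from `many :attributes`

          validates_presence_of :title, :cost
        end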

  • Convert local time (10 digit number) to a readable datetime format

    - by djerry
    Hey all, I'm working with a PBX for VoIP calls. One aspect of the PBX is that you can choose to receive CDR packages. Those packages have two timestamps, "utc" and "local", but both seem to always be the same. Here's an example of a timestamp: "1268927156". At first sight there seems to be no logic in it, so I tried converting it several ways, but with no good result. That value should represent a time around 11am (GMT+1) today. Things I tried:

        DateTime dt = new DateTime(number);
        TimeSpan ts = new TimeSpan(number);
        DateTime utc = new DateTime(number + 504911232000000000, DateTimeKind.Utc);

    and some others I can't remember right now. Am I missing something stupid here? Thanks in advance.
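
    A sketch that assumes the value is a standard UNIX timestamp (seconds since 1970-01-01 00:00:00 UTC) rather than .NET ticks, which is what a 10-digit number from telephony gear usually is:

        using System;

        class CdrTimestamp
        {
            static void Main()
            {
                long seconds = 1268927156;
                // Count forward from the UNIX epoch instead of treating the value as ticks.
                DateTime utc = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddSeconds(seconds);
                Console.WriteLine(utc);               // 2010-03-18 15:45:56 UTC
                Console.WriteLine(utc.ToLocalTime()); // the same instant in the machine's time zone
            }
        }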

  • MySQL - Finding time overlaps

    - by Jude
    Hi, I have 2 tables in the database with the following attributes:

        Booking
        =======
        booking_id
        booking_start
        booking_end

        resource_booked
        ===============
        booking_id
        resource_id

    The second table is an associative entity between "Booking" and "Resource" (i.e., 1 booking can contain many resources). The attributes booking_start and booking_end are timestamps with date and time in them. How might I be able to find out, for each resource_id in resource_booked, whether the date/time overlaps or clashes with other bookings of the same resource_id? I was doodling the answer on paper, pictorially, to see if it might help me visualize how I could solve this, and I got this:

    1. Join the 2 tables (Booking, resource_booked) into one table with the 4 attributes needed.
    2. Follow the answer suggested here: http://stackoverflow.com/questions/689458/find-overlapping-date-time-rows-within-one-table

    I did step 1 but step 2 is leaving me baffled! I would really appreciate any help on this! Thanks!
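
    A sketch of one way to write step 2 directly against the two tables above (MySQL syntax): two bookings of the same resource overlap exactly when each one starts before the other one ends:

        SELECT rb1.resource_id,
               b1.booking_id AS booking_a,
               b2.booking_id AS booking_b
        FROM   resource_booked rb1
        JOIN   booking b1          ON b1.booking_id   = rb1.booking_id
        JOIN   resource_booked rb2 ON rb2.resource_id = rb1.resource_id
                                  AND rb2.booking_id  > rb1.booking_id
        JOIN   booking b2          ON b2.booking_id   = rb2.booking_id
        WHERE  b1.booking_start < b2.booking_end
          AND  b2.booking_start < b1.booking_end;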

  • Excel Question: I need a date and time formula to convert between time zones

    - by Harold Nottingham
    Hello, I am trying to find a way to calculate a duration in days between my time zone (Central) and timestamps recorded in other zones (Pacific, Mountain, Eastern). I just do not know where to start. My criteria would be as follows: cells C5:C100 hold the timestamps in this format, 3/18/2010 23:45, but for different dates and times. Cells D5:D100 hold the corresponding time zone in text form: Pacific, Mountain, Eastern, or Central. Cell F5 would be where the duration in days would need to be. I'm just not sure how to write the formula to give me what I am looking for. I appreciate any assistance in advance. Thanks.
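
    A rough sketch of the time-zone normalization step only, since the question leaves the other endpoint of the duration open; the hour offsets assume standard definitions (Eastern = Central+1, Mountain = Central-1, Pacific = Central-2) and ignore daylight-saving transitions. In a helper column such as E5, the Central-time equivalent of C5, given the zone name in D5, could be:

        =C5 + LOOKUP(D5, {"Central";"Eastern";"Mountain";"Pacific"}, {0;-1;1;2})/24

    and, if the other endpoint is "now" on a machine running in Central time, F5 becomes the elapsed time in days:

        =NOW() - E5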

  • trying to use ActiveRecord with Sinatra, Migration fails question

    - by David Lazar
    Hi, running Sinatra 1.0, I wanted to add a database table to my program. In my Rakefile I have a task:

        task :environment do
          ActiveRecord::Base.establish_connection(YAML::load(File.open('config/database.yml'))["development"])
        end

    I have a migration task in my namespace that calls the migration code:

        namespace :related_products do
          desc "run any migrations we may have in db/migrate"
          task :migrate => :environment do
            ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)
          end
        end

    My console pukes out an error when the call to ActiveRecord::Migrator.migrate() is made:

        rake aborted!
        undefined method `info' for nil:NilClass

    The migration code itself is pretty simple and presents me with no clues as to what this missing `info` is:

        class CreateStores < ActiveRecord::Migration
          def self.up
            create_table :stores do |t|
              t.string :name
              t.string :access_url
              t.timestamps
            end
          end

          def self.down
            drop_table :stores
          end
        end

    I am a little mystified here and am looking for some clues as to what might be wrong. Thanks!
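
    A sketch of one likely fix, on the assumption (not stated in the question) that the migrator is logging through ActiveRecord::Base.logger, which is nil when ActiveRecord is wired up outside Rails:

        require 'logger'

        task :environment do
          ActiveRecord::Base.establish_connection(
            YAML::load(File.open('config/database.yml'))["development"]
          )
          # The migrator calls logger.info while it runs, so give it a real logger.
          ActiveRecord::Base.logger = Logger.new(STDOUT)
        end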

  • Huge file in Clojure and Java heap space error

    - by trzewiczek
    I posted before about a huge XML file - it's a 287GB XML Wikipedia dump that I want to put into a CSV file (revision authors and timestamps). I managed to do that up to a point. Before, I got a StackOverflowError, but now, after solving the first problem, I get a java.lang.OutOfMemoryError: Java heap space error. My code (partly taken from Justin Kramer's answer) looks like this:

        (defn process-pages [page]
          (let [title (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (for [revision revisions]
              (let [user (revision-user revision)
                    time (revision-timestamp revision)]
                (spit "files/data.csv"
                      (str "\"" time "\";\"" user "\";\"" title "\"\n")
                      :append true)))))

        (defn open-file [file-name]
          (let [rdr (BufferedReader. (FileReader. file-name))]
            (->> (:content (data.xml/parse rdr :coalescing false))
                 (filter #(= :page (:tag %)))
                 (map process-pages))))

    I don't show the article-title, revision-user and revision-timestamp functions, because they just simply take data from a specific place in the page or revision hash. Could anyone help me with this? I'm really new to Clojure and don't get the problem.
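
    A sketch of one way to keep memory flat, reusing the helper functions named above (which the question doesn't show): consume the lazy sequence with doseq so nothing retains its head, and write through one open writer instead of calling spit once per revision:

        (require '[clojure.data.xml :as data.xml]
                 '[clojure.java.io :as io])

        (defn export-revisions [file-name out-file]
          (with-open [rdr (io/reader file-name)
                      out (io/writer out-file)]
            (doseq [page     (filter #(= :page (:tag %))
                                     (:content (data.xml/parse rdr :coalescing false)))
                    :let     [title (article-title page)]
                    revision (filter #(= :revision (:tag %)) (:content page))]
              (.write out (str "\"" (revision-timestamp revision) "\";\""
                               (revision-user revision) "\";\"" title "\"\n")))))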

  • How to do a range query

    - by Walter H
    I have a bunch of numeric timestamps that I want to check against a range to see if they match a particular range of dates - basically like a BETWEEN .. AND .. match in SQL. The obvious data structure would be a B-tree, but while there are a number of B-tree implementations on CPAN, they only seem to implement exact matching. Berkeley DB has the same problem; there are B-tree indices, but no range matching. What would be the simplest way to do this? I don't want to use an SQL database unless I have to. Clarification: I have a lot of these, so I'm looking for an efficient method, not just grep over an array.
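
    A common non-database answer is a sorted array plus a binary search of both endpoints; the sketch below is Python purely to illustrate the idea (the question is about Perl, where a sorted array and a CPAN binary-search routine would play the same role):

        import bisect

        timestamps = sorted([1268927156, 1268930000, 1268999999, 1271430291])  # sample values

        def between(ts_sorted, start, end):
            """Return every timestamp with start <= t <= end in O(log n) plus output size."""
            lo = bisect.bisect_left(ts_sorted, start)
            hi = bisect.bisect_right(ts_sorted, end)
            return ts_sorted[lo:hi]

        print(between(timestamps, 1268900000, 1269000000))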

  • How to pre-check checkboxes in formtastic

    - by trustfundbaby
    I have a form I'm trying to set up. Users can have many posts, and each post can have many people watching it. The Watch model is set up polymorphically as 'watchable' so it can apply to different types of models; it has user_id, watchable_id, watchable_type and timestamps as attributes/fields. This is solely so that when people comment on a post, users watching the post can get an email about it. What I'm trying to do is show the user a list of users that they can tag on each post, which is no problem. This is what I'm using right now: http://pastie.org/940421. The problem with this is that when you go to edit an update/post, all the checkboxes are pre-checked; I want it to pre-check only users who are currently watching the post.
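
    The linked pastie isn't reproduced here, so the following is only a guess at the shape of a fix (every name in it is an assumption): if the post exposes its current watchers through an association, Formtastic's :check_boxes input pre-checks only the records that are already associated:

        # In the Post model, something along these lines:
        #   has_many :watches,  :as => :watchable
        #   has_many :watchers, :through => :watches, :source => :user

        semantic_form_for @post do |f|
          f.inputs do
            f.input :watchers, :as => :check_boxes, :collection => User.all
          end
          f.buttons
        end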

  • Graph diffing and versioning tool

    - by hashable
    I am working with a team that edits large DAGs represented as single files. Currently we are unable to work with multiple users concurrently modifying the DAG. Is there a tool (somewhat like the Eclipse SVN plugin) that can do revision control on the file (manage timestamps/revision stamps) to identify incoming/outgoing/conflicting changes (Node/Link insertion/deletion/modification) and merge changes just like programmers do with source code files? The system should be able to do dependency management too. E.g., an incoming Link must not be accepted when one of the two Nodes is absent; that is, it should not "break" the existing DAG by allowing partial updates. Is there a framework to do this using generic "Node" and "Link" interfaces? Note: I am aware of Protege and its plugins. They currently do not satisfy my requirements.

  • CouchDB Map/Reduce raises execption in reduce function?

    - by fuzzy lollipop
    My view generates keys in this format: ["job_id:1234567890", 1271430291000], where the first key element is a unique key and the second is a timestamp in milliseconds. I run my view with:

        elapsed_time?startkey=["123"]&endkey=["123",{}]&group=true&group_level=1

    Here is my reduce function; the intention is to reduce the output to get the earliest and latest timestamps and return the difference between each of them and now:

        function(keys, values, rereduce) {
          var now = new Date().valueOf();
          var first = Number.MIN_VALUE;
          var last = Number.MAX_VALUE;
          if (rereduce) {
            first = Math.max(first, values[0].first);
            last = Math.min(last, values[0].last);
          } else {
            first = keys[0][0][1];
            last = keys[keys.length-1][0][1];
          }
          return {first: now - first, last: now - last};
        }

    When processing a query it constantly raises the following exception:

        function raised exception (new TypeError("keys has no properties", "", 1))

    I am making sure not to reference keys inside my rereduce block. Why does this function constantly raise this exception?
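
    Not a diagnosis of the exact exception, but a commonly suggested restructuring (the emit shown in the comment is an assumption about the map function): have the map emit the timestamp as the value so the reduce never touches keys at all (keys is null during rereduce), keep the reduce deterministic by returning the raw min/max, and subtract from "now" on the client instead:

        // assumes the map does: emit([job_id, ts], ts);
        function (keys, values, rereduce) {
          // values are timestamps on the first pass and {first, last} objects on rereduce
          var first = Infinity, last = -Infinity;
          for (var i = 0; i < values.length; i++) {
            var lo = rereduce ? values[i].first : values[i];
            var hi = rereduce ? values[i].last  : values[i];
            if (lo < first) first = lo;
            if (hi > last)  last  = hi;
          }
          return {first: first, last: last};
        }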

  • Help with Ruby Date Compare

    - by Kevin
    Yes, I've read and done teh Google many times but I still can't get this working... maybe I'm an idiot :) I have a system using tickets. Start date is "created_at" in the timestamps. Each ticket closes 7 days after "created_at". In the model, I'm using:

        def closes
          (self.created_at + 7.days)
        end

    I'm trying to create another method that will take "closes" and return it as how many days, hours, minutes, and seconds are left before the ticket closes. Anyone want to help and/or admonish my skills? ;)
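
    A minimal sketch (the method name and the output format are made up) that splits the seconds remaining until closes into days, hours, minutes and seconds:

        def time_until_close
          remaining = (closes - Time.now).to_i
          return "closed" if remaining <= 0
          days,    rest    = remaining.divmod(86_400)   # seconds per day
          hours,   rest    = rest.divmod(3_600)         # seconds per hour
          minutes, seconds = rest.divmod(60)
          "#{days}d #{hours}h #{minutes}m #{seconds}s"
        end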

  • Speeding up PostgreSQL query where data is between two dates

    - by Roger
    I have a large table (50m rows) which has some data with an ID and timestamp. I need to query the table to select all rows with a certain ID where the timestamp is between two dates, but it currently takes over 2 minutes on a high-end machine. I'd really like to speed it up. I have found this tip which recommends using a spatial index, but the example it gives is for IP addresses. However, the speed increase (436s to 3s) is impressive. How can I use this with timestamps?
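
    A simpler first step than a spatial index, sketched here with placeholder table and column names: a composite btree index on (the ID column, the timestamp column) lets PostgreSQL answer the ID-equality plus date-range predicate with a single index range scan:

        CREATE INDEX idx_readings_device_ts ON readings (device_id, recorded_at);

        EXPLAIN ANALYZE
        SELECT *
        FROM   readings
        WHERE  device_id   = 1234
          AND  recorded_at BETWEEN '2010-03-01' AND '2010-03-18';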

  • Why do external Java libraries paths have to be refreshed in eclipse between starts?

    - by Jason
    For my school projects, I use the joda-time API for generating timestamps that are used for file names and logging. Getting Eclipse to recognize the library is no problem. However, when I start Eclipse between reboots, I get the red X in the project tree because the external API cannot be found, even though the file path has not changed. So each time I have to go to the Libraries tab in Build Path to re-target the API. Frankly, this is getting to be a PIMA. So is there any way to make the path permanent, so I don't have to do the Build Path rigmarole all the time?

  • Can you explain what's going on in this Ruby code?

    - by samoz
    I'm trying to learn Ruby as well as Ruby on Rails right now. I'm following along with Learning Rails, 1st edition, but I'm having a hard time understanding some of the code. I generally do work in C, C++, or Java, so Ruby is a pretty big change for me. I'm currently stumped by the following block of code for a database migration:

        def self.up
          create_table :entries do |t|
            t.string :name
            t.timestamps
          end
        end

    Where is the t variable coming from? What does it actually represent? Is it sort of like the 'i' in a for(i=0;i<5;i++) statement? Also, where is :entries being defined? (entries is the name of my controller, but how does this function know about that?)
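
    For illustration only (this toy code is not Rails' real implementation): create_table yields a table-definition object to the block, and |t| is simply the name the block gives to that yielded object rather than a loop counter; :entries is just a Symbol naming the table, which by Rails convention is the pluralized resource name shared by the entries table, the Entry model and the entries controller:

        # A stripped-down imitation of create_table, to show where t comes from.
        def create_table(name)
          t = []                      # Rails would build a TableDefinition object here
          yield t if block_given?     # the block's |t| parameter receives this object
          puts "creating #{name} with #{t.inspect}"
        end

        create_table :entries do |t|
          t << [:string, :name]       # in the migration this is t.string :name
          t << [:timestamps]          # and t.timestamps
        end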

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000's of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

    - push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
    - tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately and to keep tails with pushes strictly in sync), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value storage, or something else?
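
    One concrete way to picture the relational option (a sketch; all names are made up, and the composite primary key assumes no two events share a millisecond within one stream): a primary key on (stream key, timestamp) makes push an append-friendly INSERT and tail a single index range scan:

        CREATE TABLE events (
            stream_key  VARCHAR(64) NOT NULL,
            ts          BIGINT      NOT NULL,   -- event timestamp in milliseconds
            payload     BLOB        NOT NULL,
            PRIMARY KEY (stream_key, ts)
        );

        -- tail(key, timestamp): everything for one stream since a given moment
        SELECT ts, payload
        FROM   events
        WHERE  stream_key = 'some-key'
          AND  ts >= 1271430291000
        ORDER  BY ts;

    A sorted key-value store can mimic the same layout by concatenating the stream key and the timestamp into one key and doing an ordered range scan.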

  • Git to SVN trouble

    - by Kevin
    My boss has a Perforce repository for which he wants to make a read-only copy available on SourceForge via Subversion. He had a Perl script which would do this, but it's no longer functioning (we don't want to try debugging it yet) and it's really not that great anyway. So an alternate solution is to pull the Perforce repo into git as a remote ref, which I have already done successfully (including all the proper commit details and authors). Now the trouble I'm having is pushing it out to a separate SVN repository. I can make it start the commit process with "git svn dcommit --add-author-from", but the problem is that even though the correct author appears at the end of the commit message, the "real" author committing is my machine's user. I want to preserve the real author with the commit, and I'd also like to preserve the original timestamps as well. Is anyone familiar with how I could accomplish this?

  • Version control of Mathematica notebooks

    - by Etaoin
    Mathematica notebooks are, of course, plaintext files -- it seems reasonable to expect that they should play nice with a version-control system (git in my case, although I doubt the specific system matters). But the fact is that any .nb file is full of cache information, timestamps, and other assorted metadata. Scads of it. Which means that limited version control is possible -- commits and rollbacks work fine. Merging, though, is a disaster. Mathematica won't open a file with merge markers in it, and a text editor is no way to go through a .nb file. Has anyone had any luck putting a notebook under version control? How?

  • Using gmail as SMTP server in Java web app is slow

    - by Annie
    Hi, I was wondering if anyone might be able to explain to me why it's taking nearly 30 seconds each time my Java web app sends an email using Gmail's SMTP server? See the following timestamps:

        13/04/2010-22:24:27:281 DEBUG test.service.impl.SynchronousEmailService - Before sending mail.
        13/04/2010-22:24:52:625 DEBUG test.service.impl.SynchronousEmailService - After sending mail.

    I'm using spring's JavaMailSender class with the following settings:

        email.host=smtp.gmail.com
        [email protected]
        email.password=mypassword
        email.port=465
        mail.smtp.auth.required=true

    Note that the mail is getting sent and I'm receiving it fine, there's just this delay which is resulting in a slow experience for the application user. If you know how I can diagnose the problem myself that would be good too :)
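
    Not a confirmed diagnosis, but the usual suspects to rule out first, sketched as configuration (the email.* keys mirror the ones above; the mail.smtp.* names are standard JavaMail session properties): port 465 needs an SSL socket, port 587 with STARTTLS is the common alternative for Gmail, and explicit timeouts at least make a slow connection fail fast enough to show up clearly in the logs:

        email.host=smtp.gmail.com
        email.port=587
        mail.smtp.auth=true
        mail.smtp.starttls.enable=true
        mail.smtp.connectiontimeout=5000
        mail.smtp.timeout=5000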

  • Group MySQL Data into Arbitrarily Sized Time Buckets

    - by Eric J.
    How do I count the number of records in a MySQL table based on a timestamp column per unit of time, where the unit of time is arbitrary? Specifically, I want to count how many records' timestamps fell into 15-minute buckets during a given interval. I understand how to do this in buckets of 1 second, 1 minute, 1 hour, 1 day etc. using MySQL date functions, e.g.

        SELECT YEAR(datefield) Y, MONTH(datefield) M, DAY(datefield) D, COUNT(*) Cnt
        FROM mytable
        GROUP BY YEAR(datefield), MONTH(datefield), DAY(datefield)

    but how can I group by 15-minute buckets?
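
    A sketch of the 15-minute case using the same table and column names as above: convert the timestamp to UNIX seconds, integer-divide by 900 (15 minutes), and group on that bucket:

        SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(datefield) / 900) * 900) AS bucket_start,
               COUNT(*) AS cnt
        FROM   mytable
        GROUP  BY bucket_start
        ORDER  BY bucket_start;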

  • php - comparing timestamp dates to make sure user is of minimum age

    - by Micheal Ken
    When a user signs up, the system has to check that they are old enough to do so; in this example they have to be at least 8 years old.

        $minAge = strtotime(date("d")."-".date("m")."-".(date("Y")-8));
        $dob = strtotime($day."-".$month."-".$year);

    (Here $minAge corresponds to 01-03-2004 and $dob to 01-02-2011.) I basically need to make sure this person was born before 2004, but I want to know whether I have to convert the timestamps to do a comparison or whether there is a more efficient way. Any help is appreciated, thank you.
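
    A sketch of the direct comparison: both values are already UNIX timestamps, so they can be compared as plain numbers; mktime() avoids assembling date strings by hand, and $day/$month/$year are the form fields from the question:

        <?php
        // Latest acceptable birth date: exactly 8 years ago today.
        $cutoff = mktime(0, 0, 0, (int) date('n'), (int) date('j'), (int) date('Y') - 8);
        $dob    = mktime(0, 0, 0, (int) $month, (int) $day, (int) $year);

        if ($dob <= $cutoff) {
            // old enough to sign up
        } else {
            // younger than 8 years old
        }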

  • How best to debug Delphi using the IDE and/or FOSS?

    - by LeonixSolutions
    I am currently using Delphi 7 and unsure whether to upgrade. I see the following means of debugging and wonder if there are others, or which FOSS tools a small company can use (we don't do much Windows programming):

    1. Debug in the IDE, by setting breakpoints, using watches, etc.
    2. Debug in the IDE, by using the Event Log. I got some good info from this page and tweaked it to add timestamps and indent/outdent on procedure call/return, so that I can see nested calls more quickly. Does anyone know of anything better?
    3. Using a profiler.
    4. Any others? Such as MadExcept, etc?

  • Whats the best data-structure for storing 2-tuple (a, b) which support adding, deleting tuples and c

    - by bhups
    Hi. So here is my problem: I want to store 2-tuples (key, val) and want to perform the following operations:

    - keys are strings and values are Integers
    - multiple keys can have the same value
    - adding new tuples
    - updating any key with a new value (any new or updated value is greater than the previous one, like timestamps)
    - fetching all the keys with values less than or greater than a given value
    - deleting tuples

    A hash seems to be the obvious choice for updating the key's value, but then lookups via values are going to take longer (O(n)). The other option is a balanced binary search tree with key and value switched. So now lookups via values will be fast (O(lg(n))), but updating a key will take O(n). So is there any data structure which can be used to address these issues? Thanks.
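
    One workable combination, sketched in Java purely as an illustration (not the only possible answer): a hash map for O(1) key-to-value lookups plus an ordered map from each value to the set of keys holding it, so updates stay O(log n) and a range fetch costs O(log n) plus the size of the output:

        import java.util.*;

        class TupleStore {
            private final Map<String, Integer> byKey = new HashMap<>();
            private final TreeMap<Integer, Set<String>> byValue = new TreeMap<>();

            void put(String key, int value) {          // add a tuple, or update a key
                remove(key);                           // drop any previous value first
                byKey.put(key, value);
                byValue.computeIfAbsent(value, v -> new HashSet<>()).add(key);
            }

            void remove(String key) {                  // delete a tuple
                Integer old = byKey.remove(key);
                if (old == null) return;
                Set<String> keys = byValue.get(old);
                keys.remove(key);
                if (keys.isEmpty()) byValue.remove(old);
            }

            List<String> keysWithValueBelow(int bound) {   // all keys whose value < bound
                List<String> result = new ArrayList<>();
                for (Set<String> keys : byValue.headMap(bound, false).values()) {
                    result.addAll(keys);
                }
                return result;
            }
        }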

  • How can I get the last-modified time with python3 urllib?

    - by Daenyth
    I'm porting over a program of mine from python2 to python3, and I'm hitting the following error:

        AttributeError: 'HTTPMessage' object has no attribute 'getdate'

    Here's the code:

        conn = urllib.request.urlopen(fileslist, timeout=30)
        last_modified = conn.info().getdate('last-modified')

    This section worked under python 2.7, and so far I haven't been able to find out the correct method to get this information in python 3.1. The full context is an update method. It pulls new files from a server down to its local database, but only if the file on the server is newer than the local file. If there's a smarter way to achieve this functionality than just comparing local and remote file timestamps, then I'm open to that as well.
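
    A sketch of the Python 3 equivalent: the object returned by .info() is an email.message.Message there, so the header is read as plain text and parsed with email.utils (the URL below is a placeholder for the fileslist variable in the question):

        import urllib.request
        from email.utils import parsedate

        fileslist = 'http://example.com/files.txt'       # placeholder URL
        conn = urllib.request.urlopen(fileslist, timeout=30)
        raw = conn.info().get('Last-Modified')           # e.g. 'Thu, 18 Mar 2010 15:45:56 GMT'
        last_modified = parsedate(raw) if raw else None  # 9-tuple, like getdate() used to return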
