Search Results

Search found 11198 results on 448 pages for 'ruby postgresql'.

Page 60/448 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • postgresql duplicate table names best practice

    - by veilig
    My company has a handful of apps that we deploy in the websites we build. Recently a very old app needed to be included alongside a newer app, and there was a conflict: both apps needed to use the same table name. We are now in the process of updating the old app, and there will be some DB updates. I'm curious what people consider best practice (or how you do it yourselves) to help ensure these name collisions don't happen. I've looked at schemas, but I'm not sure that's the path we want to take. As the documentation advises, I don't want to "wire" a particular schema name into an application, and if I add schemas to the user's search path, how would Postgres know which table I was referring to when two schemas have a table of the same name? Although maybe I'm reading too much into this. Any insights or words of wisdom would be greatly appreciated!
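
    A hedged sketch of the schema approach (all names hypothetical): unqualified references resolve to the first match on search_path, and shared names can always be disambiguated by qualifying them.

        -- Give each app its own schema so identical table names can coexist.
        CREATE SCHEMA legacy_app;
        CREATE SCHEMA new_app;
        CREATE TABLE legacy_app.users (id integer PRIMARY KEY);
        CREATE TABLE new_app.users (id integer PRIMARY KEY);

        SET search_path TO new_app, legacy_app, public;
        SELECT * FROM users;             -- first match on the path: new_app.users
        SELECT * FROM legacy_app.users;  -- qualify to reach the other one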

    Read the article

  • Why is this postgresql query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query

        SELECT DISTINCT p.*
        FROM points p, areas a, contacts c
        WHERE (p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
           AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889)
          AND p.resource_type = 'Contact'
          AND c.user_id = 6

    is extremely slow. The points table has fewer than 2000 records, but the query takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clause concerning the resource_type and user_id makes no difference. The latitude and longitude fields are both numeric(15,10) -- I need the precision for some calculations. There are many, many other queries in this project where points are compared, but none have execution time problems. What's going on?
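
    A hedged observation: the FROM list joins points, areas, and contacts with no join condition at all, so the planner builds a cross product that DISTINCT then has to collapse. One possible rewrite, assuming points links to contacts through a resource_id column (that column name is hypothetical):

        SELECT p.*
        FROM points p
        JOIN contacts c ON c.id = p.resource_id  -- hypothetical join column
        WHERE p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
          AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
          AND p.resource_type = 'Contact'
          AND c.user_id = 6;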

    Read the article

  • efficiently trimming postgresql tables

    - by agilefall
    I have about 10 tables with over 2 million records each and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

        1. Create a temp table for each large table and populate it with the newer data.
        2. Truncate the original tables.
        3. Copy the temp data back to the original tables using:
           insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.
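
    A commonly suggested variant of the copy-back step, sketched here with hypothetical names: drop the indexes before the bulk insert and rebuild them afterwards, since maintaining indexes row by row is often what makes the load slow. Whether it helps depends on the schema.

        BEGIN;
        CREATE TEMP TABLE events_keep AS
            SELECT * FROM events WHERE created_at >= now() - interval '90 days';
        TRUNCATE events;
        DROP INDEX IF EXISTS events_created_at_idx;  -- rebuild after the load
        INSERT INTO events SELECT * FROM events_keep;
        CREATE INDEX events_created_at_idx ON events (created_at);
        COMMIT;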

    Read the article

  • PostgreSQL function to iterate through/act on many rows with state

    - by Claudiu
    I have a database with columns looking like this:

        session | order | atype | amt
        --------+-------+-------+-----
              1 |     0 | ADD   |  10
              1 |     1 | ADD   |  20
              1 |     2 | SET   |  35
              1 |     3 | ADD   |  10
              2 |     0 | SET   |  30
              2 |     1 | ADD   |  20
              2 |     2 | SET   |  55

    It represents actions happening. Each session starts at 0. ADD adds an amount, while SET sets it. I want a function to return the end value of a session, e.g.

        SELECT session_val(1); -- returns 45
        SELECT session_val(2); -- returns 55

    Is it possible to write such a function/query? I don't know how to do any iteration-like things with SQL, or if it's possible at all.
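
    A minimal PL/pgSQL sketch of one way to do it, assuming the rows live in a table called actions (that table name is hypothetical; "order" needs quoting because it is a reserved word):

        CREATE OR REPLACE FUNCTION session_val(p_session integer)
        RETURNS integer AS $$
        DECLARE
            total integer := 0;
            r record;
        BEGIN
            FOR r IN
                SELECT atype, amt FROM actions   -- hypothetical table name
                WHERE session = p_session
                ORDER BY "order"
            LOOP
                IF r.atype = 'SET' THEN
                    total := r.amt;
                ELSE
                    total := total + r.amt;      -- ADD
                END IF;
            END LOOP;
            RETURN total;
        END;
        $$ LANGUAGE plpgsql;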

    Read the article

  • String literals and escape characters in postgresql

    - by rjohnston
    Attempting to insert an escape character into a table results in a warning. For example:

        create table EscapeTest (text varchar(50));
        insert into EscapeTest (text) values ('This is the first part \n And this is the second');

    produces the warning:

        WARNING: nonstandard use of escape in a string literal

    (Using PSQL 8.2.) Anyone know how to get around this?
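
    A hedged fix: the E'' syntax marks the literal as an escape string explicitly, which silences the warning on 8.2.

        insert into EscapeTest (text)
        values (E'This is the first part \n And this is the second');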

    Read the article

  • PHP syntax for postgresql Mixed-case table names

    - by yam
    I have the code below:

        <?php
        require "institution.php"
        /* in this portion, the query for the database connection is executed */
        $institution = $_POST['institutionname'];
        $sCampID = 'SELECT ins_id FROM institution where ins_name= '$institution' ';
        $qcampID = pg_query($sCampID)
            or die("Error in query: $query." . pg_last_error($connection));
        /* this portion outputs the ins_id */
        ?>

    My database previously had no mixed-case table names, which is why this query ran with no error at all. But because I've changed my database for some reasons and it now contains mixed-case table names, I have to change the code above into this:

        $sCampID = 'SELECT ins_id FROM "Institution" where ins_name= '$institution' ';

    where Institution has to be double-quoted. The query returned a parse error. When I removed this portion: where ins_name= '$institution', no error occurred. My question is: how do I solve this problem, where a table name containing mixed-case letters and a value stored in a variable ($institution in this case) have to be combined in a single SELECT statement? Your answers and suggestions will be very much appreciated.
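
    A hedged pointer rather than a definitive fix: the PHP single-quoted string both prevents interpolation of $institution and collides with the SQL literal's single quotes, so building the statement in a double-quoted PHP string (or, ideally, passing the value separately via pg_query_params) leaves the double quotes free for the identifier. The SQL that needs to reach the server looks like this ('Foo University' stands in for a hypothetical submitted value):

        -- Double quotes: case-sensitive identifier. Single quotes: string literal.
        SELECT ins_id FROM "Institution" WHERE ins_name = 'Foo University';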

    Read the article

  • Strange postgresql behavior

    - by Sergey
    Hi, can someone explain why it works like this?

        => select client_id from clients_to_delete;
        ERROR: column "client_id" does not exist at character 8

    But when I put it inside an IN(...)...

        => select * from orders where client_id in (select client_id from clients_to_delete);

    ...it works, and selects all rows in the orders table! Same when running delete/update. Why doesn't it produce an error like before? Thank you!
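
    A hedged explanation: an unqualified column in a subquery may resolve against the outer query, so since clients_to_delete has no client_id, the inner reference silently binds to orders.client_id, turning the subquery into a correlated one that matches every row. Qualifying the column makes the mistake visible:

        select * from orders
        where client_id in (select ctd.client_id from clients_to_delete ctd);
        -- ERROR: column ctd.client_id does not exist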

    Read the article

  • Postgresql full text search part of words

    - by Grezly
    Is PostgreSQL capable of doing a full text search based on 'half' a word? For example, I'm trying to search for "tree", but I tell Postgres to search for "tr". I can't find a solution that is capable of doing this. Currently I'm using this:

        select * from test, to_tsquery('tree') as q where vectors @@ q;

    But I'd like to do something like this:

        select * from test, to_tsquery('tr%') as q where vectors @@ q;
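
    For what it's worth, a hedged note: from PostgreSQL 8.4 onward, tsquery supports prefix matching with the :* suffix; on earlier versions this syntax is not available.

        select * from test, to_tsquery('tr:*') as q where vectors @@ q;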

    Read the article

  • Return multiple results using dynamic sql (postgresql 8.2)

    - by precose
    I want to loop through schemas and get a result set that looks like this:

        Count
        5
        834
        345
        34
        984

    However, I can't get it to return anything using dynamic SQL... I've tried everything, but 8.2 is being a real pain. Here is my function:

        CREATE OR REPLACE FUNCTION dwh.adam_test4()
        RETURNS void
        LANGUAGE plpgsql
        AS $function$
        DECLARE
            myschema text;
            rec RECORD;
        BEGIN
            FOR myschema IN
                select distinct c.table_schema, d.p_id
                from information_schema.tables t
                inner join information_schema.columns c
                    on (t.table_schema = t.table_schema and t.table_name = c.table_name)
                join dwh.sgmt_clients d on c.table_schema = lower(d.userid)
                where c.table_name = 'fact_members'
                  and c.column_name = 'debit_card'
                  and t.table_schema NOT LIKE 'pg_%'
                  and t.table_schema NOT IN ('information_schema', 'ad_delivery', 'dwh', 'users', 'wand', 'ttd')
                order by table_schema
            LOOP
                EXECUTE 'select count(ucic) from ' || myschema || '.' || 'fact_members where debit_card = ''yes''' into rec;
                RETURN rec;
            END LOOP;
        END
        $function$
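
    A hedged rework under the assumption that only the per-schema counts are needed: a void function cannot RETURN a value, and a plain RETURN would end the loop on the first schema anyway, so declare SETOF bigint and stream each count out with RETURN NEXT. The driving query is simplified here; the full join from the question can be substituted back.

        CREATE OR REPLACE FUNCTION dwh.adam_test4()
        RETURNS SETOF bigint
        LANGUAGE plpgsql
        AS $function$
        DECLARE
            myschema text;
            cnt bigint;
        BEGIN
            FOR myschema IN
                SELECT DISTINCT c.table_schema
                FROM information_schema.columns c
                WHERE c.table_name = 'fact_members'
                  AND c.column_name = 'debit_card'
                ORDER BY c.table_schema
            LOOP
                EXECUTE 'select count(ucic) from ' || myschema
                     || '.fact_members where debit_card = ''yes''' INTO cnt;
                RETURN NEXT cnt;   -- emit this schema's count and keep looping
            END LOOP;
            RETURN;
        END
        $function$;

        -- Usage: SELECT * FROM dwh.adam_test4();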

    Read the article

  • Stored Procedure in postgresql, multiple queries w/ aggregates.

    - by fenix
    I'm trying to write a stored procedure that takes some input parameters (obviously), runs multiple queries against those, takes the output from those queries, does calculations, and from the calculations and the original queries outputs a formatted text string like:

        Number of rows for max(Z) matching conditions x and y, of total rows matching x (x&y/x*100)

    To explain the max(Z) bit: this will be the username field. It won't matter which actual entry is picked, because the WHERE clause will filter the results by user id. Is there a saner way to do this?
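
    Possibly saner: a single pass with conditional aggregates, sketched below with hypothetical table and column names. count(*) gives the rows matching x (here, the user filter), the CASE sum gives the rows also matching y, and max(username) supplies the label exactly as described.

        SELECT max(username) || ': '
               || sum(CASE WHEN completed THEN 1 ELSE 0 END)
               || ' of ' || count(*) || ' ('
               || round(100.0 * sum(CASE WHEN completed THEN 1 ELSE 0 END)
                        / NULLIF(count(*), 0), 1)   -- NULLIF guards the zero-row case
               || '%)'
        FROM events
        WHERE user_id = 42;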

    Read the article

  • IntegrityError with Boolean Fields and Postgresql

    - by xRobot
    I have this simple Blog model:

        class Blog(models.Model):
            title = models.CharField(_('title'), max_length=60, blank=True, null=True)
            body = models.TextField(_('body'))
            user = models.ForeignKey(User)
            is_public = models.BooleanField(_('is public'), default=True)

    When I insert a blog in the admin interface, I get this error:

        IntegrityError at /admin/blogs/blog/add/
        null value in column "is_public" violates not-null constraint

    Why???
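
    Whatever the Django-side cause turns out to be, the traceback means an effective NULL reached the column, either sent explicitly or as a missing value with no database-side default. The behaviour at the PostgreSQL level, distilled into a hypothetical stand-in table:

        -- A column default only applies when the column is omitted;
        -- an explicit NULL still violates NOT NULL.
        CREATE TABLE blog_demo (is_public boolean NOT NULL DEFAULT true);
        INSERT INTO blog_demo DEFAULT VALUES;             -- ok: the default fills in true
        INSERT INTO blog_demo (is_public) VALUES (NULL);  -- fails exactly like the traceback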

    Read the article

  • Querying Postgresql with a very large result set

    - by sanity
    In an application I need to query a Postgres DB where I expect tens or even hundreds of millions of rows in the result set. I might do this query once a day, or even more frequently. The query itself is relatively simple, although it may involve a few JOINs. My question is: how smart is Postgres with respect to avoiding having to seek around the disk for each row of the result set? Given the time required for a hard disk seek, this could be extremely expensive. If this isn't an issue, how does Postgres avoid it? How does it know how to lay out data on the disk such that it can be streamed out efficiently in response to this query?
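
    Half an answer, hedged: for large scans Postgres reads table pages sequentially rather than seeking per row, so the bigger practical risk is usually the client buffering the whole result set in memory. One way around that is to stream through a server-side cursor in batches (table names hypothetical):

        BEGIN;
        DECLARE big_scan CURSOR FOR
            SELECT o.* FROM orders o JOIN customers c ON c.id = o.customer_id;
        FETCH FORWARD 10000 FROM big_scan;  -- repeat until no rows come back
        CLOSE big_scan;
        COMMIT;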

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS, and to do that I'm trying to understand which of the processes eat the most memory. Reading top, here's what I get:

        Mem:   366544k total,  321396k used,   45148k free,     380k buffers
        Swap: 1048572k total,  592388k used,  456184k free,    7756k cached

          PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        12339 ruby    20   0  844m  74m 2440 S    0 20.8  0:24.84 ruby
        12363 ruby    20   0  844m  73m 1576 S    0 20.6  0:00.26 ruby
        21117 ruby    20   0  171m  33m 1792 S    0  9.3  2:03.98 ruby
        11846 ruby    20   0  858m  21m 1820 S    0  6.0  0:09.15 ruby
        21277 ruby    20   0  219m  11m 1648 S    0  3.2  2:00.98 ruby
          792 root    20   0  266m  10m 1024 S    0  3.0  1:40.06 ruby
          532 mysql   20   0  234m 4760 1040 S    0  1.3  0:41.58 mysqld
          793 root    20   0  250m 4616  984 S    0  1.3  1:20.55 ruby
          586 root    20   0  156m 4532  848 S    0  1.2  6:17.10 god
        12315 ruby    20   0  175m 2412 1900 S    0  0.7  0:07.55 ruby
         3844 root    20   0 44036 2132 1028 S    0  0.6  1:08.22 ruby
        10939 ruby    20   0  179m 1884 1724 S    0  0.5  0:08.33 ruby
         4660 ruby    20   0  229m 1592 1440 S    0  0.4  2:55.46 ruby
         3879 nobody  20   0 37428  964  520 S    0  0.3  0:01.99 nginx

    As you can see, my memory is about 90% used (which is my issue), but when you add up the %MEM values, they only come to about 50-60%. Same thing with RES: it doesn't add up to ~350m. Why? Am I misunderstanding their meaning? Thanks

    Read the article

  • Beginner to RUBY - require_relative problem

    - by WANNABE
    Hi, I'm learning Ruby (using version 1.8.6) on Windows 7. When I try to run the stock_stats.rb program below, I get the following error:

        C:\Users\Will\Desktop\ruby>ruby stock_stats.rb
        stock_stats.rb:1: undefined method `require_relative' for main:Object (NoMethodError)

    I have three v. small code files:

    stock_stats.rb

        require_relative 'csv_reader'

        reader = CsvReader.new

        ARGV.each do |csv_file_name|
          STDERR.puts "Processing #{csv_file_name}"
          reader.read_in_csv_data(csv_file_name)
        end

        puts "Total value = #{reader.total_value_in_stock}"

    csv_reader.rb

        require 'csv'
        require_relative 'book_in_stock'

        class CsvReader
          def initialize
            @books_in_stock = []
          end

          def read_in_csv_data(csv_file_name)
            CSV.foreach(csv_file_name, headers: true) do |row|
              @books_in_stock << BookInStock.new(row["ISBN"], row["Amount"])
            end
          end

          # later we'll see how to use inject to sum a collection
          def total_value_in_stock
            sum = 0.0
            @books_in_stock.each {|book| sum += book.price}
            sum
          end

          def number_of_each_isbn
            # ...
          end
        end

    book_in_stock.rb

        require 'csv'
        require_relative 'book_in_stock'

        class CsvReader
          def initialize
            @books_in_stock = []
          end

          def read_in_csv_data(csv_file_name)
            CSV.foreach(csv_file_name, headers: true) do |row|
              @books_in_stock << BookInStock.new(row["ISBN"], row["Amount"])
            end
          end

          # later we'll see how to use inject to sum a collection
          def total_value_in_stock
            sum = 0.0
            @books_in_stock.each {|book| sum += book.price}
            sum
          end

          def number_of_each_isbn
            # ...
          end
        end

    Thanks in advance for any help.

    Read the article

  • SQLite3-ruby extremely slow under 1.9.1?

    - by NilObject
    I decided to upgrade my server to Ruby 1.9.1, and a lot of things are indeed much faster. However, I have a process that dumps a database to SQLite, and it's become glacially slow. What used to take 30 seconds now takes upwards of 10 minutes. The code does several create table statements and then lots of inserts. The insert statements nearly all use placeholders (?), so SQLite is doing the heavy lifting of binding the parameters. In short, I can't see why this particular usage has slowed down so much. Does anyone know of a problem that might have caused it? I'm using sqlite3-ruby (1.2.5), and I'm hoping someone has encountered this and profiled it. If not, I guess I'm going to learn how to profile Ruby code :)

    Read the article

  • Multiple delivery confirmations per message with Ruby SMPP

    - by Santiago Palladino
    I am using the ruby-smpp library to send/receive SMS. Right now we are sending messages to two different servers. One of them works perfectly, but the other one sends multiple DELIVRD confirmations for each message. And by multiple I mean hundreds of confirmations per message in some cases. Does anyone know any possible reason behind this? Since everything works perfectly with the other server, I suspect something in that company's implementation of the protocol rather than a bug in the specific Ruby SMPP library. We are using SMPP v3.4.

    Read the article

  • Ruby on rails - Radrails IDE - mysql issues.

    - by ThomasReggi
    I have been trying to get Ruby on Rails to migrate a database for the good part of today; the problems all seem to come down to this issue. Can someone please help? If it's a RadRails-specific problem, I guess I'll take this to their forums. Something is telling me this is an easy fix.

        >rake db:migrate
        (in C:/Users/Thomas/My Documents/Aptana RadRails Workspace/rp)
        !!! The bundled mysql.rb driver has been removed from Rails 2.2.
        Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        126: The specified module could not be found. - C:/Ruby/lib/ruby/gems/1.8/gems/mysql-2.8.1-x86-mswin32/lib/1.8/mysql_api.so
        (See full trace by running task with --trace)

    Read the article

  • Shortest Ruby Quine

    - by AaronThomson
    Just finished reading this blog post: http://www.skorks.com/2010/03/an-interview-question-that-prints-out-its-own-source-code-in-ruby/

    In it, the author argues the case for using a quine as an interview question. I'm not sure I agree, but that's not what this question is about. He goes on to construct a quine in Ruby and refactor it to make it shorter. He then challenges the reader to try to make it even shorter. I played around with it for a while and came up with the following:

        s="s=;puts s[0,2]+34.chr+s+34.chr+s[2,36]";puts s[0,2]+34.chr+s+34.chr+s[2,36]

    This is the first time I have ever attempted a quine, and I can't figure out how to make it any shorter. What is the shortest Ruby quine you can come up with? Please post an explanation if your implementation requires it.

    Read the article

  • How to parse AMF data in Ruby?

    - by Matchu
    So I see that there are a few Rails plugins for serving AMF. However, is there a library that I can use in a Ruby environment to act as an AMF client: to read AMF data, and deserialize it into a Ruby object? If not, how could I best go about using tools built in other languages? I suppose I could write something in Python or Java or whatever, and call it from Ruby directly via backticks... but I'd first like to ensure that there isn't really any better option. Thanks!

    Read the article

  • installing ruby/gnome2 on ruby1.9

    - by sawa
    My purpose is to install Ruby/GNOME2 and make it work with Ruby 1.9 on Ubuntu 9.10. I already have Ruby/GNOME2 working with Ruby 1.8, but I need to make it work with Ruby 1.9; Ruby 1.9 itself is installed and working. When I run the following within ruby-gnome2-all-0.19.3:

        ruby1.9 extconf.rb

    it eventually gives me:

        Target libraries:  glib, gdkpixbuf, pango, atk, gtk, gconf, libglade
        Ignored libraries: gnomeprintui, panel-applet, gtksourceview, gtksourceview2,
            bonoboui, bonobo, libart, goocanvas, rsvg, gnomeprint, gstreamer, vte,
            gnomevfs, poppler, gnomecanvas, gtkglext, gnome, gtkmozembed, gtkhtml2

    so it seems some packages failed to install. When I look at the log for, say, the gnomeprintui part, it exits after printing:

        checking for libgnomeprintui-2.2... no

    but apt-get says I have the newest version of it. Can anyone tell me how to resolve this problem?

    Read the article

  • How to log with Ruby and eventmachine?

    - by Justin
    I'm writing an application using Ruby and the EventMachine library. I really like the idea of non-blocking I/O and event-driven systems; the problem I'm running into is logging. I'm using Ruby's standard logger library. It's not that logging takes forever, but it seems like something that shouldn't block, and it does. Is there a library out there somewhere that extends Ruby's standard logger implementation to be non-blocking, or should I just call EM::defer for my logging calls? Is there a way I can make EventMachine do this for me already?

    Read the article
