Search Results

Search found 1505 results on 61 pages for 'postgresql 9 0'.

  • Replacing whitespace with sed in a CSV (to use w/ postgres copy command)

    - by Wells
    I iterate through a collection of CSV files in bash, running:

        iconv --from-code=ISO-8859-1 --to-code=UTF-8 ${FILE} | \
        sed -e 's/\"//g' | \
        sed -e 's/, /,/g' \
        > ${FILE}.utf8

    iconv fixes the UTF-8 characters, the first sed call removes the double-quote characters, and the final sed call is supposed to remove the whitespace around the commas. HOWEVER, I still have a line like this in the saved file:

        FALSE,,,, 2.40,,

    The COPY command in Postgres is kind of dumb, so it thinks " 2.40" is not valid syntax for a numeric value. Where am I going wrong with my processing of the CSV file? Thanks!
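
    One way to sidestep the whitespace problem entirely is to load every field as text into a staging table and do the trimming inside Postgres. A minimal sketch, assuming a seven-field file like the sample line above (the staging and target table names and columns are hypothetical):

        -- load every field as raw text, then trim and cast inside the database
        CREATE TEMP TABLE staging (flag text, c2 text, c3 text, c4 text, amount text, c6 text, c7 text);
        COPY staging FROM '/tmp/data.csv.utf8' WITH CSV;
        INSERT INTO target (flag, amount)
        SELECT flag::boolean, NULLIF(trim(amount), '')::numeric
        FROM staging;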

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres's string_to_array or regexp_split_to_array functions; then how do I get its last element?
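
    Indexing the array at its upper bound gets the last element without any new function; a sketch of the ORDER BY, assuming the column is called name on a table called people (both names are assumptions):

        -- subscript the split array at its last position
        SELECT *
        FROM people
        ORDER BY (string_to_array(name, ' '))[array_upper(string_to_array(name, ' '), 1)];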

  • SQL Server bulk copy out / Postgres COPY FROM infile

    - by Chris Curvey
    I'm starting a conversion of a system from MS SQL Server to Postgres. I have the table structures converted, and I use "bcp" to get the data out of SQL Server. When I try to load the resulting file with COPY, I get:

        ERROR: invalid byte sequence for encoding "UTF8": 0x80
        HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
        CONTEXT: COPY cm_outgoing, line 200: "200 c:\temp\200.xml 2009-10-10 01:50:44.000 1900-01-01 00:00:00.000"

    I've already used "sed" to get rid of the NUL (0x00) entries in the file, and I can't find any instances of 0x80 in the file that I'm trying to import. Any thoughts? Is there an easier way?
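
    Byte 0x80 is the Euro sign in the Windows-1252 code page, so a plausible explanation is that bcp wrote the file in the SQL Server machine's ANSI code page rather than UTF-8. If so, declaring the file's real encoding to Postgres is one fix; a sketch (WIN1252 is an assumption about the source data):

        -- tell the server how the incoming file is actually encoded
        SET client_encoding TO 'WIN1252';
        COPY cm_outgoing FROM '/path/to/cm_outgoing.dat';
        RESET client_encoding;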

  • Left outer joins that don't return all the rows from T1

    - by Summer
    Left outer joins should return at least one row from the T1 table if it matches the conditions. But what if the left outer join performs the join successfully, then finds that another criterion is not satisfied? Is there a way to get the query to return a row with the T1 values and the T2 values set to NULL? Here's the specific query, in which I'm trying to return a list of candidates, and the user's support for those candidates IF such support exists:

        SELECT c.id, c.name, s.support
        FROM candidates c
        LEFT JOIN support s ON s.candidate_id = c.id
        WHERE c.office_id = 5059
          AND c.election_id = 92
          AND (s.user_id = 2 OR s.user_id IS NULL) -- This line seems like the problem
        ORDER BY c.last_name, c.name

    The query joins the candidates and support tables, but then finds that it was a different user who supported this candidate (user_id = 3, say). Then the candidate disappears entirely from the result set.
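
    The usual fix is to move the per-user condition out of the WHERE clause and into the join condition, so it restricts which support rows get joined rather than which candidates survive the filter; a sketch against the same tables:

        SELECT c.id, c.name, s.support
        FROM candidates c
        LEFT JOIN support s
          ON s.candidate_id = c.id
          AND s.user_id = 2              -- filters the joined rows, not the candidates
        WHERE c.office_id = 5059
          AND c.election_id = 92
        ORDER BY c.last_name, c.name;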

  • Getting a date value from a Postgres table column and checking whether it's later than today's date

    - by Roland
    I have a Postgres table called clients. The name column contains values like:

        test23233 [987665432,2014-02-18]

    At the end of the value is a date. I need to compare this date and return all records where it is more recent than today. I tried:

        SELECT id, name FROM clients WHERE name ~ '(\d{4}\-\d{1,2}\-\d{1,2})';

    but this isn't returning any values. How would I go about achieving the results I want?
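
    Assuming the date always appears in that yyyy-mm-dd form, substring() with a POSIX regex can pull it out so it can be cast to a real date and compared; a sketch (on servers where standard_conforming_strings is off, the pattern may need to be written as an E'...' literal with doubled backslashes):

        -- extract the embedded date, cast it, and compare against today
        SELECT id, name
        FROM clients
        WHERE substring(name FROM '(\d{4}-\d{1,2}-\d{1,2})')::date > current_date;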

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make it a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) holds about 46 GB of data, but the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I don't have the disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
          id uuid NOT NULL,
          url character varying(1024) NOT NULL,
          firstcrawled timestamp with time zone NOT NULL,
          lastcrawled timestamp with time zone NOT NULL,
          recipeid uuid,
          html text NOT NULL,
          lasterror character varying(1024),
          missingings smallint,
          CONSTRAINT pages_pkey PRIMARY KEY (id),
          CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
          ON indexer.pages USING btree (missingings)
          WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
          ON indexer.pages USING btree (recipeid)
          WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
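
    Since Postgres MVCC writes a complete new version of every updated row, one common way to keep such a rewrite manageable without extra disk space is to update in bounded batches and vacuum between them; a sketch (the batch size is arbitrary):

        -- update a bounded chunk per statement so bloat and lock time stay small
        UPDATE indexer.pages
        SET lasterror = NULL
        WHERE id IN (
          SELECT id FROM indexer.pages
          WHERE lasterror IS NOT NULL
          LIMIT 10000
        );
        -- repeat until no rows are updated, running VACUUM indexer.pages between batches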

  • Encoding in SQL to CSV

    - by Z77
    When I execute a COPY ... TO ... CSV query, I create a CSV file. But when I open it in Excel, the columns with names that should contain national characters don't display as they should. So my question is: is it possible, within the SQL query, to change the encoding to UTF-8? Or something else? I want the newly created CSV file to be the final product for a user on the web. I hope someone understood what I want :)
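
    On PostgreSQL 9.1 and later, COPY takes an explicit ENCODING option; on older servers, the client_encoding setting controls the conversion instead. A sketch of both (the table name and path are placeholders; note that Excel often misreads UTF-8 CSVs that lack a byte-order mark, so a Windows code page such as WIN1250 may display better for that audience):

        -- 9.1+: write the file in an explicit encoding
        COPY (SELECT * FROM clients) TO '/tmp/clients.csv'
          WITH (FORMAT csv, HEADER, ENCODING 'UTF8');

        -- older servers: COPY output is converted per client_encoding
        SET client_encoding TO 'UTF8';
        COPY (SELECT * FROM clients) TO '/tmp/clients.csv' WITH CSV HEADER;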

  • postgres too slow

    - by Killercode
    Hi, I'm doing massive tests on a Postgres database. Basically, I inserted 40,000,000 records into table1 and 80,000,000 into table2, and after this I deleted all those records. Now if I do SELECT * FROM table1 it takes 199,000 ms. I can't understand what's happening. Can anyone help me with this?
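
    DELETE doesn't shrink the table: the deleted rows remain on disk as dead row versions, and a sequential scan still has to read past all of them. Assuming autovacuum hasn't caught up yet, reclaiming the space explicitly is the usual remedy:

        -- make dead row space reusable and refresh planner statistics
        VACUUM ANALYZE table1;

        -- actually shrink the file and return space to the OS (takes an exclusive lock)
        VACUUM FULL table1;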

  • Postgres turn on log_statement programmatically

    - by rwallace
    I want to turn on logging of all SQL statements that modify the database. I could get that on my own machine by setting the log_statement flag in the configuration file, but it needs to be enabled on the user's machine. How do you enable it from program code? (I'm using Python with psycopg2 if it matters.)
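
    log_statement can be changed with plain SQL, so it can be issued from psycopg2 like any other statement; note that changing this particular setting requires superuser privileges, and the value 'mod' logs exactly the data-modifying statements (the database name below is a placeholder):

        -- persist the setting for one database (requires superuser)
        ALTER DATABASE mydb SET log_statement = 'mod';

        -- or enable it just for the current session
        SET log_statement = 'mod';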

  • 2 Rails Apps, 1 Database (using Heroku)

    - by Paul A.
    I've made 2 apps, App A and App B. App A's sole purpose is to allow users to sign up, and App B's purpose is to take select users from App A and email them. Since App A and App B were created independently and are hosted in 2 separate Heroku instances, how can App B access the users database in App A? Is there a way to push certain relevant rows from App A to App B?
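
    If both apps keep their own databases, one Postgres-level option is the dblink contrib module, which lets App B's database query App A's directly; a rough sketch, assuming dblink is available on the plan and that App A has a users table (the connection string and columns are placeholders):

        -- pull selected rows from App A's database into App B's
        SELECT id, email
        FROM dblink('host=appa-host dbname=appa_db user=appa_user password=secret',
                    'SELECT id, email FROM users')
             AS t(id integer, email text);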

  • Heroku Postgres Error: PGError: ERROR: relation "organizations" does not exist (ActiveRecord::StatementInvalid)

    - by Mark
    I'm having a problem deploying my Rails app to Heroku, where this error is thrown when trying to access the app:

        PGError: ERROR: relation "organizations" does not exist (ActiveRecord::StatementInvalid)
        SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
        FROM pg_attribute a
        LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
        WHERE a.attrelid = '"organizations"'::regclass
          AND a.attnum > 0
          AND NOT a.attisdropped
        ORDER BY a.attnum

    Anybody have any ideas? This is a first for me, especially because I've been working with Heroku for a year on other apps and haven't seen anything like this. Of course, everything works on local SQLite. Thanks in advance for any help! --Mark
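
    That failing statement is just Rails introspecting the table's columns, so the error really means the organizations table doesn't exist in the Heroku database, most often because migrations haven't been run there. A quick diagnostic from a database console (this only confirms the problem, it doesn't fix it):

        -- list the tables that actually exist in the deployed database
        SELECT tablename
        FROM pg_tables
        WHERE schemaname = 'public'
        ORDER BY tablename;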

  • Update int array based on parent

    - by Pickels
    I have the following int[] values in my database:

        '{0}'
        '{0,0}'
        '{0,0,0}'
        '{0,0,0,0}'

    This column is used to sort my tree data. Now when a parent updates its order, the children should also update. For example, if the second record updates its order to 1, it should result in the following:

        '{0}'
        '{0,1}'
        '{0,1,0}'
        '{0,1,0,0}'

    So I was wondering what the query would be to update records 3 and 4. In case it's not clear what I am asking, leave a comment and I can add additional information.
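
    Postgres allows assigning to a single array element in an UPDATE, and array slices can be compared, so a query along these lines should cascade the change to all descendants (the table and column names are invented):

        -- set position 2 to 1 on every row whose path starts with {0,0}
        UPDATE nodes
        SET sort_path[2] = 1
        WHERE array_length(sort_path, 1) >= 2
          AND sort_path[1:2] = ARRAY[0,0];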

  • Store the day of the week and time

    - by bsiddiqui
    I have a two-part question about storing day(s) of the week and time in a database. I'm using Rails 4.0, Ruby 2.0.0, and Postgres. I have certain events, and those events have a schedule. For the event Skydiving, for example, I might have Tuesday and Wednesday at 3 pm. 1) Is there a way for me to store the record for Tuesday and Wednesday in one row, or should I have two records? 2) What is the best way to store the day and time? Is there a way to store day of week and time (not datetime), or should these be separate columns? If they should be separate, how would you store the day of the week? I was thinking of storing them as integer values (0 for Sunday, 1 for Monday, etc.) since that's how the wday method of the Time class does it. Any suggestions would be super helpful. Thanks!
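
    One row per (day, time) pair keeps the queries simple, and Postgres has a native time type that stores a time of day without a date; a schema sketch (all names are invented, and it assumes an existing events table):

        CREATE TABLE event_schedules (
          id          serial PRIMARY KEY,
          event_id    integer NOT NULL REFERENCES events(id),
          day_of_week smallint NOT NULL CHECK (day_of_week BETWEEN 0 AND 6),  -- 0 = Sunday, matching Ruby's Time#wday
          start_time  time NOT NULL
        );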

  • Hibernate - Postgres - target lists can have at most 1664 entries

    - by Vineyard
    We are using Hibernate with Postgres 8.3.x. Our entities are mapped many-to-one with eager fetching, and we have multiple associations with many-to-one mappings. As we added new columns to existing entities, we started getting the error below:

        target lists can have at most 1664 entries

    I searched the internet, and it seems this is due to the large number of selected columns in the SQL query generated by Hibernate. Can anybody please let us know if there is any configuration (in Postgres) to raise the maximum number of columns, or any other solution to this issue? Thank you in advance.

  • Does indexing affect only the WHERE clause?

    - by andre matos
    If I have something like:

        CREATE INDEX idx_myTable_field_x ON myTable USING btree (field_x);

        SELECT COUNT(field_x), field_x
        FROM myTable
        GROUP BY field_x
        ORDER BY field_x;

    Imagine myTable with around 500,000 rows and most field_x values being unique. Since I don't use any WHERE clause, will the created index have any effect at all on my query? Edit: I'm asking this question because I don't see any relevant difference between query times before and after creating the index; they always take about 8 seconds (which, of course, is too much time!). Is this behaviour expected?
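
    EXPLAIN ANALYZE shows directly whether the planner touches the index. Note that releases before 9.2 have no index-only scans, so for a query that must visit every row the planner will usually prefer a sequential scan no matter what indexes exist:

        -- inspect the plan the server actually chooses
        EXPLAIN ANALYZE
        SELECT COUNT(field_x), field_x
        FROM myTable
        GROUP BY field_x
        ORDER BY field_x;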

  • Testing stored procedures

    - by giri
    Hi, how do I test procedures with record-type parameters? I have a procedure which takes test_ap, basic, and user_name as inputs, where test_ap is of a record/row type, basic is an array of a record type, and user_name is character varying. I need to test the procedure in pgAdmin:

        test_client(test_ap test_base, basic test_base_detail[], user_name character varying)

    Any suggestions, please?
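
    Composite values can be built inline with ROW(...) casts, and the array argument with an ARRAY[...] constructor, so a test call can be issued as a plain SELECT in pgAdmin; the field values below are hypothetical, since the attributes of test_base and test_base_detail aren't shown:

        -- replace the made-up values with ones matching the real type definitions
        SELECT test_client(
          ROW(1, 'some text')::test_base,
          ARRAY[ ROW(1, 'detail a')::test_base_detail,
                 ROW(2, 'detail b')::test_base_detail ],
          'some_user'
        );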

  • Default tablespace for indexes in postgres

    - by tom
    Just wondering if it's possible to set a default tablespace in Postgres for keeping indexes. I would like the databases to live in the default Postgres tablespace but put the indexes on a different set of disks, just to keep the I/O traffic separated. It does not appear to me that this can be done without going in and running an ALTER INDEX ... SET TABLESPACE command, after which the index is moved and will stay there; but the databases and indexes are part of a Django app, so non-Django intervention can cause some problems.
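
    There is no index-only default: the default_tablespace setting applies to any object created without an explicit TABLESPACE clause, indexes and tables alike. So the options are setting it around index creation, or moving existing indexes individually (the tablespace name is a placeholder):

        -- applies to anything created in this session without an explicit TABLESPACE
        SET default_tablespace = 'index_disks';
        CREATE INDEX idx_example ON some_table (some_column);
        RESET default_tablespace;

        -- move an index that already exists
        ALTER INDEX idx_example SET TABLESPACE index_disks;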

  • How to declare a variable in Postgres

    - by user307880
    I tried to declare a variable in code like this, but it doesn't work. Can you tell me what the problem is?

        DECLARE p_country VARCHAR;
        p_country := '';
        SELECT p_country;

    Running it gives:

        ERROR: syntax error at or near "VARCHAR"
        LINE 2: p_country VARCHAR;
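
    Plain SQL has no DECLARE; variable declarations only exist inside PL/pgSQL. Since version 9.0, an anonymous DO block provides that without creating a function; a minimal sketch:

        -- anonymous PL/pgSQL block (PostgreSQL 9.0+)
        DO $$
        DECLARE
          p_country varchar := '';
        BEGIN
          p_country := 'Spain';
          RAISE NOTICE 'p_country = %', p_country;
        END
        $$;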

  • Join using combined conditions on one join table

    - by Nathan Wienert
    I have a join table joining songs to genres. The table has a 'source' column that's used to identify where the genre was found. Genres are found from blogs, artists, tags, and posts. So:

        songs:      id
        song_genre: song_id, source, genre_id
        genres:     id

    What I want to build is a song SELECT query that works something like this, given that I already have a genre_id:

        IF exists a song_genre with source='artist' AND a song_genre with source='blog'
        OR exists a song_genre with source='artist' AND a song_genre with source='post'
        OR exists a song_genre with source='tag'

    I was going to do it with a bunch of joins, but I'm sure I'm not doing it very well. Using Postgres 9.1.
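
    EXISTS subqueries map onto that boolean expression almost one-for-one and avoid multiplying join rows; a sketch that assumes the genre_id in hand is 123 and folds the two artist branches into source IN ('blog', 'post'):

        SELECT s.*
        FROM songs s
        WHERE (EXISTS (SELECT 1 FROM song_genre g
                       WHERE g.song_id = s.id AND g.genre_id = 123 AND g.source = 'artist')
               AND EXISTS (SELECT 1 FROM song_genre g
                           WHERE g.song_id = s.id AND g.genre_id = 123 AND g.source IN ('blog', 'post')))
           OR EXISTS (SELECT 1 FROM song_genre g
                      WHERE g.song_id = s.id AND g.genre_id = 123 AND g.source = 'tag');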

  • Operator precedence in Postgres

    - by user24691
    I have created a new division operator in Postgres (visible in the pg_operator table) because I want division by zero to return 0. I wrote this:

        CREATE OPERATOR / (
          PROCEDURE = zero_division,
          LEFTARG = double precision,
          RIGHTARG = double precision
        );

    where zero_division is:

        CREATE OR REPLACE FUNCTION zero_division(double precision, double precision)
          RETURNS double precision AS
        'select case when $2 = 0 then 0 else $1 / $2::real end;'
          LANGUAGE sql IMMUTABLE
          COST 100;

    When I run value / 0, I still get the division-by-zero error.
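
    Two things make the built-in operator win here: a bare 0 literal is typed integer, so integer division is resolved first, and a user-defined operator lives in a user schema (such as public), which the implicit pg_catalog lookup shadows by default. Schema-qualifying the operator with matching operand types sidesteps both; a sketch assuming the operator was created in public:

        -- invoke the custom operator explicitly, with double precision operands
        SELECT 10.0::double precision OPERATOR(public./) 0::double precision;  -- expected: 0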
