Search Results

Search found 3721 results on 149 pages for 'postgresql newbie'.

  • problem with TEMPORARY TABLE

    - by Z77
    Within PHP I do: 1.) A temporary table is created: CREATE TEMP TABLE new_table AS SELECT ... FROM ...; 2.) After that I want to use this table to create a shapefile: shell_exec("pgsql2shp ... -u username -P password ..."); Separately those two things work, but creating a temporary table and then using it in pgsql2shp does not work. I presume this is because a temporary table only lasts until the end of the session, but to create the shapefile I need to supply a username and password, which means a new session starts and the temporary table is dropped before I can use it for shape creation. Any tip on how to solve this? Thank you!
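
    One workaround consistent with the problem described above (a sketch only; the source query and table names are placeholders) is to materialise the result in an ordinary table that survives across connections, let pgsql2shp read it from its own session, and drop it afterwards:

        -- hypothetical stand-in for the original SELECT that fed the temp table
        CREATE TABLE export_new_table AS
        SELECT * FROM source_table;

        -- pgsql2shp can now read the table from its own connection, e.g.:
        --   pgsql2shp -f out_shape -u username -P password dbname export_new_table

        -- once the shapefile has been written, clean up
        DROP TABLE export_new_table;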

    Read the article

  • How to find all points away from some polygon?

    - by Z77
    What I need is to find all points that are more than 10 km away from a rectangle. The point geometry is the_geom1, the rectangle (polygon) geometry is the_geom2, and the SRID of both is 4258. I tried: SELECT * FROM table1,table2 WHERE ST_DWithin(table1.the_geom1,table2.the_geom2,10000) and table1.gid=2; but the result is not OK.
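
    SRID 4258 is a geographic (lon/lat) coordinate system, so the 10000 in ST_DWithin above is interpreted as degrees, not metres. A sketch of one fix, assuming a PostGIS version whose geography type accepts this SRID (recent releases do; otherwise ST_Transform to a metric SRID first), with the test negated so it returns points farther than 10 km away:

        SELECT p.*
        FROM table1 p, table2 r
        WHERE NOT ST_DWithin(p.the_geom1::geography, r.the_geom2::geography, 10000)  -- 10000 metres
          AND p.gid = 2;  -- gid filter kept from the original query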

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, because I want to use it as an ORDER BY criterion. I would rather not create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres' string_to_array or regexp_split_to_array functions, but then how do I get its last element?
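
    A sketch of one way to do this with built-in functions only (the table and column names are hypothetical): subscript the split array with its upper bound, so the whole expression can sit directly in the ORDER BY clause:

        SELECT *
        FROM items
        ORDER BY (string_to_array(description, ' '))[
                   array_upper(string_to_array(description, ' '), 1)
                 ];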

    Read the article

  • Replacing whitespace with sed in a CSV (to use w/ postgres copy command)

    - by Wells
    I iterate through a collection of CSV files in bash, running: iconv --from-code=ISO-8859-1 --to-code=UTF-8 ${FILE} | \ sed -e 's/\"//g' | \ sed -e 's/, /,/g' \ > ${FILE}.utf8 I run iconv to fix the UTF-8 characters, then the first sed call removes the double-quote characters, and the final sed call is supposed to remove leading and trailing whitespace around the commas. However, I still have a line like this in the saved file: FALSE,,,, 2.40,, The COPY command in Postgres is kind of dumb, so it thinks " 2.40" is not valid syntax for a numeric value. Where am I going wrong with my processing of the CSV file? Thanks!

    Read the article

  • Getting a date value from a Postgres table column and checking whether it's later than today's date

    - by Roland
    I have a Postgres table called clients. The name column contains values like test23233 [987665432,2014-02-18]. At the end of the value is a date; I need to compare this date and return all records where it is later than today. I tried select id,name FROM clients where name ~ '(\d{4}\-\d{1,2}\-\d{1,2})'; but this isn't returning any values. How would I go about achieving the result I want?
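
    A sketch of one way to pull the trailing date out and compare it (assuming the date always appears at the end of the value in YYYY-MM-DD form):

        SELECT id, name
        FROM clients
        WHERE substring(name FROM '(\d{4}-\d{1,2}-\d{1,2})')::date > current_date;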

    Read the article

  • sql server bulk copy out/postgres copy from infile

    - by Chris Curvey
    I'm starting a conversion of a system from MS SQL Server to Postgres. I have the table structures converted, and I use "bcp" to get the data out of SQL Server, but loading the file with COPY fails: ERROR: invalid byte sequence for encoding "UTF8": 0x80 HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". CONTEXT: COPY cm_outgoing, line 200: "200 c:\temp\200.xml 2009-10-10 01:50:44.000 1900-01-01 00:00:00.000" I've already used "sed" to get rid of the NUL (0x00) entries in the file, and I can't find any instances of 0x80 in the file that I'm trying to import. Any thoughts? Is there an easier way?
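
    Byte 0x80 is not valid UTF-8 but is an ordinary character in the Windows code pages, so one common workaround (a sketch; it assumes the bcp output is really Windows-1252/Latin-1 encoded, and the file path is a placeholder) is to tell the server what encoding the file actually uses before running COPY:

        -- declare the real encoding of the bcp export; adjust if the source encoding differs
        SET client_encoding = 'WIN1252';

        COPY cm_outgoing FROM '/path/to/cm_outgoing.dat';

        RESET client_encoding;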

    Read the article

  • Postgres turn on log_statement programmatically

    - by rwallace
    I want to turn on logging of all SQL statements that modify the database. I could get that on my own machine by setting the log_statement flag in the configuration file, but it needs to be enabled on the user's machine. How do you enable it from program code? (I'm using Python with psycopg2 if it matters.)
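
    A sketch of one way to do this from a client connection such as psycopg2, assuming the connecting role has sufficient privileges (changing log_statement normally requires superuser rights); the database name below is a placeholder:

        -- persistent, per-database setting; applies to new sessions
        ALTER DATABASE mydb SET log_statement = 'mod';  -- 'mod' logs statements that modify data

        -- or, for the current session only:
        SET log_statement = 'mod';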

    Read the article

  • postgres - group by on multiple columns - master/detail type table

    - by smpillay
    I have a table order (orderid, ordernumber, clientid, orderdesc, etc.) and a corresponding status for each order in an order_status table (statusid, orderid, statusdesc, statusNote, statustimestamp). Say I have a record in order as below:

        orderid  ordernumber  clientid  orderdesc
        1111     00980065     ABC       blah..blah..

    and the corresponding status entries:

        statusid  orderid  statusdesc    statusNote   statustimestamp
        11        1111     recvd         status blah  yyyy-mm-dd:10:00
        12        1111     clientproce   status blah  yyyy-mm-dd:11:00
        13        1111     clientnotice  status blah  yyyy-mm-dd:15:00
        14        1111     notified      status blah  yyyy-mm-dd:17:00

    How can I get the following result (the latest timestamp along with columns from both tables)?

        1111  14  00980065  ABC  blah..blah..  notified  status blah  yyyy-mm-dd:17:00
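
    Postgres' DISTINCT ON is one idiomatic way to pick the single latest status row per order (a sketch based on the columns named in the question; order is a reserved word, hence the quoting):

        SELECT DISTINCT ON (o.orderid)
               o.orderid, s.statusid, o.ordernumber, o.clientid, o.orderdesc,
               s.statusdesc, s.statusnote, s.statustimestamp
        FROM "order" o
        JOIN order_status s ON s.orderid = o.orderid
        ORDER BY o.orderid, s.statustimestamp DESC;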

    Read the article

  • Does indexing affect only the WHERE clause?

    - by andre matos
    If I have something like: CREATE INDEX idx_myTable_field_x ON myTable USING btree (field_x); SELECT COUNT(field_x), field_x FROM myTable GROUP BY field_x ORDER BY field_x; Imagine myTable has around 500,000 rows and most field_x values are unique. Since I don't use any WHERE clause, will the created index have any effect at all on my query? Edit: I'm asking this question because I don't see any relevant difference in query times before and after creating the index; they always take about 8 seconds (which, of course, is too much time!). Is this behaviour expected?
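
    One way to see whether the planner uses the index at all for this query (a sketch that simply wraps the query from the question) is to look at the execution plan:

        EXPLAIN ANALYZE
        SELECT COUNT(field_x), field_x
        FROM myTable
        GROUP BY field_x
        ORDER BY field_x;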

    Read the article

  • Heroku Postgres Error: PGError: ERROR: relation "organizations" does not exist (ActiveRecord::StatementInvalid)

    - by Mark
    I'm having a problem deploying my Rails app to Heroku, where this error is thrown when trying to access the app: PGError: ERROR: relation "organizations" does not exist (ActiveRecord::StatementInvalid) SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"organizations"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum Anybody have any ideas? This is a first for me, especially because I've been working with Heroku for a year on other apps and haven't seen anything like this. Of course, everything works on local SQLite. Thanks in advance for any help! --Mark

    Read the article

  • Encoding in SQL to CSV

    - by Z77
    When I execute the query COPY ... TO ... CSV, I create a CSV file. But when I open it in Excel, the columns containing names with national characters do not display as they should. So my question is: is it possible, within the SQL query, to change the encoding to UTF-8 or something else? I want the newly created CSV file to be a final product for users on the web. I hope someone understands what I want :)
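
    A sketch of one way to control the output encoding directly in the COPY statement (the ENCODING option exists in PostgreSQL 9.1 and later; the file path and query here are placeholders). Excel frequently assumes a local Windows code page, so exporting in that encoding instead of UTF-8 is sometimes what actually fixes the display:

        COPY (SELECT id, name FROM some_table)
        TO '/tmp/export.csv'
        WITH (FORMAT csv, HEADER, ENCODING 'UTF8');

        -- on older servers, set the client encoding and use psql's \copy instead:
        -- SET client_encoding = 'UTF8';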

    Read the article

  • Store the day of the week and time

    - by bsiddiqui
    I have a two-part question about storing day(s) of the week and a time in a database. I'm using Rails 4.0, Ruby 2.0.0, and Postgres. I have certain events, and those events have a schedule. For the event Skydiving, for example, I might have Tuesday and Wednesday at 3 pm. 1) Is there a way for me to store the record for Tuesday and Wednesday in one row, or should I have two records? 2) What is the best way to store the day and time? Is there a way to store day of week and time (not datetime), or should these be separate columns? If they should be separate, how would you store the day of the week? I was thinking of storing them as integer values (0 for Sunday, 1 for Monday, etc.) since that's how the wday method of the Time class does it. Any suggestions would be super helpful. Thanks!
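
    One possible shape for this in Postgres (a sketch only; the table and column names are hypothetical) is a separate schedule row per weekday, with the weekday stored as a small integer matching Ruby's Time#wday convention (0 = Sunday) and the time stored in a time column:

        CREATE TABLE event_schedules (
            id          serial   PRIMARY KEY,
            event_id    integer  NOT NULL,
            day_of_week smallint NOT NULL CHECK (day_of_week BETWEEN 0 AND 6),  -- 0 = Sunday, as in Time#wday
            starts_at   time     NOT NULL
        );

        -- "Skydiving on Tuesday and Wednesday at 3 pm" becomes two rows:
        INSERT INTO event_schedules (event_id, day_of_week, starts_at)
        VALUES (1, 2, '15:00'), (1, 3, '15:00');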

    Read the article

  • hibernate - Postgres- target lists can have at most 1664 entries

    - by Vineyard
    We are using Hibernate with Postgres 8.3.x. Our entities are mapped many-to-one with eager fetching, and we have multiple many-to-one associations. As we added new columns to existing entities, we started getting the error below: target lists can have at most 1664 entries I searched the internet, and this appears to be caused by the large number of columns selected in the SQL query generated by Hibernate. Can anybody please let us know whether there is any configuration (in Postgres) to raise the maximum number of columns, or any other solution to this issue? Thank you in advance.

    Read the article

  • 2 Rails Apps, 1 Database (using Heroku)

    - by Paul A.
    I've made 2 apps, App A and App B. App A's sole purpose is to allow users to sign up, and App B's purpose is to take select users from App A and email them. Since App A and App B were created independently and are hosted in 2 separate Heroku instances, how can App B access the users database in App A? Is there a way to push certain relevant rows from App A to App B?

    Read the article

  • Update int array based on parent

    - by Pickels
    I have the following int[] values in my database: '{0}' '{0,0}' '{0,0,0}' '{0,0,0,0}' This column is used to sort my tree data. Now when a parent updates its order, the children should also update. For example, if the second record updates its order to 1, it should result in the following: '{0}' '{0,1}' '{0,1,0}' '{0,1,0,0}' So I was wondering what the query would be to update records 3 and 4. In case it's not clear what I am asking, leave a comment and I can add additional information.
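
    Postgres allows updating a single array element by subscript, so one sketch of the update for this example (the table and column names, my_tree and sort_path, are hypothetical) is:

        -- set the second path element to 1 for the reordered node and everything beneath it
        UPDATE my_tree
        SET    sort_path[2] = 1
        WHERE  sort_path[1] = 0
          AND  array_length(sort_path, 1) >= 2;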

    Read the article

  • postgres too slow

    - by Killercode
    Hi, I'm doing massive tests on a Postgres database. Basically, I have 2 tables into which I inserted 40,000,000 records (table1) and 80,000,000 records (table2), and after this I deleted all of those records. Now if I do SELECT * FROM table1 it takes 199,000 ms. I can't understand what's happening. Can anyone help me with this?
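
    In Postgres, DELETE does not give the space back: the deleted rows remain as dead tuples, and a sequential scan still has to read past them, which is consistent with the slow SELECT described above. A sketch of the usual clean-up (note that VACUUM FULL rewrites the table and takes an exclusive lock):

        -- reclaim the space left behind by the mass delete
        VACUUM FULL VERBOSE table1;
        ANALYZE table1;

        -- if the goal was simply to empty the table, TRUNCATE avoids dead tuples entirely
        -- TRUNCATE table1;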

    Read the article

  • Default tablespace for indexes in postgres

    - by tom
    Just wondering if it's possible to set a default tablespace in Postgres for indexes. I would like the databases to live in the default Postgres tablespace, but put the indexes on a different set of disks just to keep the I/O traffic separated. It does not appear to me that this can be done without going in and running an ALTER INDEX ... SET TABLESPACE command, after which the index is moved and will stay there; but the databases and indexes are part of a Django app, so non-Django intervention can cause some problems.
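
    As far as I know there is no index-only default tablespace setting, so moving indexes individually is the usual route. A sketch (the tablespace name, path, and index name are placeholders):

        -- create a tablespace on the disks reserved for index I/O
        CREATE TABLESPACE index_space LOCATION '/mnt/fastdisk/pg_indexes';

        -- move an existing index onto it
        ALTER INDEX myapp_mytable_field_idx SET TABLESPACE index_space;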

    Read the article

  • Left outer joins that don't return all the rows from T1

    - by Summer
    A left outer join should return at least one row from the T1 table if it matches the join condition. But what if the left outer join succeeds and then another criterion is not satisfied? Is there a way to get the query to return a row with the T1 values and the T2 values set to NULL? Here's the specific query, in which I'm trying to return a list of candidates, and the user's support for those candidates IF such support exists: SELECT c.id, c.name, s.support FROM candidates c LEFT JOIN support s on s.candidate_id = c.id WHERE c.office_id = 5059 AND c.election_id = 92 AND (s.user_id = 2 OR s.user_id IS NULL) --This line seems like the problem ORDER BY c.last_name, c.name The query joins the candidates and support tables, but finds that a different user (user_id = 3, say) supported this candidate, and then the candidate disappears entirely from the result set.
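
    A common fix for this pattern (a sketch built from the query above) is to move the per-user filter from the WHERE clause into the join condition, so a non-matching support row simply leaves the s columns NULL instead of eliminating the candidate:

        SELECT c.id, c.name, s.support
        FROM candidates c
        LEFT JOIN support s
               ON s.candidate_id = c.id
              AND s.user_id = 2          -- filter applied while joining, not afterwards
        WHERE c.office_id = 5059
          AND c.election_id = 92
        ORDER BY c.last_name, c.name;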

    Read the article

  • What is a good cms that is postgres compatible, open source and either php or python based?

    - by hackg
    Requirements: PHP or Python; able to use and connect to our existing Postgres databases; open source, or very low license fees; the common features of a CMS, with admin tools to help manage / moderate the community. We have a large member base on a very basic site where members provide us contact info and info about their professional characteristics. We are about to expand and build a new community site (to migrate our member base to) where users will be able to message each other, post to forums, blog, and share private group discussions, and members will be sent invitations to earn compensation for their expertise. Profile pages, job postings, and video chat would be a plus. We already have a team of admins savvy with web apps to help manage it, but our developer resources are limited (3-4 programmers) and we are looking to save time in development as opposed to building our new site from scratch.

    Read the article

  • hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows, and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try to use the same query again for a new mapper, and since these queries take a considerable amount of time I'd like to prevent this in one of two ways that I've thought up: 1) limit the job to run only 1 mapper that queries the whole table and call it good, or 2) somehow incorporate an offset in the query so that if Hadoop does try to use a new mapper it won't grab the same rows. Option (1) seems more promising, but I don't know if such a configuration is possible. Option (2) sounds nice in theory, but I have no idea how I would keep track of the mappers being created, or whether it is at all possible to detect that and reconfigure. Help is appreciated; I'm mainly looking for a way to pull all of the DB table data without having several copies of the same query running, because that would be a waste of time.

    Read the article

  • Porting from MS Access

    - by lacqui
    I've recently been given an MS Access .mdb database file and asked to make it usable on a Linux system. What I'm looking for is a way to convert the Access database to an open-source database such as MySQL or Postgres. I don't have MS Office, and it's a one-time project for a volunteer organization, so I don't want to spend money if it's avoidable. I'm running Vista x64 and have a Linux VirtualBox VM, so something usable in either one would be good.

    Read the article
