Search Results

Search found 30474 results on 1219 pages for 'relational database'.


  • Querying and ordering results of a database in grails using transient fields

    - by Azder
    I'm trying to display paged data from a Grails domain object. For example: I have a domain object Employee with the transient properties firstName and lastName; their getter/setter methods decrypt/encrypt the data. The data is saved in the database in an encrypted binary format, so it is not sortable by those columns, and it is not sortable by the transient properties either, as noted in http://www.grails.org/GSP+Tag+-+sortableColumn. So now I'm trying to find a way to use the transients in a query similar to:

        Employee.withCriteria(max: 10, offset: 30) {
            order 'lastName', 'asc'
            order 'firstName', 'asc'
        }

    The class is:

        class Employee {
            byte[] encryptedFirstName
            byte[] encryptedLastName
            static transients = ['firstName', 'lastName']

            String getFirstName() { decrypt("encryptedFirstName") }
            void setFirstName(String item) { encrypt("encryptedFirstName", item) }
            String getLastName() { decrypt("encryptedLastName") }
            void setLastName(String item) { encrypt("encryptedLastName", item) }
        }

    Read the article

  • acts_as_solr returns all rows in the database when using the model as search query

    - by chris Chan
    In our application we're using acts_as_solr for search. Everything seems to be running smoothly except for the fact that using the model name as the search query returns every single row in the table. For example, let's say we have a users table. We specify acts_as_solr in our model to search the first name, last name and handle fields:

        acts_as_solr :fields => [:handle, :lname, :fname]

    When you use "user" as the search term it returns every single user in the system, i.e. every row in the table. Has anyone else run into this?

    Read the article

  • Query to return internal details about stored function in MS-SQL database

    - by Anthony
    I have been given access to an MS SQL database that is currently used by a 3rd-party app. As such, I don't have any documentation on how that application stores or retrieves the data. I can figure a few things out based on the names of various tables and the parameters that the user-defined functions take and return, but I'm still getting errors at every other turn. I was thinking that it would be really helpful if I could see what the stored functions do with the parameters they are given to produce their output. Right now all I've been able to figure out is how to query for the input parameters and the output columns. Is there any built-in information_schema table that will expose what the function is doing between input and output?
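
    A minimal sketch of pulling the function bodies out of the catalog views, assuming a login with VIEW DEFINITION permission; pyodbc is used purely for illustration, and the connection string and database name are hypothetical:

        import pyodbc

        # Hypothetical connection details; adjust driver, server, database and credentials.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=ThirdPartyDb;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        # sys.sql_modules holds the full T-SQL source of user-defined functions and
        # stored procedures; INFORMATION_SCHEMA.ROUTINES.ROUTINE_DEFINITION exposes
        # the same text but truncates long bodies.
        cur.execute("""
            SELECT o.name, m.definition
            FROM sys.sql_modules AS m
            JOIN sys.objects AS o ON o.object_id = m.object_id
            WHERE o.type IN ('FN', 'IF', 'TF')  -- scalar / inline / table-valued functions
        """)
        for name, definition in cur.fetchall():
            print(name)
            print(definition)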

    Read the article

  • Prepare and import data into existing database

    - by Álvaro G. Vicario
    I maintain a PHP application with a SQL Server backend. The DB structure is roughly this:

        lot
        ===
        lot_id (pk, identity)
        lot_code

        building
        ========
        building_id (pk, identity)
        lot_id (fk)

        inspection
        ==========
        inspection_id (pk, identity)
        building_id (fk)
        date
        inspector
        result

    The database already has lots and buildings and I need to import some inspections. Key points are:

    - It's a one-time initial load.
    - Data comes in an Excel file like this:

        date        inspector   result  lot_code
        ==========  ==========  ======  ========
        31/12/2009  John Smith  Pass    987654X
        28/02/2010  Bill Jones  Fail    123456B

    - The Excel data is unaware of the DB's autogenerated IDs: inspections must be linked to buildings through their lot_code.

    What are my options to do such a data load?
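
    One possible shape for such a load, as a minimal sketch only: it assumes the sheet has been exported to CSV and that each lot_code maps to exactly one building, and it uses pyodbc with the table and column names from the description above (the file name and connection string are hypothetical):

        import csv
        import pyodbc
        from datetime import datetime

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=AppDb;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        with open("inspections.csv") as f:
            for row in csv.DictReader(f):
                # Parse the dd/mm/yyyy dates from the sheet.
                inspected_on = datetime.strptime(row["date"], "%d/%m/%Y").date()
                # Resolve the building through its lot's lot_code, then insert the
                # inspection; this assumes exactly one building per lot_code.
                cur.execute(
                    """
                    INSERT INTO inspection (building_id, [date], inspector, result)
                    SELECT b.building_id, ?, ?, ?
                    FROM building AS b
                    JOIN lot AS l ON l.lot_id = b.lot_id
                    WHERE l.lot_code = ?
                    """,
                    inspected_on, row["inspector"], row["result"], row["lot_code"],
                )
        conn.commit()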

    Read the article

  • How to add space between images fetched from the database with PHP

    - by ParveenArora
    I am using the following code to display images fetched from the database with PHP:

        while ($row = mysql_fetch_array($result)) // to execute result query
        {
            echo "<a href='http://" . $row['website'] . "' target='_blank'><img src=\"" . $PathImage . $row['logo'] . "\" height = $FooterWidth /></a>XX";
        }

    Here $row['logo'] holds the path of the image stored on the server, and the literal XX (rendered in the same color as the background) is what puts the space between the images. I know this can be done with a table, but I want a proper method that works without using a table. Any suggestions?

    Read the article

  • Storing cookielib cookies in a database

    - by Mridang Agarwalla
    Hi, I'm using the cookielib module to handle HTTP cookies with urllib2 in Python 2.6, in a way similar to this snippet:

        import cookielib, urllib2

        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        r = opener.open("http://example.com/")

    I'd like to store the cookies in a database. I don't know what's better: serialize the CookieJar object and store it, or extract the cookies from the CookieJar and store those. I don't know how to implement either of them, and I should also be able to recreate the CookieJar object afterwards. Could someone help me out with the above? Thanks in advance.
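
    A minimal sketch of the second option (storing the individual cookie fields rather than a serialized jar); the SQLite table and the subset of fields kept here are this example's own choices, not anything cookielib prescribes:

        import sqlite3
        import cookielib

        def save_cookies(cj, db_path):
            # Store the fields needed to rebuild each cookie later.
            conn = sqlite3.connect(db_path)
            conn.execute("""CREATE TABLE IF NOT EXISTS cookies
                            (name TEXT, value TEXT, domain TEXT, path TEXT,
                             secure INTEGER, expires INTEGER)""")
            conn.execute("DELETE FROM cookies")
            for c in cj:  # a CookieJar iterates over its Cookie objects
                conn.execute("INSERT INTO cookies VALUES (?, ?, ?, ?, ?, ?)",
                             (c.name, c.value, c.domain, c.path,
                              int(c.secure), c.expires))
            conn.commit()
            conn.close()

        def load_cookies(db_path):
            # Rebuild a CookieJar from the stored rows.
            cj = cookielib.CookieJar()
            conn = sqlite3.connect(db_path)
            rows = conn.execute(
                "SELECT name, value, domain, path, secure, expires FROM cookies")
            for name, value, domain, path, secure, expires in rows:
                cj.set_cookie(cookielib.Cookie(
                    version=0, name=name, value=value,
                    port=None, port_specified=False,
                    domain=domain, domain_specified=True,
                    domain_initial_dot=domain.startswith("."),
                    path=path, path_specified=True,
                    secure=bool(secure), expires=expires, discard=False,
                    comment=None, comment_url=None, rest={}))
            conn.close()
            return cj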

    Read the article

  • WordPress: Sort posts by meta value AFTER querying from the database

    - by Joseph Carrington
    Hello, I am pulling posts from my database using WordPress' WP_Query object like so:

        $shows_query = new WP_Query("category_name=shows&meta_key=band&meta_value=$artist_id");

    However, I have another meta value I would like to sort the posts by; its meta key is 'date'. The WP_Query object cannot work with multiple meta_keys, so this does not work:

        $shows_query = new WP_Query("category_name=shows&meta_key=band&meta_value=$artist_id&meta_key=date&orderby=meta_value&order=DESC");

    So now I have to figure out a way to sort the posts in $shows_query['posts'] by one of their meta values, which are not even in their array. Any other, more sensible approach would also be appreciated.

    Read the article

  • Parsing a CSV File to a Rails Database

    - by Schroedinger
    G'day guys, I'm using FasterCSV and a rake script to parse a CSV with about 30 columns into my Rails DB for a 'Trade' item. The script works fine when all of the values are set to strings, but when I change a column to a decimal, int or other type, everything goes to hell. I'm wondering if FasterCSV has built-in int etc. parsing or whether I'll have to manage these within my model. Basically, I'm given a giant amount of trade data, need to import it, and then need to provide feedback such as the average trade volume, the times, etc. I understand I can do all that with the wonderful records provided to me by ActiveRecord, but I wondered if there was an easier way to populate a rather large database from a given CSV. Several of the fields don't have values for certain rows; FasterCSV seems to work perfectly when they're all strings, but not when I try to use decimal or other types.

    Read the article

  • "No database selected" even when db clearly selected

    - by Someone
    One of my webpages gets a recurring error: "No database selected", even though the DB is selected. Right about now it's a 50-50 chance whether the page will load just fine, or whether I receive this error. After one or two reloads, the page works again. I am including the exact same connection file on my other pages, and I don't have this problem. What could be the cause of this? I'm using ensim pro for webhosting. TIA.

    Read the article

  • How do I connect to an MSSQL 2008 database in Java with JDBC

    - by shuxer
    Hello, I have MSSQL 2008 installed on my local PC, and my Java application needs to connect to an MSSQL database. I am new to MSSQL and I would like some help on creating a user login for my Java application and getting a connection via JDBC. So far I have tried to create a user login for my app and used the following connection string, but it doesn't work at all. Any help or hints will be appreciated.

        jdbc:jtds:sqlserver://127.0.0.1:1433/dotcms
        username="shuxer"
        password="itarator"

    Thanks in advance.

    Read the article

  • Is there such a thing as too many tables?

    - by Stacey
    I've been searching stackoverflow for about an hour now and couldn't find any related topics, so I apologize if this is a duplicate question. My inquiry is this: is there a point at which there are too many tables in a database, even if the structure is well organized, thought out, and perfectly facilitates the design intent? I have a database that is quickly approaching 40 tables - about 10 main ones, and over 30 ancillary tables (junction tables, 'enumeration' tables, etc). Am I just a bad developer - or should I be trying something different? It seems like so many to me, and I'm really worried about how it will impact the performance of the project. I have done a lot of condensing where possible, grouped similar things where possible, etc. The database is built in MS-SQL 2008.

    Read the article

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000's of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

    - push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
    - tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately and to keep tails with pushes strictly in sync), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value storage, or something else?
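
    To make the two operations concrete, a minimal in-memory sketch (persistence, the actual question here, is deliberately left out):

        import bisect
        from collections import defaultdict

        class EventStreams(object):
            """Per-key event streams kept sorted by timestamp (in memory only)."""

            def __init__(self):
                self._times = defaultdict(list)   # key -> sorted timestamps
                self._events = defaultdict(list)  # key -> events, parallel to _times

            def push(self, key, timestamp, event):
                # Input is almost sorted, so the insertion point is near the end.
                i = bisect.bisect_right(self._times[key], timestamp)
                self._times[key].insert(i, timestamp)
                self._events[key].insert(i, event)

            def tail(self, key, timestamp):
                # All events for `key` with a timestamp >= the given one.
                start = bisect.bisect_left(self._times[key], timestamp)
                return list(zip(self._times[key][start:], self._events[key][start:]))

        streams = EventStreams()
        streams.push("user-42", 1000, {"type": "login"})
        streams.push("user-42", 1005, {"type": "click"})
        print(streams.tail("user-42", 1001))  # -> [(1005, {'type': 'click'})]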

    Read the article

  • MySQL or SQL Server

    - by user203708
    I'm creating an application that I want to run on either MySQL or SQL Server (not both at the same time). I've created two PHP classes, DatabaseMySQL and DatabaseSQLSVR, and I'd like my application to know which database class to use based on a constant set up at install:

        define(DB_TYPE, "mysql"); // or "sqlsrv"

    I'm trying to think of the best way to handle this. My thought is to do an "if else" wherever I instantiate the database:

        $db = (DB_TYPE == "mysql") ? new DatabaseMySQL : new DatabaseSQLSVR;

    I know there has to be a better way of doing this, though. Suppose I want to add a third database type later; I'll have to go and redo all my code. Yuk!! Any help would be much appreciated. Thanks.
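
    The worry about adding a third backend later is essentially a factory/registry question; a minimal sketch of the idea, shown in Python purely for illustration with stub classes standing in for the real ones:

        # Map configuration values to backend classes in one place; call sites only
        # ever ask the factory, so adding a third backend means adding one entry
        # here rather than editing every instantiation.
        class DatabaseMySQL:
            pass

        class DatabaseSQLSrv:
            pass

        BACKENDS = {
            "mysql": DatabaseMySQL,
            "sqlsrv": DatabaseSQLSrv,
        }

        def make_database(db_type):
            try:
                return BACKENDS[db_type]()
            except KeyError:
                raise ValueError("Unsupported DB_TYPE: %r" % db_type)

        db = make_database("mysql")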

    Read the article

  • Error in MySQL syntax in VB.NET

    - by user225269
    I get this error while testing the code below:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[student](ID, LASTNAME, FIRSTNAME, SCHOOL) VALUES ('333', 'aaa', 'aaa', 'aaa')' at line 1

    I just recycled the code that I used for manipulating an MS SQL database, so the syntax must be wrong. What is the correct syntax for adding records to a MySQL database? Here is my current code:

        idnum = TextBox1.Text
        lname = TextBox2.Text
        fname = TextBox3.Text
        skul = TextBox4.Text

        Using sqlcon As New MySqlConnection("Server=localhost; Database=testing;Uid=root;Pwd=nitoryolai123$%^;")
            sqlcon.Open()
            Dim sqlcom As New MySqlCommand()
            sqlcom.Connection = sqlcon
            sqlcom.CommandText = "INSERT INTO [student](ID, LASTNAME, FIRSTNAME, SCHOOL) VALUES (@ParameterID, @ParameterLastName, @ParameterFirstName, @ParameterSchool)"
            sqlcom.Parameters.AddWithValue("@ParameterID", TextBox1.Text)
            sqlcom.Parameters.AddWithValue("@ParameterLastName", TextBox2.Text)
            sqlcom.Parameters.AddWithValue("@ParameterFirstName", TextBox3.Text)
            sqlcom.Parameters.AddWithValue("@ParameterSchool", TextBox4.Text)
            sqlcom.ExecuteNonQuery()
        End Using

    Please help, thanks.

    Read the article

  • MySQL JDBC date issues with database server in different timezone

    - by Somatik
    I have a database server in the "Europe/London" time zone and my web server in "Europe/Brussels". Since it is summer time now, my application server has a 2-hour difference. I created a test to reproduce my issue:

        Query q = JPA.em().createNativeQuery("SELECT UNIX_TIMESTAMP(startDateTime) FROM `Event` WHERE `id` =574");
        BigInteger unix = (BigInteger) q.getSingleResult();
        System.out.println(unix + "000 UNIX_TIMESTAMP to BigInteger");

        Query q2 = JPA.em().createNativeQuery("SELECT startDateTime FROM `Event` WHERE `id` =574");
        Timestamp o = (Timestamp) q2.getSingleResult();
        System.out.println(o.getTime() + " Timestamp");

    The startDateTime column is defined as 'datetime' (but I get the same issue with 'timestamp'). The output I am getting is this:

        1340291591000 UNIX_TIMESTAMP to BigInteger
        1340284391000 Timestamp

    The two values differ by exactly 7,200,000 ms, i.e. the 2-hour offset. Reading Java date objects results in a shift in time zone; how do I fix this? I would expect the JDBC driver to just set the "unix time" value it gets from the server in the Date object. (A proper solution should work with any time zone combination, not only for a DB in GMT.)

    Read the article

  • How to control the memory size of a continuously running Windows service?

    - by Snowill
    Hi, I have created a Windows service that continuously polls a database; a timer drives the polling. Every time I query a database table I open a connection and close it immediately after my work is done. Right now I am doing this every 20 seconds for testing purposes, but later this interval might increase to 5-10 minutes. What happens is that every time the database table is polled, the memory used by the service grows by 10-12 KB, which I can see in Task Manager. Is there any way to control this?

    Read the article

  • Are there any e-commerce websites that use NoSQL databases

    - by Saif Bechan
    I have read a lot lately about 'NoSQL' databases such as CouchDB, MongoDB, etc. Most of the websites I have seen using them are mainly text-based websites such as The New York Times and SourceForge. I was wondering if you could apply this to websites where payment is a huge issue. I am thinking of the following issues:

    - How well can you secure the data?
    - Do these systems provide an easy backup/restore mechanism?
    - How are transactions handled (commit/rollback)?

    I have read the following articles that cover some aspects: "Can I do transactions and locks in CouchDB?" and "Pros/Cons of document based database vs relational database". In these posts the aspect of transactions is covered; however, the questions of security and backups are not. Can someone shed some light on this subject? And if possible, does anyone know of e-commerce websites that have successfully implemented a document-based database?

    Read the article

  • SQL Server: query database user roles for all databases in server

    - by atricapilla
    I would like to query the database user roles for all databases in my SQL Server instance. I modified a query from sp_helpuser:

        select u.name
              ,case when (r.principal_id is null) then 'public' else r.name end
              ,l.default_database_name
              ,u.default_schema_name
              ,u.principal_id
        from sys.database_principals u
        left join (sys.database_role_members m
                   join sys.database_principals r
                     on m.role_principal_id = r.principal_id)
          on m.member_principal_id = u.principal_id
        left join sys.server_principals l
          on u.sid = l.sid
        where u.type <> 'R'

    How can I modify this to query all databases? What is the link between sys.databases and sys.database_principals?

    Read the article

  • INET_ATON() and INET_NTOA() in PHP?

    - by blerh
    I want to store IP addresses in my database, but I also need to use them throughout my application. I read about using INET_ATON() and INET_NTOA() in my MySQL queries to get a 32-bit unsigned integer out of an IP address, which is exactly what I want, as it makes searching the database faster than using char(15). The thing is, I can't find a function that does the same sort of thing in PHP. The only thing I came across is http://php.net/manual/en/function.ip2long.php, so I tested it:

        $ip = $_SERVER['REMOTE_ADDR'];
        echo ip2long($ip);

    And it outputs nothing. In the example they gave it seems to work, but then again I'm not exactly sure whether ip2long() does the same thing as INET_ATON(). Does anyone know a PHP function that will do this? Or even a completely different way to store an IP address in a database? Thanks.
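
    For reference, INET_ATON()/INET_NTOA() just convert between a dotted quad and its 32-bit network-byte-order integer; a minimal sketch of the same round trip, shown in Python purely to make the conversion concrete:

        import socket
        import struct

        def ip_to_int(ip):
            # "1.2.3.4" -> 16909060, matching MySQL's INET_ATON('1.2.3.4')
            return struct.unpack("!I", socket.inet_aton(ip))[0]

        def int_to_ip(n):
            # 16909060 -> "1.2.3.4", matching MySQL's INET_NTOA(16909060)
            return socket.inet_ntoa(struct.pack("!I", n))

        assert ip_to_int("1.2.3.4") == 16909060
        assert int_to_ip(16909060) == "1.2.3.4"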

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database under source control. One of the problems we have is a big table of static data (containing translated language strings) with close to 200K rows. I know our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling up this table whenever we need to?

    Read the article

  • Are fixtures loaded when using the SQL dump to create a test database?

    - by Josh Moore
    Because of some non-standard table creation options I am forced to use the SQL dump instead of the standard schema.rb (i.e. I have uncommented this line in environment.rb: config.active_record.schema_format = :sql). I have noticed that when I use the SQL dump, my fixtures do not seem to be loaded into the database. Some data is loaded into it, but I am not sure where it is coming from. Is this normal? And if it is, can anybody tell me where this other data is coming from?

    Read the article

  • Send through Email, or store in database?

    - by user156814
    I have wondered when it is best to send an email and when it's best to store data in a database/log file. Every time a user wants to contact me or inform me of something, I suppose an email is best, but is email always preferred over other ways, and in what cases? Possible reasons for being contacted that I can think of are questions, suggestions, feedback, reporting abuse, advertising, etc. I assume email ("why add unnecessary things to the DB?"), but I figure data in the DB would be a lot easier to manage. What's the better/best way to be informed of things like this? As a webmaster, what is the best way for you to be informed of something by users?

    Read the article

  • Database over 2GB in MongoDB

    - by configurator
    We've got a file-based program we want to convert to use a document database, specifically MongoDB. Problem is, MongoDB is limited to 2GB on 32-bit machines (according to http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F), and a lot of our users will have over 2GB of data. Is there a way to have MongoDB use more than one file somehow? I thought perhaps I could implement sharding on a single machine, meaning I'd run more than one mongod on the same machine and they'd somehow communicate. Could that work?

    Read the article

  • How to insert an integer into a database through command prompt

    - by jpavlov
    I am trying to insert an integer into a database in C# using the code below, but every time I run it I am informed that my integer is not a valid column ("Invalid Column Name UserID"). Does anyone have any insight on this? Thanks.

        Console.WriteLine("Please enter a new User Id");
        string line = Console.ReadLine();
        int UserID;
        if (int.TryParse(line, out UserID))
        {
            Console.WriteLine(UserID);
            Console.ReadLine();
        }

        //Prepare the command string
        string insertString = @"INSERT INTO tb_User(ID,f_Name, l_Name) VALUES (UserID,'Ted','Turner')";
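
    As a general pattern (not the thread's answer), the usual way to get a runtime integer into an INSERT is to bind it as a parameter rather than writing the variable's name inside the SQL text; a minimal sketch of that idea, using Python and sqlite3 purely for illustration:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE tb_User (ID INTEGER, f_Name TEXT, l_Name TEXT)")

        user_id = 42  # the integer read from user input

        # Bind the value as a parameter; the variable name never appears in the SQL text.
        conn.execute("INSERT INTO tb_User (ID, f_Name, l_Name) VALUES (?, ?, ?)",
                     (user_id, "Ted", "Turner"))
        conn.commit()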

    Read the article

  • A question about indexes and their cost to inserts & updates in a database

    - by Mestika
    Hi, I have a question about the fine line between the overhead an index adds to a table that grows steadily in size every month and the gain that index brings to queries. The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (about 100 new rows for Table1 and a couple of rows for Table2). My concrete question is whether to keep an index or to drop it. I've measured that a covering index on Table2 improves my SELECT queries, some of them rather a lot, but I still have to weigh the pros and cons, and I'm having a really hard time deciding. For Table1 an index might not be necessary because SELECT queries there are not that common. I would appreciate any suggestions, tips or just good advice on what a good solution would be. By the way, I'm using IBM DB2 version 9.7 as my database system. Sincerely, Mestika

    Read the article
