Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.

Page 197/619 | < Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >

  • How to sum calculated fields

    - by Nazero Jerry
    I'd like to ask a question here that I think will be easy for some people. I have a query that returns records from two related tables (one to many). In this query I have three or four calculated fields that are based on fields from the two tables. Now I want to add a GROUP BY clause for the names and a SUM over the calculated fields, but it ends up with an error message saying: "You tried to execute a query that is not part of an aggregate function". So I decided to just run the query without the totals (i.e. no GROUP BY, SUM, etc.), and then I created another query that totals my previous query (i.e. using a GROUP BY clause for the names and SUM for the calculated fields, with no calculation there). This works fine (it is what I have always done), but I don't like having two queries just to get a summary total. Is there any other way of doing this in the design view, creating only one query? I would very much appreciate it. Thank you, JM
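
    The usual single-query fix in Access is to repeat the calculation inside the aggregate instead of referencing the calculated field's alias. A minimal sketch, with hypothetical table and column names:

        SELECT t1.CustomerName,
               SUM(t2.Qty * t2.UnitPrice) AS TotalAmount
        FROM Table1 AS t1
        INNER JOIN Table2 AS t2 ON t2.CustomerID = t1.CustomerID
        GROUP BY t1.CustomerName;

    In design view this corresponds to choosing Sum in the Total row for the expression column itself, rather than for a field that merely references its alias.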

    Read the article

  • Matching process, issue with query

    - by Blerta Blerta
    I have this code which helps me match two different tables. Now, each of these tables has an epos_id and an rbpos_id. I have another table which holds pairs of rbpos_id and epos_id, something like:

        id | epos_id | rbpos_id
        1  | a3566   | 465jd
        2  | hkiyb   | rbposi

    When I join this other table, I need to check the condition that the matching is done only if the epos_id and rbpos_id of the join I'm doing have the same id, i.e. belong to the same row. Here is my current query. Thanks!

        SELECT retailer.date, retailer.time, retailer.location, retailer.user_id, imovo.mobile_number
        FROM retailer
        LEFT JOIN imovo
          ON ADDTIME(retailer.time, '0:0:50') > imovo.time
         AND retailer.time < imovo.time
         AND retailer.date = imovo.date
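
    One hedged way to express the same-row requirement, assuming retailer carries an rbpos_id, imovo carries an epos_id, and the pairs table is called id_pairs (all names are guesses from the question): join the pairs table on both columns at once. Note that requiring the pair turns the match into an inner join:

        SELECT r.date, r.time, r.location, r.user_id, i.mobile_number
        FROM retailer r
        JOIN imovo i
          ON ADDTIME(r.time, '0:0:50') > i.time
         AND r.time < i.time
         AND r.date = i.date
        JOIN id_pairs p
          ON p.rbpos_id = r.rbpos_id
         AND p.epos_id  = i.epos_id;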

    Read the article

  • Write vim file as super-user?

    - by zimbatm
    This is a usability problem that happens often to me: I open a read-only system file with vim and even edit it, because I'm not attentive enough, or because the vim on the system is badly configured. Once my changes are done, I'm stuck having to write them to a temporary file or losing them, because :w! won't work. Is there a vim command (:W!!!) that allows you to write the current buffer as a super-user? (Vim would naturally ask for your sudo or su password.)

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

        SELECT table1.term, table2.keyword
        FROM table1
        INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working; it takes forever. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8 GB of RAM, fine-tuned for the MySQL workload.
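
    A leading-wildcard LIKE can never use the indexes, so every pairing is scanned. One alternative worth naming plainly is a FULLTEXT index; it matches whole words rather than arbitrary substrings, so it only fits if the keywords are word-like, and AGAINST() requires a constant, so the lookup has to be issued once per keyword from the application or a stored routine. A minimal sketch under those assumptions:

        ALTER TABLE table1 ADD FULLTEXT INDEX ft_term (term);

        -- issued by the application for each keyword read from table2
        SELECT term
        FROM table1
        WHERE MATCH(term) AGAINST ('+some_keyword' IN BOOLEAN MODE);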

    Read the article

  • Reset auto-increment column value in a MySQL script

    - by Lucia
    Hi, I have two MySQL tables; one needs to start its auto-increment id column at the last value of the last row inserted into the other table (plus 1). According to the MySQL manual you can restart the value of an auto-increment column like this:

        mysql> ALTER TABLE tbl1 AUTO_INCREMENT = 100;

    However, this is not possible:

        mysql> ALTER TABLE tbl2 AUTO_INCREMENT = (SELECT MAX(id) FROM tbl1);

    I need to perform something like this because I'm filling the tables using a script. Is there another way to achieve it?
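
    ALTER TABLE won't accept a subquery, but since a script is driving this anyway, the usual workaround is to build the statement dynamically with a prepared statement. A minimal sketch, assuming the table names above:

        SET @next = (SELECT MAX(id) + 1 FROM tbl1);
        SET @sql  = CONCAT('ALTER TABLE tbl2 AUTO_INCREMENT = ', @next);
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;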

    Read the article

  • value of type 'string' cannot be converted to 'Devart.data.postgresql.PgSqlParameter'

    - by hector
    The following is my PostgreSQL table structure and the VB.NET code to insert into the tables, using Devart's component for PostgreSQL Connect.

    Table gtab83:

        CREATE TABLE gtab83
        (
          orderid integer NOT NULL DEFAULT nextval('seq_gtab83_id'::regclass),
          acid integer,
          slno integer,
          orderdte date
        )

    Table gtab84:

        CREATE TABLE gtab84
        (
          orderdetid integer DEFAULT nextval('seq_gtab84_id'::regclass),
          productid integer,
          qty integer,
          orderid integer
        )

    The code to insert into the above tables is below:

        '1.) INSERT INTO gtab83(orderid, acid, slno, orderdte) VALUES (?, ?, ?);
        '2.) INSERT INTO gtab84(orderdetid, productid, qty, orderid) VALUES (?, ?, ?);
        Try
            Dim cmd As PgSqlCommand = New PgSqlCommand("", Myconnstr)
            cmd.CommandText = _
                "INSERT INTO GTAB83(ACID,SLNO,ORDERDTE)" & _
                "VALUES " & _
                "(@acid,@slno,@orderdte);"
            Dim paramAcid As PgSqlParameter = New PgSqlParameter("@acid", PgSqlType.Int, 0)
            Dim paramSlno As PgSqlParameter = New PgSqlParameter("@slno", PgSqlType.Int, 0)
            Dim paramOrderdte As PgSqlParameter = New PgSqlParameter("@orderdte", PgSqlType.Date, 0)
            paramAcid = cboCust.SelectedValue
            paramSlno = txtOrderNO.Text                    ' #ERROR#
            paramOrderdte = (txtDate.Text, "yyyy-MM-dd")   ' #ERROR#
        Catch ex As Exception
        End Try

    ERROR: value of type 'string' cannot be converted to 'Devart.data.postgresql.PgSqlParameter'

    Read the article

  • Where are the really high-quality and complex Swing components?

    - by jouhni
    Looking at Swing, I have the feeling that it comes with many useful and reasonable atomic components in its core. And when I look at the web, there are hundreds of quickly plugged-together components (among them many date/time pickers and pimped lists and tables), which have in common that I could easily write them on my own if I needed them. When I build big software and reach the point where I need a domain-specific component which is really big, I usually end up having to write it on my own, which, since such components are not just plugged-together lists and tables, isn't done quickly. So the question is: why are there no Swing component galleries which contain more than just customized date/time pickers or lists with added tree support? Where are the components which really raise the level of abstraction, or are, in the best case, domain-specific?

    Read the article

  • How can I return a DataSet properly from SQL?

    - by Phsika
    I'm trying to write a WinForms application. I dislike the code below:

        DataTable dt = new DataTable();
        dt.Load(dr);
        ds = new DataSet();
        ds.Tables.Add(dt);

    This part of the code seems insufficient. What is the best way to load the DataSet?

        public class LoadDataset
        {
            public DataSet GetAllData(string sp)
            {
                return LoadSQL(sp);
            }

            private DataSet LoadSQL(string sp)
            {
                SqlConnection con = new SqlConnection(ConfigurationSettings.AppSettings["ConnectionString"].ToString());
                SqlCommand cmd = new SqlCommand(sp, con);
                DataSet ds;
                try
                {
                    con.Open();
                    cmd.CommandType = CommandType.StoredProcedure;
                    SqlDataReader dr = cmd.ExecuteReader();
                    DataTable dt = new DataTable();
                    dt.Load(dr);
                    ds = new DataSet();
                    ds.Tables.Add(dt);
                    return ds;
                }
                finally
                {
                    con.Dispose();
                    cmd.Dispose();
                }
            }
        }

    Read the article

  • Printing JTables without formatting of the original component

    - by EricR
    I'm writing an application which uses tables that can be printed if the user so desires, and I wish to print a JTable filled with data, except I haven't been able to find an option to remove the formatting; the printed table looks like it does in the GUI (based on the system theme), which makes the table less readable and uses excess ink. I wish to print the same data with plain formatting. Is there a way to do this straight from a JTable, or is my best option simply to print to a file and have the user print from there? Currently it functions through a viewer which gives the user some options for printing, and then it goes to the system's printer.

    Read the article

  • SQL trigger for audit table question

    - by mattgcon
    I am writing a trigger to audit updates and deletes in tables. I am using SQL Server 2008. My questions are: Is there a way to find out what action is being taken on a record without going through the selection phase of the deleted and inserted tables? Also, if the record is being deleted, how do I record within the audit table the user who is performing the delete? (NOTE: the connection to the database uses a general connection string with a set user; I need the user who is logged into either a web app or a Windows app.) Please help.
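
    A minimal sketch of one common pattern (table and column names are hypothetical): the action is inferred from whether the inserted pseudo-table is empty, and the application identity is passed in via CONTEXT_INFO, which the app must set on each connection (SET CONTEXT_INFO after opening it), since SUSER_SNAME() will only ever report the shared login:

        CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
        AFTER UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @action CHAR(1) =
                CASE WHEN EXISTS (SELECT * FROM inserted) THEN 'U' ELSE 'D' END;

            INSERT INTO dbo.Orders_Audit (order_id, action, db_login, app_user, changed_at)
            SELECT d.id,
                   @action,
                   SUSER_SNAME(),                           -- the shared SQL login
                   CONVERT(VARCHAR(128), CONTEXT_INFO()),   -- app-supplied user, if set
                   GETDATE()
            FROM deleted d;
        END;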

    Read the article

  • Nested joins hide table names

    - by Sergio
    Hi, I have three tables: Suppliers, Parts, and Types. I need to join all of them while discriminating between columns with the same name (say, "id") in the three tables. I would like to successfully run this query:

        CREATE VIEW Everything AS
        SELECT Suppliers.name AS supplier, Parts.id, Parts.description, Types.typedesc AS type
        FROM Suppliers
        JOIN (Parts JOIN Types ON Parts.type_id = Types.id)
          ON Suppliers.id = Parts.supplier_id;

    My DBMS (SQLite) complains that "there is not such a column (Parts.id)". I guess it forgets the table names once the JOIN is done, but then how can I refer to the column id that belongs to the table Parts?
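
    For inner joins the parenthesised sub-join adds nothing, so one sketch of a workaround is simply to flatten the join order, which keeps every table name visible to the SELECT list:

        CREATE VIEW Everything AS
        SELECT Suppliers.name AS supplier,
               Parts.id,
               Parts.description,
               Types.typedesc AS type
        FROM Suppliers
        JOIN Parts ON Suppliers.id = Parts.supplier_id
        JOIN Types ON Parts.type_id = Types.id;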

    Read the article

  • To join or not to join - database structure

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to produce a listing of the data increases drastically as the many-to-many relationships increase. The questions: Is it really that bad to merge tables where needed, thereby reducing joins? Is there a better way than pivot tables to take care of many-to-many relationships? We discussed instead storing all the data in serialized text columns and having the application do the sorting instead of the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!
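
    For reference, the pivot (junction) table under debate is just this shape; a minimal sketch with hypothetical names, where the composite primary key plus the second index keeps the link rows indexed in both directions:

        CREATE TABLE product_category (
            product_id  INT NOT NULL,
            category_id INT NOT NULL,
            PRIMARY KEY (product_id, category_id),
            KEY idx_category (category_id),
            FOREIGN KEY (product_id)  REFERENCES product (id),
            FOREIGN KEY (category_id) REFERENCES category (id)
        ) ENGINE = InnoDB;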

    Read the article

  • Set Pivot Items from Cell Range? Excel 2007

    - by Ben
    I have developed code which identifies the multiple item selections in a Pivot field and writes the list of items to a table. I then wish to take the contents of this list and use it to populate a number of other Pivot Tables on other tabs. I currently have the ability to do this for single Pivot item selections, but I need to do it for multiple selections. If I select multiple items in the Pivot Table and attempt to pass these selections to the other Pivot Tables, it creates an error because it sees only the text "Multiple Items" instead of a list of each item that was checked in the upstream Pivot field. I need some VBA code that allows me to use the list to set another Pivot field's selections. All the Pivot fields in question here are page fields. I am using Excel 2007. Any help is appreciated. Thanks!

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the rows I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are a simple SELECT * FROM table and checking the Cursor's getCount()). Has anybody run into anything like this before? These commands seem to run fine on the command-line sqlite3. Are there gotchas that I'm missing (e.g. INSERTs must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further.

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE " + LEVEL_TABLE + " (" +
                    " " + _ID + " INTEGER PRIMARY KEY AUTOINCREMENT," +
                    " level TEXT NOT NULL," +
                    " rows INTEGER NOT NULL," +
                    " cols INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE " + DYNAMICS_TABLE + " (" +
                    " level_id INTEGER NOT NULL," +
                    " row INTEGER NOT NULL," +
                    " col INTEGER NOT NULL," +
                    " type INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE " + SCORE_TABLE + " (" +
                    " level_id INTEGER NOT NULL," +
                    " score INTEGER NOT NULL," +
                    " date_achieved DATE NOT NULL," +
                    " name TEXT NOT NULL);");
            this.enterFirstLevel(db);
        }

    And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...):

        private void insertDynamic(SQLiteDatabase db, int row, int col, int type) {
            ContentValues values = new ContentValues();
            values.put("level_id", "1");
            values.put("row", Integer.toString(row));
            values.put("col", Integer.toString(col));
            values.put("type", Integer.toString(type));
            db.insertOrThrow(DYNAMICS_TABLE, "col", values);
        }

    Finally, the query code looks like this:

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            try {
                String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
                String[] queryArgs = new String[0];
                Cursor cursor = db.rawQuery(fetchQuery, queryArgs);
                Activity activity = (Activity) this.context;
                activity.startManagingCursor(cursor);
                return cursor;
            } finally {
                db.close();
            }
        }

    Read the article

  • Create a view like Tweetie's User Profile view

    - by Graeme
    Hi, I'm just wondering if anyone has any idea how to create a view that looks like the user profile view in apps like Tweetie, where there are seemingly multiple tables, with a couple of normal (straight up and down) tables and then two rows of six cells, which in Tweetie's case show the number of followers, following, etc. I'm trying to make a similar view for my app but can't seem to find the best way to create it. Any tutorials, advice, etc. would be appreciated. Thanks. P.S. Here's a picture of the view which I'm trying to recreate.

    Read the article

  • InnoDB or MyISAM - Why not both?

    - by Skoder
    Hey. I'm new to databases, and I've read various threads about which is better between InnoDB and MyISAM. It seems that the debates are about using one or the other. Is it not possible to use both, depending on the table? What would be the disadvantages of doing this? As far as I can tell, the engine can be set during the CREATE TABLE command. Therefore, certain tables which are often read can be set to MyISAM, while tables that need transaction support can use InnoDB. I'm sure there must be a problem, otherwise this would be the ultimate answer :).
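
    Mixing engines per table is indeed allowed; a minimal sketch with hypothetical tables (one caveat worth knowing: a transaction that touches both cannot roll back the MyISAM side, since MyISAM is non-transactional):

        CREATE TABLE lookup_labels (
            id    INT PRIMARY KEY,
            label VARCHAR(100) NOT NULL
        ) ENGINE = MyISAM;    -- read-mostly, no transactions needed

        CREATE TABLE orders (
            id     INT AUTO_INCREMENT PRIMARY KEY,
            amount DECIMAL(10,2) NOT NULL
        ) ENGINE = InnoDB;    -- transactional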

    Read the article

  • tricky SQL when joining

    - by Erik
    I've got two tables, shows and objects. I want to print out the latest objects and the show names for them. Right now I'm doing it this way:

        SELECT MAX(objects.id) AS max_id, shows.name, shows.id
        FROM shows, objects
        WHERE shows.id = objects.showId
        GROUP BY shows.name

    However, if I also want to fetch the episode of the object, I can't just add objects.episode to the SELECT, because that won't automatically pick the object which has MAX(objects.id), so my question is how to do that. If you haven't already figured out my tables, they're like this: shows (id, name) and objects (id, name, episode, season, showId). Using MySQL. Thanks!
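
    This is the classic greatest-n-per-group problem; one hedged sketch is a correlated subquery that picks, per show, only the row holding the maximum object id:

        SELECT s.id, s.name, o.id AS object_id, o.episode, o.season
        FROM shows s
        JOIN objects o ON o.showId = s.id
        WHERE o.id = (SELECT MAX(o2.id)
                      FROM objects o2
                      WHERE o2.showId = s.id);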

    Read the article

  • In SQL Server merge replication, how does reinitializing work?

    - by Craig Shearer
    I have set up a pull subscription to a merge publication in SQL Server. I use parameterized row filters on some tables. This works fine with the initial synchronization - just the rows matching the filter arrive in the replicated (client) database. However, at some later point I'd like to be able to synchronize the replicated database again from the server and have new rows that match the parameterized row filters appear in the client database. The documentation seems to indicate that I can call Reinitialize() to do this. However, when I try this and Synchronize again, I get an error saying that the script 'snapshot.pre' cannot be applied to the database. I've inspected the script and can see why - it's trying to drop some functions that are used by the tables in the database. It would appear that for Reinitialize() to work it requires the database to be blank. Am I misunderstanding something here? Is there a way to make this work?

    Read the article

  • What is a performant way of 'tree-walking' through my Entity Framework data?

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph", so there can be a long chain of relationships between objects in those few tables via parent/child relationships. What is a performant way of 'tree-walking' through my Entity Framework data? I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database just to walk the tree, when the end result may only be identifying leaf nodes. Or would this be OK with the way lazy loading works at the column/parameter level? Otherwise, how could I load just the skeleton of the objects and then lazy-load the attributes whenever I need to refer to them?
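
    One alternative worth naming plainly (not EF itself): skip object materialization for the walk and let the database return the subtree, or just its leaves, with a recursive CTE, then load full entities only for the ids that come back. A minimal sketch, assuming SQL Server behind EF and a hypothetical nodes(id, parent_id) table:

        WITH subtree AS (
            SELECT id, parent_id FROM nodes WHERE id = 1   -- hypothetical root id
            UNION ALL
            SELECT n.id, n.parent_id
            FROM nodes n
            JOIN subtree t ON n.parent_id = t.id
        )
        SELECT s.id                                        -- leaves only
        FROM subtree s
        WHERE NOT EXISTS (SELECT 1 FROM nodes c WHERE c.parent_id = s.id);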

    Read the article

  • jQuery dropdown box auto-populated with MySQL data

    - by Xin
    I've got 2 dropdownboxes. Dropdownbox 1 shows all the tables from a database. When dropdownbox 1 is selected; dropdownbox 2 will be populated with tablefields from the selected table. dropdownbox 1: This dropdown is populated with the following mysql query: "show tables from testdb" dropdownbox 2: This dropdown will auto populate when dropdownbox 1 is selected. Dropdownbox 2 will be populate with the following query: "describe tablename" //tablename selected from dropdownbox 1 I want to make this using jquery. Can anyone point me into the right direction?

    Read the article

  • Problem with auto increment primary key (MySQL).

    - by mathon12
    I have two tables, each using the other's primary key as a foreign key. The primary keys of both are set to auto_increment. The problem is, when I create an entry in one of the tables, I have no idea what the primary key of the entry is, and can't figure out what to put in the other table as a foreign key. What should I do? Do I drop auto_increment altogether and cook up a unique identifier for each entry so I can use it to address the created entries? I'm using PHP, if that's relevant. Thanks.
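
    The usual pattern is to insert the first row and read back its generated key with LAST_INSERT_ID(), which is scoped to your connection; PHP exposes the same value via mysql_insert_id() or mysqli's insert_id. A minimal sketch with hypothetical tables:

        INSERT INTO parent_table (name) VALUES ('example');
        INSERT INTO child_table (parent_id, detail)
        VALUES (LAST_INSERT_ID(), 'example detail');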

    Read the article

  • Windows 64-bit sandboxing software alternatives

    - by Pacifika
    As you might know, sandboxing software doesn't work on 64-bit Windows due to PatchGuard. What are the alternatives for a person looking to test untrusted / temporary software? Edit: @Nick I'd prefer an alternative to VMs, as I'm not happy with the extended startup time, the extra login sequences, and the memory overhead that accompany booting a VM solution to test something out occasionally as a home user. Also, it's another system that needs to be kept secure and up to date.

    Read the article

  • How to implement multi relationship in SQL Server?

    - by Ethan
    I'm trying to design a database to use with an ASP.NET MVC application. Here is the scenario: there are three entities, and users can post comments on each of these different entities. I wonder whether I can have just one Comments table and link all the other entities to it. Obviously, the Comments table would need three references (foreign keys) to those tables, but these foreign keys can't be null, and just one of them would be filled for each row. Is there any better way than implementing three different comment tables, one per entity?
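
    Foreign key columns can in fact be nullable, which makes the single-table design workable; a hedged sketch with hypothetical entity tables, using a CHECK constraint to insist that exactly one reference is set per row:

        CREATE TABLE Comments (
            id         INT IDENTITY PRIMARY KEY,
            body       NVARCHAR(MAX) NOT NULL,
            article_id INT NULL REFERENCES Articles (id),
            photo_id   INT NULL REFERENCES Photos (id),
            event_id   INT NULL REFERENCES Events (id),
            CHECK ( (CASE WHEN article_id IS NOT NULL THEN 1 ELSE 0 END)
                  + (CASE WHEN photo_id   IS NOT NULL THEN 1 ELSE 0 END)
                  + (CASE WHEN event_id   IS NOT NULL THEN 1 ELSE 0 END) = 1 )
        );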

    Read the article

  • MS SQL Server 2008 Stored Procedure Result as Column Default Value

    - by user337501
    First of all, thank you guys. You always know how to direct me when I can't even find the words to explain what the heck I'm trying to do. The default values of the columns in a couple of my tables need to equal the result of some complicated calculations on other columns in other tables. My first thought is to simply have the column default value equal the result of a stored procedure, with one or more of the parameters pulled from the columns in the calling table. I don't know the syntax for doing that, though, and any time the words "stored" and "procedure" land next to each other in Google I'm flooded with info on parameter default values and nothing relating to what I actually want. Half of that was more of a vent than a question... any ideas though? And please don't say "Well, you could use an On-Insert trigger to..."
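
    For what it's worth, a DEFAULT constraint cannot call a stored procedure or reference the row's other columns, but a computed column can call a scalar function that does the calculation, and that function may query other tables (at the cost of the column not being persistable or indexable). A minimal sketch with hypothetical names:

        CREATE FUNCTION dbo.fn_LineTotal (@qty INT, @item_id INT)
        RETURNS MONEY
        AS
        BEGIN
            RETURN @qty * (SELECT price FROM dbo.Prices WHERE item_id = @item_id);
        END;
        GO

        CREATE TABLE dbo.OrderLines (
            id      INT IDENTITY PRIMARY KEY,
            item_id INT NOT NULL,
            qty     INT NOT NULL,
            total   AS dbo.fn_LineTotal(qty, item_id)   -- computed column, not a DEFAULT
        );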

    Read the article

  • Nginx + PHP-FPM on CentOS 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png), and I saw the error from the title in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the relevant folders exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connections on non-responding clients; this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if the client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).

    Read the article
