Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • value of type 'string' cannot be converted to 'Devart.Data.PostgreSql.PgSqlParameter'

    - by hector
    The following is my PostgreSQL table structure and the VB.NET code that inserts into the tables, using Devart's component for PostgreSQL to connect.

    Table gtab83:

        CREATE TABLE gtab83
        (
          orderid integer NOT NULL DEFAULT nextval('seq_gtab83_id'::regclass),
          acid integer,
          slno integer,
          orderdte date
        )

    Table gtab84:

        CREATE TABLE gtab84
        (
          orderdetid integer DEFAULT nextval('seq_gtab84_id'::regclass),
          productid integer,
          qty integer,
          orderid integer
        )

    The statements I want to run are:

        '1.) INSERT INTO gtab83(orderid, acid, slno, orderdte) VALUES (?, ?, ?);
        '2.) INSERT INTO gtab84(orderdetid, productid, qty, orderid) VALUES (?, ?, ?);

    The code to insert into the above tables is below:

        Try
            Dim cmd As PgSqlCommand = New PgSqlCommand("", Myconnstr)
            cmd.CommandText = _
                "INSERT INTO GTAB83(ACID,SLNO,ORDERDTE)" & _
                "VALUES " & _
                "(@acid,@slno,@orderdte);"
            Dim paramAcid As PgSqlParameter = New PgSqlParameter("@acid", PgSqlType.Int, 0)
            Dim paramSlno As PgSqlParameter = New PgSqlParameter("@slno", PgSqlType.Int, 0)
            Dim paramOrderdte As PgSqlParameter = New PgSqlParameter("@orderdte", PgSqlType.Date, 0)
            paramAcid = cboCust.SelectedValue
            paramSlno = txtOrderNO.Text                  '#ERROR#
            paramOrderdte = (txtDate.Text, "yyyy-MM-dd") '#ERROR#
        Catch ex As Exception
        End Try

    ERROR: value of type 'string' cannot be converted to 'Devart.Data.PostgreSql.PgSqlParameter'
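
    A sketch of a likely fix: the compile error comes from assigning strings to the PgSqlParameter variables themselves rather than to their .Value property, and the parameters are never added to cmd.Parameters. Assuming Devart follows the usual ADO.NET Parameters.Add pattern:

        Try
            Dim cmd As New PgSqlCommand( _
                "INSERT INTO gtab83 (acid, slno, orderdte) VALUES (@acid, @slno, @orderdte)", Myconnstr)
            ' Assign to each parameter's .Value property and register it with the command;
            ' orderid is omitted so the sequence default fills it in.
            cmd.Parameters.Add("@acid", PgSqlType.Int).Value = CInt(cboCust.SelectedValue)
            cmd.Parameters.Add("@slno", PgSqlType.Int).Value = CInt(txtOrderNO.Text)
            cmd.Parameters.Add("@orderdte", PgSqlType.Date).Value = _
                DateTime.ParseExact(txtDate.Text, "yyyy-MM-dd", Nothing)
            cmd.ExecuteNonQuery()
        Catch ex As Exception
            MessageBox.Show(ex.Message) ' surface failures instead of swallowing them
        End Try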

    Read the article

  • How can i return dataset perfectly from sql?

    - by Phsika
    I'm trying to write a WinForms application, and I dislike the code below:

        DataTable dt = new DataTable();
        dt.Load(dr);
        ds = new DataSet();
        ds.Tables.Add(dt);

    This part looks insufficient. What is the best way to load a DataSet?

        public class LoadDataset
        {
            public DataSet GetAllData(string sp)
            {
                return LoadSQL(sp);
            }

            private DataSet LoadSQL(string sp)
            {
                SqlConnection con = new SqlConnection(ConfigurationSettings.AppSettings["ConnectionString"].ToString());
                SqlCommand cmd = new SqlCommand(sp, con);
                DataSet ds;
                try
                {
                    con.Open();
                    cmd.CommandType = CommandType.StoredProcedure;
                    SqlDataReader dr = cmd.ExecuteReader();
                    DataTable dt = new DataTable();
                    dt.Load(dr);
                    ds = new DataSet();
                    ds.Tables.Add(dt);
                    return ds;
                }
                finally
                {
                    con.Dispose();
                    cmd.Dispose();
                }
            }
        }
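
    One common simplification, sketched below: let a SqlDataAdapter fill the DataSet directly. Fill opens and closes the connection itself, so the manual reader-to-table copy disappears (ConfigurationManager, from System.Configuration, replaces the deprecated ConfigurationSettings):

        private DataSet LoadSQL(string sp)
        {
            using (var con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]))
            using (var cmd = new SqlCommand(sp, con) { CommandType = CommandType.StoredProcedure })
            using (var da = new SqlDataAdapter(cmd))
            {
                var ds = new DataSet();
                da.Fill(ds);  // opens the connection, runs the proc, loads every result set
                return ds;
            }
        }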

    Read the article

  • Printing JTables without formatting of the original component

    - by EricR
    I'm writing an application that uses tables which can be printed if the user so desires, and I wish to print a JTable filled with data, except I haven't been able to find an option to remove the formatting; the printed table looks like it does in the GUI (based on the system theme), which makes it less readable and uses excess ink. I wish to print the same data with plain formatting. Is there a way to do this straight from a JTable, or is my best option simply to print to a file and have the user print from there? Currently it functions through a viewer which gives the user some options for printing, and then it goes to the system's printer.
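
    One approach, sketched below: print a second, never-displayed JTable that shares the on-screen table's model but uses plain fonts and black grid lines, so the system theme never touches the printout (this assumes the model alone carries the data):

        // imports: javax.swing.JTable, java.awt.Font, java.awt.Color,
        //          java.text.MessageFormat, java.awt.print.PrinterException
        JTable printTable = new JTable(screenTable.getModel()); // same data, default styling
        printTable.setFont(new Font(Font.SANS_SERIF, Font.PLAIN, 10));
        printTable.setForeground(Color.BLACK);
        printTable.setGridColor(Color.BLACK);
        printTable.setShowGrid(true);
        printTable.setSize(printTable.getPreferredSize()); // off-screen tables need an explicit size
        try {
            printTable.print(JTable.PrintMode.FIT_WIDTH,
                    new MessageFormat("Report"), new MessageFormat("Page {0}"));
        } catch (PrinterException e) {
            e.printStackTrace();
        }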

    Read the article

  • Windows 7 - 64 Computer shuts off unexpectedly

    - by C. Ross
    I have a home-built Windows 7 x64 computer with a Core 2 Quad CPU. It has recently taken to suddenly shutting down/turning off at unusual intervals, most commonly when playing media. I have tried running SpeedFan, and the CPU temperature seems to hover around 49 °C. The computer had been running fine for over a year. Could heat be my problem, and how should I address it?

    Read the article

  • To join or not to join - database structure

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to produce a listing of the data is increasing drastically as the many-to-many relationships increase. The questions: Is it really that bad to merge tables where needed, thereby reducing joins? Is there a better way than pivot tables to handle many-to-many relationships? We discussed storing all data in serialized text columns instead and having the application do the sorting rather than the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!
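
    For reference, a junction ("pivot") table sketch with hypothetical names: with a composite primary key plus a reverse index, the join in either direction is a cheap index lookup, which is usually why merging tables turns out to be unnecessary:

        CREATE TABLE product_tag (
            product_id INT NOT NULL,
            tag_id     INT NOT NULL,
            PRIMARY KEY (product_id, tag_id),          -- product -> tags lookups
            KEY idx_tag_product (tag_id, product_id)   -- tag -> products lookups
        ) ENGINE=InnoDB;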

    Read the article

  • SQL trigger for audit table question

    - by mattgcon
    I am writing a trigger to audit updates and deletes in tables, using SQL Server 2008. My questions: Is there a way to find out what action is being taken on a record without querying the deleted and inserted pseudo-tables? And if a record is being deleted, how do I record in the audit table which user performed the delete? (Note: the application connects to the database with a general connection string and a fixed user; I need the user who is logged into the web app or Windows app.) Please help?
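
    There is no built-in action flag, so the usual idiom is still a cheap existence check against inserted. A sketch of one common pattern (table and column names assumed): a single AFTER UPDATE, DELETE trigger distinguishes the action by whether inserted has rows, and the application pushes its own user name into the session with SET CONTEXT_INFO right after opening the shared connection, which the trigger then reads back:

        -- Once per session, right after the app opens its connection:
        --   DECLARE @user varbinary(128) = CONVERT(varbinary(128), 'jane.doe');
        --   SET CONTEXT_INFO @user;

        CREATE TRIGGER trg_MyTable_Audit
        ON dbo.MyTable
        AFTER UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @action char(1);
            SET @action = CASE WHEN EXISTS (SELECT 1 FROM inserted)
                               THEN 'U'  -- rows in both inserted and deleted: an update
                               ELSE 'D'  -- rows only in deleted: a delete
                          END;
            INSERT INTO dbo.MyTable_Audit (RecordId, Action, AppUser, AuditDate)
            SELECT d.Id, @action,
                   CONVERT(varchar(128), CONTEXT_INFO()),  -- the app user set above
                   GETDATE()
            FROM deleted AS d;
        END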

    Read the article

  • Nested joins hide table names

    - by Sergio
    Hi: I have three tables: Suppliers, Parts, and Types. I need to join all of them while disambiguating columns with the same name (say, "id") in the three tables. I would like to successfully run this query:

        CREATE VIEW Everything AS
        SELECT Suppliers.name AS supplier, Parts.id, Parts.description, Types.typedesc AS type
        FROM Suppliers
        JOIN (Parts JOIN Types ON Parts.type_id = Types.id)
        ON Suppliers.id = Parts.supplier_id;

    My DBMS (SQLite) complains that "there is not such a column (Parts.id)". I guess it forgets table names once the JOIN is done, but then how can I refer to the column id that belongs to the table Parts?
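
    One way around it, sketched below: flatten the nested join. These are all inner joins, so the result is the same rows, and every table name stays visible for qualifying columns:

        CREATE VIEW Everything AS
        SELECT Suppliers.name  AS supplier,
               Parts.id,
               Parts.description,
               Types.typedesc  AS type
        FROM Suppliers
        JOIN Parts ON Suppliers.id = Parts.supplier_id
        JOIN Types ON Parts.type_id = Types.id;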

    Read the article

  • Set Pivot Items from Cell Range? Excel 2007

    - by Ben
    I have developed code which identifies the multiple item selections from a pivot field and writes the list of items to a table. I then wish to take the contents of this list and use it to populate a number of other pivot tables on other tabs. I currently have the ability to do this for single pivot item selections, but I need to do it for multiple selections. If I select multiple items in the pivot table and attempt to pass these selections to the other pivot tables, it creates an error, because it sees only the text "Multiple Items" instead of a list of each item that was checked in the upstream pivot field. I need some VBA code that allows me to use the list to set another pivot field's selections. All the pivot fields in question here are page fields. I am using Excel 2007. Any help is appreciated. Thanks!
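
    A sketch of the page-field side (range and field names assumed): clear the filter so every item is visible again, enable multiple selection, then hide each item that is not on the saved list. It assumes at least one list entry matches, since a pivot field cannot end up with zero visible items:

        Sub ApplyItemList(pf As PivotField, itemList As Range)
            Dim itm As PivotItem
            Dim cell As Range
            Dim found As Boolean
            pf.ClearAllFilters                  ' start with every item visible
            pf.EnableMultiplePageItems = True   ' allow multiple page-field selections
            For Each itm In pf.PivotItems
                found = False
                For Each cell In itemList.Cells
                    If CStr(itm.Name) = CStr(cell.Value) Then found = True
                Next cell
                If Not found Then itm.Visible = False  ' hide items not on the list
            Next itm
        End Sub

        ' Usage, e.g.:
        ' ApplyItemList Worksheets("Sheet2").PivotTables("PivotTable2").PivotFields("Region"), _
        '               Range("ItemList")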

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the rows I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query the tables I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table, checking the Cursor's getCount()). Has anybody run into anything like this before? These commands seem to run fine on command-line sqlite3. Are there gotchas I'm missing (e.g. INSERTs must/must not be semicolon-terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further.

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE " + LEVEL_TABLE + " (" +
                       " " + _ID + " INTEGER PRIMARY KEY AUTOINCREMENT," +
                       " level TEXT NOT NULL," +
                       " rows INTEGER NOT NULL," +
                       " cols INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE " + DYNAMICS_TABLE + " (" +
                       " level_id INTEGER NOT NULL," +
                       " row INTEGER NOT NULL," +
                       " col INTEGER NOT NULL," +
                       " type INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE " + SCORE_TABLE + " (" +
                       " level_id INTEGER NOT NULL," +
                       " score INTEGER NOT NULL," +
                       " date_achieved DATE NOT NULL," +
                       " name TEXT NOT NULL);");
            this.enterFirstLevel(db);
        }

    And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...):

        private void insertDynamic(SQLiteDatabase db, int row, int col, int type) {
            ContentValues values = new ContentValues();
            values.put("level_id", "1");
            values.put("row", Integer.toString(row));
            values.put("col", Integer.toString(col));
            values.put("type", Integer.toString(type));
            db.insertOrThrow(DYNAMICS_TABLE, "col", values);
        }

    Finally, the query code looks like this:

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            try {
                String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
                String[] queryArgs = new String[0];
                Cursor cursor = db.rawQuery(fetchQuery, queryArgs);
                Activity activity = (Activity) this.context;
                activity.startManagingCursor(cursor);
                return cursor;
            } finally {
                db.close();
            }
        }
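
    One likely culprit in the code as posted, sketched below: the finally block closes the database before the cursor is ever read, and a Cursor backed by a closed SQLiteDatabase yields no rows, which shows up exactly as getCount() == 0. Returning the cursor without closing the database avoids that (close the helper later, e.g. in onDestroy):

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            // No finally { db.close(); } here: closing the database
            // invalidates the cursor it produced.
            Cursor cursor = db.rawQuery("SELECT * FROM " + DYNAMICS_TABLE, null);
            Activity activity = (Activity) this.context;
            activity.startManagingCursor(cursor);
            return cursor;
        }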

    Read the article

  • Create a view like Tweetie's User Profile view

    - by Graeme
    Hi, I'm just wondering if anyone has any idea how to create a view that looks like the user profile view in apps like Tweetie, where there are seemingly multiple tables: a couple of normal (straight up-and-down) tables, and then two rows of six cells, which in Tweetie's case hold the number of followers, following, etc. I'm trying to make a similar view for my app, but can't seem to find the best way to create it. Any tutorials, advice, etc. would be appreciated. Thanks. P.S. Here's a picture of the view I'm trying to recreate.

    Read the article

  • InnoDB or MyISAM - Why not both?

    - by Skoder
    Hey. I'm new to databases, and I've read various threads about which is better between InnoDB and MyISAM. It seems that the debates are about using one or the other. Is it not possible to use both, depending on the table? What would be the disadvantages of doing this? As far as I can tell, the engine can be set during the CREATE TABLE command. Therefore, certain tables which are often read could be set to MyISAM, while tables that need transaction support use InnoDB. I'm sure there must be a problem, otherwise this would be the ultimate answer :).
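
    Mixing engines per table is indeed allowed; a sketch with hypothetical tables is below. The main caveat is that a transaction touching a MyISAM table cannot roll back that table's changes, so a mixed transaction is only partially atomic:

        CREATE TABLE page_view_log (
            id      INT AUTO_INCREMENT PRIMARY KEY,
            url     VARCHAR(255) NOT NULL,
            seen_at DATETIME NOT NULL
        ) ENGINE=MyISAM;    -- read-heavy log, no transactions needed

        CREATE TABLE account_ledger (
            id     INT AUTO_INCREMENT PRIMARY KEY,
            amount DECIMAL(10,2) NOT NULL
        ) ENGINE=InnoDB;    -- needs atomic commit/rollback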

    Read the article

  • tricky SQL when joining

    - by Erik
    I have two tables, shows and objects. I want to print out the latest objects and the show names for them. Right now I'm doing it this way:

        SELECT MAX(objects.id) AS max_id, shows.name, shows.id
        FROM shows, objects
        WHERE shows.id = objects.showId
        GROUP BY shows.name

    However, if I also want to fetch the episode of the object, I can't just add objects.episode to the SELECT, because then it won't automatically pick the object which has MAX(objects.id); so my question is how to do that. If you haven't already figured out my tables, they're like this:

        shows:   id, name
        objects: id, name, episode, season, showId

    Using MySQL. Thanks!
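
    This is the classic greatest-n-per-group problem. One standard MySQL solution, sketched here: find each show's latest object id in a derived table, then join back to objects to pick up the remaining columns:

        SELECT o.id, o.name, o.episode, o.season, s.name AS show_name
        FROM (SELECT showId, MAX(id) AS max_id
              FROM objects
              GROUP BY showId) AS latest        -- one row per show: its newest object id
        JOIN objects o ON o.id = latest.max_id  -- back to the full object row
        JOIN shows   s ON s.id = o.showId;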

    Read the article

  • In SQL Server merge replication, how does reinitializing work?

    - by Craig Shearer
    I have set up a pull subscription to a merge publication in SQL Server, and I use parameterized row filters on some tables. This works fine with the initial synchronization: just the rows matching the filter arrive in the replicated (client) database. However, at some later point I'd like to be able to synchronize the replicated database again from the server and have new rows that match the parameterized row filters appear in the client database. The documentation seems to indicate that I can call Reinitialize() to do this. However, when I try this and synchronize again, I get an error saying that the script 'snapshot.pre' cannot be applied to the database. I've inspected the script and can see why: it's trying to drop some functions that are used by the tables in the database. It would appear that for Reinitialize() to work, it requires the database to be blank. Am I misunderstanding something here? Is there a way to make this work?
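
    Reinitialize() does not require a blank database: it marks the subscription so the next merge run reapplies the publication snapshot, and it is that snapshot's pre-script which drops and recreates the replicated objects (so the function-drop failure typically means non-replicated objects depend on them). For reference, a sketch of the reinitialize-then-sync sequence using the RMO classes from Microsoft.SqlServer.Replication, with names assumed:

        // using Microsoft.SqlServer.Management.Common;
        // using Microsoft.SqlServer.Replication;
        var subscription = new MergePullSubscription(
            subscriberDbName, publisherName, publicationDbName, publicationName,
            new ServerConnection(subscriberInstance));
        subscription.LoadProperties();    // throws if the subscription doesn't exist
        subscription.Reinitialize(true);  // true = upload pending client changes first
        subscription.SynchronizationAgent.Synchronize();  // applies a fresh, filtered snapshot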

    Read the article

  • What is a performant way to 'tree-walk' through my Entity Framework data?

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph": there can be long chains of relationships between objects in those few tables via parent/child relationships. What is a performant way to tree-walk through my Entity Framework data? I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database just to walk the tree, when the end result may only be identifying leaf nodes. Or would this be OK with the way lazy loading works? Otherwise, how could I load just the skeleton of the objects and lazy-load any attributes only when they're needed?
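
    A sketch of the load-as-you-go approach, with hypothetical Node/Relationship entity names: walk breadth-first and query only each node's children, so memory stays proportional to the frontier rather than the whole graph. (A recursive CTE in raw SQL is the set-based alternative if the walk itself becomes the bottleneck.)

        var leaves = new List<Node>();
        var queue = new Queue<Node>();
        queue.Enqueue(root);
        while (queue.Count > 0)
        {
            var node = queue.Dequeue();
            var children = context.Relationships
                                  .Where(r => r.ParentId == node.Id)
                                  .Select(r => r.Child)
                                  .ToList();           // one small query per node
            if (children.Count == 0)
                leaves.Add(node);                      // no children: a leaf
            else
                foreach (var child in children)
                    queue.Enqueue(child);
        }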

    Read the article

  • jquery dropdownbox auto populated with mysql data

    - by Xin
    I've got two dropdown boxes. Dropdown box 1 shows all the tables from a database; when a table is selected in it, dropdown box 2 is populated with the fields of the selected table. Dropdown box 1 is populated with the following MySQL query: show tables from testdb. Dropdown box 2 should auto-populate when dropdown box 1 is selected, using the following query: describe tablename (tablename being the selection from dropdown box 1). I want to make this work using jQuery. Can anyone point me in the right direction?
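
    A sketch of the client side, assuming a hypothetical fields.php endpoint that runs DESCRIBE on the chosen table (whitelist the table name server-side, since it cannot be bound as a parameter) and echoes the field names as a JSON array:

        $('#tables').change(function () {
            $.getJSON('fields.php', { table: $(this).val() }, function (fields) {
                var $fields = $('#fields').empty();
                $.each(fields, function (i, name) {
                    $fields.append($('<option>').val(name).text(name));
                });
            });
        });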

    Read the article

  • Problem with auto increment primary key (MySQL).

    - by mathon12
    I have two tables, each using the other's primary key as a foreign key. The primary keys of both are set to auto_increment. The problem is, when I try to create an entry in one of the tables, I have no idea what the primary key of the new entry is, so I can't figure out what to put in the other table as the foreign key. What should I do? Do I drop auto_increment altogether and cook up a unique identifier for each entry so I can use it to address the created entries? I'm using PHP, if that's relevant. Thanks.
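
    No need to drop auto_increment: MySQL hands back the generated key via LAST_INSERT_ID(), which is tracked per connection and therefore safe under concurrent inserts; from PHP, mysql_insert_id() (or PDO::lastInsertId()) returns the same value. A sketch with hypothetical parent/child tables:

        INSERT INTO parent (name) VALUES ('example');

        -- LAST_INSERT_ID() returns the auto_increment value generated
        -- by the last INSERT on *this* connection:
        INSERT INTO child (parent_id, detail)
        VALUES (LAST_INSERT_ID(), 'example detail');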

    Read the article

  • How to implement multi relationship in SQL Server?

    - by Ethan
    I'm trying to design a database to use with an ASP.NET MVC application. Here is the scenario: there are three entities, and users can post comments on each of them. I'm wondering how to use just one Comments table and link all the other entities to it. Apparently the Comments table would need three references (foreign keys) to those tables, but as far as I know these foreign keys can't be null, and only one of them would be filled for each row. Is there any better way than implementing three different tables for each entity's comments?
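
    Foreign key columns can in fact be nullable. One sketch of the single-table design (entity names assumed): three nullable FKs plus a CHECK constraint guaranteeing exactly one is set. The other common pattern is a shared "commentable" supertype table that the three entity tables reference.

        CREATE TABLE Comments (
            CommentId INT IDENTITY PRIMARY KEY,
            Body      NVARCHAR(MAX) NOT NULL,
            ArticleId INT NULL REFERENCES Articles(ArticleId),
            PhotoId   INT NULL REFERENCES Photos(PhotoId),
            VideoId   INT NULL REFERENCES Videos(VideoId),
            -- exactly one of the three references must be populated:
            CHECK ( (CASE WHEN ArticleId IS NOT NULL THEN 1 ELSE 0 END)
                  + (CASE WHEN PhotoId   IS NOT NULL THEN 1 ELSE 0 END)
                  + (CASE WHEN VideoId   IS NOT NULL THEN 1 ELSE 0 END) = 1 )
        );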

    Read the article

  • Oracle/C#: How do I use bind variables with select statements to return multiple records?

    - by twiga
    I have a question regarding Oracle bind variables and SELECT statements. What I would like to achieve is a SELECT over a number of different values of the primary key, passing these values via an array of bind values:

        select * from tb_customers where cust_id = :1

        int[] cust_id = { 11, 23, 31, 44, 51 };

    I then bind a DataReader to get the values into a table. The problem is that the resulting table only contains a single record (for cust_id = 51). Thus it seems that each statement is executed independently (as it should be), but I would like the results to be available as a collective (a single table). A workaround is to create a temporary table, insert all the values of cust_id, and then join against tb_customers. The problem with this approach is that I would need temporary tables for every different type of primary key, as I would like to use this against a number of tables (some even have composite primary keys). Is there anything I am missing?
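
    Array binding executes the statement once per array element, which is consistent with seeing only the last row. To get one result set, a sketch (assuming ODP.NET) that generates one bind variable per value inside a single IN list:

        // using System.Linq; using Oracle.DataAccess.Client;
        int[] custIds = { 11, 23, 31, 44, 51 };
        string placeholders = string.Join(",",
            custIds.Select((_, i) => ":id" + i).ToArray());   // ":id0,:id1,..."
        using (OracleCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText =
                "SELECT * FROM tb_customers WHERE cust_id IN (" + placeholders + ")";
            for (int i = 0; i < custIds.Length; i++)
                cmd.Parameters.Add(":id" + i, custIds[i]);    // one bind variable per value
            DataTable result = new DataTable();
            result.Load(cmd.ExecuteReader());                 // all matching rows, one table
        }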

    Read the article

  • MS SQL Server 2008 Stored Procedure Result as Column Default Value

    - by user337501
    First of all, thank you guys. You always know how to point me in the right direction when I can't even find the words to explain what the heck I'm trying to do. The default values of the columns in a couple of my tables need to equal the result of some complicated calculations on other columns in other tables. My first thought is to simply have the column default value equal the result of a stored procedure, with one or more of the parameters pulled from the columns of the calling table. I don't know the syntax for it, though, and any time the words "stored" and "procedure" land next to each other in Google, I'm flooded with info on parameter default values and nothing relating to what I actually want. Half of that was more of a vent than a question... any ideas, though? And please don't say "Well, you could use an ON INSERT trigger to..."
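
    For what it's worth, a DEFAULT constraint can call a scalar user-defined function (not a stored procedure), but the function cannot see the other columns of the row being inserted; its arguments must be constants or session values, and that per-row dependency is exactly why triggers keep being suggested. A sketch with assumed names:

        CREATE FUNCTION dbo.fn_DefaultRate()
        RETURNS DECIMAL(10, 4)
        AS
        BEGIN
            -- the "complicated calculation" over other tables goes here
            RETURN (SELECT AVG(Rate) FROM dbo.Rates);
        END
        GO

        ALTER TABLE dbo.Orders
            ADD CONSTRAINT DF_Orders_Rate
            DEFAULT (dbo.fn_DefaultRate()) FOR Rate;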

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack, and my current setup gives me the following error: 502 Bad Gateway. Here are the configurations I've created/updated so far; can someone take a look and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png), though I did see the error from the title ("unable to read what child say: Bad file descriptor") in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data (the pool lives in /etc/php-fpm.d/www-data.conf), and the folders referenced below exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel;
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non-responding client; this frees up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web server config
        server {
            ## Server info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / { try_files $uri $uri/ @handler; expires 30d; }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
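
    For comparison, a minimal pool definition matching the socket the vhost passes to might look like this (a sketch; values assumed). A mismatch between the pool's listen path/owner/permissions and nginx's fastcgi_pass socket is a common cause of a 502 together with the "unable to read what child say: Bad file descriptor" log line:

        ; /etc/php-fpm.d/www-data.conf (sketch)
        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock   ; must match the fastcgi_pass upstream
        listen.owner = www-data                    ; nginx worker user needs rw access
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 4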

    Read the article

  • How to retrieve MySQL records as INSERT statements.

    - by Aglystas
    I'm trying to come up with the best method of synchronizing particular rows of two different database tables. For example, there are two product tables in different databases, as such:

        Origin database:
        product { merchant_id, product_id, ... additional fields }

        Destination database:
        product { merchant_id, product_id, ... additional fields }

    So the database schema is the same for both. I'm looking to select the records with a particular merchant_id, remove all records from the destination table that have that merchant_id, and replace them with the records from the origin database with the same merchant_id. My first thought was using mysqldump, parsing out the CREATE TABLE statements, and only running the INSERT statements, but that seems like a pain. So I was wondering if there is a better technique. I would think MySQL has some method of creating INSERT statements as output from a SELECT, so you can define how to insert specific record information into a new database. Any help would be appreciated, thank you much.
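
    mysqldump can do the parsing for you: --no-create-info suppresses the CREATE TABLE statements, and --where restricts which rows are dumped. A sketch with assumed names:

        # Dump only merchant 42's rows, as INSERT statements with no DDL:
        mysqldump --no-create-info --where="merchant_id=42" origin_db product > product_42.sql

        # On the destination: clear that merchant's rows, then replay the dump:
        mysql destination_db -e "DELETE FROM product WHERE merchant_id=42"
        mysql destination_db < product_42.sql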

    Read the article

  • Anyone have a good solution for scraping the HTML source of a page with content (in this case, HTML tables) generated with Javascript?

    - by phpwns
    Anyone have a good solution for scraping the HTML source of a page whose content (in this case, HTML tables) is generated with Javascript? An embarrassingly simple, though workable, solution using Crowbar:

        <?php
        function get_html($url) // $url must be urlencode(d)
        {
            $context = stream_context_create(array(
                'http' => array('timeout' => 120) // HTTP timeout in seconds
            ));
            $html = substr(file_get_contents('http://127.0.0.1:10000/?url=' . $url
                . '&delay=3000&view=browser', 0, $context), 730, -32);
            // substr removes the HTML added by the Crowbar web service,
            // returning only the $url HTML
            return $html;
        }
        ?>

    The advantage of using Crowbar is that the tables will be rendered (and accessible) thanks to the headless Mozilla-based browser. The problem, of course, is being dependent on an external web service, especially given that SIMILE seems to undergo regular server maintenance. :( A pure PHP solution would be nice, but any functional (and reliable) alternative would be great.
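
    Since PHP alone cannot execute the page's Javascript, some flavor of browser has to stay in the loop; a slightly hardened sketch of the same Crowbar call that at least detects failure instead of substr-ing a false return:

        <?php
        function get_html($url) // $url must already be urlencode(d)
        {
            $context = stream_context_create(array(
                'http' => array('timeout' => 120) // HTTP timeout in seconds
            ));
            $raw = @file_get_contents(
                'http://127.0.0.1:10000/?url=' . $url . '&delay=3000&view=browser',
                false, $context);
            if ($raw === false) {
                return false; // Crowbar not running, or the fetch timed out
            }
            return substr($raw, 730, -32); // strip Crowbar's wrapper markup
        }
        ?>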

    Read the article

  • Looking for best practise for writing a serial device communication app in C#

    - by cdotlister
    I am pretty new to serial comms, but would like advice on how best to achieve a robust application which speaks to and listens to a serial device. I have managed to make use of System.IO.SerialPort, and successfully connected to, sent data to, and received data from my device. The way things work is this: my application connects to the COM port and opens it. I then connect my device to the COM port; it detects a connection to the PC and sends a bit of text. It's really just copyright info, as well as the firmware version. I don't do anything with that, except display it in my 'activity' window. The device then waits. I can then query information by sending a command such as QUERY PARAMETER1, and it replies with something like 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', which I then process. I can update a value by sending SET PARAMETER1 12345, and it will reply with 'QUERY PARAMETER1\r\n\r\n12345\r\n\r\n'. All pretty basic. So, what I have done is created a communication class. This class runs in its own thread, sends data back to the main form, and also lets me send messages to it. Sending data is easy; receiving is trickier. I have employed the DataReceived event, and whenever data comes in, I echo it to my screen. My problem is this: when I send a command, my handling feels very dodgy. Let's say I am sending QUERY PARAMETER1. I send the command to the device, put 'PARAMETER1' into a global variable, and do a Thread.Sleep(100). In DataReceived, I have a bit of logic that checks whether the incoming data CONTAINS the value in the global variable. As the reply may be 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', it sees that it contains my parameter, parses the string, and puts the value I am looking for into another global variable. My sending method, having slept for 100 ms, wakes and checks the returned global variable; if it has data, I'm happy and I process it. The problem is, if the sleep is too short, it fails, and it feels flaky: putting stuff into variables and then waiting. The other option is to use ReadLine instead, but that blocks. So I could remove the DataReceived handler and instead just send the data, then call ReadLine(); that may give me better results. There's no time, except when we connect initially, that data comes from the device without me requesting it. So maybe ReadLine will be simpler and safer? Is this what is known as a 'blocking read'? Also, can I set a timeout? Hopefully someone can guide me.
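
    Yes, that is a blocking read, and SerialPort supports exactly the timeout being asked about. A sketch of the synchronous request/response pattern (the reply framing is taken from the example above): each query becomes one call, with no Sleep and no shared globals, and a TimeoutException signals a missing reply. Since the device only speaks unsolicited at connect time, reading that banner once up front and then using blocking reads for everything else keeps the protocol deterministic.

        private static string Query(SerialPort port, string command)
        {
            port.NewLine = "\r\n";
            port.ReadTimeout = 2000;   // ms; ReadLine throws TimeoutException if no reply
            port.DiscardInBuffer();    // drop any stale, unsolicited output
            port.WriteLine(command);   // e.g. "QUERY PARAMETER1"
            port.ReadLine();           // the device echoes the command...
            port.ReadLine();           // ...then sends an empty line...
            return port.ReadLine();    // ...then the value itself
        }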

    Read the article

  • Rails with DB2 and multiple schemas

    - by GNUMatrix
    I have a 'legacy' DB2 database that has many other applications and users, and I'm trying to experiment with a Rails app against it. I got everything working great with the ibm_db driver. The problem is that I have some tables like schema1.products and schema1.sales, and other tables like schema2.employees and schema2.payroll. In the ibm_db adapter connection, I specify a schema, like schema1 or schema2, and I can work within that one schema, but I need to be able to easily (and transparently) reference both schemas, basically interchangeably. I don't want to break the other apps, and the SQL I would normally write against DB2 has none of these restrictions (schemas can be mixed in SQL against DB2 without any trouble at all). I would like to just specify table names such as "schema1.products" and be done with it, but that doesn't seem to jibe with the "Rails way" of going about it. Suggestions?
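
    One low-ceremony approach, sketched here with Rails 2.x-era syntax: point each model at its schema-qualified table explicitly, which leaves the other applications untouched. Associations and finders then work as usual; only the table lookup changes.

        # A sketch: each model names its schema-qualified table.
        class Product < ActiveRecord::Base
          set_table_name "schema1.products"
        end

        class Employee < ActiveRecord::Base
          set_table_name "schema2.employees"
        end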

    Read the article
