Search Results

Search found 61944 results on 2478 pages for 'text database'.

Page 459/2478

  • Rails: Oracle constraint violation

    - by justinbach
    I'm doing maintenance work on a Rails site that I inherited; it's driven by an Oracle database, and I've got access to both development and production installations of the site (each with its own Oracle DB). I'm running into an Oracle error when trying to insert data on the production site, but not the dev site: ActiveRecord::StatementInvalid (OCIError: ORA-00001: unique constraint (DATABASE_NAME.PK_REGISTRATION_OWNERSHIP) violated: INSERT INTO registration_ownerships (updated_at, company_ownership_id, created_by, updated_by, registration_id, created_at) VALUES ('2006-05-04 16:30:47', 3, NULL, NULL, 2920, '2006-05-04 16:30:47')): /usr/local/lib/ruby/gems/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:221:in `execute' app/controllers/vendors_controller.rb:94:in `create' As far as I can tell (I'm using Navicat as an Oracle client), the DB schema for the dev site is identical to that of the live site. I'm not an Oracle expert; can anyone shed light on why I'd be getting the error in one installation and not the other? Incidentally, both dev and production registration_ownerships tables are populated with lots of data, including duplicate entries for country_ownership_id (driven by index PK_REGISTRATION_OWNERSHIP). Please let me know if you need more information to troubleshoot. I'm sorry I haven't given more already, but I just wasn't sure which details would be helpful.
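    Since the schemas appear identical, one thing worth checking is which columns the violated constraint actually covers in each environment, and whether the key values the INSERT generates already exist in production but not in dev. A hedged diagnostic sketch using Oracle's data dictionary views (only the constraint name is taken from the error above):

    ```sql
    -- Which columns does PK_REGISTRATION_OWNERSHIP cover? Run against both dev and production.
    SELECT cols.table_name, cols.column_name, cols.position
    FROM all_cons_columns cols
    JOIN all_constraints cons
      ON cons.owner = cols.owner
     AND cons.constraint_name = cols.constraint_name
    WHERE cons.constraint_name = 'PK_REGISTRATION_OWNERSHIP'
    ORDER BY cols.position;
    ```

    If the key turns out to be populated by a sequence and trigger, a sequence that has fallen behind the existing keys in production would explain why only that environment collides.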

    Read the article

  • Best way to store list of numbers and to retrieve them

    - by bingoNumbers
    Hi. What is the best way to store a list of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store in a database a number of rows, where each row contains 5-10 numbers ranging from 0 to 90. I will store a large number of those rows. What I'd like to be able to do is retrieve the rows that have at least X numbers in common with a newly generated row. Example: [3,4,33,67,85,99] [55,56,77,89,98,99] [3,4,23,47,85,91] Those are in the DB. I will generate this: [1,2,11,45,47,88] and now I want to get the rows that have at least 1 number in common with this one. The easiest (and dumbest?) way is to make 6 selects and check for similar results. I thought of storing the numbers as a large binary string like 000000000000000000000100000000010010110000000000000000000000000 with 99 positions where each position represents a number from 1 to 99, so if I have a 1 at the 44th position, it means that I have 44 in that row. This method is probably just shifting the hard work to the DB, and it's again not very smart. Any suggestions?
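    For comparison, a normalized layout turns the "at least X in common" check into a single GROUP BY query instead of multiple selects or a bit string; a minimal MySQL-flavored sketch (table and column names are made up):

    ```sql
    -- One row per draw, one row per (draw, number) pair.
    CREATE TABLE draws (
      draw_id INT NOT NULL AUTO_INCREMENT,
      PRIMARY KEY (draw_id)
    );

    CREATE TABLE draw_numbers (
      draw_id INT NOT NULL,
      num     TINYINT NOT NULL,
      PRIMARY KEY (draw_id, num)
    );

    -- Draws sharing at least 1 number with the newly generated row [1,2,11,45,47,88]:
    SELECT dn.draw_id, COUNT(*) AS matches
    FROM draw_numbers dn
    WHERE dn.num IN (1, 2, 11, 45, 47, 88)
    GROUP BY dn.draw_id
    HAVING COUNT(*) >= 1;
    ```

    Raising the HAVING threshold gives "at least X in common", and the composite primary key doubles as the index that keeps the IN lookup cheap.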

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users in my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information stored for each is quite different, and they are managed by different subsystems. They do, however, interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inherit from that table (either in the single table, or as two separate tables with a 1-to-1 PK to AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or a system account that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that? Is there a best practice for this scenario? Is this an appropriate way of approaching the problem - or would you not bother with the common table and just rely on joins (to the two separate, unrelated user / token tables) later to determine which user type matches which audit records?
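    A sketch of the shared-identity idea described above (all names are assumptions, shown as the two-table variant):

    ```sql
    -- One row per auditable identity, with a discriminator column for quick human/system checks.
    CREATE TABLE AuditableIdentity (
      identity_id   INT NOT NULL PRIMARY KEY,
      identity_type CHAR(1) NOT NULL  -- 'H' = human user, 'T' = token/OAuth account
    );

    CREATE TABLE HumanUser (
      identity_id   INT NOT NULL PRIMARY KEY,
      username      VARCHAR(100) NOT NULL,
      password_hash VARCHAR(255) NOT NULL,
      FOREIGN KEY (identity_id) REFERENCES AuditableIdentity (identity_id)
    );

    CREATE TABLE TokenAccount (
      identity_id INT NOT NULL PRIMARY KEY,
      token       VARCHAR(255) NOT NULL,
      FOREIGN KEY (identity_id) REFERENCES AuditableIdentity (identity_id)
    );
    ```

    CreatedBy / ModifiedBy columns can then reference AuditableIdentity.identity_id, and identity_type answers "human or system?" without joining to either subtype table.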

    Read the article

  • How to combine a Distance and Keyword SQL query?

    - by Jason
    Hi Folks, I have tables in my database called "points" and "category". A user will input info into both a location input and a keyword input text box. Then I want to find points in my table where the keyword matches either the "title" field in the points table or the "category", but which are within a certain distance from the user's location. I want to order the results by distance. Here are the 2 queries, which both work independently: $mysql = "SELECT *, ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) ) * cos( radians( longi ) - radians('$search_lng') ) + sin( radians('$search_lat') ) * sin( radians( lat ) ) ) ) AS distance FROM points HAVING distance < '$radius'"; $mysql2 = "SELECT * FROM `points` LEFT JOIN category USING ( category_id ) WHERE (point_title LIKE '%$esc_catsearch%' OR category.title LIKE '%$esc_catsearch%')"; Here is what I tried: $sql_search = sprintf("SELECT *,point_id FROM points WHERE point_title LIKE '%%%s%%' UNION SELECT *, ( 3959 * acos( cos( radians('%s') ) * cos( radians( lat ) ) * cos( radians( longi ) - radians('%s') ) + sin( radians('%s') ) * sin( radians( lat ) ) ) ) AS distance FROM points HAVING distance < '%s' ORDER BY distance LIMIT %d , %d", $esc_catsearch, mysql_real_escape_string($search_lat), mysql_real_escape_string($search_lng), mysql_real_escape_string($search_lat), mysql_real_escape_string($radius), $offset, $rowsPerPage); But it tells me there is no known column "distance". If I remove the "ORDER BY" clause then it works, but I'm still not sure this is giving me the results I want. I also tried the query the other way around, with the distance search first, but that seems to ignore my keyword. Any thoughts would be much appreciated!
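    One common pattern is to keep both filters in a single statement instead of a UNION: the keyword conditions go in WHERE, the computed alias is filtered in HAVING, and ORDER BY can then use the alias. A hedged MySQL-style sketch (the :placeholders stand in for the escaped values already used above):

    ```sql
    SELECT p.*, c.title AS category_title,
           ( 3959 * ACOS( COS(RADIANS(:search_lat)) * COS(RADIANS(p.lat))
                        * COS(RADIANS(p.longi) - RADIANS(:search_lng))
                        + SIN(RADIANS(:search_lat)) * SIN(RADIANS(p.lat)) ) ) AS distance
    FROM points p
    LEFT JOIN category c USING (category_id)
    WHERE (p.point_title LIKE :keyword OR c.title LIKE :keyword)
    HAVING distance < :radius
    ORDER BY distance
    LIMIT 0, 20;
    ```

    The UNION version fails because ORDER BY on a UNION refers to the columns of the first SELECT, which has no distance alias, hence the unknown column error.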

    Read the article

  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables: OutputPackages (master) |PackageID| OutputItems (detail) |ItemID|PackageID| OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto increment. Here's the code I'm using to insert masters/details into these tables: //fill packages table for i := 1 to 10 do begin Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]); if Package.PackageLoaded then begin with tblOutputPackages do begin Insert; FieldByName('PackageID').AsInteger := Package.ourNum; FieldByName('Description').AsString := Package.Title; FieldByName('Total').AsCurrency := Package.Total; Post; end; //fill items table for ii := 1 to 10 do begin Item := TfPackagedItemEdit(Package.fc.Forms[ii]); if Item.Activated then begin with tblOutputItems do begin Append; FieldByName('PackageID').AsInteger := Package.ourNum; FieldByName('Description').AsString := Item.Description; FieldByName('Comment').AsString := Item.Comment; FieldByName('Price').AsCurrency := Item.Price; Post; //this causes the primary key exception end; end; end; end; This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error that says I've got a duplicate primary key 'ItemID'. I'm not sure what's going on - this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly? Update Ok, I removed the primary key constraint in my db, and I see that for some reason, the autoincrement feature of the OutputItems table isn't working like I expected. Here's how the OutputItems table looks after running the above code: ItemID|PackageID| 1 |1 | 1 |1 | 2 |2 | 2 |2 | I still don't see why all the ItemID values aren't unique.... Any ideas?

    Read the article

  • MS Query Analyzer/Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5 and I've always been a bit amazed at the fact that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tools are even worse. Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless. The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years, but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim+sqlcmd and a console... I'm that desperate :) Those of you who work day in and day out with SQL Server and Visual Studio... do you find the tools to be adequate? Have you ever wished they were better, and if you have found something better, could you share please? Thanks!

    Read the article

  • ado.net Concurrency violation

    - by Bicubic
    My first time using ADO.NET. I'm trying to make a database of Users. First I populate my DataSet: adapter.AcceptChangesDuringFill = true; adapter.AcceptChangesDuringUpdate = true; adapter.Fill(dataset); To create a user: User user = new User(); user.datarow = dataset.Users.NewUsersRow(); user.Name = username; user.PasswordHash = GetHash(password); user.Rights = UserRights.None; users.Add(user); dataset.Users.AddUsersRow(user.datarow); adapter.Update(dataset); When a user property is modified: adapter.Update(dataset); Creation by itself is fine. If I take an existing user and make multiple changes, fine. Multiple creations in a row, fine. Creation followed by a property change, and I get this: "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records." Any ideas?

    Read the article

  • Poor DB4O performance - What am I doing wrong?

    - by Jon
    I read Rob Conery's post about Object databases and thought I'd give it a go. I have a class, let's say Book, with 15 properties: mostly string, some int, one DateTime and 2 decimals. I used Rob's code to open the database file etc. to test on a WinForms project, not a Web project. So I did the following: for(int i = 0; i < 1000000; i++) { var Book = new Book(); //Assign variables s.Save(Book); } s.CommitChanges(); This took at least a couple of minutes. I then tried to retrieve all data to test speed: var spp = s.All<Passports>(); This one line was quick; however, then doing sppp.Count(), as well as a separate test doing a foreach loop and incrementing a counter, took a couple of minutes. This seems quite slow to me. Am I doing something wrong or am I expecting too much?

    Read the article

  • [Qt] How to get rid of OCI.dll dependency when compiling static

    - by STL
    Hi, My application accesses an Oracle database through Qt's QSqlDatabase class. I'm compiling Qt statically for the release build, but I can't seem to get rid of the OCI.dll dependency. I'm trying to link against oci.lib (as available in Oracle's Instant Client with SDK). Here's my configure line: configure -qt-libjpeg -qt-zlib -qt-libpng -nomake examples -nomake demos -no-exceptions -no-stl -no-rtti -no-qt3support -no-scripttools -no-openssl -no-opengl -no-phonon -no-style-motif -no-style-cde -no-style-cleanlooks -no-style-plastique -static -release -opensource -plugin-sql-oci -plugin-sql-sqlite -platform win32-msvc2005 I link against oci.h and oci.lib in the SDK's folder by using: set INCLUDE=C:\oracle\instantclient\sdk\include;%INCLUDE% set LIB=C:\oracle\instantclient\sdk\lib\msvc;%LIB% Then, once Qt is compiled, I use the following lines in my *.pro file: QT += sql CONFIG += static LIBS += C:\oracle\instantclient\sdk\lib\msvc\oci.lib QTPLUGIN += qsqloci Then, in my main.cpp, I add the following commands to statically compile the OCI plugin into the application: #include <QtPlugin> Q_IMPORT_PLUGIN(qsqloci) After compiling the project, I test it on my workstation and it works (as I have Oracle Instant Client installed). When I try it on another workstation, I always get the message: This application has failed to start because OCI.dll was not found. Re-installing this application may fix this problem. I don't understand why I still need OCI.dll, as my statically linked application is supposed to link to oci.lib instead. Are there any Qt people here who might have a solution for me? Thanks a lot! STL

    Read the article

  • Temporary users table or legitimate users table?

    - by John
    I have a freelance web application that lets users register for events. In my database, I have a t_events_applicants table with the column t_events_applicants.user_id, with a foreign key constraint linked to the t_users.user_id column. So this means only users who have registered with my web application can register for my web application's events. My client would now like to allow non-registered users - users who do not have an entry in my t_user table - to register for events. These non-registered users only need to provide their name and email address to register for events. Should I create a t_temporary_user table with columns name and email and then remove the t_events_applicants.user_id FK constraint? Or should I add non-registered users to the t_user table and then add a column called t_user.type where type can be 'registered' or 'non-registered'? How do I decide which approach to go with? A lot of the time, I hesitate between the two approaches. I ask myself, "What if at a later time, a temporary user is allowed to become a fully registered user? Then maybe I should have only a t_user table. But then I also don't feel good about storing a lot of temporary users in t_user."
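    A sketch of the single-table option described above (column names beyond those already mentioned are assumptions, and it presumes the remaining t_user columns are nullable or have defaults):

    ```sql
    -- Guests live in t_user too, flagged by a type column; credentials stay NULL for them.
    ALTER TABLE t_user
      ADD COLUMN user_type VARCHAR(20) NOT NULL DEFAULT 'registered';  -- 'registered' or 'non-registered'

    -- Registering a guest for an event needs only name and email:
    INSERT INTO t_user (name, email, user_type)
    VALUES ('Jane Doe', 'jane@example.com', 'non-registered');

    -- The existing FK from t_events_applicants.user_id keeps working unchanged, and upgrading
    -- a guest later is a one-row update:
    UPDATE t_user SET user_type = 'registered' WHERE user_id = 123;
    ```

    The t_temporary_user variant avoids cluttering t_user, but forces either dropping the FK or maintaining two mutually exclusive FK columns on t_events_applicants.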

    Read the article

  • ob_start() is partially capturing data

    - by AAA
    I am using the following code: PHP: // Generate Guid function NewGuid() { $s = strtoupper(uniqid(rand(),true)); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); $alphabet = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ'; function base_encode($num, $alphabet) { $base_count = strlen($alphabet); $encoded = ''; while ($num >= $base_count) { $div = $num/$base_count; $mod = ($num-($base_count*intval($div))); $encoded = $alphabet[$mod] . $encoded; $num = intval($div); } if ($num) $encoded = $alphabet[$num] . $encoded; return $encoded; } function base_decode($num, $alphabet) { $decoded = 0; $multi = 1; while (strlen($num) > 0) { $digit = $num[strlen($num)-1]; $decoded += $multi * strpos($alphabet, $digit); $multi = $multi * strlen($alphabet); $num = substr($num, 0, -1); } return $decoded; } ob_start(); echo base_encode($Guid, $alphabet); //should output: bUKpk $theid = ob_get_contents(); ob_get_clean(); The problem: When I echo $theid, it shows the complete value, but as it is being inserted into the database, only the first character in the sequence gets inserted. For example, for the entry buKPK, only 'b' is inserted, not the rest.

    Read the article

  • Schema-less design guidelines for Google App Engine Datastore and other NoSQL DBs

    - by jamesaharvey
    Coming from a relational database background, as I'm sure many others are, I'm looking for some solid guidelines for setting up / designing my datastore on Google App Engine. Are there any good rules of thumb people have for setting up these kinds of schema-less data stores? I understand some of the basics such as denormalizing since you can't do joins, but I was wondering what other recommendations people had. The particular simple example I am working with concerns storing searches and their results. For example I have the following 2 models defined in my Google App Engine app using Python: class Search(db.Model): who = db.StringProperty() what = db.StringProperty() where = db.StringProperty() createDate = db.DateTimeProperty(auto_now_add=True) class SearchResult(db.Model): title = db.StringProperty() content = db.StringProperty() who = db.StringProperty() what = db.StringProperty() where = db.StringProperty() createDate = db.DateTimeProperty(auto_now_add=True) I'm duplicating a bunch of properties between the models for the sake of denormalization since I can't join Search and SearchResult together. Does this make sense? Or should I store a search ID in the SearchResult model and effectively 'join' the 2 models in code when I retrieve them from the datastore? Please keep in mind that this is a simple example. Both models will have a lot more properties and the way I'm approaching this right now, I would put any property I put in the Search model in the SearchResult model as well.

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me. When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem). It's hard to write tests for file system actions; I have a mock filesystem class that logs actions like move, delete etc., without performing them, which more or less does the job, but I don't have 100% confidence in the tests. I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy... Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue. What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Fastest way to modify a decimal-keyed table in MySQL?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table to something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast down to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely: UPDATE table SET `key`='3.00' WHERE `key`='2.50'; Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure anything out that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used for the table.
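    One way to sidestep the collision problem entirely is to renumber inside MySQL into a new column and swap it in afterwards; a sketch using a user variable (assuming the table is really named `table` as in the example, and that nothing else references the old key):

    ```sql
    -- Add a scratch column and fill it with 1, 2, 3, ... in current key order.
    ALTER TABLE `table` ADD COLUMN new_key INT;

    SET @n := 0;
    UPDATE `table`
    SET new_key = (@n := @n + 1)
    ORDER BY `key`;

    -- Once the numbering is verified, promote new_key to be the primary key.
    ALTER TABLE `table` DROP PRIMARY KEY;
    ALTER TABLE `table` DROP COLUMN `key`;
    ALTER TABLE `table` CHANGE new_key `key` INT NOT NULL;
    ALTER TABLE `table` ADD PRIMARY KEY (`key`);
    ```

    Because the new values go into a separate column, assigning 3 to 2.60 never collides with the existing 3.00, which is exactly the in-place trap mentioned above.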

    Read the article

  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables", which are database tables containing a list of controlled vocabulary. Let's use an example: Suppose you have a table User containing the fields: user_id : primary key first_name last_name user_type_id : foreign key to the UserType table and another table UserType with just two fields: user_type_id : primary key name : the name/value of a particular type of user. For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)... My question is: what is the canonical term for a table like UserType, that only contains a list of (distinct) words. I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help. Current state of thought: For now I feel Lookup Tables is a good term. It is also used with the same meaning in these posts: http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html Lookup Tables Best Practices: DB Tables... or Enumerations The only problem is that lookup table is also sometimes used to name a junction table.

    Read the article

  • Select all points in a matrix within 30m of another point

    - by pinnacler
    So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a Cartesian XY plane. We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping data straight and efficient is getting difficult using just Matlab. Is there a technique for subsetting an array or matrix of points? Say I have 1000 trees stored over 1 km (1000 m); is there a way to, say, select only points within a 30 m radius of my current location and work only with those? I would just use a GIS, but I'm doing this in Matlab and I'm unaware of any GIS plugins for Matlab. I forgot to mention, this code is going online, meaning it's going on a robot for real-time execution. I don't know if, as the map grows to several miles, using a different data structure will help, or if calculating every distance to a random point is what a spatial database is going to do anyway. I'm thinking of mirroring two arrays, one sorted by X and the other by Y, then bubble sorting to determine the 30 m range in each. I do the same for both arrays, X and Y, and then have a third cross-link table that will select the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel. Cartesian Plane GIS

    Read the article

  • Fastest way to convert a list of doubles to a unique list of integers?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table to something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast down to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely: UPDATE table SET `key`='3.00' WHERE `key`='2.50'; Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure anything out that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used for the table.

    Read the article

  • Modeling complex hierarchies

    - by jdn
    To gain some experience, I am trying to make an expert system that can answer queries about the animal kingdom. However, I have run into a problem modeling the domain. I originally considered the animal kingdom hierarchy to be drawn like -animal -bird -carnivore -hawk -herbivore -bluejay -mammals -carnivores -herbivores This I figured would allow me to make queries easily like "give me all birds", but would be much more expensive to say "give me all carnivores", so I rewrote the hierarchy to look like: -animal -carnivore -birds -hawk -mammals -xyz -herbivores -birds -bluejay -mammals But now it will be much slower to query "give me all birds." This is of course a simple example, but it got me thinking that I don't really know how to model complex relationships that are not so strictly hierarchical in nature in the context of writing an expert system to answer queries as stated above. A directed, cyclic graph seems like it could mathematically solve the problem, but storing this in a relational database and maintaining it (updates) would seem like a nightmare to me. I would like to know how people typically model such things. Explanations or pointers to resources to read further would be acceptable and appreciated.
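    A relational sketch of the non-hierarchical alternative (all table and column names are assumptions): treat each classification - bird, mammal, carnivore, herbivore - as a tag attached to an animal through a junction table, so no single hierarchy has to be picked as primary.

    ```sql
    CREATE TABLE animal (
      animal_id INT NOT NULL PRIMARY KEY,
      name      VARCHAR(50) NOT NULL          -- 'hawk', 'bluejay', ...
    );

    CREATE TABLE classification (
      classification_id INT NOT NULL PRIMARY KEY,
      name              VARCHAR(50) NOT NULL  -- 'bird', 'mammal', 'carnivore', 'herbivore', ...
    );

    CREATE TABLE animal_classification (
      animal_id         INT NOT NULL,
      classification_id INT NOT NULL,
      PRIMARY KEY (animal_id, classification_id),
      FOREIGN KEY (animal_id)         REFERENCES animal (animal_id),
      FOREIGN KEY (classification_id) REFERENCES classification (classification_id)
    );

    -- "Give me all birds" and "give me all carnivores" become the same cheap query:
    SELECT a.name
    FROM animal a
    JOIN animal_classification ac ON ac.animal_id = a.animal_id
    JOIN classification c ON c.classification_id = ac.classification_id
    WHERE c.name = 'carnivore';
    ```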

    Read the article

  • Primary key/foreign Key naming convention

    - by Jeremy
    In our dev group we have a raging debate regarding the naming convention for Primary and Foreign Keys. There are basically two schools of thought in our group: 1) Primary Table (Employee) Primary Key is called ID Foreign table (Event) Foreign key is called EmployeeID 2) Primary Table (Employee) Primary Key is called EmployeeID Foreign table (Event) Foreign key is called EmployeeID I prefer not to duplicate the name of the table in any of the columns (so I prefer option 1 above). Conceptually, it is consistent with a lot of the recommended practices in other languages, where you don't use the name of the object in its property names. I think that naming the foreign key EmployeeID (or Employee_ID might be better) tells the reader that it is the ID column of the Employee table. Some others prefer option 2, where you name the primary key prefixed with the table name so that the column name is the same throughout the database. I see that point, but you now cannot visually distinguish a primary key from a foreign key. Also, I think it's redundant to have the table name in the column name, because if you think of the table as an entity and a column as a property or attribute of that entity, you think of it as the ID attribute of the Employee, not the EmployeeID attribute of an employee. I don't go and ask my coworker what his PersonAge or PersonGender is. I ask him what his Age is. So like I said, it's a raging debate and we go on and on and on about it. I'm interested to get some new perspectives.

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works: function getCurrentItem() { $sql = "SELECT currentItemId from settings"; $result = $this->db->query($sql); return $result->get('currentItemId'); } function setCurrentItem($id) { $sql = "UPDATE settings SET currentItemId='$id'"; $this->db->query($sql); } $currentItem = $this->getCurrentItem(); $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem'"; $result = $this->db->query($sql); $items = $result->getAll(); foreach ($items as $i) { //Check if $i has been processed by a different instance of the script, and if so, //leave it untouched. if ($this->getCurrentItem() > $i->id) continue; $this->setCurrentItem($i->id); // Process the item here } But despite all the precautions, most items are being processed more than once. Which makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
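    The read-then-write on the shared currentItemId is a race between workers rather than a lag in MySQL; a common fix is to let each worker claim an item atomically in a single UPDATE. A hedged sketch (the claimed_by column and the :worker_id placeholder are assumed additions):

    ```sql
    -- Claim exactly one pending item; the single UPDATE is atomic, so two workers
    -- can never both claim the same row.
    UPDATE items
    SET status = 'processing', claimed_by = :worker_id
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1;

    -- The worker then processes only what it just claimed.
    SELECT * FROM items
    WHERE status = 'processing' AND claimed_by = :worker_id;
    ```

    With InnoDB, a SELECT ... FOR UPDATE inside a transaction on the settings row is another way to serialize access to the shared pointer.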

    Read the article

  • How Can I Create This Complicated Query?

    - by mTuran
    Hi, I have 3 tables: projects, skills and project_skills. In the projects table I hold a project's general data. In the second table, skills, I hold the skill id and skill name. I also have the project_skills table, which holds the project-skill relationships. Here is the schema of the tables: CREATE TABLE IF NOT EXISTS `project_skills` ( `project_id` int(11) NOT NULL, `skill_id` int(11) NOT NULL, KEY `project_id` (`project_id`), KEY `skill_id` (`skill_id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci; CREATE TABLE IF NOT EXISTS `projects` ( `id` int(11) NOT NULL AUTO_INCREMENT, `employer_id` int(11) NOT NULL, `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL, `project_description` text COLLATE utf8_turkish_ci NOT NULL, `project_budget` int(11) NOT NULL, `project_allowedtime` int(11) NOT NULL, `project_deadline` datetime NOT NULL, `total_bids` int(11) NOT NULL, `average_bid` int(11) NOT NULL, `created` datetime NOT NULL, `active` tinyint(1) NOT NULL, PRIMARY KEY (`id`), KEY `created` (`created`), KEY `employer_id` (`employer_id`), KEY `active` (`active`), FULLTEXT KEY `project_title` (`project_title`,`project_description`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ; CREATE TABLE IF NOT EXISTS `skills` ( `id` int(11) NOT NULL AUTO_INCREMENT, `category` int(11) NOT NULL, `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL, `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL, `total_projects` int(11) NOT NULL, PRIMARY KEY (`id`), KEY `seo_name` (`seo_name`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ; I want to select projects with their related skill names. I think I have to use a JOIN but I don't know how to do it. Thanks
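    A sketch of the join described above, using the table and column names from the schema:

    ```sql
    -- One row per (project, skill) pair; projects with no skills still appear.
    SELECT p.id, p.project_title, s.name AS skill_name
    FROM projects p
    LEFT JOIN project_skills ps ON ps.project_id = p.id
    LEFT JOIN skills s ON s.id = ps.skill_id
    ORDER BY p.id;

    -- Or one row per project with the skills collapsed into a list (MySQL-specific GROUP_CONCAT):
    SELECT p.id, p.project_title,
           GROUP_CONCAT(s.name SEPARATOR ', ') AS skills
    FROM projects p
    LEFT JOIN project_skills ps ON ps.project_id = p.id
    LEFT JOIN skills s ON s.id = ps.skill_id
    GROUP BY p.id, p.project_title;
    ```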

    Read the article

  • insert data to table based on another table C#

    - by user1017315
    I wrote code which takes some values from one table and inserts them into another table (not just these values, but also values based on them), and I get this error: System.Data.OleDb.OleDbException (0x80040E10): a value wasn't given for one or more of the required parameters. Here's the code. I don't know what I've missed. string selectedItem = comboBox1.SelectedItem.ToString(); Codons cdn = new Codons(selectedItem); string codon1; int index; if (this.i != this.counter) { //take from the DataBase the matching codonsCodon1 to codonsFullName codon1 = cdn.GetCodon1(); //take the serialnumber of the last protein string connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;" + "Data Source=C:\\Projects_2012\\Project_Noam\\Access\\myProject.accdb"; OleDbConnection conn = new OleDbConnection(connectionString); conn.Open(); string last= "SELECT proInfoSerialNum FROM tblProInfo WHERE proInfoScienceName = "+this.name ; OleDbCommand getSerial = new OleDbCommand(last, conn); OleDbDataReader dr = getSerial.ExecuteReader(); dr.Read(); index = dr.GetInt32(0); //add the amino acid to tblOrderAA using (OleDbConnection connection = new OleDbConnection(connectionString)) { string insertCommand = "INSERT INTO tblOrderAA(orderAASerialPro, orderAACodon1) " + " values (?, ?)"; using (OleDbCommand command = new OleDbCommand(insertCommand, connection)) { connection.Open(); command.Parameters.AddWithValue("orderAASerialPro", index); command.Parameters.AddWithValue("orderAACodon1", codon1); command.ExecuteNonQuery(); } } } EDIT: I put a message box after the line index = dr.GetInt32(0); to see where the problem is, and I get the error before that. I don't see the message box.

    Read the article

  • Synchronizing DataGridView (DataTable) with the DB

    - by Ezekiel Rage
    Hi! I have the following situation: there is a table in the DB that is queried when the form loads, and the retrieved data is filled into a DataGridView via the DataTable underneath. After the data is loaded, the user is free to modify the data (add / delete rows, modify entries). The form has 2 buttons: Apply and Refresh. The first one sends the changes to the database, the second one re-reads the data from the DB and erases any changes that have been made by the user. My question is: is this the best way to keep a DataGridView synchronized with the DB (from a user's point of view)? For now these are the downsides: the user must keep track of what he is doing and must press the button every so often; the modifications are lost if the form is closed / the app crashes / ... I tried sending the changes to the DB on the CellEndEdit event, but then the user also needs some Undo/Redo functionality and that is ... well ... a different story. So, any suggestions?

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application which has been performing the imports, due to the missing manageddts DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (5 min import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, and preferably with minimal SQL features installed. Options I've considered: Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (Ugh!) Revisiting ADO.NET to see if the original implementation was poorly written. Updated SSIS packages. What other technologies should I be considering for this job?

    Read the article

  • Which SQL statements to execute with intersection / junction tables

    - by user1455103
    Here is a simplified database layout. One condo can hold multiple properties (flats, garage boxes, etc.) - a 1->n relationship. One owner can have multiple properties in the same condo and properties can have more than one owner (m->n, resolved into 1->n relationships via the junction table). One condo can have multiple owners - 1->n. Some additional clarification: An owner is a member of a condo. A condo is made of properties belonging to owners, BUT an owner is not linked to a property directly (there can be no relation between a property and an owner for a certain time, BUT there will ALWAYS be a relation between an owner and a condo). Reason for this: the agent managing the condo will first create a list of owners and a list of properties. It is only later that he will "link" each property to one or multiple owners (or inversely). I'm quite new to SQL. What SQL statements should I execute to: SELECT, for a specific condo (WHERE condition), the properties and their respective owners (all properties should be listed even if owners are null) SELECT, for a specific condo (WHERE condition), the owners along with their properties (all owners should be listed even if properties are null) UPDATE / DELETE existing owners (I'm uncertain about how to handle the operation for the junction table. Should I first check if there is an entry in the junction table or not?) UPDATE / DELETE existing properties (same concern) INSERT new owners (should I use two different SQL statements depending on whether the owner should be linked to a property or not - IF condition?) INSERT new properties (same question as above) Could you be as clear and generic as possible so that it can be reused? :-)
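    A generic sketch against an assumed layout - condo(condo_id), property(property_id, condo_id), owner(owner_id, condo_id) and a junction table property_owner(property_id, owner_id) - with :placeholders standing in for the values you would bind:

    ```sql
    -- 1) Properties of one condo with their owners (properties with no owner still listed):
    SELECT p.property_id, o.owner_id
    FROM property p
    LEFT JOIN property_owner po ON po.property_id = p.property_id
    LEFT JOIN owner o ON o.owner_id = po.owner_id
    WHERE p.condo_id = :condo_id;

    -- 2) Owners of one condo with their properties (owners with no property still listed):
    SELECT o.owner_id, p.property_id
    FROM owner o
    LEFT JOIN property_owner po ON po.owner_id = o.owner_id
    LEFT JOIN property p ON p.property_id = po.property_id
    WHERE o.condo_id = :condo_id;

    -- 3) Deleting an owner: remove junction rows first (no need to check whether any exist;
    --    deleting zero rows is fine), then the owner itself. ON DELETE CASCADE on the junction
    --    table's FKs would make the first statement unnecessary.
    DELETE FROM property_owner WHERE owner_id = :owner_id;
    DELETE FROM owner WHERE owner_id = :owner_id;

    -- 4) Inserting an owner is always a plain INSERT; linking to a property, now or later,
    --    is just an extra row in the junction table (no IF needed):
    INSERT INTO owner (condo_id) VALUES (:condo_id);
    INSERT INTO property_owner (property_id, owner_id) VALUES (:property_id, :owner_id);
    ```

    UPDATEs to an owner's or property's own columns never touch the junction table; only re-linking does.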

    Read the article
