Search Results

Search found 16059 results on 643 pages for 'global temp tables'.

Page 573 of 643

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because the data needs to be modifiable: new routes added, new distances calculated, continuous comparisons. At least that was my thinking, but my team member wrote something I find very hard to follow. His pseudocode:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList array    // CPU O(n)
        for (j = 1 to n)        // O(n log(m)), he says
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His idea is to transform the lists into tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. The exception to the last point is with a finite structure: O(m log(n)), where n is the number of vertices and m is the number of neighbouring vertices. My questions about his ideas: Why waste resources constantly transforming modified lists into a table? Speed is the only point where I agree to some extent, but I cannot understand the identical upper limit n on each for-loop; perhaps he assumed the structure is circular? And why does the code take O(m log(n)) time as a finite structure? The term "finite" may be the wrong one; "explicit", perhaps?

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++, and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" ) {
                if ( argc < 1 ) { Error( 'blah' ); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" ) {
                if ( argc < 1 ) { Error( 'blah' ); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" ) {
                exit( 0 );
            }
        }

    Now imagine each else-if being ten times more complicated, and 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how about some sort of libraries that contain all the functions and, when imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I want to achieve is, for example, an (external?) string library for my language that can be imported from within a program in that language:

        import string

        myString = "abcde"
        print string.at( myString, 2 )  # output: c

    My problems: How do I separate the function libraries from the core interpreter and load them? How do I get all their functions into a list and add it to the symbol table when needed? What I was thinking of doing: at interpreter startup, since all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ), which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. The disadvantage that springs to mind: all libraries live directly in the interpreter core, so its size grows and it will definitely slow down.

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel that making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered subqueries, because they felt easier and quicker for me to write and maintain. Commonly, I'll write subqueries that may contain one or more subqueries against related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    More rarely, I'll use subqueries after the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name
             FROM organizations
             WHERE (SELECT organization
                    FROM locations
                    WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As more of a blanket question, what are the advantages and disadvantages of subqueries versus a JOIN? Is one way more correct or accepted than the other?
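
    For comparison, here is a sketch of the first query rewritten with a single JOIN; it assumes each records.user_id matches exactly one users row, so no result rows are duplicated:

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;

    This reads the users table once per joined row instead of running two correlated lookups per record, and it leaves the optimizer free to choose the join strategy.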

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map-reduce might be something I can make use of. But can they really be used to make it this fast, or do I need more advanced solutions? I have also been thinking about cutting the files up into smaller files, so that 1 file with 100,000,000 lines becomes 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines, or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever. So I think, maybe, it is just too big to be used. But I don't know if you can write code that will scan it without opening it. Speed is the most important factor in this, and I need to know if it can be done as fast as I need, or whether I have to store my data in another way, for example in a database like MySQL or something. Thank you in advance to anybody who can give some good feedback.
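
    If the data does end up in a database, as the asker suggests at the end, one set-based way to find the values common to all ten files is to load every (file, number) pair into a single table and group on the number. A sketch with made-up table and column names; whether it meets the two-second budget depends entirely on indexing and hardware:

        -- staging table: one row per (source file, number) pair
        CREATE TABLE numbers (
            file_no INT NOT NULL,   -- 1..10: which input file the value came from
            value   INT NOT NULL    -- the number itself (up to 9 digits fits in INT)
        );

        -- values that appear in every one of the 10 files
        SELECT value
        FROM numbers
        GROUP BY value
        HAVING COUNT(DISTINCT file_no) = 10;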

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.). The problem has the following qualities and resiliences:

        - The problem persists no matter what the target table is.
        - RAM usage is lowish (45%); there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
        - Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high.
        - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
        - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s).
        - The OLE DB destination and the SQL Server destination both exhibit this problem.

    To sum it up, I would expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • How to create AROs using DbAcl from the console

    - by Praveen kalal
    Hi, I am using CakePHP for my project, but I'm having trouble creating ACLs from the command prompt. When I run the command cake schema run create DbAcl, it generates three tables in the database. But after putting the following code in users_controller.php and running cake acl view aro, no AROs are created.

        function index() {
            $aro =& $this->Acl->Aro;
            //pr($aro); exit;

            // Here's all of our group info in an array we can iterate through
            $groups = array(
                0 => array( 'alias' => 'admins' ),
                1 => array( 'alias' => 'guests' ),
                2 => array( 'alias' => 'managers' )
            );

            // Iterate and create ARO groups
            foreach ($groups as $data) {
                // Remember to call create() when saving in loops...
                $aro->create();
                // Save data
                $aro->save($data);
            }
        }

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment:

        - ASP.NET web application (source stored in svn)
        - SQL Server database (database schema (tables/sprocs) stored in svn)
        - DB version is synced with the web application assembly version (stored in a table 'CurrentVersion')
        - CI Hudson server that checks out the web app from the repo and runs a custom msbuild file to publish/package the app

    My msbuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked-out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate the collecting of migration scripts (updated sprocs, etc.) to add to the zip package. I figure that by comparing the svn revision of the db that has yet to be deployed to against the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list changed .sql files. Those files would then have to be added to the package by hand. It would be great if this could be automated. Firstly, I imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an msbuild task, either existing or custom? Finally, I'll have to auto-generate a script, to be added to the package, that updates the database version table so as to be in sync with the application.

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows respectively, and MySQL uses "filesort", so the query gets pretty slow. I tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN output:

        id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra
        ---+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+----------------------------------------------------------
         1 | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort
         1 | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id |     1 |
        (2 rows in set, 0.00 sec)

    What kind of index do I need to define to stop MySQL using filesort? Is that even possible when the ORDER BY field is not in the WHERE clause?
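
    One option that can remove the filesort (a sketch, not a guaranteed fix): if posts_tags.created_at holds the same value as the post's created_at, which its schema suggests, then an index on (tag_id, created_at) lets MySQL read the matching rows already in the desired order:

        CREATE INDEX index_posts_tags_on_tag_id_and_created_at
            ON posts_tags (tag_id, created_at);

        SELECT posts.* FROM posts
        INNER JOIN posts_tags ON posts.id = posts_tags.post_id
        WHERE posts_tags.tag_id = 1
        ORDER BY posts_tags.created_at DESC;

    The WHERE clause pins the index prefix, the ORDER BY walks the rest of the index backwards, and each post is then fetched by primary key, so no temporary table or filesort is needed.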

    Read the article

  • Select return dynamic columns

    - by Ascalonian
    I have two tables: Standards and Service Offerings. A Standard can have multiple Service Offerings, and each Standard can have a different number of Service Offerings associated with it. What I need to do is write a view that returns some common data and then lists the service offerings on one line. For example:

        Standard Id | Description | SO #1 | SO #2 | SO #3 | ... | SO #21 | SO Count
        1           | One         | A     | B     | C     | ... | G      | 21
        2           | Two         | A     |       |       | ... |        | 1
        3           | Three       | B     | D     | E     | ... |        | 3

    I have no idea how to write this. The number of SO columns is capped at a specific number (21 in this case), so we cannot exceed it. Any ideas on how to approach this? The place I started from is below; it just returns multiple rows for each Service Offering, when they need to be on one row.

        SELECT *
        FROM SERVICE_OFFERINGS
        WHERE STANDARD_KEY IN (SELECT STANDARD_KEY FROM STANDARDS)
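
    One common approach, where window functions are available, is to number each Standard's offerings with ROW_NUMBER() and pivot them with conditional aggregation. A sketch under assumed column names; the CASE list is simply repeated out to 21:

        SELECT s.standard_key    AS "Standard Id",
               s.description     AS "Description",
               MAX(CASE WHEN so.rn = 1 THEN so.offering END) AS "SO #1",
               MAX(CASE WHEN so.rn = 2 THEN so.offering END) AS "SO #2",
               MAX(CASE WHEN so.rn = 3 THEN so.offering END) AS "SO #3",
               -- ... repeat up to so.rn = 21 ...
               COUNT(so.offering) AS "SO Count"
        FROM standards s
        LEFT JOIN (SELECT standard_key, offering,
                          ROW_NUMBER() OVER (PARTITION BY standard_key
                                             ORDER BY offering) AS rn
                   FROM service_offerings) so
          ON so.standard_key = s.standard_key
        GROUP BY s.standard_key, s.description;

    The hard cap of 21 columns is what makes this workable in a plain view; a truly unbounded list would need dynamic SQL.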

    Read the article

  • UITableView cell appearance update not working - iPhone

    - by Jack Nutkins
    I have this piece of code, which I'm using to set the alpha and accessibility of some of my table's cells depending on values stored in the user defaults:

        - (void)viewDidAppear:(BOOL)animated {
            [self reloadTableData];
        }

        - (void)reloadTableData {
            if ([[userDefaults objectForKey:@"canDeleteReceipts"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }

            if ([[userDefaults objectForKey:@"canDeleteMileages"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }

            if ([[userDefaults objectForKey:@"canDeleteAll"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
                NSLog(@"In Here");
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
        }

    The value stored in userDefaults is '0', so the cell in section 8, row 2 should be greyed out and disabled; however, this doesn't happen and the cell is still selectable with an alpha of 1... The log statement 'In Here' is printed, so the right branch is executing, but the code in it doesn't seem to do anything. Can anyone explain what I've done wrong? Thanks, Jack

    Read the article

  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser for the XML. I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction was: how would I map the XML schema to the SAX parser?

    2) BioPython. I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                              passwd = "", host = "localhost", db = "bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number: 1205, Lock wait timeout exceeded; try restarting transaction. I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?

    Read the article

  • Foreign key relationships in Entity Framework

    - by Anders Svensson
    I'm trying to add an object created from Entity Data Model classes. I have a table called Users, which has turned into a User EDM class, and a table Pages, which has become a Page EDM class. These tables have a foreign-key relationship, so that each page is associated with many users. Now I want to be able to add a page, but I can't work out how to do it. I get a NullReferenceException on Users below. I'm still rather confused by all this, so I'm sure it's a simple error, but I just can't see how to do it. Also, by the way, the compiler requires that I set PageID in the object initializer, even though this field is set to be an automatic ID in the table. Am I doing it right just setting it to 0 and expecting it to be updated automatically in the table when saved, or how should I do that? Any help appreciated! The method in question:

        private Page GetPage(User currentUser)
        {
            string url = _request.ServerVariables["url"].ToLower();
            var userPages = from p in _context.PageSet
                            where p.Users.UserID == currentUser.UserID
                            select p;
            var existingPage = userPages.FirstOrDefault(e => e.Url == url); // Could be combined with above, but hard to read?
            if (existingPage != null)
                return existingPage;

            Page page = new Page()
            {
                Count = 0,
                Url = _request.ServerVariables["url"].ToLower(),
                PageID = 0, // Only initial value, changed later?
            };
            _context.AddToPageSet(page);
            page.Users.UserID = currentUser.UserID; // Here's the problem...
            return page;
        }

    Read the article

  • How to create a single HTML file that opens in Excel with multiple sheets?

    - by Dmitry Nelepov
    I want to know whether it is possible to create a single HTML file which (after changing the extension to .xls) will be parsed into multiple sheets when opened in Excel. A little example: I have a file test.xls with the contents

        <html>
        <body>
        <table>
        <tr>
        <td>1</td>
        <td>2</td>
        <td>3</td>
        <td>=sum(A1:C1)</td>
        </tr>
        </table>
        </body>
        </html>

    When I open this file in Excel, I get one sheet with the calculated sum of cells A1 to C1 (= 6) in D1. I wonder whether it is possible to create one HTML file containing multiple tables that Excel will parse as multiple sheets. (Screenshot: Excel's view of this file.)

    Read the article

  • Why is using a common-lookup table to restrict the status of entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally follow a natural flow, e.g.:

        - Booked In
        - Assigned to Technician
        - Diagnosing Problem
        - Waiting for Client Confirmation
        - Repaired & Ready for Pickup
        - Repaired & Couriered
        - Irreparable & Ready for Pickup
        - Quote Rejected

    Arguably, some of these statuses could be normalised out to tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, e.g.:

        - Pending
        - Completed
        - Shipped
        - Cancelled
        - Refunded

    The status titles and descriptions are in one place for editing, and are easy to scaffold as a drop-down with a foreign key in dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table without rebuilding my code.
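
    For concreteness, a minimal sketch of the pattern being defended, with assumed table and column names:

        CREATE TABLE OrderStatus (
            order_status_id INT PRIMARY KEY,
            title           VARCHAR(50) NOT NULL,   -- e.g. 'Pending', 'Shipped'
            description     VARCHAR(255)
        );

        CREATE TABLE Orders (
            order_id        INT PRIMARY KEY,
            order_status_id INT NOT NULL REFERENCES OrderStatus (order_status_id)
        );

    Adding a status is then an INSERT into OrderStatus rather than a schema change, which is exactly the flexibility the post argues for.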

    Read the article

  • iBatis how to solve a more complex N+1 problem

    - by Alvin
    I have a database similar to the following:

        create table Store (storeId)
        create table Staff (storeId_fk, staff_id, staffName)
        create table Item  (storeId_fk, item_id, itemName)

    The Store table is large, and I have created the following Java beans:

        public class Store {
            List<Staff> myStaff;
            List<Item> myItem;
            ....
        }
        public class Staff { ... }
        public class Item { ... }

    My question is: how can I use iBatis's result map to EFFICIENTLY map from the tables to the Java objects? I tried:

        <resultMap id="storemap" class="my.example.Store">
          <result property="myStaff" resultMap="staffMap"/>
          <result property="myItem" resultMap="itemMap"/>
        </resultMap>

    (other maps omitted), but it's way too slow, since the Store table is VERY VERY large. I tried to follow the example in Clinton's developer guide for the N+1 solution, but I cannot wrap my mind around how to use the "groupBy" attribute for an object with two lists... Any help is appreciated!

    Read the article

  • How does MySQL implement GROUP BY?

    - by user188916
    I read in the MySQL Reference Manual that when GROUP BY can make use of an index, it just does an index scan; otherwise it creates temporary tables and does things like filesort. I also read in another article that a GROUP BY result is sorted by the grouped columns by default, and that if an "ORDER BY NULL" clause is added it won't do the filesort. The difference shows up in the "EXPLAIN ..." output. So my question is: what is the difference between a GROUP BY clause with "ORDER BY NULL" and one without? I tried to use profiling to see what MySQL does in the background, and only got results like these.

    Profile for the GROUP BY clause without ORDER BY NULL (note the extra "Sorting result" step):

        | preparing            | 0.000016 |
        | Creating tmp table   | 0.000048 |
        | executing            | 0.000009 |
        | Copying to tmp table | 0.000109 |
        | Sorting result       | 0.000023 |
        | Sending data         | 0.000027 |

    Profile for the clause with ORDER BY NULL:

        | preparing            | 0.000016 |
        | Creating tmp table   | 0.000052 |
        | executing            | 0.000009 |
        | Copying to tmp table | 0.000114 |
        | Sending data         | 0.000028 |

    So I guess that when ORDER BY NULL is added, MySQL does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, then uses the index to do the GROUP BY operation, and when that completes it just reads the result from the table rows without sorting them. But my original assumption was that MySQL would quicksort the items and then do the GROUP BY, so that the result would come out sorted as well. Any opinion appreciated, thanks.
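
    A minimal pair to reproduce the comparison (a sketch; the orders table and user_id column are assumed):

        -- default: the result is implicitly sorted by the grouping column,
        -- which is where the extra 'Sorting result' step comes from
        SELECT user_id, COUNT(*) FROM orders GROUP BY user_id;

        -- same aggregation with the implicit sort suppressed
        SELECT user_id, COUNT(*) FROM orders GROUP BY user_id ORDER BY NULL;

        -- compare the two plans
        EXPLAIN SELECT user_id, COUNT(*) FROM orders GROUP BY user_id;
        EXPLAIN SELECT user_id, COUNT(*) FROM orders GROUP BY user_id ORDER BY NULL;

    For what it's worth, this implicit GROUP BY sort is MySQL 5.x-era behavior, which is the era this question dates from; MySQL 8.0 removed it, so there GROUP BY no longer guarantees any ordering.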

    Read the article

  • Generic classes in VB.NET

    - by KoolKabin
    Hi guys, I am stuck with a problem involving generic classes: I am confused about how to call a constructor with parameters. My interface:

        Public Interface IDBObject
            Sub [Get](ByRef DataRow As DataRow)
            Property UIN() As Integer
        End Interface

    My child class:

        Public Class User
            Implements IDBObject

            Public Sub [Get](ByRef DataRow As System.Data.DataRow) Implements IDBObject.Get
            End Sub

            Public Property UIN() As Integer Implements IDBObject.UIN
                Get
                End Get
                Set(ByVal value As Integer)
                End Set
            End Property
        End Class

    My next class:

        Public Class Users
            Inherits DBLayer(Of User)
        #Region " Standard Methods "
        #End Region
        End Class

    My DBLayer class:

        Public Class DBLayer(Of DBObject As {New, IDBObject})
            Public Shared Function GetData() As List(Of DBObject)
                Dim QueryString As String = "SELECT * ***;"
                Dim Dataset As DataSet = New DataSet()
                Dim DataList As List(Of DBObject) = New List(Of DBObject)
                Try
                    Dataset = Query(QueryString)
                    For Each DataRow As DataRow In Dataset.Tables(0).Rows
                        DataList.Add(New DBObject(DataRow))   ' <-- error here
                    Next
                Catch ex As Exception
                    DataList = Nothing
                End Try
                Return DataList
            End Function
        End Class

    I get an error at the marked line in DBLayer. What might be the reason, and what can I do to fix it? I even tried to add New(ByVal someval As DataType) to the IDBObject interface for an overloaded constructor, but that also gives an error. Adding Sub New(ByVal DataRow As DataRow) to IDBObject produces: 'Sub New' cannot be declared in an interface. The error produced in DBLayer at the line DataList.Add(New DBObject(DataRow)) is: Arguments cannot be passed to a 'New' used on a type parameter.

    Read the article

  • JOIN (SELECT DISTINCT [..] substitute

    - by FRKT
    Hello, I'd like to find a substitute for using SELECT DISTINCT in a derived table. Let's say I have three tables:

        CREATE TABLE `trades` (
          `tradeID` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `employeeID` int(11) unsigned NOT NULL,
          `corporationID` int(11) unsigned NOT NULL,
          `profit` int(11) NOT NULL,
          KEY `tradeID` (`tradeID`),
          KEY `employeeID` (`employeeID`),
          KEY `corporationID` (`corporationID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

        CREATE TABLE `corporations` (
          `corporationID` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          PRIMARY KEY (`corporationID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

        CREATE TABLE `employees` (
          `employeeID` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          PRIMARY KEY (`employeeID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    Let's say I'd like to find out how much profit a specific employee has generated. Simple:

        SELECT SUM(profit) FROM trades
        JOIN employees ON trades.employeeID = employees.employeeID
                      AND employees.employeeID = 1;

    It gets trickier if I'd like to query how much revenue a specific corporation has, however. I cannot simply replicate the query above, because two or more employees from the same company might be involved in the same trade. This query should do the trick:

        SELECT SUM(profit) FROM trades
        JOIN (SELECT DISTINCT tradeID FROM trades WHERE trades.corporationID = 1) ...

    Unfortunately, DISTINCT joins seem crazily inefficient. Is there an alternative I can use to determine how much revenue a corporation has, taking into account that a corporation might be listed several times with the same tradeID?
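
    One substitute for the DISTINCT derived table is to collapse the duplicates with GROUP BY, taking a single profit value per tradeID. A sketch; it assumes that duplicate rows for the same trade all carry the same profit value:

        SELECT SUM(trade_profit)
        FROM (SELECT tradeID, MAX(profit) AS trade_profit
              FROM trades
              WHERE corporationID = 1
              GROUP BY tradeID) AS t;

    With the existing key on corporationID, the inner query filters first and only then deduplicates, which is usually cheaper than joining a DISTINCT list back against trades.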

    Read the article

  • How to do an additional search on an archive in Rails if a record is not found, by extending the model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: Suppose I have a table that holds orders, with a bunch of data. So I'm at 1M records, and searches begin to take time. So I want to speed it up by archiving some data that is more than 3 years old - saving it into a table called orders-archive, and then purging them from the orders table. So if we need to research something or customer wants to pull older information - they still can, but 99% of the lookups are done on the orders no older than a year and a half - so there is no reason to keep looking through older data all the time. These move & purge operations can be then croned to be done on a weekly basis. I already did some tests and I know that I will slash my search times by about 4 times. So far so good, right? However I was thinking about how to implement older archival lookups and the only reasonable thing I can think of is some sort of if-else If not found in orders, do a search in orders-archive. However - I have about 20 tables that I want to archive and god knows how many searches / finds are done through out the code, that I don't want to modify. So I was wondering if there is an elegant rails-way solution to this problem, by extending a model somehow? Has anyone dealt with similar case before? Thank you.

    Read the article

  • nHibernate storage of an object with self referencing many children and many parents

    - by AdamC
    I have an object called MyItem that references children of the same type. How do I set up an NHibernate mapping file to store this item?

        public class MyItem
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Version { get; set; }
            public virtual IList<MyItem> Children { get; set; }
        }

    So roughly the hbm.xml would be:

        <class name="MyItem" table="tb_myitem">
          <id name="Id" column="id" type="String" length="32">
            <generator class="uuid.hex" />
          </id>
          <property name="Name" column="name" />
          <property name="Version" column="version" />
          <bag name="Children" cascade="all-delete-orphan" lazy="false">
            <key column="children_id" />
            <one-to-many class="MyItem" not-found="ignore"/>
          </bag>
        </class>

    I don't think this would work. Perhaps I need to create another class, say MyItemChildren, use that as the Children member, and do the mapping in that class? That would mean having two tables: one holding MyItem and the other holding references between items. NOTE: a child item could have many parents.
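
    Since a child can have many parents, this is a many-to-many relationship, which at the database level means a separate link table rather than a children_id column on the item table. A sketch of the assumed schema:

        CREATE TABLE tb_myitem (
            id      VARCHAR(32) PRIMARY KEY,
            name    VARCHAR(255),
            version VARCHAR(255)
        );

        -- link table: one row per parent/child edge
        CREATE TABLE tb_myitem_children (
            parent_id VARCHAR(32) NOT NULL REFERENCES tb_myitem (id),
            child_id  VARCHAR(32) NOT NULL REFERENCES tb_myitem (id),
            PRIMARY KEY (parent_id, child_id)
        );

    In the mapping, that corresponds to a <bag> using <many-to-many> instead of <one-to-many>, pointed at the link table.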

    Read the article

  • Dynamic evaluation of a table column within a before-insert trigger

    - by Tim Garver
    Hi all, I have 3 tables: main, types, and linked. main has an id column and 32 type columns. types has id and type columns. linked has id, main_id, and type_id columns. I want to create a before-insert trigger on the main table. It needs to compare each of the 32 type columns to the values in the types table and, where the main table column has an 'X' for its value, insert the main_id and types_id into the linked table. I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is that I don't want to write 32 IF statements; and even if I did, I'd need to query the types table to get the ID for each type, which seems like a huge waste of resources. Ideally I want to do something like this inside my trigger:

        BEGIN
          DECLARE @types results_set;  -- (not sure if this is a valid type)
          -- (I am sure my loop syntax is all wrong here)...
          SET @types = (SELECT * FROM types);
          for i = 0; i < types.records; i++
          {
            IF NEW.[i.type] = 'X' THEN
              INSERT INTO linked (main_id, type_id) VALUES (NEW.id, i.id);
            END IF;
          }
        END;

    Anyway, this is what I was hoping to do. Maybe there is a way to dynamically set the field name inside a results loop, but I can't find a good example of it. Thanks in advance, Tim
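
    Triggers cannot reference NEW columns through a dynamically built name, but the 32 IFs can collapse into one set-based statement that also folds in the types lookup. A MySQL-flavored sketch; it assumes types.type holds names like 'type1' ... 'type32' matching the columns, and note it must be an AFTER INSERT trigger, since NEW.id is not yet populated in a BEFORE INSERT when id is auto-increment:

        DELIMITER //
        CREATE TRIGGER main_after_insert AFTER INSERT ON main
        FOR EACH ROW
        BEGIN
          INSERT INTO linked (main_id, type_id)
          SELECT NEW.id, t.id
          FROM types t
          WHERE (t.type = 'type1'  AND NEW.type1  = 'X')
             OR (t.type = 'type2'  AND NEW.type2  = 'X')
             -- ... one predicate per remaining column ...
             OR (t.type = 'type32' AND NEW.type32 = 'X');
        END//
        DELIMITER ;

    The column names still have to be written out once, but the types table is matched in a single pass and no per-type query is issued.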

    Read the article

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it, and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore. The data I need is small (just a dozen rows), but each of those dozen rows has a couple of rows in other tables with foreign keys to it, and those rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, plus the transitive closure of everything they depend on and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks. Update: everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?
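
    On the identity question at the end: yes. The generated INSERT script for each table is wrapped in IDENTITY_INSERT so the original key values survive the copy. A T-SQL sketch with a placeholder table:

        SET IDENTITY_INSERT dbo.MyTable ON;

        -- an explicit column list is required while IDENTITY_INSERT is ON
        INSERT INTO dbo.MyTable (Id, Name)
        VALUES (42, 'restored row');

        SET IDENTITY_INSERT dbo.MyTable OFF;

    Only one table per session can have IDENTITY_INSERT ON at a time, so the script toggles it table by table.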

    Read the article

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation. I realize there is VBA, but I always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA. For this specific example knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how to best approach external batch processing of Excel files will be appreciated. Thanks! PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.

    Read the article

  • C# SQLite syntax in ASP.NET?

    - by acidzombie24
    -edit- This is no longer relevant and the question doesn't make sense to me anymore. I think I wanted to know how to create tables, or whether the syntax is the same from WinForms to ASP.NET. I am very used to SQLite (http://sqlite.phxsoftware.com/) and would like to create a DB in a similar style. How do I do this? It doesn't need to be the same, just similar enough for me to enjoy. An example of the syntax:

        connection = new SQLiteConnection("Data Source=" + sz + ";Version=3");
        command = new SQLiteCommand(connection);
        connection.Open();
        command.CommandText = "CREATE TABLE if not exists miscInfo(key TEXT, " +
                              "value TEXT, UNIQUE (key));";
        command.ExecuteNonQuery();

    The @name tokens are parameter symbols, and command.Parameters.Add("@userDesc", DbType.String).Value = d.userDesc; replaces the symbol with the escaped value/text/blob:

        command.CommandText = "INSERT INTO discData(rootfolderID, time, volumeName, discLabel, " +
                              "userTitle, userDesc) " +
                              "VALUES(@rootfolderID, @time, @volumeName, @discLabel, " +
                              "@userTitle, @userDesc); " +
                              "SELECT last_insert_rowid() AS RecordID;";
        command.Parameters.Add("@rootfolderID", DbType.Int64).Value = d.rootfolderID;
        command.Parameters.Add("@time", DbType.Int64).Value = d.time;
        command.Parameters.Add("@volumeName", DbType.String).Value = d.volumeName;
        command.Parameters.Add("@discLabel", DbType.String).Value = d.discLabel;
        command.Parameters.Add("@userTitle", DbType.String).Value = d.userTitle;
        command.Parameters.Add("@userDesc", DbType.String).Value = d.userDesc;
        d.discID = (long)command.ExecuteScalar();

    Read the article

  • How to map a 0..1 to 1 relationship in Entity Framework 3

    - by sako73
    I have two tables, Users and Address. The Users table has a field that maps to the primary key of the Address table, and this field can be null. In plain English: an Address exists independently of other objects, and a User may be associated with one Address. In the database, I have this set up as a foreign-key relationship. I am attempting to map this relationship in the Entity Framework, and I am getting errors on the following code:

        <Association Name="fk_UserAddress">
          <End Role="User" Type="GenesisEntityModel.Store.User" Multiplicity="1"/>
          <End Role="Address" Type="GenesisEntityModel.Store.Address" Multiplicity="0..1" />
          <ReferentialConstraint>
            <Principal Role="Address">
              <PropertyRef Name="addressId"/>
            </Principal>
            <Dependent Role="User">
              <PropertyRef Name="addressId"/>
            </Dependent>
          </ReferentialConstraint>
        </Association>

    It gives a "The Lower Bound of the multiplicity must be 0" error. I would appreciate it if anyone could explain the error and the best way to solve it. Thanks for any help.

    Read the article
