Search Results

Search found 8603 results on 345 pages for 'altering tables'.

  • SQL Grouping with multiple joins combining results incorrectly

    - by Matt
    Hi, I'm having trouble with my query combining records when it shouldn't. I have two tables, Authors and Publications, related by PublicationID in a many-to-many relationship: each author can have many publications, and each publication can have many authors. I want my query to return every publication for a given set of authors and include the IDs of all the authors that have contributed to each publication, grouped into one field. (I am working with MySQL.) I have tried to picture it below:

        Table: authors              Table: publications
        AuthorID | PublicationID    PublicationID | PublicationName
        1        | 123              123           | A
        1        | 456              456           | B
        2        | 123              789           | C
        2        | 789
        3        | 123
        3        | 456

    I want my result set to be the following:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,2,3
        1        | 456           | B               | 1,3
        2        | 123           | A               | 1,2,3
        2        | 789           | C               | 2
        3        | 123           | A               | 1,2,3
        3        | 456           | B               | 1,3

    This is my query:

        SELECT Author1.AuthorID, Publications.PublicationID, Publications.PubName,
               GROUP_CONCAT(TRIM(Author2.AuthorID) ORDER BY Author2.AuthorID ASC) AS AuthorsAll
        FROM Authors AS Author1
        LEFT JOIN Authors AS Author2 ON Author1.PublicationID = Author2.PublicationID
        INNER JOIN Publications ON Author1.PublicationID = Publications.PublicationID
        WHERE Author1.AuthorID = "1" OR Author1.AuthorID = "2" OR Author1.AuthorID = "3"
        GROUP BY Author2.PublicationID

    But it returns the following instead:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,1,1,2,2,2,3,3,3
        1        | 456           | B               | 1,1,3,3
        2        | 789           | C               | 2

    It does deliver the desired output when there is only one AuthorID in the WHERE clause. I have not been able to figure it out; does anyone know where I'm going wrong?
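
    A hedged sketch of one possible fix, assuming the schema above: group by the requesting author as well as the publication so the rows for different authors stay apart, and let DISTINCT remove the duplicates the self-join introduces.

        SELECT Author1.AuthorID, p.PublicationID, p.PubName,
               GROUP_CONCAT(DISTINCT Author2.AuthorID ORDER BY Author2.AuthorID) AS AuthorsAll
        FROM Authors AS Author1
        INNER JOIN Publications AS p ON Author1.PublicationID = p.PublicationID
        INNER JOIN Authors AS Author2 ON Author2.PublicationID = p.PublicationID
        WHERE Author1.AuthorID IN (1, 2, 3)
        GROUP BY Author1.AuthorID, p.PublicationID;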

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. I want to speed things up by archiving data that is more than 3 years old: saving it into a table called orders-archive and then purging those rows from the orders table. If we need to research something, or a customer wants to pull older information, they still can, but 99% of the lookups are on orders no older than a year and a half, so there is no reason to keep scanning the older data all the time. The move-and-purge operations can then be cronned on a weekly basis. I have already done some tests, and I know I would cut my search times roughly by a factor of 4. So far so good, right?

    However, I've been thinking about how to implement the archival lookups, and the only reasonable thing I can come up with is some sort of if-else: if not found in orders, search orders-archive. But I have about 20 tables that I want to archive, and god knows how many finds/searches are done throughout the code that I don't want to modify. So I was wondering if there is an elegant rails-way solution to this problem, perhaps by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
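
    A minimal sketch of one rails-way approach, assuming each archived table gets a mirror model (Order -> OrderArchive; all names here are hypothetical): a shared module overrides the class-level find so a miss on the live table falls through to the archive, and the 20 models just mix it in without touching any call sites.

        # Hypothetical sketch: fall back to the archive when a record is missing.
        module ArchiveFallback
          def find(*args)
            super
          rescue ActiveRecord::RecordNotFound
            archive_class.find(*args)
          end

          def archive_class
            "#{name}Archive".constantize   # Order -> OrderArchive
          end
        end

        class Order < ActiveRecord::Base
          extend ArchiveFallback
        end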

  • How to optimize an SQL query to make it faster

    - by user502083
    Hello everyone. I have a very simple, small database; two of its tables are:

        Node (Node_ID, Node_name, Node_Date)   -- Node_ID is the primary key
        Citation (Origin_Id, Target_Id)        -- PRIMARY KEY (Origin_Id, Target_Id), each an FK into Node

    I wrote a query that first finds all citations whose Origin_Id has a specific date, and then returns the dates of the target records. I'm using SQLite from Python; the Node table has 3,000 records and Citation has 9,000. The query lives in a function:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*)
                             from Node n
                             INNER JOIN (select c.Origin_Id AS Origin_Id,
                                                c.Target_Id AS Target_Id,
                                                n.Node_Date AS Date
                                         from CITATION c
                                         INNER JOIN NODE n ON c.Origin_Id = n.Node_Id
                                         where CAST(n.Node_Date as INT) = {0}) VW
                             ON VW.Target_Id = n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited years are:\n', str(cited_years))
            except Exception as e:
                print('Cited years retrieval failed', e)
            return cited_years

    I then call this function for some specific years, but it is crazy slow (around 1 minute per year). Although my query works fine, it is slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query. I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it's not!
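
    A hedged sketch of two things worth trying, assuming the schema above: the CAST in the WHERE clause is evaluated per row and stops SQLite from using any index on Node_Date, so compare against the stored type directly (and bind the value instead of format()); an index on Node_Date then helps too.

        c.execute("CREATE INDEX IF NOT EXISTS idx_node_date ON Node(Node_Date)")

        c.execute("""SELECT n2.Node_Date, COUNT(*)
                     FROM Citation c
                     JOIN Node n1 ON c.Origin_Id = n1.Node_Id
                     JOIN Node n2 ON c.Target_Id = n2.Node_Id
                     WHERE n1.Node_Date = ?
                     GROUP BY n2.Node_Date""", (date,))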

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files, and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    There were 9 .dmp files; after each one it asked me for the next, and then I received the message "Import terminated successfully with warnings." The warning was:

        Warning: the objects were exported by OVIEDOE, not by you
        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Since it says the import terminated successfully, my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'Tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
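
    A hedged way to check where the rows actually landed, assuming a standard data dictionary: query ALL_TABLES for both schemas, since a FROMUSER/TOUSER remap can end up back in the original schema if the import recreated it there.

        SQL> SELECT owner, table_name FROM all_tables WHERE owner IN ('IMPORTER', 'OVIEDOE');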

  • iBatis: how to solve a more complex N+1 problem

    - by Alvin
    I have a database that is similar to the following:

        create table Store(storeId)
        create table Staff(storeId_fk, staff_id, staffName)
        create table Item(storeId_fk, item_id, itemName)

    The Store table is large. I have created the following Java beans:

        public class Store {
            List<Staff> myStaff;
            List<Item> myItem;
            ....
        }
        public class Staff { ... }
        public class Item { ... }

    My question is: how can I use iBatis's result map to EFFICIENTLY map from the tables to the Java objects? I tried:

        <resultMap id="storemap" class="my.example.Store">
            <result property="myStaff" resultMap="staffMap"/>
            <result property="myItem" resultMap="itemMap"/>
        </resultMap>

    (other maps omitted) But it's way too slow, since the Store table is VERY VERY large. I tried to follow the example in Clinton's developer guide for the N+1 solution, but I cannot wrap my mind around how to use the "groupBy" attribute for an object with two lists... Any help is appreciated!
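
    A hedged sketch of the groupBy route, assuming iBatis 2.x and a single joined select over all three tables; property, column, and map names are guesses. groupBy names the property (or properties) that identify one Store, so iBatis folds the joined rows into one object per store instead of issuing N+1 selects. With two parallel lists the join multiplies rows, so the child maps may need their own identifying columns to avoid duplicated list entries.

        <resultMap id="storeMap" class="my.example.Store" groupBy="storeId">
            <result property="storeId" column="storeId"/>
            <result property="myStaff" resultMap="Store.staffMap"/>
            <result property="myItem"  resultMap="Store.itemMap"/>
        </resultMap>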

  • design pattern for related inputs

    - by curiousMo
    My question is a design question. Let's say I have a data-entry web page with four drop-down lists, each depending on the previous one, and a bunch of text boxes:

        ddlCountry (DropDownList)
        ddlState (DropDownList)
        ddlCity (DropDownList)
        ddlBoro (DropDownList)
        txtAddress (TextBox)
        txtZipcode (TextBox)

    and an object that represents a data row with a value for each:

        countrySeqid
        stateSeqid
        citySeqid
        boroSeqid
        address
        zipCode

    Naturally the country, state, city, and boro values are primary-key values from some lookup tables. When the user chooses to edit a record, I load it from the database and into the page. The issue I have is how to streamline loading the drop-down lists. I have some code that grabs the object, loops through its values, and moves them to their corresponding input controls in one shot; but in this case I have to load ddlCountry with its possible values, then assign the selected value, and then do the same for the rest of the lists. I guess I am looking for an elegant solution, as sketched below. I am using ASP.NET, but I think that is irrelevant to the question; I am looking more for a design pattern.
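
    One hedged shape for this, with all helper names hypothetical: describe each list once (control, option loader, saved value) and let a single binder do the bind-then-select work, so adding a fifth list is one more call rather than more wiring code.

        // Hypothetical sketch: one binder for every dependent list.
        void Bind(DropDownList ddl, DataTable options, object savedId)
        {
            ddl.DataSource = options;
            ddl.DataTextField = "Name";
            ddl.DataValueField = "Id";
            ddl.DataBind();
            ddl.SelectedValue = savedId.ToString();
        }

        // Each loader narrows by the previously selected key.
        Bind(ddlCountry, GetCountries(),               row.CountrySeqid);
        Bind(ddlState,   GetStates(row.CountrySeqid), row.StateSeqid);
        Bind(ddlCity,    GetCities(row.StateSeqid),   row.CitySeqid);
        Bind(ddlBoro,    GetBoros(row.CitySeqid),     row.BoroSeqid);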

  • How can I use a compound condition in a join in Linq?

    - by Gary McGill
    Let's say I have a Customer table which has a PrimaryContactId field and a SecondaryContactId field. Both of these are foreign keys that reference the Contact table. For any given customer, either one or two contacts may be stored; in other words, PrimaryContactId can never be NULL, but SecondaryContactId can be NULL. If I drop my Customer and Contact tables onto the "Linq to SQL Classes" design surface, the class builder will spot the two FK relationships from the Customer table to the Contact table, and so the generated Customer class will have a Contact field and a Contact1 field (which I can rename to PrimaryContact and SecondaryContact to avoid confusion).

    Now suppose that I want to get details of all the contacts for a given set of customers. If there were always exactly one contact, I could write something like:

        from customer in customers
        join contact in contacts on customer.PrimaryContactId equals contact.id
        select ...

    ...which would be translated into something like:

        SELECT ... FROM Customer INNER JOIN Contact ON Customer.PrimaryContactId = Contact.id

    But, because I want to join on both contact fields, I want the SQL to look something like:

        SELECT ... FROM Customer INNER JOIN Contact
        ON Customer.PrimaryContactId = Contact.id OR Customer.SecondaryContactId = Contact.id

    How can I write a Linq expression to do that?
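
    A hedged sketch of one way around it: LINQ's join only supports equality, but a cross join plus a where clause returns the same rows as the OR'd inner join.

        var query =
            from customer in customers
            from contact in contacts
            where customer.PrimaryContactId == contact.id
               || customer.SecondaryContactId == contact.id
            select new { customer, contact };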

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # Not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and the template shows them along with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries issued from templates? Or do I have to build some big structure in my view (with all the related tables), cache that, and send it to the template?
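
    For the template question, a hedged sketch of Django's built-in template fragment caching; the timeout and key below are only examples. The select_related() queryset method is also worth a look, since per-row access to related objects from a template is a classic source of one-query-per-row behaviour.

        {% load cache %}
        {% cache 600 related_records_block %}
            ... expensive related-record markup ...
        {% endcache %}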

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance degrades drastically, up to total failure. The biggest load comes from authenticated users, whose activity causes inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down with this number of users?

    EDIT: I forgot to mention that I did some profiling. I ran top and htop, and they showed that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce the load. The administrators said there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do deep profiling analysis or code refactoring, so I'm considering two options: my tables are MyISAM; I've heard they use table-level locking, which is very slow. Is that right? Could I change them to InnoDB without worry? And what if I move MySQL to a dedicated machine with a lot of RAM?
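
    On the engine question, a hedged sketch of the conversion, assuming the tables use no MyISAM-only features (for example FULLTEXT indexes, which InnoDB does not support on MySQL 5.x): InnoDB's row-level locking tends to help exactly this write-heavy, many-connections pattern. The table name below is a placeholder.

        -- Check the current definition first; FULLTEXT indexes block the switch on 5.x.
        SHOW CREATE TABLE node;
        ALTER TABLE node ENGINE=InnoDB;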

  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, References

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in LaTeX; the deadline is approaching and I am struggling with layout problems. I am using the document class book. I am posting both problems in this one thread because I am not sure whether the solutions are related. The problems are:

    1.) The page numbers are mostly located at the top right of each page (this is correct and where I want them). However, on the first page of each chapter, and on the first page of what I call "special chapters", the page number sits bottom-centre. By "special chapters" I mean: the Table of Contents, List of Figures, List of Tables, References, and Index. My university will not accept the thesis like this: the page number must ALWAYS be at the top right of every page, even on the first page of a chapter or of something like the Table of Contents. How can I fix this?

    2.) On the first page of chapters and "special chapters", the chapter title is placed far too low on the page. This is the standard layout of LaTeX's book class, I think. However, the chapter title must start at the very top of the page, i.e. at the same height as the normal text on the pages that follow. I mean the chapter title itself, not the header. That is, if there is a chapter called "Chapter 1: Dynamics of foobar under mechanical stress", then that text has to start at the top of the page, but right now it starts several centimetres below. How can I fix this? I have tried all kinds of things to no effect; I'd be very thankful for a solution! Thanks.
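
    A hedged sketch of the standard recipe for both problems, assuming the book class: chapter openings (and the ToC etc.) are set with the "plain" page style, so redefining plain via fancyhdr fixes the page numbers, and titlesec's spacing controls pull the chapter title up. The lengths below will need tuning (possibly a negative before-separation) to land exactly at the text top.

        \usepackage{fancyhdr}
        \pagestyle{fancy}
        \fancyhf{}
        \fancyhead[R]{\thepage}   % page number top right on ordinary pages
        % chapter openings, ToC, lists, etc. use "plain" -- redefine it to match
        \fancypagestyle{plain}{%
          \fancyhf{}\fancyhead[R]{\thepage}\renewcommand{\headrulewidth}{0pt}}

        \usepackage{titlesec}     % shrink the large drop above chapter titles
        \titlespacing*{\chapter}{0pt}{0pt}{2\baselineskip}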

  • Using Spring, Hibernate and Scala, is there a better way to load test data than DbUnit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name rather than by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML, and I honestly can't be bothered to do this. I like the plain data.xml without any namespace. It works, but they are hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating one data set for your entire app. Unfortunately, that process gets bloated too once the data grows in size and things get entangled. There has to be a better way to split your test data into chunks without copy/pasting a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.
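
    One commonly suggested direction, sketched here with entirely hypothetical domain classes: skip the XML and build fixtures in code against the Hibernate session, inside a transactional Spring test so each test's data rolls back. Insert order and id references then follow the code, and per-test fixtures compose naturally.

        // Hypothetical sketch: code-built fixtures instead of XML data sets.
        class OrderFixtures(session: Session) {
          def customer(name: String = "acme"): Customer = {
            val c = new Customer(name)
            session.save(c)          // id assigned here; no manual bookkeeping
            c
          }
          def order(c: Customer): Order = {
            val o = new Order(c)     // FK satisfied because c is already saved
            session.save(o)
            o
          }
        }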

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it, and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore.

    The data I need is small, just a dozen rows, but each of those rows has a couple of rows in other tables with foreign keys to it, and those rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, plus the transitive closure of everything they depend on and everything that depends on them, and export just that data, which I could then import into the live database without touching anything else. What's the best approach to take here? Thanks.

    Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?
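
    On the identity question, a hedged sketch of the usual pattern (table and column names are placeholders): list the columns explicitly and wrap the generated inserts so the original key values survive the copy.

        SET IDENTITY_INSERT dbo.Orders ON;

        INSERT INTO dbo.Orders (OrderId, CustomerId, CreatedOn)
        VALUES (4711, 42, '2010-04-01');  -- original identity value preserved

        SET IDENTITY_INSERT dbo.Orders OFF;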

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (or so I thought) task all day now. I've tried SQL Server Management Studio's Import Data feature:

        Create an empty database
        Tasks > Import Data...
        .NET Framework Data Provider for Odbc
        Valid DSN (verified it connects)
        Copy data from one or more tables or views
        Check 1 VERY simple table
        Click Preview

    and get the error:

        The preview data could not be retrieved.
        ADDITIONAL INFORMATION:
        ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in
        your SQL syntax; check the manual that corresponds to your MySQL server version for
        the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the message above. It can retrieve column information just fine once I modify the column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
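
    A hedged reading of that error: the wizard quotes identifiers ANSI-style with double quotes ("table_name"), which MySQL rejects unless ANSI_QUOTES mode is on. Enabling it server-side (or as an initial statement in the DSN) may get the preview past the syntax error.

        SET GLOBAL sql_mode = 'ANSI_QUOTES';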

  • CakePHP: Need help using saveField to update a field in a belongsTo model

    - by afrisch
    I am trying to update a password in two different models/tables in CakePHP. I can update it fine in the parent model, but not in the second model. Models:

        Users (hasOne GameProfile), PK = id
        Gameprofiles (belongsTo User), FK = user_id

    Here is a stripped-down version of my function in users_controller.php:

        function updatepass() {
            if (!empty($this->data)) {
                $this->User->id = $this->Auth->user('id');
                $this->User->saveField('sha1password',
                    $this->Auth->password($this->data['User']['newpass']));
                $this->User->Gameprofile->saveField('plainpassword',
                    $this->data['User']['newpass']);
            }
        }

    When I execute the function, the users table is updated fine, but the gameprofiles table is not updated; instead Cake does an insert. SQL query log:

        1195 Query  UPDATE `users` SET `sha1password` = 'e9443e9f5e1a07832aad1b2f84de1a666daf89b5' WHERE `users`.`id` = 30
        1195 Query  INSERT INTO `gameprofiles` (`plainpassword`) VALUES ('abc')

    Is there a way to get CakePHP to do an update using saveField on a model with a belongsTo association? I've tried various ways of referring to user_id before executing the second saveField, but just can't seem to find the winning combination. Any help is greatly appreciated!
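
    A hedged sketch of the usual fix, field names assumed from the schema above: saveField() inserts whenever the model has no current id, so look up the profile's primary key by user_id and set it before saving.

        $this->User->Gameprofile->id = $this->User->Gameprofile->field(
            'id', array('Gameprofile.user_id' => $this->Auth->user('id')));
        $this->User->Gameprofile->saveField('plainpassword',
            $this->data['User']['newpass']);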

  • How to add a new value with a generic repository when there are foreign keys (EF-4)?

    - by Phsika
    I am trying to write a kind of generic repository with an Add method. Everything works for simple adds, but I have a table that is related to two other tables with FOREIGN KEYs, and for that table the add fails because of the foreign key constraints.

        public class DomainRepository<TModel> : IDomainRepository<TModel> where TModel : class
        {
            #region IDomainRepository<T> Members
            private ObjectContext _context;
            private IObjectSet<TModel> _objectSet;

            public DomainRepository() { }

            public DomainRepository(ObjectContext context)
            {
                _context = context;
                _objectSet = _context.CreateObjectSet<TModel>();
            }

            // do something.....
            public TModel Add<TModel>(TModel entity) where TModel : IEntityWithKey
            {
                EntityKey key;
                object originalItem;
                key = _context.CreateEntityKey(entity.GetType().Name, entity);
                _context.AddObject(key.EntitySetName, entity);
                _context.SaveChanges();
                return entity;
            }
            // do something.....
        }

    Calling the REPOSITORY:

        // insert-update-delete
        public partial class AddtoTables
        {
            public table3 Add(int TaskId, int RefAircraftsId)
            {
                using (DomainRepository<table3> repTask =
                    new DomainRepository<table3>(new TaskEntities()))
                {
                    return repTask.Add<table3>(new table3()
                    {
                        TaskId = TaskId,
                        TaskRefAircraftsID = RefAircraftsId
                    });
                }
            }
        }

    How can I add a new value when the table includes foreign key relations?
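
    One hedged guess, all names hypothetical: if the model uses independent associations (rather than EF4 FK associations), the related ends must be supplied as EntityKeys instead of raw ints, which can be done without loading the related entities.

        var entity = new table3 { TaskId = TaskId };
        // RefAircraftsReference is a guessed navigation-reference name.
        entity.RefAircraftsReference.EntityKey =
            new EntityKey("TaskEntities.RefAircrafts", "Id", RefAircraftsId);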

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation. I realize there is VBA, but I have always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA. For this specific example, knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how best to approach external batch processing of Excel files will be appreciated. Thanks!

    PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.
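
    For what it's worth, a hedged sketch of the batch shape in VBScript (the folder path and range are placeholders): the full Excel COM object model, charts and pivot tables included, is available from outside the file, and nothing gets stored in the workbooks themselves.

        Set fso = CreateObject("Scripting.FileSystemObject")
        Set xl  = CreateObject("Excel.Application")
        xl.Visible = False
        xl.DisplayAlerts = False

        For Each f In fso.GetFolder("C:\reports").Files
            If LCase(fso.GetExtensionName(f.Name)) = "xls" Then
                Set wb = xl.Workbooks.Open(f.Path)
                Set ch = wb.Charts.Add            ' chart sheet, no macro involved
                ch.SetSourceData wb.Worksheets(1).Range("A1:B10")
                wb.Save
                wb.Close
            End If
        Next

        xl.Quit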

  • Javascript can't find element by id?

    - by Bluemagica
        <html>
        <head>
            <title>Test javascript</title>
            <script type="text/javascript">
                var e = document.getElementById("db_info");
                e.innerHTML = 'Found you';
            </script>
        </head>
        <body>
            <div id="content">
                <div id="tables"></div>
                <div id="db_info"></div>
            </div>
        </body>
        </html>

    If I use alert(e); it turns up null... and obviously I don't get any "Found you" on screen. What am I doing wrong?
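
    A sketch of the likely cause: the script in <head> runs before the <body> is parsed, so #db_info does not exist yet. Deferring the lookup until the document has loaded fixes it:

        <script type="text/javascript">
            window.onload = function () {
                var e = document.getElementById("db_info");
                e.innerHTML = 'Found you';
            };
        </script>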

  • Use of unassigned local variable 'xxx'

    - by Tomislav
    I'm writing a database importer from our competitors' databases to ours :) I have a code generator which creates import methods like this:

        public void Test_Import_Customer_1()
        {
            // variables
            string conn;
            string sqlSelect;
            string sqlInsert;
            int extID;
            string name;
            string name2;
            DateTime date_inserted;

            sqlSelect = "select id, name, date_inserted from table_competitors_1";
            OleDbDataReader myreader = GetOleDbReader(sqlSelect, conn);
            while (myreader.Read())
            {
                name = Left((string)myreader["name"], 50);  // limitation of my field
                date_inserted = (DateTime)myreader["date_inserted"];
                // here is the problem: name2 -> "Use of unassigned local variable"
                sqlInsert = string.Format(
                    "insert into table(name, name2, date_inserted) values ('{0}', '{1}', '{2}')",
                    name, name2, date_inserted);
                ExecuteSQL(sqlInsert);
            }
        }

    As different companies' databases have different fields, I cannot set a value for every variable, and there is a big number of tables to go through one variable at a time, like:

        sqlSelect_Company_1 = "select name,date_inserted from table_1";
        sqlSelect_Company_2 = "select name,name2 from table_2";

    Is there a way to give each variable a default value without assigning them one by one?
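
    A hedged sketch of the direct fix: C# requires definite assignment before a local is read, so have the generator emit a default for every declared local; default(T) works uniformly for generated code.

        string name    = default(string);            // null
        string name2   = string.Empty;               // or empty, if null is unwanted
        int extID      = default(int);               // 0
        DateTime date_inserted = default(DateTime);  // DateTime.MinValue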

  • Fix DB duplicate entries (MySQL bug)

    - by Silence
    I'm using MySQL 4.1. Some tables have duplicate entries that violate their constraints. When I try to group rows, MySQL doesn't recognise the rows as being similar. Example: table A has a column Name with the Unique property. The table contains one row with the name 'Hach?' and one row with the same name but a square at the end instead of the '?' (an unprintable character which I can't reproduce in this text field). A GROUP BY on these two rows returns two separate rows.

    This causes several problems, including the fact that I can't export and re-import the database: on re-import, an error mentions that an INSERT has failed because it violates a constraint. In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat; in practice, that would take forever. Is there a way to list all the anomalies, or to force the database to re-check its constraints (and list all the values/rows that violate them)? I can supply the .MYD file if it can be helpful.
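
    A hedged way to hunt for the anomalies, assuming the data is meant to be printable ASCII: rows whose Name contains bytes outside that range are exactly the 'square' cases, and HEX() shows which byte is to blame.

        SELECT Name, HEX(Name)
        FROM A
        WHERE Name REGEXP '[^ -~]';  -- any byte outside printable ASCII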

  • Kohana Auth Library Deployment

    - by Steve
    My Kohana app runs perfectly on my local machine. When I deployed it to a server (adjusting the config files appropriately), I could no longer log in. I've traced through the login routine on both my local version and the server version, and they agree with each other all the way until the logged_in() routine in the auth.php controller: at line 140 (the is_object($this->user) test) the $user object suddenly no longer exists!?!?!?

    The login() call that precedes logged_in() successfully passes the following test, which causes a redirect to logged_in():

        if (Auth::instance()->login($user, $post['password']))

    Yes, the password, hash, etc. all work perfectly. Here is the offending code:

        public function logged_in()
        {
            if ( ! is_object($this->user))
            {
                // No user is currently logged in
                url::redirect('auth/login');
            }
            etc...
        }

    As the code is identical between my local installation and the server, I reckon it must be some server setting that is messing with me. FYI: all the rest of the code works, because I have a temporary backdoor that lets me use the application (view pages of tables, etc.) without being logged in. Any ideas?
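
    One hedged guess, since the code is identical: the logged-in user is rebuilt from the session, so if the server's cookie domain or session storage doesn't match the deployed host, the user never survives the redirect. In Kohana 2.x terms, the places worth checking would be (key names follow its config conventions):

        // application/config/cookie.php -- must match the deployed hostname
        $config['domain'] = '';

        // application/config/session.php -- storage must actually work on the server
        $config['driver'] = 'native';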

  • Specify column type in NHibernate

    - by Bipul
    I have a class CaptionItem:

        public class CaptionItem
        {
            public virtual int SystemId { get; set; }
            public virtual int Version { get; set; }
            protected internal virtual IDictionary<string, string> CaptionValues { get; private set; }
        }

    I am using the following code for the NHibernate mapping:

        Id(x => x.SystemId);
        Version(x => x.Version);
        Cache.ReadWrite().IncludeAll();
        HasMany(x => x.CaptionValues)
            .KeyColumn("CaptionItem_Id")
            .AsMap<string>(idx => idx.Column("CaptionSet_Name"),
                           elem => elem.Column("Text"))
            .Not.LazyLoad()
            .Cascade.Delete()
            .Table("CaptionValue")
            .Cache.ReadWrite().IncludeAll();

    So two tables get created in the database, CaptionValue and CaptionItem. The CaptionValue table has three columns:

        1. CaptionItem_Id    int
        2. Text              nvarchar(255)
        3. CaptionSet_Name   nvarchar(255)

    Now, my question is: how can I make the column type of Text nvarchar(max)? Thanks in advance.
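
    A hedged sketch, assuming a Fluent NHibernate version whose element part exposes Length: NHibernate maps strings with a declared length above 4000 to nvarchar(max) on SQL Server, so an oversized length on the element column is the usual route (a custom SQL type on the column is the fallback if your version lacks the method).

        .AsMap<string>(idx => idx.Column("CaptionSet_Name"),
                       elem => elem.Column("Text").Length(10000))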

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++, and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" )
            {
                if ( argc < 1 ) { Error( "blah" ); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" )
            {
                if ( argc < 1 ) { Error( "blah" ); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" )
            {
                exit( 0 );
            }
        }

    Now imagine each else-if being ten times more complicated, and 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how about some sort of libraries that contain all the functions and, when imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I want is, for example, an (external?) string library for my language, e.g. string, imported from within a program in that language:

        import string

        myString = "abcde"
        print string.at( myString, 2 ) # output: c

    My problems:

        How to separate the function libs from the interpreter core and load them?
        How to get all their functions into a list and add it to the symbol table when needed?

    What I was thinking of doing: at interpreter startup (all libraries being compiled into it), every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ); which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. The disadvantage that springs to mind: all libraries live directly in the interpreter core, so its size grows and it will definitely slow down.
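
    A hedged sketch of the registry idea in C++, with all types and names as hypothetical stand-ins for the interpreter's own: each library registers its natives into a per-library map, and import only copies that library's entries into the running symbol table, so unimported libraries cost nothing at lookup time.

        #include <map>
        #include <string>
        #include <vector>

        typedef std::string Value;     // stand-in for the real value type
        typedef int myLangResult;      // stand-in for the real result type
        typedef myLangResult (*NativeFn)(int argc, std::vector<Value>& argv);

        struct Library { std::map<std::string, NativeFn> functions; };

        struct SymbolTable             // stand-in for the interpreter's table
        {
            std::map<std::string, NativeFn> natives;
            void AddNative(const std::string& n, NativeFn f) { natives[n] = f; }
        };

        static std::map<std::string, Library>& Registry()
        {
            static std::map<std::string, Library> r;   // "string" -> string library
            return r;
        }

        void RegisterFunction(const std::string& lib, const std::string& name, NativeFn fn)
        {
            Registry()[lib].functions[name] = fn;      // called once per native at startup
        }

        // Called by the interpreter when it executes: import string
        void ImportLibrary(SymbolTable& symbols, const std::string& lib)
        {
            Library& l = Registry()[lib];
            std::map<std::string, NativeFn>::iterator it;
            for (it = l.functions.begin(); it != l.functions.end(); ++it)
                symbols.AddNative(lib + "." + it->first, it->second);  // "string.at", ...
        }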

  • EFv1 mapping 1 to many Relationship to POCOs

    - by Scott
    I'm trying to work through a problem where I'm mapping EF entities to POCOs which serve as DTOs. I have two tables in my database, say Products and Categories. A product belongs to one category, and one category may contain many products. My EF entities are named efProduct and efCategory, and within each entity there is the proper navigation property between efProduct and efCategory. My POCO objects are simple:

        public class Product
        {
            public string Name { get; set; }
            public int ID { get; set; }
            public double Price { get; set; }
            public Category ProductType { get; set; }
        }

        public class Category
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public List<Product> products { get; set; }
        }

    To get a list of products I am able to do something like:

        public IQueryable<Product> GetProducts()
        {
            return from p in ctx.Products
                   select new Product
                   {
                       ID = p.ID,
                       Name = p.Name,
                       Price = p.Price,
                       ProductType = p.Category
                   };
        }

    However, there is a type mismatch error because p.Category is of type efCategory. How can I resolve this? That is, how can I convert p.Category to type Category? I know that in .NET 4, EF has added support for POCOs, but I'm forced to use .NET 3.5 SP1.
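
    A hedged sketch of the usual workaround on EFv1: project the related entity into the DTO inside the same query, so nothing needs converting (property names assumed from the classes above).

        return from p in ctx.Products
               select new Product
               {
                   ID = p.ID,
                   Name = p.Name,
                   Price = p.Price,
                   ProductType = new Category
                   {
                       ID = p.Category.ID,
                       Name = p.Category.Name
                   }
               };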

  • Big-O for GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. So I thought, but my team member wrote something I find very hard to follow. His pseudo-code:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList array    // CPU O(n)
        for (j = 1 to n)                                // O(n log(m))
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His ideas: transformations of lists to tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table; as an exception to the last point, with a finite structure it is O(m log(n)), where n is the number of vertices and m is an arbitrary constant.

    Questions about his ideas: why waste resources transforming constantly modified lists into tables? Fast? That is the only point I to some extent agree with, but I cannot understand the identical upper limit n on each for-loop (perhaps he assumed a circular structure). And why would the code take O(m log(n)) time on a finite structure? The term "finite" may be the wrong one; "explicit", perhaps?
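
    For reference, the update adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]) is the Floyd-Warshall all-pairs shortest-path recurrence, which is where the O(n^3) and the three identical n-bounds come from; note, though, that it is only correct with k as the outermost loop, unlike the pseudo-code above. A sketch:

        /* Floyd-Warshall over an n-by-n distance matrix adj.
           k must be outermost so step k sees all step k-1 results. O(n^3). */
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (adj[i][k] + adj[k][j] < adj[i][j])
                        adj[i][j] = adj[i][k] + adj[k][j];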

  • How do I make software that preserves database integrity and correctness? Please help, confused.

    - by user287745
    I have made an application project in VS 2008 (C#, with a SQL Server database created from VS 2008). The database has about 20 tables with many fields in each, and I have made an interface for adding, deleting, editing, and retrieving data according to the predefined needs of the users. Now I have to:

    1) Turn the project into software I can deliver to my professor; that is, he can just double-click the icon and the software simply starts, with no VS 2008 needed to start it under the debugger.

    2) Host the database on one powerful computer (dual-core, up to date, Windows XP) and let users access it from other computers connected over the LAN. I am able to change the connection string to the shared database using VS 2008 whenever the server changes, but how am I supposed to do that once it is deployed as software?

    3) There will be many clients. Am I supposed to give the same software to everyone so they can all connect to the database? And how will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder shared with read and write access, so it is not guaranteed that only one user will write at a time. Is there some coding for this, or...?

    Please help me out here; I am stuck and do not know what to do. I have no practical experience and would appreciate all the help. Thank you.
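
    For points 1) and 2), a hedged sketch of the usual arrangement: ship the compiled output (or use Build > Publish) and keep the connection string in the app.config next to the .exe, so each client can be repointed at the shared server by editing a text file instead of recompiling; the server and database names below are placeholders. For point 3), pointing every client at the one SQL Server service, rather than at a shared .mdf file, is what preserves integrity: the server serializes concurrent writes with locks and transactions.

        <configuration>
          <connectionStrings>
            <add name="MainDb"
                 connectionString="Data Source=SERVER-PC\SQLEXPRESS;Initial Catalog=ProjectDb;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>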
