Search Results

Search found 1375 results on 55 pages for 'asymptotic complexity'.

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment problem

    - by StickyMcGinty
    We've been having intermittent problems that forcibly log users out of our application. Our set-up is an ASP.NET/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us); we had none of these issues with the previous release, but the added complexity the new upgrade brings to the product has caused a lot of issues. We are running SQL Server 2000 (build 8.00.2039, i.e. SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via a TCP/IP connection.

    Primarily, the exceptions being thrown are:

    - System.IndexOutOfRangeException: Cannot find table 0.
    - System.ArgumentException: Column 'password' does not belong to table Table. [This exception occurs in the log-in script, even though there is clearly a password column available.]
    - System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. [This one is occurring very regularly.]
    - System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
    - System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting.
    - System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate.

    We have load-tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut tells me it's an ASP.NET - SQL Server 2000 connection issue. We've pretty much ruled out code-level Data Access Layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's a specific production server environment issue.

  • How to react when asked a question you already know during an interview

    - by DevNull
    The short story: if you are asked a tough algorithmic/puzzle question during an interview whose solution is already known to you, do you:

    1. Honestly tell the interviewer that you know this question already? This could burst the interviewer's ego and lead to him increasing the complexity level of the subsequent questions.
    2. Give an Oscar-deserving performance and act as if you are thinking hard and slowly getting to the solution? Depending on your acting skills, this could majorly impress the interviewer, making the rest of the interview easier.

    The long story: OK, this question comes as a result of what happened to me in a recent telephonic interview - the interview was supposed to be all algorithmic. The interviewer started with an algorithmic question which I had luckily already seen here on Stack Overflow. The best solution to that problem is not very intuitive; it is more of a you-get-it-if-you-know-it kind. Now, just to not disappoint the interviewer too much, I took a few seconds as if I was pondering the problem and then blurted out the answer, which I knew too well having read and admired it on SO already. But I guess that gave it away to the interviewer that I already knew the question, and from then on he started asking for more efficient solutions. I kept coming up with approaches (even if not correct or more efficient, I did touch a lot of different data structures and algorithms), and he kept asking for more efficient solutions, generally seeming put off by my initial salvo, which was unexpected. What should I have done? Cheers!

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even necessarily a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try {
            instance = getValue();
        } catch (MyException ex) {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine, I assume because the compiler knows that the exception will disallow execution from proceeding to a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution to continue after it returns? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?
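
    A common workaround is to funnel the failure through a helper whose result is assigned (or re-thrown), which satisfies the compiler's definite-assignment analysis. A minimal sketch of the same situation in Java, which has identical definite-assignment rules (the helper name is invented for illustration):

        class DefiniteAssignmentExample {
            // Declared to return a value, but always throws: assigning its
            // "result" convinces the compiler that control cannot continue.
            static <T> T failWith(String message) {
                throw new AssertionError(message);
            }

            static int getValue() throws Exception {
                return 42;
            }

            public static void main(String[] args) {
                int value;
                try {
                    value = getValue();
                } catch (Exception ex) {
                    value = failWith("caught exception"); // value now assigned on every path
                }
                System.out.println(value); // compiles cleanly
            }
        }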

  • How to find the latest row for each group of data

    - by Jason
    Hi all, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my view structure.

    Table: Audits

        AuditID | PublicationID | AuditEndDate | AuditStartDate
        1       | 3             | 13/05/2010   | 01/01/2010
        2       | 1             | 31/12/2009   | 01/10/2009
        3       | 3             | 31/03/2010   | 01/01/2010
        4       | 3             | 31/12/2009   | 01/10/2009
        5       | 2             | 31/03/2010   | 01/01/2010
        6       | 2             | 31/12/2009   | 01/10/2009
        7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this: one to get all the data; the next to get only the history data (that is, everything but the latest row by AuditEndDate); and the last to obtain only the latest row (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (this is on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clauses as AuditEndDate <= ... and AuditStartDate >= ...

    For each publication, select all the data available:

        select * from Audits
        where AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009';

    For each publication, select all the data but exclude the latest row (by AuditEndDate):

        select * from Audits
        left join (select AuditID as aid, PublicationID as pid, max(AuditEndDate) as pend
                   from Audits
                   where AuditEndDate <= '31/03/2009' /* user restriction */
                   group by pid) Ax
          on Ax.pid = Audits.PublicationID
        where pend != Audits.AuditEndDate
          and AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restriction */

    For each publication, select only the latest row (by AuditEndDate):

        select * from Audits
        left join (select AuditID as aid, PublicationID as pid, max(AuditEndDate) as pend
                   from Audits
                   where AuditEndDate <= '31/03/2009' /* user restriction */
                   group by pid) Ax
          on Ax.pid = Audits.PublicationID
        where pend = Audits.AuditEndDate
          and AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restriction */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of the restricted set. Can anyone help me? Thanks, Jason

  • How to handle refunds or rebates via a payment processor?

    - by Tai Squared
    I need to handle online payments and am trying to choose a payment processor. One requirement is to handle refunds and rebates to the customer. These won't always be at the time of sale, and not for the entire amount of the purchase. Is this something all payment processors handle? I don't want to have to do this manually, as there may be many rebates, and they may be for relatively small amounts. I see PayPal has a refund API, but other parts of their site talk about sending a refund within 60 days. Is this limit also imposed by the API? Amazon FPS also has a refund API that seems a bit more flexible. The Google Checkout refund has an amount field, but it's unclear to me whether you can do a partial refund, as the description reads "The refund-order command instructs Google Checkout to refund the buyer for a particular order."

    What are some things to look out for when looking for a payment processor that can handle rebates and refunds? Is there always a time limit on issuing these refunds? Is using a merchant account better for this type of process? I was hoping to avoid that due to the increased cost and complexity, but would consider it if it meets all of my requirements.

    Update: It appears the refund process is fairly simple and handled by all processors. Is there any additional information on rebates? I would like to avoid a process of sending live checks to customers, but I will have to send rebates in some small amounts that may be a few months after the initial purchase.

  • Symfony dynamic forms

    - by Asier
    Hi there, I started with a form which is built by hand because of its complexity (it's a JavaScript-modified form, with sortable parts, etc.). The problem is that now I need to do the validation, and it's a total mess to do it from scratch in the action using the sfValidator* classes. So I am thinking of doing it using sfForm, so that my form validation and error handling can be done more easily, and so I can reuse this form for the Edit and Create pages. The form is something like this:

        <form>
            <input name="form[year]"/>
            <textarea name="form[description]"></textarea>
            <div class="sortable">
                <div class="item">
                    <input name="form[items][0][name]"/>
                    <input name="form[items][0][age]"/>
                </div>
                <div class="item">
                    <input name="form[items][1][name]"/>
                    <input name="form[items][1][age]"/>
                </div>
            </div>
        </form>

    The thing is that the sortable part of the form can be expanded from 2 to N elements on the client side, so it has a variable number of items which can be reordered. How can I approach this problem? Any ideas are welcome, thank you. :)

  • Best ways to copyright-protect your work, for example Myows

    - by Saif Bechan
    Recently I read about Myows, which describes itself as "The universal copyright management and protection app for smart creatives". It is used to protect your work's copyright and more. Do you think this would be a good idea for a large application, or are there better ways to achieve such a thing? URL: Myows

    Update (referencing http://stackoverflow.com/questions/2618015/has-anyone-tried-myows-to-copyright-protect-your-work/2618628#2618628): Wow, there is no better person to have answered this question than the creator himself. Currently I am working on a large web application which is in the late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copied. I was glad to see that there is a company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so 'new'. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code and see how it works out. I will leave this question up because I do want some more suggestions on this topic.

  • Convert HTML + CSS to PDF with PHP?

    - by cletus
    OK, I'm now banging my head against a brick wall with this one. I have an HTML (not XHTML) document that renders fine in Firefox 3 and IE 7. It uses fairly basic CSS to style it and renders fine as HTML. I'm now after a way of converting it to PDF. I have tried:

    - DOMPDF: it had huge problems with tables. I factored out my large nested tables and it helped (before that it was just consuming up to 128M of memory and then dying - that's my memory limit in php.ini), but it makes a complete mess of tables and doesn't seem to get images. The tables were just basic stuff with some border styles to add lines at various points.
    - HTML2PDF and HTML2PS: I actually had better luck with these. They rendered some of the images (all the images are Google Chart URLs) and the table formatting was much better, but they seemed to have some complexity problem I haven't figured out yet and kept dying with unknown node_type() errors. Not sure where to go from here.
    - Htmldoc: this seems to work fine on basic HTML but has almost no support for CSS whatsoever, so you have to do everything in HTML (I didn't realize it was still 2001 in Htmldoc-land...), so it's useless to me.

    I tried a Windows app called Html2Pdf Pilot that actually did a pretty decent job, but I need something that at a minimum runs on Linux and ideally runs on demand via PHP on the web server. I really can't believe I'm this stuck. Am I missing something?

  • Do you like Twisted?

    - by Luca
    I use Python Twisted for web development, and I don't like it. I know async programming is a great idea, I know there are many async web servers now, and I know it's the only way to solve some problems you'd have with threads, but I don't like it. The problem is that you're forced to program in a twisted way, so the architecture you have in mind very often has to be modified to fit the way Twisted works. The architecture has to follow the technology, and I don't think that is good. When we use callbacks in JavaScript, we don't have too many difficulties: things are usually simpler, and we use a callback in response to an Ajax call. But in a server web app, things are very often a bit more complex. Writing chains of callbacks doesn't seem to me a wonderful way of programming. The code is not simple, and so it is difficult to understand and to maintain. Writing twisted code, we very often lose the linear, intuitive idea of the algorithm we wanted to implement, especially when things grow in complexity. What's your point of view?

  • How to Avoid PHP Object Nesting/Creation Limit?

    - by Will Shaver
    I've got a handmade ORM in PHP that seems to be bumping up against an object limit and causing PHP to crash. Here's a simple script that will cause crashes:

        <?php
        class Bob {
            protected $parent;

            public function Bob($parent) {
                $this->parent = $parent;
            }

            public function __toString() {
                if ($this->parent)
                    return (string) "x " . $this->parent;
                return "top";
            }
        }

        $bobs = array();
        for ($i = 1; $i < 40000; $i++) {
            $bobs[] = new Bob($bobs[$i - 1]);
        }
        ?>

    Even running this from the command line will cause issues. Some boxes take more than 40,000 objects. I've tried it on Linux/Apache (fail), but my app runs on IIS/FastCGI. On FastCGI this causes the famous "The FastCGI process exited unexpectedly" error. Obviously 20k objects is a bit high, but it crashes with far fewer objects if they have data and nested complexity. FastCGI isn't the issue - I've tried running it from the command line. I've tried setting the memory limit to something really high - 6,000MB - and to something really low - 24MB. If I set it low enough I'll get the "allocated memory size xxx bytes exhausted" error. I'm thinking that it has to do with the number of functions that are called - some kind of nesting prevention. I didn't think that my ORM's nesting was that complicated, but perhaps it is. I've got some pretty clear cases where if I load just ONE more object it dies, but it loads in under 3 seconds if it works.
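
    The crash pattern (fine until exactly one more object, regardless of memory limit) is consistent with deep recursion through the parent chain: each nested __toString call adds a stack frame. Rewriting the walk iteratively removes the deep call stack entirely. A minimal sketch of that same structure, shown in Java purely for illustration:

        class Bob {
            private final Bob parent;

            Bob(Bob parent) {
                this.parent = parent;
            }

            // Walk the parent chain with a loop instead of recursion,
            // so 40,000 links need only constant stack depth.
            @Override
            public String toString() {
                StringBuilder sb = new StringBuilder();
                for (Bob b = this; b != null; b = b.parent) {
                    sb.append(b.parent != null ? "x " : "top");
                }
                return sb.toString();
            }
        }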

  • Can the Singleton be replaced by Factory?

    - by lostiniceland
    Hello everyone. There are already quite a few posts about the Singleton pattern around, but I would like to start another one on this topic, since I would like to know if the Factory pattern would be the right approach to remove this "anti-pattern". In the past I used the singleton quite a lot, as did my fellow colleagues, since it is so easy to use. For example, the Eclipse IDE - or rather its workbench model - makes heavy use of singletons as well. It was some posts about E4 (the next big Eclipse version) that made me start to rethink the singleton. The bottom line was that due to these singletons the dependencies in Eclipse 3.x are tightly coupled.

    Let's assume I want to get rid of all singletons completely and instead use factories. My thoughts were as follows:

    - hide complexity
    - less coupling
    - I have control over how many instances are created (just store the reference in a private field of the factory)
    - mock the factory for testing (with Dependency Injection) when it is behind an interface
    - in some cases one factory can make more than one singleton obsolete (depending on business logic/component composition)

    Does this make sense? If not, please give good reasons why you think so. An alternative solution is also appreciated. Thanks, Marc
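
    A minimal sketch of that idea (all names invented for illustration): the factory caches the reference in a private field, callers depend only on an interface, and a test can substitute a mock factory via dependency injection.

        interface Workbench {
            void open();
        }

        class DefaultWorkbench implements Workbench {
            @Override
            public void open() {
                // real implementation here
            }
        }

        class WorkbenchFactory {
            private Workbench instance; // the factory, not the class itself, limits instances

            synchronized Workbench get() {
                if (instance == null) {
                    instance = new DefaultWorkbench();
                }
                return instance;
            }
        }

    A consumer written against Workbench and handed a WorkbenchFactory never touches a global getInstance(), so the coupling stays at the interface level.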

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and event start/end time columns in the first table below) associated with time and production quantity data (the Output and Time columns in the first table below), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform table content to a granularity of minutes, rather than the current production event ("Event Start Time" to "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant Event End Time and Time columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
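
    For reference, a minimal sketch of the same expansion in application code (Java, all names invented), showing the stated constant-rate assumption: output divided evenly across the elapsed minutes plus one (2120 / 16 rounds to 133, matching the example above).

        import java.time.LocalDateTime;
        import java.time.temporal.ChronoUnit;
        import java.util.ArrayList;
        import java.util.List;

        class MinuteExpander {

            static class MinuteRow {
                final String userId, workId, machineId;
                final LocalDateTime minute;
                final long output;

                MinuteRow(String userId, String workId, String machineId,
                          LocalDateTime minute, long output) {
                    this.userId = userId;
                    this.workId = workId;
                    this.machineId = machineId;
                    this.minute = minute;
                    this.output = output;
                }
            }

            // Expand one production event into per-minute rows, assuming a
            // constant production rate over the whole event.
            static List<MinuteRow> expand(String userId, String workId, String machineId,
                                          LocalDateTime start, LocalDateTime end, long output) {
                LocalDateTime first = start.truncatedTo(ChronoUnit.MINUTES);
                LocalDateTime last = end.truncatedTo(ChronoUnit.MINUTES);
                long minutes = ChronoUnit.MINUTES.between(first, last) + 1; // difference plus one
                long perMinute = Math.round((double) output / minutes);
                List<MinuteRow> rows = new ArrayList<>();
                for (LocalDateTime t = first; !t.isAfter(last); t = t.plusMinutes(1)) {
                    rows.add(new MinuteRow(userId, workId, machineId, t, perMinute));
                }
                return rows;
            }
        }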

  • Grails or Play! for an ex-RoR developer?

    - by Kedare
    Hello, I plan to begin learning a Java web framework (I love the Java API). I have already used Rails and Django. I want something close to Java, but without all the complexity of J2EE. I've found 2 frameworks that could be good for me:

    - Grails: Grails looks great. It uses Groovy, which is better suited than Java for web applications (I think...) but slower, and it builds on pure-Java components (Hibernate, Struts, Spring). It looks pretty simple to deploy (send a .war and it's done!), and GSP is great. It's a bit harder to debug (you need to restart the server on each modification, and the stack trace is a mix of Java and Groovy frames that is not always understandable).
    - Play!: This framework also looks great, and it's faster than Grails (it uses Java directly), but I don't really like how it uses Java: it modifies the source code to transform property access into setXXX/getXXX calls, and I'm not keen on that. The framework also has caching functions that Grails doesn't (already) have. I don't really like the template engine. It's also easier to debug (no need to restart the server, and the stack trace is clear).

    What do you recommend? I am looking for something easy to learn (I have used Ruby a lot, but only a little Java - though I love the Java API), that is full-featured (that's hardly a problem with all the Java libraries available, but I prefer things bundled and integrated), that scales, and that is not too slow (faster than Ruby). If possible, I would want something with a decent community, to easily find support and answers to my questions. ;)

    PS: No JRuby on Rails. Thank you!

  • How to use MSBuild to target multiple versions of .NET Framework?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/soln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software (e.g. NAnt). I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.

  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 Java web apps, both read/write, and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers); all use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this was a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather let Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as this creates another layer of complexity and we're more than happy with the level of performance we get from the app as a whole.

    So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas, please?

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>
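
    For the standalone apps, manual session management would look something like this minimal sketch (assuming Util is the shared SessionFactory holder from the question; the class and method names are invented): one explicitly opened session and transaction per unit of work, instead of the thread-bound getCurrentSession().

        import org.hibernate.Session;
        import org.hibernate.Transaction;

        public class EmailQuestionLoader {

            public void processOneMessage() {
                // openSession() returns a session this code alone owns and must close
                Session session = Util.getSessionFactory().openSession();
                Transaction tx = null;
                try {
                    tx = session.beginTransaction();
                    // load, modify and save entities here
                    tx.commit();
                } catch (RuntimeException e) {
                    if (tx != null) tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }

    Note that if two applications genuinely race on the same row, Hibernate's optimistic locking (a version column) is the usual guard against silent overwrites, independent of how sessions are scoped.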

  • Rails semi-complex STI with ancestry data model: planning the routes and controllers

    - by ere
    I'm trying to figure out the best way to manage my controller(s) and models for a particular use case. I'm building a review system where a User may build a review of several distinct types with a polymorphic Reviewable:

    - Country (has_many reviews & cities)
    - Subdivision/State (optional - sometimes it doesn't exist; also reviewable; has_many cities)
    - City (has places & reviews)
    - Burrow (optional; also reviewable; e.g. Brooklyn)
    - Neighborhood (optional & reviewable; e.g. Williamsburg)
    - Place (belongs to city)

    I'm also wondering about adding more complexity. I want to include subdivisions occasionally - i.e. for the US I might add Texas, or for Germany, Bavaria - and have them be reviewable as well, but not every country has regions, and even those that do might never be reviewed. So it's not at all strict. I would like it to be as simple and flexible as possible.

    It'd kinda be nice if the user could just land on one form and select either a city or a country, and then drill down using data from, say, Foursquare to find a particular place in a city and make a review. I'm really not sure which route I should take. For example, what happens if I have a Country and a City... and then I decide to add a Burrow? Could I give places tags (i.e. Williamsburg, Brooklyn) that belong_to NY City? Tags are more flexible and optionally explain what areas a place might be in; the tags would belong to a city, but also have places and be reviewable. So I'm looking for suggestions from anyone who's done something related. Using Rails 3.2 and Mongoid.

  • How to functionally generate a tree breadth-first. (With Haskell)

    - by Dennetik
    Say I have the following Haskell tree type, where "State" is a simple wrapper:

        data Tree a = Branch (State a) [Tree a]
                    | Leaf (State a)
                    deriving (Eq, Show)

    I also have a function expand :: Tree a -> Tree a which takes a leaf node and expands it into a branch, or takes a branch and returns it unaltered. This tree type represents an N-ary search tree. Searching depth-first is a waste, as the search space is obviously infinite - I can easily keep expanding the search space by calling expand on all the tree's leaf nodes - and the chance of accidentally missing the goal state is huge. Thus the only solution is a breadth-first search, implemented pretty decently over here, which will find the solution if it's there.

    What I want to generate, though, is the tree traversed up to the point of finding the solution. This is a problem because I only know how to do this depth-first, which could be done by simply calling the expand function again and again upon the first child node... until a goal state is found. (This would really not generate anything other than a really uncomfortable list.) Could anyone give me any hints on how to do this (or an entire algorithm), or a verdict on whether or not it's possible with a decent complexity? (Or any sources on this, because I found rather few.)

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of re-thinking our tech stack, and below are our choices (we can't live without Spring and Hibernate due to the complexity etc. of the app). We are also moving from J2EE 1.4 to JEE 5.

    Tech stack:

    - JEE 5
    - JPA 2.0 (I know JEE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
    - Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time. This means we want both JPA and hbm mappings to work together, hence Hibernate as the JPA provider instead of the default that comes with the app server)

    Now the problem is that I want to stick with local transactions, but other team members want to use JTA. I have been working with J2EE for the last 9 years, and I've heard time and again people suggest sticking with local transactions if you don't need two-phase commits. This is not only for performance reasons; debugging/troubleshooting a local transaction is a lot easier than a distributed one. My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager).

    I want to make sure whether I am being paranoid or I have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.
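
    For reference, a minimal sketch of the suggested setup using Spring's Hibernate 3 support (bean and class names assumed; older Spring versions would express the same thing in XML as <tx:annotation-driven/> plus a HibernateTransactionManager bean):

        import org.hibernate.SessionFactory;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.orm.hibernate3.HibernateTransactionManager;
        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.annotation.EnableTransactionManagement;
        import org.springframework.transaction.annotation.Transactional;

        @Configuration
        @EnableTransactionManagement
        class LocalTxConfig {

            @Bean
            PlatformTransactionManager transactionManager(SessionFactory sessionFactory) {
                // Plain resource-local Hibernate transactions: no JTA, no XA driver
                return new HibernateTransactionManager(sessionFactory);
            }
        }

        class QuestionService {

            @Transactional // demarcates one local Hibernate transaction
            public void saveQuestion(/* ... */) {
                // DAO calls here share one Session and commit or roll back together
            }
        }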

  • What is this algorithm for finding the maximum path on a Directed Acyclic Graph called?

    - by Martín Fixman
    For some time I have been using an algorithm that runs in O(V + E) for finding the maximum path on a Directed Acyclic Graph from point A to point B. It consists of doing a flood fill to find which nodes are reachable from node A and how many "parents" (incoming edges from other reachable nodes) each node has, then doing a BFS that only "activates" a node once all of its "parents" have been used:

        queue<int> q
        int paths[]    // number of parents of node i not yet processed
        int edge[][]   // adjacency list of the graph
        int mpath[]    // max path from 0 to i (without counting the weight of i)
        int weight[]   // weight of each node

        // flood fill from node 0, counting each reachable node's parents
        mpath[0] = 0
        q.push(0)
        while not empty(q)
            u = q.pop()
            for i in edge[u]
                paths[i] += 1
                if i not seen before
                    q.push(i)

        // activate a node only once all of its parents have been used
        q.push(0)
        while not empty(q)
            u = q.pop()
            for i in edge[u]
                mpath[i] = max(mpath[i], mpath[u] + weight[u])
                paths[i] -= 1
                if paths[i] == 0
                    q.push(i)

    Is there any special name for this algorithm? I described it to an informatics professor and he just called it "Maximum Path on a DAG", but it doesn't sound good when you say "I solved the first problem with a Fenwick tree, the second with Dijkstra, and the third with Maximum Path".
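
    A minimal runnable version of the same idea in Java (essentially dynamic programming over a topological order, produced Kahn-style by decrementing parent counts):

        import java.util.ArrayDeque;
        import java.util.Arrays;
        import java.util.Deque;
        import java.util.List;

        class DagMaxPath {

            // edges.get(u) lists the successors of u; weight[u] is u's node weight.
            // Returns the best path weight from source to every node (excluding the
            // destination's own weight); Long.MIN_VALUE marks unreachable nodes.
            static long[] maxPathFrom(int source, List<List<Integer>> edges, int[] weight) {
                int n = edges.size();
                int[] pending = new int[n];           // unprocessed parents per node
                boolean[] reachable = new boolean[n];
                long[] best = new long[n];
                Arrays.fill(best, Long.MIN_VALUE);
                best[source] = 0;

                // flood fill: mark reachable nodes and count their incoming edges
                Deque<Integer> queue = new ArrayDeque<>();
                queue.push(source);
                reachable[source] = true;
                while (!queue.isEmpty()) {
                    int u = queue.pop();
                    for (int v : edges.get(u)) {
                        pending[v]++;
                        if (!reachable[v]) {
                            reachable[v] = true;
                            queue.push(v);
                        }
                    }
                }

                // activate a node only once every reachable parent has relaxed it
                queue.push(source);
                while (!queue.isEmpty()) {
                    int u = queue.pop();
                    for (int v : edges.get(u)) {
                        best[v] = Math.max(best[v], best[u] + weight[u]);
                        if (--pending[v] == 0) {
                            queue.push(v);
                        }
                    }
                }
                return best;
            }
        }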

  • How do you make life easier for yourself when developing a really large database

    - by Hannes de Jager
    I am busy developing 2 web-based systems with MySQL databases, and the number of tables/views/stored routines is really becoming a lot; it is more and more challenging to handle the complexity. In programming languages we have namespacing - e.g. Java packages, C++ namespaces - to partition the software and group things together to make them more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least): tables and stored procedures live at the same level. So one has to be more creative - establish naming conventions, perhaps use more than one database, or use tools to visualize things. What methods do you use to ease the pain, to stay effective while developing your databases, and to not get lost in a sea of tables, fields, and stored procs? Feel free to mention tools you use as well, but try to restrict it to open source and preferably Linux solutions, if that's OK. By the way, how many tables would a database need before you'd consider it large in terms of design?

  • Data Structures for Junior Java Developer

    - by user1639637
    OK, still learning arrays. I wrote this code, which fills the array named rand with random numbers between 0 and 1 (exclusive). I want to start learning complexity. The for loop executes n times, and each iteration takes O(1) time, so the worst-case scenario is O(n) - am I right?

        import java.util.Arrays;

        public class random {
            public static void main(String args[]) {
                double[] rand = new double[10];
                for (int i = 0; i < rand.length; i++) {
                    rand[i] = (double) Math.random();
                    System.out.println(rand[i]);
                }
                Arrays.sort(rand);
                System.out.println(Arrays.toString(rand));
            }
        }

    I also used an ArrayList to store 100 elements, imported Collections, and used the Collections.sort() method to sort them:

        import java.util.ArrayList;
        import java.util.Collections;

        public class random {
            public static void main(String args[]) {
                ArrayList<Double> MyArrayList = new ArrayList<Double>();
                for (int i = 0; i < 100; i++) {
                    MyArrayList.add(Math.random());
                }
                Collections.sort(MyArrayList);
                for (int j = 0; j < MyArrayList.size(); j++) {
                    System.out.println(MyArrayList.get(j));
                }
            }
        }

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.

    The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time - and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?

  • Doing without partial commits the "Mercurial way"

    - by David Moles
    Subversion shop considering switching to Mercurial, trying to figure out in advance what all the complaints from developers are going to be. There's one fairly common use case here that I can't see how to handle. I'm working on some largish feature, and I have a significant part of the code -- or possibly several significant parts of the code -- in pieces all over the garage floor, totally unsuitable for checkin, maybe not even compiling. An urgent bugfix request comes in. The fix is nice and local and doesn't touch any of the code I've been working on. I make the fix in my working copy. Now what? I've looked at "Mercurial cherry picking changes for commit" and "best practices in mercurial: branch vs. clone, and partial merges?" and all the suggestions seem to be extensions of varying complexity, from Record and Shelve to Queues. The fact that there apparently isn't any core functionality for this makes me suspect that in some sense this working style is Doing It Wrong. What would a Mercurial-like solution to this use case look like?

  • Namespace constraint with generic class declaration

    - by SomeGuy
    Good afternoon, people. I would like to know if (and if so, how) it is possible to use a namespace as a constraint in a generic class declaration. What I have is this:

        namespace MyProject.Models.Entities   // contains my classes to be persisted in the db
        namespace MyProject.Tests.BaseTest    // obvious, I think

    Now the declaration of my BaseTest class looks like this:

        public class BaseTest<T>

    This BaseTest does little more (at the time of writing) than remove all entities that were added to the database during testing. So typically a test class is declared as:

        public class MyEntityRepositoryTest : BaseTest<MyEntity>

    What I would LIKE to do is something similar to the following:

        public class BaseTest<T> where T : <is of the MyProject.Models.Entities namespace>

    Now I am aware that it would be entirely possible to simply declare a BaseEntity class from which all entities created within the MyProject.Models.Entities namespace inherit:

        public class BaseTest<T> where T : MyBaseEntity

    but... I don't actually need to, or want to. Plus, I am using an ORM, and mapping entities with inheritance, although possible, adds a layer of complexity that is not required. So, is it possible to constrain a generic class parameter to a namespace rather than a specific type? Thank you for your time.
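
    Generic constraints (in C# and Java alike) can only name base types and interfaces, never namespaces or packages, so the closest substitute is a runtime guard. A minimal sketch in Java, with the package name assumed purely for illustration:

        public abstract class BaseTest<T> {

            protected BaseTest(Class<T> entityType) {
                // Enforce at construction time what the type system cannot:
                // the entity class must live in the models package.
                String pkg = entityType.getPackage().getName();
                if (!pkg.equals("myproject.models.entities")) {
                    throw new IllegalArgumentException(
                            "BaseTest only accepts entity types, got " + pkg);
                }
            }
        }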

  • how to elegantly duplicate a graph (neural network)

    - by macias
    I have a graph (network) which consists of layers, which contain nodes (neurons). I would like to write a procedure to duplicate the entire graph in the most elegant way possible - i.e., with minimal or no overhead added to the structure of the node or layer. Put another way: the procedure may be complex, but the complexity should not "leak" into the structures. They should not become complex just because they are copyable.

    I wrote the code in C#; so far it looks like this:

    - a neuron has an additional field, copy_of, which is a pointer to the neuron it was copied from - this is my additional overhead
    - a neuron has a parameterless method Clone()
    - a neuron has a method Reconnect(), which exchanges a connection from the "source" neuron (parameter) to the "target" neuron (parameter)
    - a layer has a parameterless method Clone(), which simply calls Clone() for all its neurons
    - the network has a parameterless method Clone(), which calls Clone() for every layer, then iterates over all neurons, creates the neuron=copy_of mappings, and calls Reconnect to exchange all the "wiring"

    I hope my approach is clear. The question is: is there a more elegant method? I particularly don't like keeping an extra pointer in the neuron class just in case it gets copied. I would like to gather the data in one place (the network's Clone) and then dispose of it completely (the Clone method cannot take an argument, though).
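
    One common answer is to keep the original-to-copy mapping in a temporary dictionary inside the clone routine itself, so no per-neuron field is needed and the map is discarded when cloning finishes. A minimal sketch in Java (types invented for illustration; the C# equivalent would use a Dictionary keyed by reference identity):

        import java.util.ArrayList;
        import java.util.IdentityHashMap;
        import java.util.List;
        import java.util.Map;

        class Neuron {
            final List<Neuron> inputs = new ArrayList<>();
        }

        class Network {
            final List<List<Neuron>> layers = new ArrayList<>();

            Network copy() {
                // all bookkeeping lives here and is garbage once copy() returns
                Map<Neuron, Neuron> copies = new IdentityHashMap<>();
                Network result = new Network();

                // pass 1: duplicate every neuron, recording original -> copy
                for (List<Neuron> layer : layers) {
                    List<Neuron> layerCopy = new ArrayList<>();
                    for (Neuron n : layer) {
                        Neuron c = new Neuron();
                        copies.put(n, c);
                        layerCopy.add(c);
                    }
                    result.layers.add(layerCopy);
                }

                // pass 2: rewire the copies through the map
                for (List<Neuron> layer : layers) {
                    for (Neuron n : layer) {
                        Neuron c = copies.get(n);
                        for (Neuron in : n.inputs) {
                            c.inputs.add(copies.get(in));
                        }
                    }
                }
                return result;
            }
        }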
