Search Results

Search found 1366 results on 55 pages for 'complexity'.


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists; otherwise it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article) then you have to deal with multiple contexts and the complexity around detaching and attaching entities from contexts. My solution is to share the context between posts by creating a single View Model that includes all required repositories (initialised to share the same context) plus any associated data, and to store this model in a Session variable, retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model Session variable or, more importantly, the size of the Session variable. So two questions I suppose: firstly, should I look for a better solution to handle the shared-context-across-posts issue (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!

    Read the article

  • What options to parse a DTD using PHP

    - by Chadwick
    I need to parse DTDs using PHP and am hoping there's a simple library to help out. Each DTD has numerous <!ENTITY... and <!-- Comment... elements, which I need to act upon. Note that I do not need to validate anything against these DTDs, simply parse them as data files themselves. A few options I've looked at: James Clarke's SD, which is an option of last resort, but I'd like to avoid the complexity of building/installing/configuring code external to PHP; I'm not sure it's even possible in my situation. PEAR has an XML_DTD_Parser, which requires installing/configuring PEAR and a number of PEAR modules, which I'm also not sure is possible, and would rather avoid. Has anyone used it with success? PHP XML Classes has the class_path_parser, which another site suggested, but it fails to read ENTITY elements. It appears to be using PHP's built-in XML parsing capabilities, which use Expat. PHP's DOMDocument will validate against a DTD, so it must be able to read them, though I don't see how to get at the DTD parser directly at first glance.
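
    One avenue that may be worth trying before reaching for an external tool is to let libxml load the DTD and then walk the declared entities through DOMDocument. This is only a sketch under assumptions (the DTD file name is a placeholder, and whether entities from an external subset are actually exposed this way should be verified); DTD comments would not be visible through this route in any case.

        <?php
        // Sketch: wrap the standalone DTD in a minimal document and ask libxml to load it.
        // "my.dtd" is a placeholder for the DTD file being parsed.
        $xml = '<?xml version="1.0"?><!DOCTYPE root SYSTEM "my.dtd"><root/>';
        $dom = new DOMDocument();
        $dom->loadXML($xml, LIBXML_DTDLOAD);              // LIBXML_DTDLOAD fetches and parses the DTD
        foreach ($dom->doctype->entities as $entity) {    // declared entities, if libxml exposes them
            echo $entity->nodeName, ' => ', $entity->nodeValue, "\n";
        }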

    Read the article

  • Objective-C Implementation Pointers

    - by Dwaine Bailey
    Hi, I am currently writing an XML parser that parses a lot of data, with a lot of different nodes (the XML isn't designed by me, and I have no control over the content...) Anyway, it currently takes an unacceptably long time to download and read in (about 13 seconds) and so I'm looking for ways to increase the efficiency of the read. I've written a function to create hash values, so that the program no longer has to do a lot of string comparison (just NSUInteger comparison), but this still isn't reducing the complexity of the read-in... So I thought maybe I could create an array of IMPs so that I could then do something like: for(int i = 0; i < [hashValues count]; i ++) { if(currHash == [[hashValues objectAtIndex:i] unsignedIntValue]) { [impArray objectAtIndex:i]; } } Or something like that. The only problem is that I don't know how to actually make the call to the IMP function. I've read that I perform the selector that an IMP defines by doing IMP tImp = [impArray objectAtIndex:i]; tImp(self, @selector(methodName)); But, if I need to know the name of the selector anyway, what's the point? Can anybody help me out with what I want to do? Or even just some more ways to increase the efficiency of the parser...
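
    A sketch of the usual IMP-calling pattern (the method name and variables are illustrative, not from the post): an IMP is an ordinary C function pointer, so it is cast to the right signature and called directly. The SEL still has to be passed as the second argument, but the point of caching the IMP is that the dynamic method lookup happens once instead of on every node.

        // Sketch: cache and call an IMP directly (Objective-C).
        SEL sel = @selector(handleFooNode:);                      // hypothetical handler selector
        IMP imp = [self methodForSelector:sel];                   // one-time lookup
        void (*handleFoo)(id, SEL, id) = (void (*)(id, SEL, id))imp;
        handleFoo(self, sel, nodeValue);                          // direct call, no per-node dispatch

    Note that an IMP cannot be stored in an NSArray directly; it would need to be wrapped (for example with [NSValue valueWithPointer:imp]) or kept in a plain C array alongside the hash values.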

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

    +---------+---------+------------+---------------------+---------------------+--------+------+
    | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
    +---------+---------+------------+---------------------+---------------------+--------+------+

    The reprocessing dis-aggregation that we would like to do would transform table content based on a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

    +---------+---------+------------+-------------------+--------+
    | User ID | Work ID | Machine ID | Production Minute | Output |
    +---------+---------+------------+-------------------+--------+
    | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
    +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and modify the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
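
    A sketch of one way to do this entirely in MySQL (the table and column names here are illustrative, and it assumes a helper table seq holding consecutive integers 0, 1, 2, ... at least as large as the longest event in minutes):

        -- Sketch: expand each production event into one row per elapsed minute.
        INSERT INTO production_minutes (user_id, work_id, machine_id, production_minute, output)
        SELECT a.user_id,
               a.work_id,
               a.machine_id,
               DATE_FORMAT(a.event_start_time + INTERVAL s.n MINUTE, '%Y-%m-%d %H:%i'),
               a.output / (TIMESTAMPDIFF(MINUTE, a.event_start_time, a.event_end_time) + 1)
        FROM   audits a
        JOIN   seq s
          ON   s.n <= TIMESTAMPDIFF(MINUTE, a.event_start_time, a.event_end_time);

    The join fans each source row out to (minutes + 1) rows, one per machine and minute, and the division spreads the output evenly, matching the constant-rate assumption described above.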

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

    Category | Column2 | Column3
    A        | 2       | 3.50
    A        | 3       | 2
    B        | 3       | 2
    B        | 1       | 5
    ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that would handle any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it but I cannot wrap my head around it right now. How would you accomplish it? EDIT I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application so I did not think it was an option. Am I wrong in thinking that? EDIT Starting a bounty because I believe this would be a good technique to have in the toolbox to use no matter where the different resultsets are coming from. I believe knowing the most concise way to group any 2 somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
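
    A minimal sketch of the in-code grouping for .NET 2.0 (no LINQ), assuming the resultset has been read into a DataTable with the column names shown above; the variable names are illustrative:

        // Requires System.Collections.Generic and System.Data.
        // Maps each category to running sums of Column2 and Column3.
        Dictionary<string, double[]> totals = new Dictionary<string, double[]>();
        foreach (DataRow row in table.Rows)
        {
            string category = Convert.ToString(row["Category"]);
            double[] sums;
            if (!totals.TryGetValue(category, out sums))
            {
                sums = new double[2];
                totals.Add(category, sums);
            }
            sums[0] += Convert.ToDouble(row["Column2"]);
            sums[1] += Convert.ToDouble(row["Column3"]);
        }
        // totals["A"][0] is the Column2 sum for category A, totals["A"][1] the Column3 sum.

    The same loop works over an IDataReader if the rows are streamed rather than buffered in a DataTable.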

    Read the article

  • VB.NET vs. C#.NET?

    - by Onion-Knight
    Hello everyone, The company I work for has all of its legacy ("legacy" being used rather liberally in this context) code in VB.NET. They have about 6000+ lines of VB.NET code, so all of the developers are comfortable with it. We have started to develop a new product, and are finding that some modules are easier to complete in C# than in VB.NET, such as Interprocess Communication via WCF. The things our product will eventually need to do are as follows:

    - Communicate via IPC between Windows Services, Silverlight, and WinForms
    - Handle parallelization, and all the complexity that comes along with it
    - Windows Service and WinForms development
    - ASP.NET, AJAX, and Silverlight development
    - Database (SQL) access
    - Lots of event handling (sync and async events)

    My question is: Given the type of work we will be doing to complete our product, are there features of one language that will make life easier that the other does not have? And if so, is it worth asking the developers to switch to a language they are less comfortable with? I was hoping to keep this as objective as possible, by listing exactly what type of work we will be doing with the product. Please don't turn this into a VB/C# holy war. Thanks, Onion-Knight

    Read the article

  • Search implementation dilemma: full text vs. plain SQL

    - by Ethan
    I have a MySQL/Rails app that needs search. Here's some info about the data: Users search within their own data only, so searches are narrowed down by user_id to begin with. Each user will have up to about five thousand records (they accumulate over time). I wrote out a typical user's records to a text file. The file size is 2.9 MB. Search has to cover two columns: title and body. title is a varchar(255) column. body is column type text. This will be lightly used. If I average a few searches per second that would be surprising. It's running on a 500 MB CentOS 5 VPS machine. I don't want relevance ranking or any kind of fuzziness. Searches should be for exact strings and reliably return all records containing the string. Simple date order -- newest to oldest. I'm using the InnoDB table type. I'm looking at plain SQL search (through the searchlogic gem) or full text search using Sphinx and the Thinking Sphinx gem. Sphinx is very fast and Thinking Sphinx is cool, but it adds complexity, a daemon to maintain, cron jobs to maintain the index. Can I get away with plain SQL search for a small-scale app?
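
    For the scale described (a few thousand rows per user, at most a few searches per second), a plain indexed query is often sufficient; a sketch, with the table name and date column invented for illustration:

        -- Sketch: exact substring match over title and body, scoped to one user, newest first.
        -- An index on (user_id, created_at) keeps the scanned set small; the LIKE '%...%'
        -- predicates cannot use an index, but only that user's ~5000 rows are examined.
        SELECT *
        FROM   records
        WHERE  user_id = ?
          AND  (title LIKE CONCAT('%', ?, '%') OR body LIKE CONCAT('%', ?, '%'))
        ORDER  BY created_at DESC;

    If that ever becomes too slow, note that MySQL's built-in FULLTEXT indexes were MyISAM-only before MySQL 5.6, so they are not an option on an InnoDB table of that era without a separate index table, which is exactly why Sphinx tends to come up.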

    Read the article

  • searching for a programming platform with hot code swap

    - by Andreas
    I'm currently brainstorming over the idea of how to upgrade a program while it is running. (Not while debugging, a "production" system.) But one thing that is required for it is to actually submit the changed source code or compiled byte code into the running process. Pseudo code:

        var method = typeof(MyClass).GetMethod("Method1");
        var content = // get it from a database (bytecode or source code)
                      // SELECT content FROM methods WHERE id=? AND version=?
        method.SetContent(content);

    At first, I want the system to work without the complexity of object-orientation. That leads to the following requirements:

    - change the source code or byte code of a function
    - drop functions
    - add new functions
    - change the signature of a function

    With .NET (and others) I could inject a class via an IoC and could thus change the source code. But the loading would be cumbersome, because everything has to be in an Assembly or created via Emit. Maybe with Java this would be easier? The whole ClassLoader is replaceable, I think. With JavaScript I could achieve many of the goals. Simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions with delete. Which general-purpose platform can handle such things?

    Read the article

  • What is the best software design to use in this scenario

    - by domdefelice
    I need to generate HTML snippets using jQuery. The creation of those snippets depends on some data. The data is stored server-side, in session (where PHP is used). At the moment I achieve this by retrieving the data from the server via AJAX in the form of JSON, and building the snippets via specific JavaScript functions that read that data. The problem is that the complexity of the data is growing, and hence the serialization into JSON is getting even more difficult, since I can't do it automatically. I can't do it automatically because some information is sensitive, so I generate a "stripped" version to send to the client. I know it is difficult to understand without any code to read, but I am hoping this is a common scenario and would be glad for any tip, suggestion or even design pattern you can give me. Should I store both a complete and a stripped version of the data on the server and then use some library to automatically generate the JSON from the stripped data? But this also means I have to keep the two data sets synchronized. Or maybe I could move the logic server-side, this way avoiding sending the data. But this means sending JavaScript code (since I rely on jQuery). Maybe not a good idea. Feel free to ask me for more details if this is not clear. Thank you for any help.

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; a database, PostgreSQL, MySQL, SQL Server, Oracle. Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple. MessageDigest in Java provides MD5, SHA1 (through SHA512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide methods for MD5 encryption functions on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, and the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128 byte salt and getting the substring of the original hash that is the hashed password. Then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close?
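
    For what it's worth, the hash is never split apart afterwards and the salt is not recovered from it: the single-column trick is simply to store the (non-secret) salt next to the hash inside the same column, for example as "hex(salt):hex(hash)". At login the stored salt is read back, the submitted password is re-hashed with it, and the two hashes are compared. A minimal Java 1.6 sketch (the class name and format are illustrative; SHA-256 stands in for whatever digest is chosen, and a slow KDF such as PBKDF2 or bcrypt would be preferable in practice):

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public class PasswordStore {
            // Returns the single-column value: "hex(salt):hex(sha256(salt + password))".
            public static String create(String password) throws Exception {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);           // random, non-secret salt
                return toHex(salt) + ":" + toHex(hash(salt, password));
            }

            // Re-hashes the submitted password with the stored salt and compares.
            public static boolean verify(String password, String stored) throws Exception {
                String[] parts = stored.split(":");           // parts[0] = salt, parts[1] = hash
                return toHex(hash(fromHex(parts[0]), password)).equals(parts[1]);
            }

            private static byte[] hash(byte[] salt, String password) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                return md.digest(password.getBytes("UTF-8"));
            }

            private static String toHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            private static byte[] fromHex(String s) {
                byte[] out = new byte[s.length() / 2];
                for (int i = 0; i < out.length; i++)
                    out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
                return out;
            }
        }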

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL and pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database mentioned above). Most of the user's data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach

    Read the article

  • Fast Lightweight Image Comparison Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, and hence this language is available. Specific constraints:

    - Qualcomm® MSM7201A™, 528 MHz processor
    - 320 x 480 pixel bitmap in 32-bit ARGB
    - ~2 seconds computational time for the native method to get the metric
    - ~2 seconds to compare the metric of the current image with training data

    This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas: I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function. I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
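
    A sketch of the colour-histogram idea in C (the function name and bin sizes are illustrative): three coarse 16-bin histograms, normalised by pixel count, give a 48-float signature, which is tiny next to the ~0.5 MB bitmap and can be compared against stored signatures with a simple sum of absolute differences.

        /* Sketch: 48-float colour-histogram signature from 32-bit ARGB pixels. */
        #include <string.h>
        #include <stdint.h>

        void histogram_signature(const uint32_t *pixels, int count, float sig[48]) {
            int bins[48];
            memset(bins, 0, sizeof(bins));
            for (int i = 0; i < count; i++) {
                uint32_t p = pixels[i];
                bins[      ((p >> 16) & 0xFF) / 16]++;    /* red   -> bins  0..15 */
                bins[16 + (((p >>  8) & 0xFF) / 16)]++;   /* green -> bins 16..31 */
                bins[32 + (( p        & 0xFF) / 16)]++;   /* blue  -> bins 32..47 */
            }
            for (int i = 0; i < 48; i++)
                sig[i] = (float)bins[i] / count;          /* normalise: image size cancels out */
        }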

    Read the article

  • latin1/unicode conversion problem with ajax request and special characters

    - by mfn
    Server is PHP5 and the HTML charset is latin1 (iso-8859-1). With regular form POST requests, there's no problem with "special" characters like the em dash (–) for example. Although I don't know for sure, it works. Probably because there exists a representable character for the browser at char code 150 (which is what I see in PHP on the server for a literal em dash with ord). Now our application also provides some kind of preview mechanism via AJAX: the text is sent to the server and a complete HTML for a preview is sent back. However, the ordinary char code 150 em dash character, when sent via AJAX (tested with GET and POST), mutates into something more: %E2%80%93. I see this already in the Apache log. According to various sources I found, e.g. http://www.tachyonsoft.com/uc0020.htm , this is the UTF-8 byte representation of the em dash, and my current knowledge is that JavaScript handles everything in Unicode. However, within my app I need everything in latin1. Simply said: just like a regular POST request would have given me that em dash as char code 150, I would need that for the translated UTF-8 representation too. That's where I'm failing, because with PHP on the server when I try to decode it with either utf8_decode(...) or iconv('UTF-8', 'iso-8859-1', ...), in both cases I get a regular ? representing this character (and iconv also throws me a notice: Detected an illegal character in input string). My goal is to find an automated solution, but maybe I'm trying to be überclever in this case? I've found other people simply doing manual replacing with a predefined input/output set; but that would always give me the feeling I could lose characters. The observant reader will note that I'm behind on understanding the full impact/complexity of Unicode and character conversion, and I definitely prefer to understand the thing as a whole rather than do a simple manual mapping. Thanks
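
    For what it's worth, %E2%80%93 is the UTF-8 encoding of U+2013, and byte 150 maps to that dash in Windows-1252 rather than in strict ISO-8859-1 (where 0x96 is a control character), which is why iconv to 'iso-8859-1' rejects it. A sketch of the automated route, converting the AJAX payload to Windows-1252 instead (the field name is illustrative):

        // Sketch: convert the UTF-8 text sent by XMLHttpRequest back to Windows-1252,
        // so the dash lands on byte 150 again; //TRANSLIT approximates anything unmappable.
        $text = iconv('UTF-8', 'Windows-1252//TRANSLIT', $_POST['text']);
        // or: $text = mb_convert_encoding($_POST['text'], 'Windows-1252', 'UTF-8');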

    Read the article

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment problem

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application. Our set-up is an ASP.NET/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We've recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us) and whereas we had none of these issues with the previous release, the added complexity that the new upgrade brings to the product has caused a lot of issues. We are also running SQL Server 2000 (build 8.00.2039, or SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via a TCP/IP connection. Primarily, the exceptions being thrown are:

    - System.IndexOutOfRangeException: Cannot find table 0.
    - System.ArgumentException: Column 'password' does not belong to table Table. [This exception occurs in the log-in script, even though there is clearly a password column available]
    - System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. [This one is occurring very regularly]
    - System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
    - System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting.
    - System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate. We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut is telling me it's ASP.NET - SQL Server 2000 connection issues. We've pretty much ruled out code-level Data Access Layer errors at this stage (we've a development team of 15 experienced developers working on this) so we think it's a specific production server environment issue.

    Read the article

  • Symfony dynamic forms

    - by Asier
    Hi there, I started with a form, which is made by hand because of its complexity (it's a JavaScript-modified form, with sortable parts, etc.). The problem is that now I need to do the validation, and it's a total mess to do it from scratch in the action using the sfValidator* classes. So I am thinking of doing it using sfForm so that my form validation and error handling can be done more easily and I can reuse this form for the Edit and Create pages. The form is something like this:

        <form>
          <input name="form[year]"/>
          <textarea name="form[description]"></textarea>
          <div class="sortable">
            <div class="item">
              <input name="form[items][0][name]"/>
              <input name="form[items][0][age]"/>
            </div>
            <div class="item">
              <input name="form[items][1][name]"/>
              <input name="form[items][1][age]"/>
            </div>
          </div>
        </form>

    The thing is that the sortable part of the form can be expanded from 2 to N elements on the client side, so it has a variable item quantity and the items can be reordered. How can I approach this problem? Any ideas are welcome, thank you. :)

    Read the article

  • How to react when asked a question you already know during an interview

    - by DevNull
    The short story:- If you are asked a tough algorithmic/puzzle question during an interview, whose solution is already known to you, do you:- Honestly tell the interviewer that you know this question already? -- this could result in bursting the interviewer's ego and him increasing the complexity level of the subsequent questions. Do an Oscar deserving performance and act as if you are thinking and trying hard and slowly getting to the solution? -- depending on your acting skills, could majorly impress the interviewer making the rest of the interview easier. Long story:- OK, this question comes as a result of what happened to me in a recent telephonic interview that I gave - the interview was supposed to be all algorithmic. The interviewer started with an algorithmic question which I had luckily already seen here on Stackoverflow. The best solution to that problem is not very intuitive and is more of a you-get-it-if-you-know-it kind. Now, just to not disappoint the interviewer too much, I took a few seconds as if I was pondering on the problem and then blurted out the answer which I knew too well having read and admired it on SO already. But I guess that gave it away to the interviewer that I already knew this question and since then, he started asking me for more efficient solutions and I kept coming up with approaches (even if not correct or more efficient, but I did touch a lot of different data structures and algos) and he kept asking for more efficient solutions and generally seemed put off by my initial salvo which was unexpected. What should I have done? Cheers!

    Read the article

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try {
            instance = getValue();
        } catch (MyException ex) {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine, I assume because it knows that the exception will disallow execution from proceeding to a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution to continue after it is called? Maybe that could be in the form of a method attribute. Would this be useful or an unnecessary complexity for the compiler?
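
    The compiler's definite-assignment analysis only trusts constructs it can prove end a code path (throw, return, and so on); it has no knowledge that Assert.Fail never returns. A sketch of the two usual workarounds (either one alone is enough):

        MyClass instance = null;                 // 1) initialise, so assignment is definite up front
        try {
            instance = getValue();
        } catch (MyException) {
            Assert.Fail("Caught MyException");
            throw;                               // 2) re-throw, so the compiler knows this path never falls through
        }
        instance.doStuff();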

    Read the article

  • How to handle refunds or rebates via a payment processor?

    - by Tai Squared
    I need to handle online payments and am trying to choose a payment processor. One requirement is to handle refunds and rebates to the customer. These won't always be at the time of sale, and not for the entire amount of the purchase. Is this something all payment processors handle? I don't want to have to do this manually as there may be many rebates, and they may be for relatively small amounts. I see PayPal has a refund API, but other parts of their site talk about sending a refund within 60 days. Is this something also required by the API? Amazon FPS also has a refund API that seems a bit more flexible. The Google Checkout refund has an amount field, but it's unclear to me if you can do a partial refund as the description reads "The refund-order command instructs Google Checkout to refund the buyer for a particular order." What are some things to look out for when looking for a payment processor that can handle rebates and refunds? Is there always a time limit in issuing these refunds? Is using a merchant account better for this type of process? I was hoping to avoid that due to the increased cost and complexity, but would consider it if it meets all of my requirements. Update It appears the refund process is fairly simple and handled by all processors. Is there any additional information on rebates? I would like to avoid a process of sending live checks to customers, but I will have to send rebates in some small amounts that may be a few months after the initial purchase.

    Read the article

  • How to find the latest row for each group of data

    - by Jason
    Hi All, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my View structure. Table: Audits

    AuditID | PublicationID | AuditEndDate | AuditStartDate
    1       | 3             | 13/05/2010   | 01/01/2010
    2       | 1             | 31/12/2009   | 01/10/2009
    3       | 3             | 31/03/2010   | 01/01/2010
    4       | 3             | 31/12/2009   | 01/10/2009
    5       | 2             | 31/03/2010   | 01/01/2010
    6       | 2             | 31/12/2009   | 01/10/2009
    7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this. I need one to get all the data. The next gets only the history data (that is, everything but the latest data item by AuditEndDate), and the last query obtains only the latest data item (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (this is on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clause as AuditEndDate<=blah and AuditStartDate=blah.

    For each publication, select all the data available:

        select * from Audits
        where AuditEndDate <= '31/03/10' and AuditStartDate = '06/06/2009';

    For each publication, select all the data but exclude the latest data available (by AuditEndDate):

        select * from Audits
        left join (select AuditID as aid, PublicationID as pid, max(AuditEndDate) as pend
                   from Audits
                   where AuditEndDate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audits.PublicationID
        where pend != Audits.AuditEndDate
          and AuditEndDate <= '31/03/10' and AuditStartDate = '06/06/2009' /* user restrict */

    For each publication, select only the latest data available (by AuditEndDate):

        select * from Audits
        left join (select AuditID as aid, PublicationID as pid, max(AuditEndDate) as pend
                   from Audits
                   where AuditEndDate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audits.PublicationID
        where pend = Audits.AuditEndDate
          and AuditEndDate <= '31/03/10' and AuditStartDate = '06/06/2009' /* user restrict */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of the restricted set. Can anyone help me? Thanks jason
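
    A sketch of the history-only query (query 2) written against the names in the post, with the same date restriction applied in both the derived table and the outer query (the date literals are illustrative): rows are kept only when their AuditEndDate is not the per-publication maximum.

        SELECT a.*
        FROM Audits a
        JOIN (SELECT PublicationID, MAX(AuditEndDate) AS LastEnd
              FROM Audits
              WHERE AuditEndDate <= '31/03/10'          /* user restriction */
              GROUP BY PublicationID) latest
          ON latest.PublicationID = a.PublicationID
        WHERE a.AuditEndDate <= '31/03/10'              /* same restriction as the subquery */
          AND a.AuditEndDate <> latest.LastEnd;

    Swapping <> for = gives the latest-only query, so a single derived table serves both.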

    Read the article

  • Best ways to copyright protect your work for example Myows

    - by Saif Bechan
    Recently I have read about Myows. They say it's "The universal copyright management and protection app for smart creatives". It is used to protect the copyright of your work, and more. Do you think this will be a good idea for a large application, or are there better ways to achieve such a thing? url: Myows Update Referencing: http://stackoverflow.com/questions/2618015/has-anyone-tried-myows-to-copyright-protect-your-work/2618628#2618628 Wow, there is no better person to have answered this question than the creator himself. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copyable. I was glad to see that there is a company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so 'new'. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject, start uploading my code, and see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • Convert HTML + CSS to PDF with PHP?

    - by cletus
    OK, I'm now banging my head against a brick wall with this one. I have an HTML (not XHTML) document that renders fine in Firefox 3 and IE 7. It uses fairly basic CSS to style it and renders fine in HTML. I'm now after a way of converting it to PDF. I have tried:

    - DOMPDF: it had huge problems with tables. I factored out my large nested tables and it helped (before that it was just consuming up to 128M of memory then dying - that's my limit on memory in php.ini) but it makes a complete mess of tables and doesn't seem to get images. The tables were just basic stuff with some border styles to add some lines at various points.
    - HTML2PDF and HTML2PS: I actually had better luck with this. It rendered some of the images (all the images are Google Chart URLs) and the table formatting was much better, but it seemed to have some complexity problem I haven't figured out yet and kept dying with unknown node_type() errors. Not sure where to go from here.
    - Htmldoc: this seems to work fine on basic HTML but has almost no support for CSS whatsoever, so you have to do everything in HTML (I didn't realize it was still 2001 in Htmldoc-land...), so it's useless to me.

    I tried a Windows app called Html2Pdf Pilot that actually did a pretty decent job but I need something that at a minimum runs on Linux and ideally runs on-demand via PHP on the webserver. I really can't believe I'm this stuck. Am I missing something?

    Read the article

  • Do you like Twisted?

    - by Luca
    I use Python Twisted for web development, and I don't like it. I know async programming is a great idea, I know there are many async web servers now, I know it's the only way to solve some problems you'd have with threads, but I don't like it. The problem is that you're forced to program in a twisted way. So the architecture you have in mind very often has to be modified to fit the way Twisted works. The architecture has to follow the technology; I don't think this is good. When we use callbacks in JavaScript, we don't have too many difficulties: things are usually simpler, we use a callback in response to an Ajax call. But in a server web app things are, very often, a bit more complex. Writing chains of callbacks doesn't seem to me a wonderful way of programming. The code is not simple, and so it is difficult to understand and to maintain. Writing twisted code we very often lose the linear, intuitive idea of the algorithm we wanted to implement, especially when things grow in complexity. What's your point of view?

    Read the article

  • How to Avoid PHP Object Nesting/Creation Limit?

    - by Will Shaver
    I've got a handmade ORM in PHP that seems to be bumping up against an object limit and causing PHP to crash. Here's a simple script that will cause crashes:

        <?php
        class Bob {
            protected $parent;
            public function Bob($parent) {
                $this->parent = $parent;
            }
            public function __toString() {
                if ($this->parent) return (string) "x " . $this->parent;
                return "top";
            }
        }
        $bobs = array();
        for ($i = 1; $i < 40000; $i++) {
            $bobs[] = new Bob($bobs[$i - 1]);
        }
        ?>

    Even running this from the command line will cause issues. Some boxes take more than 40,000 objects. I've tried it on Linux/Apache (fail) but my app runs on IIS/FastCGI. On FastCGI this causes the famous "The FastCGI process exited unexpectedly" error. Obviously 20k objects is a bit high, but it crashes with far fewer objects if they have data and nested complexity. FastCGI isn't the issue - I've tried running it from the command line. I've tried setting the memory to something really high - 6,000MB - and to something really low - 24MB. If I set it low enough I'll get the "allocated memory size xxx bytes exhausted" error. I'm thinking that it has to do with the number of functions that are called - some kind of nesting prevention. I didn't think that my ORM's nesting was that complicated but perhaps it is. I've got some pretty clear cases where if I load just ONE more object it dies, but loads in under 3 seconds if it works.

    Read the article

  • Can the Singleton be replaced by Factory?

    - by lostiniceland
    Hello Everyone There are already quite a few posts about the Singleton pattern around, but I would like to start another one on this topic, since I would like to know if the Factory pattern would be the right approach to remove this "anti-pattern". In the past I used the singleton quite a lot, as did my fellow colleagues, since it is so easy to use. For example, the Eclipse IDE, or better its workbench model, makes heavy usage of singletons as well. It was some posts about E4 (the next big Eclipse version) that made me start to rethink the singleton. The bottom line was that due to these singletons the dependencies in Eclipse 3.x are tightly coupled. Let's assume I want to get rid of all singletons completely and instead use factories. My thoughts were as follows:

    - hide complexity
    - less coupling
    - I have control over how many instances are created (just store the reference in a private field of the factory)
    - mock the factory for testing (with Dependency Injection) when it is behind an interface
    - in some cases the factories can make more than one singleton obsolete (depending on business logic/component composition)

    Does this make sense? If not, please give good reasons for why you think so. An alternative solution is also appreciated. Thanks Marc
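
    A minimal sketch of the factory-behind-an-interface idea (all names invented for illustration): callers depend only on the interface, the factory decides how many instances exist, and a test can hand in a mock through the same seam.

        // Sketch in Java: the factory plays the singleton's role without global static access.
        interface ReportService {
            String render(String reportId);
        }

        class DefaultReportService implements ReportService {
            public String render(String reportId) {
                return "report " + reportId;
            }
        }

        class ReportServiceFactory {
            private ReportService instance;                    // cached, so at most one is created here

            public synchronized ReportService get() {
                if (instance == null) {
                    instance = new DefaultReportService();
                }
                return instance;
            }

            public void set(ReportService service) {           // tests or an IoC container inject a mock
                instance = service;
            }
        }

    Handed around via dependency injection rather than reached through a static getInstance(), the factory keeps the "one instance" property while leaving the coupling at the interface level.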

    Read the article
