Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways, but the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
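
    For what it's worth, one plausible reading (a sketch, not a verified diagnosis): with USE INDEX (published), the DESC scan finds 50 matching rows almost immediately because recent posts tend to come from these active feeds, while the ASC scan starts at the oldest index entries and discards huge numbers of non-matching rows. A composite index covering both the filter and the sort avoids that:

        -- Assumption: a composite key lets InnoDB filter by feed_id and read
        -- posts in published order within each feed.
        ALTER TABLE posts ADD KEY feed_published (feed_id, published);

        -- Then drop the USE INDEX hint and let the optimizer choose:
        SELECT posts.id FROM posts
        WHERE posts.feed_id IN (4953,622,1,1852,4952,76,623,624,10)
        ORDER BY posts.published ASC LIMIT 0, 50;

    Even if MySQL still filesorts across the multiple feed ranges, it only sorts the matching rows instead of walking the whole published index.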


  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data. It's currently working, but I want to re-design the architecture to be more robust. My main goals are:

    - sequentially present the time-series data to the expression trees;
    - allow expression trees to access previous data rows when needed;
    - optimize performance of the data access while evaluating the expression trees;
    - keep a common interface so various types of data can be used.

    Here are the possible approaches I've thought about:

    1. I can evaluate the expression tree by passing a data row into the root node and letting each child node use the same data row.
    2. I can evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data).
    3. Hybrid: an immutable data set is accessible by all of the expression trees, and each expression tree is evaluated by passing in a data row.

    The benefit of the first approach is that the data row is being passed into the expression tree and there is no further query done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows).

    The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one.

    The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows.

    Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: All of my code is written in C#.
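
    One shape the hybrid could take, sketched with invented names (IDataRow, IDataContext) and no claim to being the right abstraction: give every node the current row plus read-only lookback, so nodes that only need the current row pay nothing extra, while lookback functions reach history through the same immutable interface.

        // Hypothetical interfaces: the data set itself is immutable, and a
        // per-evaluation context exposes the current row plus history.
        public interface IDataRow
        {
            double this[string column] { get; }
        }

        public interface IDataContext
        {
            IDataRow Current { get; }        // row being evaluated right now
            IDataRow Lookback(int offset);   // offset = 1 is the previous row
            int Count { get; }               // rows available so far
        }

        public abstract class Node
        {
            // Every node sees the same context; no shared mutable state,
            // so concurrent evaluation of many trees stays safe.
            public abstract double Evaluate(IDataContext data);
        }

    Since the data is append-only during a run, a plain array-backed implementation keeps row access O(1) without the weight of a DataSet.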


  • Converting an int64 value to a Number object in JavaScript

    - by Matt
    I have a COM object which has a method that returns an unsigned int64 (VT_UI8) value. We have an HTML page which contains some JavaScript which can load the COM object and make the call to that method, to retrieve the value as such:

        var foo = MyCOMObject.GetInt64Value();

    This value can easily be displayed to the user in a message dialog using:

        alert(foo);

    or displayed on the page by:

        document.getElementById('displayToUser').innerHTML = foo;

    However, we cannot use this value as a Number (e.g. if we try to multiply it by 2) without the page throwing "Number expected" errors. If we check typeof(foo) it returns "unknown". I've found a workaround for this by doing the following:

        document.getElementById('displayToUser').innerHTML = foo;
        var bar = parseInt(document.getElementById('displayToUser').innerHTML);
        alert(bar * 2);

    What I need to know is how to make that process more efficient. Specifically, is there a way to cast foo to a String explicitly, rather than having to set some document element's innerHTML to foo and then retrieve it from that. I wouldn't mind calling something like:

        alert(parseInt((string)foo) * 2);

    Even better would be if there is a way to directly convert the int64 to a Number, without going through the String conversion, but I hold out less hope for that.
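
    One direction worth trying, untested against this particular COM marshaling: since alert(foo) works, the value evidently has a usable string conversion, so forcing that conversion directly may avoid the innerHTML round-trip entirely.

        // Assumption: the VT_UI8 value coerces via string concatenation
        // or String(), just as it does when passed to alert().
        var asString = '' + foo;          // or: String(foo)
        var asNumber = parseInt(asString, 10);
        alert(asNumber * 2);

        // Caveat: JavaScript Numbers are IEEE doubles, so values above
        // 2^53 lose precision no matter how the conversion is done.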


  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So I am trying to figure out the best way to cache the generated pages.

    The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage.

    However, one issue that I am having is that I can't figure out how to programmatically check when the version of code uploaded to App Engine was updated.

    I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution, since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache expires). Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
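
    If this is the Python runtime, one hedged possibility: App Engine exposes the deployed version, including a minor number that changes on every upload, through the CURRENT_VERSION_ID environment variable. Using it as part of the cache key makes entries invalidate themselves on deploy:

        import os
        from google.appengine.api import memcache

        # CURRENT_VERSION_ID looks like "1.234567890"; the part after the
        # dot changes on each deployment, so keys roll over automatically.
        VERSION = os.environ['CURRENT_VERSION_ID']

        def cached_page(page_name, generate_html):
            key = 'page:%s:%s' % (VERSION, page_name)
            html = memcache.get(key)
            if html is None:
                html = generate_html()
                memcache.set(key, html)
            return html

    The same version string could equally key a datastore entity if memcache eviction is the concern.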


  • Django stupid mark_safe?

    - by Mark
    I wrote this little function for writing out HTML tags:

        def html_tag(tag, content=None, close=True, attrs={}):
            lst = ['<', tag]
            for key, val in attrs.iteritems():
                lst.append(' %s="%s"' % (key, escape_html(val)))
            if close:
                if content is None:
                    lst.append(' />')
                else:
                    lst.extend(['>', content, '</', tag, '>'])
            else:
                lst.append('>')
            return mark_safe(''.join(lst))

    Which worked great, but then I read this article on efficient string concatenation (I know it doesn't really matter for this, but I wanted consistency) and decided to update my script:

        def html_tag(tag, body=None, close=True, attrs={}):
            s = StringIO()
            s.write('<%s' % tag)
            for key, val in attrs.iteritems():
                s.write(' %s="%s"' % (key, escape_html(val)))
            if close:
                if body is None:
                    s.write(' />')
                else:
                    s.write('>%s</%s>' % (body, tag))
            else:
                s.write('>')
            return mark_safe(s.getvalue())

    But now my HTML gets escaped when I try to render it from my template. Everything else is exactly the same. It works properly if I replace the last line with return mark_safe(unicode(s.getvalue())). I checked the return type of s.getvalue(): it should be a str, just like in the first function, so why is this failing?? It also fails with SafeString(s.getvalue()) but succeeds with SafeUnicode(s.getvalue()). I'd also like to point out that I used return mark_safe(s.getvalue()) in a different function with no odd behavior.


  • PHP OOP: Providing Domain Entities with "Identity"

    - by sunwukung
    Bit of an abstract problem here. I'm experimenting with the Domain Model pattern, and barring my other tussles with dependencies, I need some advice on generating identity for use in an Identity Map. In most examples for the Data Mapper pattern I've seen (including the one outlined in this book: http://apress.com/book/view/9781590599099), the user appears to manually set the identity for a given Domain Object using a setter:

        $UserMapper = new UserMapper;
        // returns a fully formed user object from record sets
        $User = $UserMapper->find(1);
        // returns an empty object with appropriate properties for completion
        $UserBlank = $UserMapper->get();
        $UserBlank->setId();
        $UserBlank->setOtherProperties();

    Now, I don't know if I'm reading the examples wrong, but in the first $User object, the $id property is retrieved from the data store (I'm assuming $id represents a row id). In the latter case, however, how can you set the $id for an object if it has not yet acquired one from the data store? The problem is generating a valid "identity" for the object so that it can be maintained via an Identity Map; generating an arbitrary integer doesn't solve it.

    My current thinking is to nominate different fields for identity (i.e. email) and demand their presence when generating blank Domain Objects. Alternatively, demand that all objects be fully formed and use all properties as their identity... hardly efficient. (Or alternatively, dump the Domain Model concept and return to DBAL/DAO/Transaction Scripts, which is seeming increasingly elegant compared to the ORM implementations I've seen...)
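
    One common way out, sketched here with invented class names: treat identity as something the mapper assigns at insert time, and let the in-memory object handle itself serve as the interim identity until the object earns a database id.

        class IdentityMap
        {
            private $byId = array();

            // New objects have no id yet; until insert, the object reference
            // itself is the identity, so no arbitrary integer is needed.
            public function register($object)
            {
                if ($object->getId() !== null) {
                    $this->byId[$object->getId()] = $object;
                }
            }

            public function get($id)
            {
                return isset($this->byId[$id]) ? $this->byId[$id] : null;
            }
        }

        // In the mapper, insert() obtains the id from the data store and
        // only then promotes the object into the map:
        //     $id = $pdo->lastInsertId();
        //     $user->setId($id);
        //     $map->register($user);

    This keeps the map's invariant simple: only persisted objects are findable by id, and a blank object never needs a fake one.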


  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (server reboot, usually). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards.

    One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4Gb in the machine, the DB is under 1.5Gb currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM?

    It could be done the hard way by having a script scan sysobjects & sysindexes and running

        SELECT * FROM <table> WITH (INDEX(<index_name>)) ORDER BY <index_fields>

    for every key and index found, which should cause every used page to be read at least once and so be in RAM. But is there a cleaner or more efficient way?

    All planned instances where the database server is stopped are out of normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so a priming process that, until it completes, slows users down more than an unprimed working set would is not an issue.
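
    For what it's worth, a rough T-SQL sketch of the "hard way" using the newer catalog views; it just touches every leaf page by forcing a scan of each index, with no promise of being the cleanest approach:

        -- Assumption: scanning each index with an explicit hint pulls all of
        -- its pages into the buffer pool. COUNT_BIG(*) avoids shipping rows.
        DECLARE @sql nvarchar(max);
        SELECT @sql = (
            SELECT 'SELECT COUNT_BIG(*) FROM ' +
                   QUOTENAME(s.name) + '.' + QUOTENAME(o.name) +
                   ' WITH (INDEX(' + QUOTENAME(i.name) + '));' + CHAR(10)
            FROM sys.indexes i
            JOIN sys.objects o ON o.object_id = i.object_id
            JOIN sys.schemas s ON s.schema_id = o.schema_id
            WHERE o.type = 'U'           -- user tables only
              AND i.name IS NOT NULL     -- skip heaps
              AND i.type IN (1, 2)       -- clustered and nonclustered only
            FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)');
        EXEC sp_executesql @sql;

    Since the whole database fits in RAM here, one full pass like this should leave essentially everything cached.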


  • Excel CSV into Nested Dictionary; List Comprehensions

    - by victorhooi
    heya, I have an Excel CSV file with employee records in it. Something like this:

        mail,first_name,surname,employee_id,manager_id,telephone_number
        [email protected],john,smith,503422,503423,+65(2)3423-2433
        [email protected],george,brown,503097,503098,+65(2)3423-9782
        ....

    1. Import with DictReader

    I'm using DictReader to put this into a nested dictionary:

        import csv
        gd_extract = csv.DictReader(open('filename 20100331 original.csv'), dialect='excel')
        employees = dict([(row['employee_id'], row) for row in gd_extract])

    Is the above the proper way to do it? It does work, but is it the Right Way? Something more efficient? Also, the funny thing is, in IDLE, if I try to print out employees at the shell, it seems to cause IDLE to crash (there are approximately 1051 rows).

    2. Remove employee_id from inner dict

    The second issue: I'm putting it into a dictionary indexed by employee_id, with the value as a nested dictionary of all the values. However, employee_id is also a key:value inside the nested dictionary, which is a bit redundant. Is there any way to exclude it from the inner dictionary?

    3. Manipulate data in comprehension

    Thirdly, we need to do some manipulations to the imported data. For example, all the phone numbers are in the wrong format, so we need to do some regex there. Also, we need to convert manager_id to an actual manager's name, plus their email address. Most managers are in the same file, while others are in an external_contractors CSV, which is similar but not quite the same format (I can import that to a separate dict, though). Are these two items things that can be done within the single list comprehension, or should I use a for loop? Or does multiple comprehensions work? (Sample code would be really awesome here.) Or is there a smarter way in Python to do it? Cheers, Victor
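
    A sketch of one way to fold all three steps together; normalize_phone, the filename, and the manager lookup are placeholders for whatever transformations actually apply:

        import csv

        def normalize_phone(raw):
            # Placeholder: reformat "+65(2)3423-2433" however you need.
            return raw.replace('(', ' ').replace(')', ' ')

        with open('employees.csv') as f:           # hypothetical filename
            rows = list(csv.DictReader(f, dialect='excel'))

        # First pass: index by employee_id, popping it out of the inner dict.
        employees = {}
        for row in rows:
            emp_id = row.pop('employee_id')        # removes the redundant key
            row['telephone_number'] = normalize_phone(row['telephone_number'])
            employees[emp_id] = row

        # Second pass: resolve manager_id to a name/email, now that every
        # employee is loaded (managers may appear after their reports).
        for row in employees.values():
            manager = employees.get(row.pop('manager_id'))
            if manager is not None:
                row['manager_name'] = manager['first_name'] + ' ' + manager['surname']
                row['manager_mail'] = manager['mail']

    A plain loop reads better than a comprehension once pop() and cross-row lookups get involved, and two passes avoid depending on row order.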


  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mainly STL-compatible map and set class which can use a contiguous block of memory for internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes.

    Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere?

    Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance, and these global pools are not efficient with respect to memory locality in multi-cpu/core processing. Object pools don't work, as the types will be shared between instances but their pool should not be. In many cases a hash map may be an option.
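
    For what it's worth, a minimal sketch of the usual workaround: a map as a sorted std::vector of pairs, which is contiguous and supports reserve. (Newer Boost versions ship boost::container::flat_map, a full-featured version of the same idea.)

        #include <algorithm>
        #include <utility>
        #include <vector>

        template <typename K, typename V>
        class flat_map_sketch {
            std::vector<std::pair<K, V> > data_;  // one contiguous block
            static bool key_less(const std::pair<K, V>& a, const K& b) {
                return a.first < b;
            }
        public:
            void reserve(std::size_t n) { data_.reserve(n); }

            V* find(const K& key) {
                typename std::vector<std::pair<K, V> >::iterator it =
                    std::lower_bound(data_.begin(), data_.end(), key, key_less);
                return (it != data_.end() && it->first == key) ? &it->second : 0;
            }

            void insert(const K& key, const V& value) {
                typename std::vector<std::pair<K, V> >::iterator it =
                    std::lower_bound(data_.begin(), data_.end(), key, key_less);
                if (it != data_.end() && it->first == key) it->second = value;
                else data_.insert(it, std::make_pair(key, value));  // O(n) shift
            }
        };

    Lookups stay O(log n) and iteration is cache-friendly; the trade-off is O(n) inserts, which is often acceptable when the container is built once and queried heavily.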


  • Python template engine

    - by jturo
    Hello there, could somebody help me get started in writing a Python template engine? I'm new to Python, and as I learn the language I've managed to write a little MVC framework running in its own light-weight-WSGI-like server. I've managed to write a script that finds and replaces keys with values (obviously this is not how my script is structured or implemented; this is just an example):

        from string import Template

        html  = '<html>\n'
        html += '  <head>\n'
        html += '    <title>This is so Cool : In Controller HTML</title>\n'
        html += '  </head>\n'
        html += '  <body>\n'
        html += '    Home | <a href="/hi">Hi ${name}</a>\n'
        html += '  </body>\n'
        html += '<html>'

        Template(html).safe_substitute(dict(name='Arturo'))

    My next goal is to implement custom statements, modifiers, functions, etc. (like a 'for' loop), but I don't really know if I should use another module that I don't know about. I thought of regular expressions, but I kind of feel like that wouldn't be an efficient way of doing it. Any help is appreciated, and I'm sure this will be of use to other people too. Thank you
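
    As a starting point only, here is a deliberately tiny sketch of how a block statement can be layered on top of plain substitution: a regex splits out {% for %} blocks and the body is re-rendered once per item. Real engines tokenize and compile templates instead, but the principle is the same.

        import re
        from string import Template

        FOR_BLOCK = re.compile(
            r'\{% for (\w+) in (\w+) %\}(.*?)\{% endfor %\}', re.S)

        def render(text, context):
            def expand(match):
                var, seq, body = match.groups()
                out = []
                for item in context[seq]:
                    scope = dict(context)     # loop variable shadows context
                    scope[var] = item
                    out.append(Template(body).safe_substitute(scope))
                return ''.join(out)
            text = FOR_BLOCK.sub(expand, text)
            return Template(text).safe_substitute(context)

        print render('<ul>{% for n in names %}<li>${n}</li>{% endfor %}</ul>',
                     {'names': ['Arturo', 'Ana']})

    Nested blocks need a proper parser; this flat regex pass is exactly where most hand-rolled engines outgrow regular expressions.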


  • python histogram one-liner

    - by mykhal
    There are many ways to code a histogram in Python. By histogram, I mean a function counting objects in an iterable, resulting in a count table (i.e. a dict), e.g.:

        >>> L = 'abracadabra'
        >>> histogram(L)
        {'a': 5, 'b': 2, 'c': 1, 'd': 1, 'r': 2}

    It can be written like this:

        def histogram(L):
            d = {}
            for x in L:
                if x in d:
                    d[x] += 1
                else:
                    d[x] = 1
            return d

    ...however, there are far fewer ways to do this in a single expression. If we had "dict comprehensions" in Python, we would write:

        >>> { x: L.count(x) for x in set(L) }

    but we don't have them, so we have to write:

        >>> dict([(x, L.count(x)) for x in set(L)])

    This approach may be readable, but it is not efficient: L is walked through multiple times, so it won't work for single-life generators. The function should also iterate well through gen(), where:

        def gen():
            for x in L:
                yield x

    We can go with reduce (R.I.P.):

        >>> reduce(lambda d, x: dict(d, x=d.get(x, 0) + 1), L, {})  # wrong!

    Oops, that does not work: the key name is 'x', not x :( I ended up with:

        >>> reduce(lambda d, x: dict(d.items() + [(x, d.get(x, 0) + 1)]), L, {})

    (In py3k, we would have to write list(d.items()) instead of d.items(), but it's hypothetical, since there is no reduce there.) Please beat me with a better one-liner, more readable! ;)
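
    One candidate answer, assuming a Python with collections.defaultdict (2.5+) or Counter (2.7+/3.1+): both make a single pass, so generators work too.

        from collections import defaultdict

        def histogram(iterable):
            d = defaultdict(int)
            for x in iterable:
                d[x] += 1        # single pass; works on generators
            return dict(d)

        # Or, on Python 2.7+/3.1+, the stdlib one-liner:
        # >>> from collections import Counter
        # >>> Counter('abracadabra')
        # Counter({'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1})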


  • How do I simulate "Is In" using Linq2Sql

    - by flipdoubt
    I often find myself with a list of disconnected Linq2Sql objects or keys that I need to re-select from a Linq2Sql data context to update or delete in the database. If this were SQL, I would use IS IN in the SQL WHERE clause, but I am stuck with what to do in Linq2Sql. Here is a sample of what I would like to write:

        public void MarkValidated(IList<int> idsToValidate)
        {
            using (_Db.NewSession()) // Instantiates new DataContext
            {
                // ThatAreIn <- this is where I am stuck
                var items = _Db.Items.ThatAreIn(idsToValidate).ToList();
                foreach (var item in items)
                    item.Validated = DateTime.Now;
                _Db.SubmitChanges();
            } // Disposes of DataContext
        }

    Or:

        public void DeleteItems(IList<int> idsToDelete)
        {
            using (_Db.NewSession()) // Instantiates new DataContext
            {
                // ThatAreIn <- this is where I am stuck
                var items = _Db.Items.ThatAreIn(idsToDelete);
                _Db.Items.DeleteAllOnSubmit(items);
                _Db.SubmitChanges();
            } // Disposes of DataContext
        }

    Can I get this done in one trip to the database? If so, how? Is it possible to send all those ints to the database as a list of parameters, and is that more efficient than doing a foreach over the list to select each item one at a time?
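
    The standard trick, as far as I know: LINQ to SQL translates Contains on a local collection into a parameterized SQL IN clause, so ThatAreIn can be a one-line extension method (Item and its Id property are assumed names here).

        using System.Collections.Generic;
        using System.Linq;

        public static class ItemQueries
        {
            // Contains on a local list becomes WHERE Id IN (@p0, @p1, ...)
            // in the generated SQL: one round-trip for the whole set.
            public static IQueryable<Item> ThatAreIn(
                this IQueryable<Item> items, IEnumerable<int> ids)
            {
                List<int> idList = ids.ToList();  // ensure a translatable collection
                return items.Where(item => idList.Contains(item.Id));
            }
        }

    One caveat worth checking: each id becomes a SQL parameter, so very large lists can run into SQL Server's parameter limit (around 2100) and may need batching.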


  • Get Last Friday of Month in Java

    - by Nick Klauer
    I am working on a project where the requirement is to have a date calculated as being the last Friday of a given month. I think I have a solution that only uses standard Java, but I was wondering if anyone knew of anything more concise or efficient. Below is what I tested with for this year:

        for (int month = 0; month < 13; month++) {
            GregorianCalendar d = new GregorianCalendar();
            d.set(d.MONTH, month);
            System.out.println("Last Week of Month in "
                    + d.getDisplayName(d.MONTH, Calendar.LONG, Locale.ENGLISH) + ": "
                    + d.getLeastMaximum(d.WEEK_OF_MONTH));
            d.set(d.DAY_OF_WEEK, d.FRIDAY);
            d.set(d.WEEK_OF_MONTH, d.getActualMaximum(d.WEEK_OF_MONTH));
            while (d.get(d.MONTH) > month || d.get(d.MONTH) < month) {
                d.add(d.WEEK_OF_MONTH, -1);
            }
            Date dt = d.getTime();
            System.out.println("Last Friday of Last Week in "
                    + d.getDisplayName(d.MONTH, Calendar.LONG, Locale.ENGLISH) + ": "
                    + dt.toString());
        }
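
    A more concise possibility, relying on a documented Calendar feature: DAY_OF_WEEK_IN_MONTH accepts negative values, where -1 means "the last such weekday of the month", so no loop is needed.

        import java.util.Calendar;
        import java.util.Date;
        import java.util.GregorianCalendar;

        public class LastFriday {
            public static Date lastFridayOf(int year, int month) {
                Calendar cal = new GregorianCalendar(year, month, 1);
                cal.set(Calendar.DAY_OF_WEEK, Calendar.FRIDAY);
                // -1 = last Friday of the month, per the Calendar javadoc
                cal.set(Calendar.DAY_OF_WEEK_IN_MONTH, -1);
                return cal.getTime();
            }

            public static void main(String[] args) {
                for (int m = Calendar.JANUARY; m <= Calendar.DECEMBER; m++) {
                    System.out.println(lastFridayOf(2010, m));
                }
            }
        }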


  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic, but I'm not sure how to proceed.

    I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/mysql projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep track of schemata concurrent to my code revisions.

    [edit] Momentary solution: for the moment I decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]

    [edit2] Plus, I'm now also using a third file called increments.sql, where I put all the changes with dates, etc. to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]
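
    For the dump-per-tag approach, a hedged sketch of the mysqldump invocations (the flags are standard mysqldump options; database and table names here are invented):

        # Schema only, no data -- a stable file that diffs well between tags.
        mysqldump --no-data --routines -u user -p mydb > schema.sql

        # Initial/seed data only, for just the tables that need it.
        mysqldump --no-create-info -u user -p mydb lookup_table config_table > seed-data.sql

    Committing both files alongside the code at each tag keeps the schema history in the same SVN timeline as the revisions that depend on it.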


  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work, because the configure script had found include files that were not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: `    *_cv_*'

    because of this code, which is generated when AC_OUTPUT is called:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_*) ...

    Any thoughts? Is there a correct way to do this?
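
    A hedged suggestion rather than a diagnosis: the conventional autoconf route for cross-compilation is --host with a C compiler in CC (i386-mingw32-g++ is a C++ compiler, which can itself confuse configure's C tests). Something along these lines, assuming the mingw toolchain lives in /opt/local/bin:

        # Tell configure it is cross-compiling; it will look for
        # i386-mingw32-gcc, i386-mingw32-ld, etc. on the PATH.
        PATH=/opt/local/bin:$PATH ./configure --host=i386-mingw32
        make

        # Or name the compilers explicitly:
        ./configure --host=i386-mingw32 \
            CC=/opt/local/bin/i386-mingw32-gcc \
            CXX=/opt/local/bin/i386-mingw32-g++

    With --host set, the configure tests probe the mingw headers instead of the build machine's, which should also cure the original "found include files that aren't available" problem.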


  • Design Practice Retrieve Multiple Users - PHP

    - by pws5068
    Greetings all, I'm looking for a more efficient way to grab multiple users without too much code redundancy. I have a Users class (here's a highly condensed representation):

        class Users
        {
            function __construct()
            {
                ...
            }

            private static function getNew($id)
            {
                // This is all only example code
                $sql = "SELECT * FROM Users WHERE User_ID=?";
                $user = new User();
                $user->setID($id);
                ....
                return $user;
            }

            public static function getNewestUsers($count)
            {
                $sql = "SELECT * FROM Users ORDER BY Join_Date LIMIT ?";
                $users = array();
                // Prepared Statement Data Here
                while ($stmt->fetch()) {
                    $users[] = Users::getNew($id);
                    ...
                }
                return $users;
            }

            // These have more sanity/security checks; demonstration only.
            function setID($id)     { $this->id = $id; }
            function setName($name) { $this->name = $name; }
            ...
        }

    So clearly calling getNew() in a loop is inefficient, because it makes $count calls to the database each time. I would much rather select all users in a single SQL statement. The problem is that I have probably a dozen or so methods for getting arrays of users, so all of these would now need to know the names of the database columns, which repeats a lot of structure that could always change later. What are some other ways that I could approach this problem? My goals include efficiency, flexibility, scalability, and good design practice.
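
    One conventional shape for this, sketched with invented names: keep a single hydration method that owns the column-name knowledge, and let every finder share it. Column changes then touch one place, and each finder is still a single query.

        class UserMapper
        {
            // The only method that knows the column names.
            private static function fromRow(array $row)
            {
                $user = new User();
                $user->setID($row['User_ID']);
                $user->setName($row['Name']);
                return $user;
            }

            private static function fetchAll($sql, array $params)
            {
                global $pdo;  // assumed: a configured PDO instance
                $stmt = $pdo->prepare($sql);
                $stmt->execute($params);
                $users = array();
                while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                    $users[] = self::fromRow($row);
                }
                return $users;
            }

            public static function getNewestUsers($count)
            {
                return self::fetchAll(
                    "SELECT * FROM Users ORDER BY Join_Date LIMIT ?",
                    array($count));
            }
        }

    Each new finder is then just a SQL string plus parameters, with no duplicated row-to-object mapping.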


  • algorithm || method to write prog

    - by fatai
    I am a computer science student. I wonder whether everyone solves problems with the same method or with different ones; actually, I don't know whether they use any method at all, or whether there are common methods for approaching a problem. Teachers give us problems, sometimes in simple form, but they don't introduce any approach or method(s), so we cannot first choose a method, then apply it to the problem, then find the solution, and only then write the code.

    I found one method myself after I failed a course. More precisely, when I encounter a problem in a language, I get some paper and then:

    First, the input/output step: my program will take this/these argument(s) and return X. E.g. in C: the input length is not known in advance and the elements are of the same type, so I must use a pointer; the desired output is in the form of a package, so I use a structure.

    Second, the execution part: in this step, I write down all the steps that lead to the final output. E.g. in Python:

        1.) [ + , [ - , 4 , [ * , 1 , 2 ]], 5 ]
        2.) [ + , [ - , 4 , 2 ], 5 ]
        3.) [ + , 2 , 5 ]
        4.) 7  ==> return 7

    Third, I write test code. E.g. in C++:

        input:
            append 3 4 5 6 vector_x
            remove 0 1
        desired output:
            vector_x holds: 5 6

    But now I wonder about other methods: methods used to construct classes (for C++, Python, Java); methods used for communication between classes/computers; methods used for solving embedded-system problems (for C). Why do I wonder? Because I have learned that if you don't construct the algorithm on paper first, you may not achieve your goal. Like "no money, no lunch", I can say "no algorithm, no program". So feel free to share your own method, or a way introduced by someone else that you use and find efficient.


  • Calculating bounding box a certain distance away from a lat/long coordinate in Java

    - by Bryce Thomas
    Given a coordinate (lat, long), I am trying to calculate a square bounding box that is a given distance (e.g. 50km) away from the coordinate. So as input I have lat, long and distance, and as output I would like two coordinates: one being the south-west (bottom-left) corner and one being the north-east (top-right) corner.

    I have seen a couple of answers on here that try to address this question in Python, but I am looking for a Java implementation in particular. Just to be clear, I intend on using the algorithm on Earth only, so I don't need to accommodate a variable radius. It doesn't have to be hugely accurate (+/-20% is fine) and it'll only be used to calculate bounding boxes over small distances (no more than 150km). So I'm happy to sacrifice some accuracy for an efficient algorithm. Any help is much appreciated.

    Edit: I should have been clearer, I really am after a square, not a circle. I understand that the distance between the center of a square and various points along the square's perimeter is not a constant value like it is with a circle. I guess what I mean is a square where, if you draw a line from the center to any one of the four points on the perimeter that results in a line perpendicular to a side of the perimeter, then those 4 lines have the same length.
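
    A rough flat-Earth approximation in Java, which should sit comfortably inside the stated +/-20% for distances up to 150km: roughly 111.045 km per degree of latitude, with a cos(latitude) correction for longitude.

        public final class BoundingBox {
            private static final double KM_PER_DEGREE_LAT = 111.045;

            /** Returns {swLat, swLon, neLat, neLon} for a square whose sides
             *  are km kilometres from the given center point. */
            public static double[] around(double lat, double lon, double km) {
                double dLat = km / KM_PER_DEGREE_LAT;
                double dLon = km / (KM_PER_DEGREE_LAT * Math.cos(Math.toRadians(lat)));
                return new double[] { lat - dLat, lon - dLon,
                                      lat + dLat, lon + dLon };
            }

            public static void main(String[] args) {
                double[] box = around(-33.87, 151.21, 50.0); // hypothetical point
                System.out.printf("SW(%.4f, %.4f) NE(%.4f, %.4f)%n",
                                  box[0], box[1], box[2], box[3]);
            }
        }

    The usual caveats apply: this degrades near the poles and does not handle boxes straddling the +/-180 meridian, neither of which matters for small boxes at ordinary latitudes.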


  • Divide a path into N sections using Java or PostgreSQL/PostGIS

    - by Guido
    Imagine a GPS tracking system that is following the position of several objects. The points are stored in a database (PostgreSQL + PostGIS). Each path is composed of a different number of points. That is the reason why, in order to compare a pair of paths, I need to divide every path into a set of 100 points.

    Do you know any PostGIS function that already implements this algorithm? I've not been able to find it. If not, I'd like to solve it using Java. In this case I'd like to know an efficient and easy-to-implement algorithm to divide a path into N points. The simplest example would be to divide this path into three points:

        position 1 : x=1, y=2
        position 2 : x=9, y=3

    And the result should be:

        position 1 : x=1, y=2   (starting point)
        position 2 : x=5, y=2.5
        position 3 : x=9, y=3   (end point)

    Edit: By 'compare a pair of paths' I mean to calculate the distance between two paths. I plan to divide each path into 100 points, and sum the euclidean distances between each one of these points as the distance between the two paths.
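
    There may be a direct PostGIS answer here: ST_Line_Interpolate_Point(line, fraction) returns the point at a given fraction of a linestring's length, so 100 evenly spaced points can come from a single query. A sketch, with the table and column names invented:

        -- Assumption: each path is stored as a LINESTRING in paths.geom.
        -- n/99.0 runs from 0.0 (start point) to 1.0 (end point) inclusive.
        SELECT path_id,
               n,
               ST_Line_Interpolate_Point(geom, n / 99.0) AS pt
        FROM paths,
             generate_series(0, 99) AS n
        ORDER BY path_id, n;

    If the points are stored as separate rows rather than a linestring, ST_MakeLine can assemble them first; the Java fallback would be the same idea, walking segments and interpolating by cumulative length.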


  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me, because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
            ,col2 int
            ,col3 int
            ,col4 int

        PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1=12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?
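
    On the measurement question, one avenue worth checking (sketch only): the index usage DMVs record how often each index is actually read versus maintained, which gives a rough cost/benefit picture before dropping anything.

        -- Reads (seeks/scans/lookups) vs. writes (user_updates) per index.
        -- Indexes with high writes and near-zero reads are drop candidates.
        SELECT i.name,
               s.user_seeks, s.user_scans, s.user_lookups,
               s.user_updates
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID()
          AND s.object_id = OBJECT_ID('dbo.Foo');

    Note the counters reset when the instance restarts, so they are only meaningful after a representative stretch of production traffic.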


  • HTML Calendar form and input arrays

    - by Christopher Ickes
    Hello. Looking for the best practice here... I have a form that consists of a calendar. Each day of the calendar has 2 text input fields: customer and check-in. What would be the best and most efficient way to send this form to PHP for processing?

        <form method="post" action="...">
          <div class="day">
            Day 1<br />
            <label for="customer['.$current['date'].']">Customer</label>
            <input type="text" name="customer['.$current['date'].']" value="" size="20" />
            <label for="check-in['.$current['date'].']">Check-In</label>
            <input type="text" name="check-in['.$current['date'].']" value="" size="20" />
            <input type="submit" name="submit" value="Update" />
          </div>
          <div class="day">
            Day 2<br />
            <label for="customer['.$current['date'].']">Customer</label>
            <input type="text" name="customer['.$current['date'].']" value="" size="20" />
            <label for="check-in['.$current['date'].']">Check-In</label>
            <input type="text" name="check-in['.$current['date'].']" value="" size="20" />
            <input type="submit" name="submit" value="Update" />
          </div>
        </form>

    Is my current setup good? I feel there has to be a better option. My concern involves processing a whole year at once (which can happen) and adding additional text input fields.
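
    On the PHP side, array-style input names like customer[2010-06-01] arrive as ordinary arrays in $_POST, so a whole year is just a longer loop. A sketch, with validation and the persistence call left as placeholders:

        <?php
        // $_POST['customer'] and $_POST['check-in'] are arrays keyed by date.
        foreach ($_POST['customer'] as $date => $customer) {
            $checkin = isset($_POST['check-in'][$date]) ? $_POST['check-in'][$date] : '';
            // Skip untouched days so a year-sized form stays cheap to process.
            if ($customer === '' && $checkin === '') {
                continue;
            }
            // save($date, $customer, $checkin);  // hypothetical persistence call
        }

    Extra fields per day then just mean another parallel array (e.g. checkout[...]) read inside the same loop.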


  • Database access through collections

    - by Mike
    Hi all, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database; on the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. MessageCollection accepts a user as a parameter.

        User user = User.LoadUser("bob");

    I want to get the messages for Bob:

        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns List<Message>.

    My question is: I am not sure what the best practice is here. If I have a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
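
    One hedged pattern for the 10,000-message worry: keep the collection lazy and pull pages on demand, so enumeration never materializes more than one page of Message objects at a time. The MessageProvider call and its signature are invented for the sketch.

        using System.Collections.Generic;

        public class MessageCollection
        {
            private readonly User _user;
            private const int PageSize = 100;

            public MessageCollection(User user) { _user = user; }

            // Lazy paging: iterating past a page boundary triggers one more
            // data-provider call instead of loading all rows up front.
            public IEnumerable<Message> GetUnreadMessages()
            {
                int offset = 0;
                while (true)
                {
                    List<Message> page =
                        MessageProvider.GetUnread(_user.Id, offset, PageSize);
                    foreach (Message m in page)
                        yield return m;
                    if (page.Count < PageSize)
                        yield break;   // last page reached
                    offset += PageSize;
                }
            }
        }

    This also sidesteps the staleness question somewhat: each enumeration re-queries the provider rather than trusting a cached array.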


  • Storing Arbitrary Contact Information in Ruby on Rails

    - by Anthony Chivetta
    Hi, I am currently working on a Ruby on Rails app which will function in some ways like a site-specific social networking site. As part of this, each user on the site will have a profile where they can fill in their contact information (phone numbers, addresses, email addresses, employer, etc.).

    A simple solution to modeling this would be to have a database column per piece of information I allow users to enter. However, this seems arbitrary and limited. Further, to support users entering as many phone numbers as they would like requires the addition of another database table and joins.

    It seems to me that a better solution would be to serialize all the contact information entered by a user into a single field in their row. Since I will never be conditioning a SQL query on this information, such a solution wouldn't be any less efficient. Ideally, I would like to use a vCard as my serialization format. vCards are the standard solution to storing contact information across the web, and reusing tested solutions is a Good Thing. Alternative serialization formats would include simply marshaling a Ruby hash, or YAML. Regardless of serialization format, supporting the reading and updating of this information in a Rails-like way seems to be a major implementation challenge.

    So, here's the question: Has anyone seen this approach used in a Rails application? Are there any Rails plugins or gems that make such a system easy to implement? Ideally what I would like is an acts_as_vcard to add to my model object that would handle editing the vCard for me and saving it back to the database.
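
    A sketch of the YAML fallback using plain ActiveRecord, which at least gets the ergonomics: serialize stores a hash in a text column transparently. A vCard gem could take the same slot if true vCard round-tripping is required; the contact_info column name here is invented.

        class User < ActiveRecord::Base
          # contact_info is a text column; Rails YAML-serializes it in and out.
          serialize :contact_info, Hash

          def add_phone(label, number)
            self.contact_info ||= {}
            (contact_info[:phones] ||= {})[label] = number
          end
        end

        # user = User.find_by_login('bob')
        # user.add_phone(:mobile, '+1 555 0100')
        # user.save

    The trade-off matches the reasoning above: nothing inside the blob is queryable from SQL, which is acceptable only as long as that "never condition a query on it" assumption holds.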


  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I can use randomization methods (e.g., a genetic algorithm), but I was hoping that this may be a well-studied problem in computer science or mathematics, with some informative literature and an efficient algorithm or three.

    Problem 2: Same as above, except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed; 'GAA', 'AAT', and 'AAA' are not.

    Background: The basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second-generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraints imposed in Problem 2).
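
    Absent a closed-form construction, a greedy baseline is at least easy to state (a sketch, and it finds a maximal set, not a maximum one): sample candidate strings, rejecting adjacent repeats for Problem 2, and keep each one only if its edit distance to every kept string exceeds c.

        import itertools
        import random

        def edit_distance(a, b):
            # Classic O(len(a)*len(b)) Levenshtein dynamic program.
            prev = range(len(b) + 1)
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def greedy_tags(n, m, alphabet, c, no_adjacent_repeat=True, tries=100000):
            kept = []
            for _ in itertools.repeat(None, tries):
                s = ''.join(random.choice(alphabet) for _ in range(m))
                if no_adjacent_repeat and any(x == y for x, y in zip(s, s[1:])):
                    continue
                if all(edit_distance(s, t) > c for t in kept):
                    kept.append(s)
                    if len(kept) == n:
                        break
            return kept

        print greedy_tags(10, 8, 'ACGT', 2)

    For the literature search, "codes for the Levenshtein metric" is the relevant keyword; Varshamov-Tenengolts (VT) codes, which correct single insertion/deletion errors, may be a useful entry point.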


  • Using ARIMA to model and forecast stock prices using user-friendly stats program

    - by Brian
    Hi people, can anyone please offer some insight into this for me? I'm coming from a functional magnetic resonance imaging research background, where I analyzed a lot of time-series data, and I'd like to analyze the time series of stock prices (or returns) by:

    1) modeling a successful stock in a particular market sector and then cross-correlating the time series of this historically successful stock with that of other, newer stocks to look for significant relationships;

    2) modeling a stock's price time series and using forecasting (e.g., exponential smoothing) to predict future values of it.

    I'd like to use non-linear modeling methods (ARIMA and ARCH) to do this. Several questions: How often do ARIMA and ARCH modeling methods (given that the individual who implements them does so accurately) actually fit the stock time-series data they target, and what is the optimal fit I can expect? Is the extent to which this model fits the data commensurate with the extent to which it predicts this stock time series' future values? Rather than randomly selecting stocks to compare or model, if profit is my goal, what is an efficient approach, if any, to selecting the stocks I'm going to analyze? Which stats program is the most user-friendly for this?

    Any thoughts on this would be great and would go a long way for me. Thanks, Brian
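
    On the tooling question, one commonly recommended option is R with the forecast package, where fitting and forecasting an ARIMA model is a few lines. A sketch, assuming a numeric price vector (my_prices) is already loaded:

        # install.packages("forecast")
        library(forecast)

        prices  <- ts(my_prices)          # my_prices: your numeric price vector
        returns <- diff(log(prices))      # log returns are the usual modeling target

        fit <- auto.arima(returns)        # automatic (p,d,q) order selection
        summary(fit)
        plot(forecast(fit, h = 20))       # 20-step-ahead forecast with intervals

    For the ARCH/GARCH side of the plan, the tseries and rugarch packages cover conditional-variance modeling in the same environment.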

