Search Results

Search found 11734 results on 470 pages for 'refer to itself'.


  • Recursion with an Array; can't get the right value to return

    - by Matt
    Recursive solution: not working! Explanation: An integer, time, is passed into the function. The IF section (time == 0) provides a base case where the recursion should terminate, returning 0. The ELSE section is where the recursive call occurs: total is a private member variable defined in the header file, elsewhere, and initialized to 0 in a constructor, elsewhere. The function calls itself recursively, adding productsAndSales[time-1][0] to total, again and again, until the base case. Then the total is returned, and printed out later.

    Well, that's what I hoped for, anyway. What I imagined would happen is that I would add up all the values in this one column of the array and the value would get returned, and printed out. Instead it returns 0. If I set the IF section to "return 1", I noticed that it returns powers of 2, for whatever value time is. E.g., if time = 3, it returns 2*2 + 1. If time = 5, it returns 2*2*2*2 + 1. I don't understand why it's not returning the value I'm expecting.

        int CompanySales::calcTotals( int time )
        {
            cout << setw( 4 );
            if ( time == 0 )
            {
                return 0;
            }
            else
            {
                return total += calcTotals( productsAndSales[ time-1 ][ 0 ] );
            }
        }

    Iterative solution: working! Explanation: An integer, time, is passed into the function. It's then used to provide an end to the FOR statement (counter < time). The FOR statement cycles through an array, adding all of the values in one column together. The value is then returned (and, elsewhere in the program, printed out). Works perfectly.

        int CompanySales::calcTotals( int time )
        {
            int total = 0;
            cout << setw( 4 );
            for ( int counter = 0; counter < time; counter++ )
            {
                total += productsAndSales[counter][0];
            }
            return total;
        }
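    A minimal sketch of the likely fix, assuming the intent was to sum the first time entries of column 0: recurse on time - 1 rather than on the value stored in the array, and drop the accumulating member variable, which never resets between runs:

        int CompanySales::calcTotals( int time )
        {
            if ( time == 0 )
            {
                return 0; // base case: no rows left to add
            }
            // this row's value plus the sum of all the rows before it
            return productsAndSales[ time - 1 ][ 0 ] + calcTotals( time - 1 );
        }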

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I see fit to stick to best practices from here and there to build my own working practices while observing the conventions, etc. I'm currently working on a project whose goal is to graduate the security access model from one environment's Active Directory to another environment's automatically.

    I don't know about any of you, but as far as I'm concerned, I meet some real difficulties sticking to only one way and then developing. I mean, I learn something new every day while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, which proved to be very practical in transactional programming in process systems. It seems to be less practical for desktop applications, as there are plenty of variables to consider in a desktop application that you don't have to care about in transactional programming, where you're playing only with information data.

    As for my current project, I have: Groups; Organizational Units; Users. These are all considered entries in Active Directory, which points out to be a good candidate for generics, as also approached this way by Bart De Smet's Linq to AD on CodePlex. He has a DirectorySource<T>, and to manage, let's say, groups, he instantiates a source with the proper type:

        var groups = new DirectorySource<Group>();

    This seems to be a very good way of doing it. Despite that, I seem to go from one pattern to another and I don't seem to be able to strictly stick to one. While I'm aware that one must not stay with only one way of doing things, since each pattern satisfies certain advantages while also illustrating disadvantages under some usage conditions, I seem to want to develop with both patterns: having a singleton Façade class with the underlying factories which represent the subsystems: GroupsFactory; UsersFactory; OrganizationalUnitsFactory. Each of the factories offers the possible operations for its respective entity (group, user, OU).

    To make a very long story short, I often have plenty of ideas while developing and this causes me some trouble, as I go from one idea to another, feeling completely lost after a while. Though I understand the advantages and disadvantages, and I have no trouble choosing one pattern over another depending on the situation, when it comes to programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good. That is because I can't stand not doing something "perfect" the first time. The role I play within the project is both project manager and programmer, and I am more comfortable in the project manager, architectural, and analytical roles than in the developer's.

    Does any of you have some good habits to observe in project development? Thanks to you all! =)

    Read the article

  • A continued saga of C# interoperability with unmanaged C++

    - by Gilad
    After a day of banging my head against the wall, both literally and metaphorically, I plead for help:

    I have an unmanaged C++ project, which is compiled as a DLL. Let's call it CPP Project. It currently works in an unmanaged environment. In addition, I have created a WPF project, which shall be called WPF Project. This is a simple and currently almost empty project. It contains a single window, and I want it to use code from CPP Project. For that, I have created a CLR C++ project, which shall be called Interop Project and is also compiled as a DLL. For simplicity I will attach some basic testing code I have boiled down to the basics.

    CPP Project has the following two testing files:

    tester.h

        #pragma once
        extern "C" class __declspec(dllexport) NativeTester
        {
        public:
            void NativeTest();
        };

    tester.cpp

        #include "tester.h"

        void NativeTester::NativeTest()
        {
            int i = 0;
        }

    Interop Project has the following file:

    InteropLib.h

        #pragma once
        #include <tester.h>

        using namespace System;

        namespace InteropLib
        {
            public ref class InteropProject
            {
            public:
                static void Test()
                {
                    NativeTester nativeTester;
                    nativeTester.NativeTest();
                }
            };
        }

    Lastly, WPF Project has a single window referencing Interop Project:

    MainWindow.xaml.cs

        using System;
        using System.Windows;
        using InteropLib;

        namespace AppGUI
        {
            public partial class MainWindow : Window
            {
                public MainWindow()
                {
                    InitializeComponent();
                    InteropProject.Test();
                }
            }
        }

    And the XAML itself has an empty window (default created). Once I try to run the WPF project, I get the following error:

        System.Windows.Markup.XamlParseException: 'The invocation of the constructor on type 'AppGUI.MainWindow' that matches the specified binding constraints threw an exception.' Line number '3' and line position '9'.
        ---> System.IO.FileNotFoundException: Could not load file or assembly 'InteropLib.dll' or one of its dependencies. The specified module could not be found.
           at AppGUI.MainWindow..ctor()

    Interestingly enough, if I do not export the class from CPP Project, I do not get this error. Say, if I change tester.h to:

        #pragma once
        class NativeTester
        {
        public:
            void NativeTest()
            {
                int i = 0;
            }
        };

    However, in this case I cannot use my more complex classes. If I move my implementation to a cpp file like before, I get unresolved linkage errors due to my not exporting my code. The C++ code I actually want to use is large, has many classes, and is object oriented, so I can't just move all my implementation to the .h files. Please help me understand this horrific error I've been trying to resolve without success. Thanks.
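    For what it's worth, this FileNotFoundException usually refers not to InteropLib.dll itself but to its unmanaged dependency: once the class is exported, InteropLib links against the native DLL, which must then be locatable (e.g. sit next to the WPF executable) at run time - which would also explain why the header-only version loads fine. Separately, the conventional header pattern exports when building the native DLL and imports when consuming it; a sketch (the CPPPROJECT_EXPORTS macro name is hypothetical):

        // tester.h -- a sketch of the usual export/import macro.
        // CPPPROJECT_EXPORTS is a hypothetical symbol defined only
        // while building the native DLL itself.
        #pragma once

        #ifdef CPPPROJECT_EXPORTS
        #define CPP_API __declspec(dllexport)
        #else
        #define CPP_API __declspec(dllimport)
        #endif

        class CPP_API NativeTester
        {
        public:
            void NativeTest();
        };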

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem

    72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Inherited from table climate.measurement_12_013:
          id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          -- Inherited from table climate.measurement_12_013:
          station_id integer NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          taken date NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          amount numeric(8,2) NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          category_id smallint NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);

        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.)

    The following query runs abysmally slowly due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.station_id IN (
            SELECT
              s.id
            FROM
              climate.station s,
              climate.city c
            WHERE
              -- For one city ...
              c.id = 5182 AND
              -- Where stations are within an elevation range ...
              s.elevation BETWEEN 0 AND 3000 AND
              6371.009 * SQRT(
                POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                 POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
              ) <= 50
          ) AND
          -- Begin extracting the data from the database.
          -- The data before 1900 is shaky; insufficient after 2009.
          extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
          -- Whittled down by category ...
          m.category_id = 1 AND
          m.taken BETWEEN
            -- Start date.
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            -- End date. Calculated by checking to see if the end date wraps
            -- into the next year. If it does, then add 1 to the current year.
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
              (extract( YEAR FROM m.taken )||'-12-31')::date -
              (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
            ) AS text)||'-12-31')::date
        GROUP BY
          extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
            (extract( YEAR FROM m.taken )||'-12-31')::date -
            (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
          ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question

    What is the proper way to index the dates to avoid full table scans?

    Options I have considered:

    • GIN
    • GiST
    • Rewrite the WHERE clause
    • Separate year_taken, month_taken, and day_taken columns in the tables

    What are your thoughts? Thank you!
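    A sketch of the "rewrite the WHERE clause" option, trimmed to the date logic only: because the slow predicate compares taken against expressions derived from taken itself, the planner cannot use an index; comparing against constant bounds covering the same 1900-2009 span would let a plain btree on the date column (and constraint exclusion on the child tables) do the work. Untested against this schema:

        -- Index the raw date column on each child table ...
        CREATE INDEX measurement_12_013_taken_idx
          ON climate.measurement_12_013 (taken);

        -- ... and compare against literal bounds so the index is usable.
        SELECT count(1) AS measurements,
               avg(m.amount) AS amount
          FROM climate.measurement m
         WHERE m.category_id = 1
           AND m.taken BETWEEN date '1900-01-01' AND date '2009-12-31'
         GROUP BY extract( YEAR FROM m.taken );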

    Read the article

  • Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object

    - by Az
    Hi there, I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects. This is language-agnostic pseudocode from Wikipedia:

        s ← s0; e ← E(s)                              // Initial state, energy.
        sbest ← s; ebest ← e                          // Initial "best" solution
        k ← 0                                         // Energy evaluation count.
        while k < kmax and e > emax                   // While time left & not good enough:
          snew ← neighbour(s)                         // Pick some neighbour.
          enew ← E(snew)                              // Compute its energy.
          if enew < ebest then                        // Is this a new best?
            sbest ← snew; ebest ← enew                // Save 'new neighbour' to 'best found'.
          if P(e, enew, temp(k/kmax)) > random() then // Should we move to it?
            s ← snew; e ← enew                        // Yes, change state.
          k ← k + 1                                   // One more evaluation done
        return sbest                                  // Return the best solution found.

    The following is an adaptation of the technique. My supervisor said the idea is fine in theory. First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from the entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below. The function does the following:

    • Let this allocation, aOld, be the best_node
    • From all the students, pick a random number of students and stick them in a list
    • Strip (DEALLOCATE) them of their projects ++ reflect the changes for projects (the allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated)
    • Randomise that list
    • Try assigning (REALLOCATE) everyone in that list projects again
    • Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2 ... and no project rank = 101)
    • For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue.
    • If wOld < wNew, then apply the algorithm to aOld again and continue.

    The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict).

    Right now, I can only perform this algorithm once, but I'll need to try for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me the previous node and the best_node. So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that it only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLA.

    Questions: I've been given some solutions to the problems faced but, due to my über green-ness with using Python, they all sound rather cryptic to me.

    Deepcopy isn't playing nicely with SQLA. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)". Can someone please explain what that means? I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have.
    I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one?

    An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up; there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me, so could someone kindly explain what it means?

    A more general question: given the above information, are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, and therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?
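    As a minimal sketch of the "copy constructor" suggestion (class and attribute names here are hypothetical, not from the original code): instead of deepcopying a mapped instance, build a brand-new object from the significant attributes, so the copy never carries the old _sa_instance_state:

        class Allocation(object):
            """Stand-in for a mapped class; attribute names are hypothetical."""
            def __init__(self, student, project, rank):
                self.student = student
                self.project = project
                self.rank = rank

            def copy(self):
                # A fresh object built through __init__(): SQLAlchemy attaches
                # a new _sa_instance_state if and when this copy is mapped/added.
                return Allocation(self.student, self.project, self.rank)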

    Read the article

  • Recommendations for a free GIS library supporting raster images

    - by gspr
    Hi. I'm quite new to the whole field of GIS, and I'm about to make a small program that essentially overlays GPS tracks on a map, together with some other annotations. I primarily need to allow scanned (thus raster) maps (although it would be nice to support proper map formats and something like OpenStreetmap in the long run).

    My first exploratory program uses Qt's graphics view framework and overlays the GPS points by simply projecting them onto the tangent plane to the WGS84 ellipsoid at a calibration point. This gives half-decent accuracy, and actually looks good.

    But then I started wondering. To get the accuracy I need (i.e. remove the "half" in "half-decent"), I have to correct for the map projection. While the math is not a problem in itself, supporting many map projections feels like needless work. Even though a few projections would probably be enough, I started thinking about just using something like the PROJ.4 library to do my projections. But then, why not take it all the way? Perhaps I might as well use a full-blown map library such as Mapnik (edit: Quantum GIS also looks very nice), which will probably pay off when I start to want even more fancy annotations or some other symptom of featuritis.

    So, finally, to the question: What would you do? Would you use a full-blown map library? If so, which one? Again, it's important that it supports using (and zooming in and out with) raster maps and has pretty overlay features. Or would you just keep it simple, and go with Qt's own graphics view framework together with something like PROJ.4 to handle the map projections? I appreciate any feedback!

    Some technicalities: I'm writing in C++ with a Qt-based GUI, so I'd prefer something that plays relatively nicely with those. Also, the library must be free software (as in FOSS), and at least decently cross-platform (GNU/Linux, Windows and Mac, at least).

    Edit: OK, it seems I didn't do quite enough research before asking this question. Both Quantum GIS and Mapnik seem very well suited for my purpose. The former especially so, since it's based on Qt.
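    For reference, a minimal sketch of the lightweight PROJ.4 route using its classic pj_* C API; the UTM target projection string here is an arbitrary example, not a recommendation:

        #include <proj_api.h>
        #include <cstdio>

        int main()
        {
            // Source: plain WGS84 lat/long; target: an arbitrary example (UTM 33N).
            projPJ src = pj_init_plus("+proj=latlong +datum=WGS84");
            projPJ dst = pj_init_plus("+proj=utm +zone=33 +datum=WGS84");
            if (!src || !dst) return 1;

            double x = 15.0 * DEG_TO_RAD;   // longitude, in radians
            double y = 59.0 * DEG_TO_RAD;   // latitude, in radians
            pj_transform(src, dst, 1, 1, &x, &y, NULL); // reprojects in place

            std::printf("%f %f\n", x, y);
            pj_free(src);
            pj_free(dst);
            return 0;
        }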

    Read the article

  • Prevent SQL Injection in Dynamic column names

    - by Mr Shoubs
    I can't get away without writing some dynamic SQL conditions in a part of my system (using Postgres). My question is how best to avoid SQL injection with the method I am currently using.

    EDIT (Reasoning): There are many columns in a number of tables (a number which grows (only) and is maintained elsewhere). I need a method of allowing the user to decide which (predefined) column they want to query (and, if necessary, apply string functions to). The query itself is far too complex for the user to write themselves, nor do they have access to the db. There are 1000s of users with varying requirements, and I need to remain as flexible as possible - I shouldn't have to revisit the code unless the main query needs to change. Also, there is no way of knowing what conditions the user will need to use.

    I have objects (received via web service) that generate a condition (the generation method is below - it isn't perfect yet) for some large SQL queries. The _FieldName is user-editable (the parameter name was too, but it didn't need to be) and I am worried it could be an attack vector. I put double quotes (see quoted identifier) around the field name in an attempt to sanitize the string; this way it can never be a keyword. I could also look up the field name against a list of fields, but it would be difficult to maintain on a timely basis. Unfortunately the user must enter the condition criteria. I am sure there must be more I can add to the sanitize method? And does quoting the column name make it safe? (My limited testing seems to think so.)

    An example built condition would be:

        AND upper(brandloaded.make) like 'O%' and upper(brandloaded.make) not like 'OTHERBRAND'

    Any help or suggestions are appreciated.

        Public Function GetCondition() As String
            Dim sb As New Text.StringBuilder
            'put quotes around the field name in an attempt to prevent some sql injection
            'http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html
            sb.AppendFormat(" {0} ""{1}"" ", _LogicOperator.ToString, _FieldName)
            Select Case _ConditionOperator
                Case ConditionOperatorOptions.Equals
                    sb.Append(" = ")
                ...
            End Select
            sb.AppendFormat(" {0} ", Me.UniqueParameterName) 'for parameter
            Return Me.Sanitize(sb)
        End Function

        Private Function Sanitize(ByVal sb As Text.StringBuilder) As String
            'compare against a similar blacklist mentioned here: http://forums.asp.net/t/1254125.aspx
            sb.Replace(";", "")
            sb.Replace("'", "")
            sb.Replace("\", "")
            sb.Replace(Chr(8), "")
            Return sb.ToString
        End Function

        Public ReadOnly Property UniqueParameterName() As String
            Get
                Return String.Concat(":", _UniqueIdentifier)
            End Get
        End Property
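    One extra hardening step worth sketching: PostgreSQL's quoted-identifier rules escape an embedded double quote by doubling it, so rather than blacklisting characters, the field name can be escaped the way the database itself would. A sketch (still no substitute for checking the name against a known column list):

        ' Escape embedded double quotes by doubling them, per PostgreSQL's
        ' quoted-identifier rules, then wrap the whole name in double quotes.
        Private Function QuoteIdentifier(ByVal fieldName As String) As String
            Return """" & fieldName.Replace("""", """""") & """"
        End Function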

    Read the article

  • Need help with auto-scaffolding template in ASP.NET MVC

    - by DanM
    I'm trying to write an auto-scaffolder for Index views. I'd like to be able to pass in a collection of models or view-models (e.g., IQueryable<MyViewModel>) and get back an HTML table that uses the DisplayName attribute for the headings (th elements) and Html.Display(propertyName) for the cells (td elements). Each row should correspond to one item in the collection. Here's what I have so far:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%
            var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; // Should be generic!
            var properties = items.First().GetMetadata().Properties
                .Where(pm => pm.ShowForDisplay && !ViewData.TemplateInfo.Visited(pm));
        %>
        <table>
            <tr>
                <% foreach (var property in properties) { %>
                    <th>
                        <%= property.DisplayName %>
                    </th>
                <% } %>
            </tr>
            <% foreach (var item in items) { %>
                <tr>
                    <% foreach (var property in properties) { %>
                        <td>
                            <%= Html.Display(property.DisplayName) %> <%-- This doesn't work! --%>
                        </td>
                    <% } %>
                </tr>
            <% } %>
        </table>

    Two problems with this:

    1. I'd like it to be generic. So, I'd like to replace var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; with var items = (IQueryable<T>)Model; or something to that effect.

    2. The td elements are not working, because the Html in <%= Html.Display(property.DisplayName) %> contains the model for the view, which is a collection of items, not the item itself. Somehow, I need to obtain an HtmlHelper object whose Model property is the current item, but I'm not sure how to do that.

    How do I solve these two problems?
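    For the first problem, one hedged sketch: recover the element type from the non-generic IQueryable at run time instead of hard-coding the view-model type (this addresses only the cast, not the per-item helper; assumes System.Linq is imported in the view):

        <%
            // a sketch: the non-generic IQueryable exposes ElementType, so the
            // control never needs to name TestViewModel at compile time
            var queryable = (IQueryable)Model;
            Type elementType = queryable.ElementType;
            var items = queryable.Cast<object>();
        %>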

    Read the article

  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.FM. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load when it loads, so depending on your internet speed, you may see the art image by itself (without the overlay) temporarily. In this case, the album art looks fine without the jewel-case effect. But in similar situations, I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and may force the user to wait for a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    • Use some inline CSS so that as certain parts of the DOM load and render, they will immediately have the correct size/position/etc.
    • Immediately render the navigation part of the site, so that if the user wanted to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
    • Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
    • Something quirky, like using a cute animated gif to distract the user during a "loading..." phase.
    • Show useful information as a reference while loading the full UI. (Something akin to Gmail's Inbox Preview, etc.)

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you guys may have seen out in the wild.
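    A minimal sketch of the hide-until-loaded pattern described above (the class name is arbitrary): flag the document as loading up front, then clear the flag once the window's load event - which waits for images and stylesheets - has fired:

        // Runs as early as possible: flag the page so CSS can hide unfinished
        // parts, e.g.  .loading .jewel-case { visibility: hidden; }
        document.documentElement.className += ' loading';

        // window 'load' fires only after images/CSS have arrived; unhide everything.
        window.onload = function () {
            document.documentElement.className =
                document.documentElement.className.replace(/\bloading\b/, '');
        };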

    Read the article

  • Class hierarchy problem (with generic's variance!)

    - by devoured elysium
    The problem:

        class StatesChain : IState, IHasStateList
        {
            private TasksChain tasks = new TasksChain();
            ...
            public IList<IState> States
            {
                get { return _taskChain.Tasks; }
            }

            IList<ITask> IHasTasksCollection.Tasks
            {
                get { return _taskChain.Tasks; } // <-- ERROR! You can't do this in C#!
                                                 // I want to return an IList<ITask> from an IList<IStates>.
            }
        }

    Assuming the IList returned will be read-only, I know that what I'm trying to achieve is safe (or is it not?). Is there any way I can accomplish what I'm trying? I wouldn't want to try to implement the TasksChain algorithm myself (again!), as it would be error-prone and would lead to code duplication. Maybe I could just define an abstract Chain and then implement both TasksChain and StatesChain from there? Or maybe implement a Chain<T> class? How would you approach this situation?

    The details:

    I have defined an ITask interface:

        public interface ITask
        {
            bool Run();
            ITask FailureTask { get; }
        }

    and an IState interface that inherits from ITask:

        public interface IState : ITask
        {
            IState FailureState { get; }
        }

    I have also defined an IHasTasksList interface:

        interface IHasTasksList
        {
            List<ITask> Tasks { get; }
        }

    and an IHasStatesList:

        interface IHasStatesList
        {
            List<IState> States { get; }
        }

    Now, I have defined a TasksChain, a class that has some code logic to manipulate a chain of tasks (beware that TasksChain is itself a kind of ITask!):

        class TasksChain : ITask, IHasTasksList
        {
            IList<ITask> tasks = new List<ITask>();
            ...
            public List<ITask> Tasks
            {
                get { return _tasks; }
            }
            ...
        }

    I am implementing a State the following way:

        public class State : IState
        {
            private readonly TaskChain _taskChain = new TaskChain();

            public State(Precondition precondition, Execution execution)
            {
                _taskChain.Tasks.Add(precondition);
                _taskChain.Tasks.Add(execution);
            }

            public bool Run()
            {
                return _taskChain.Run();
            }

            public IState FailureState
            {
                get { return (IState)_taskChain.Tasks[0].FailureTask; }
            }

            ITask ITask.FailureTask
            {
                get { return FailureState; }
            }
        }

    which, as you can see, makes use of explicit interface implementations to "hide" the FailureTask property and show the FailureState property instead.

    The problem comes from the fact that I also want to define a StatesChain, which inherits both from IState and IHasStateList (and that also implies ITask and IHasTaskList, implemented as explicit interfaces), and I want it to also hide IHasTaskList's Tasks and only show IHasStateList's States. (What is contained in "The problem" section should really come after this, but I thought putting it first would be more reader-friendly.)

    (pff.. long text) Thanks!
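    Since IState derives from ITask, one hedged way around the rejected conversion - without re-implementing the chain - is to hand back a read-only snapshot built with LINQ's Cast (the ITask-typed member needs no cast at all if the backing list is IList<ITask>; it is the IState-typed view that does). A sketch, assuming System.Linq and System.Collections.ObjectModel are imported:

        public IList<IState> States
        {
            get
            {
                // Every element added through this class is an IState, so the
                // runtime cast is safe; the copy makes the view read-only, at
                // the cost of being a snapshot rather than a live view.
                return new ReadOnlyCollection<IState>(
                    _taskChain.Tasks.Cast<IState>().ToList());
            }
        }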

    Read the article

  • Non Document Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document-centric in that the base thing the workflow runs on has to be a thing, be it a document or just a list item. The workflow itself is task-based, so stuff a user has to do. Now I can put any sort of code in these tasks that I want to, and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves, where the item that the workflow runs on is just some abstract concept, like an event?

    An example could be that an accident has happened. There isn't an accident form, but a whole set of forms that need to be completed by different people. Task forms aren't really a nice way to go, because they lock all the forms into the task list. You can only access the forms by not deleting the tasks when complete and going to the workflow summary and following the task links to the InfoPath forms, or by going straight to the tasks list and filtering on particular "accidents".

    These are official documents, so ideally there would be a library for each type of document and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form, but then have to go back to the task form and click "yes, I've completed it" before the workflow could progress - well, short of the workflow monitoring the forms library for some completion trigger. But then it all gets messy with the user experience: from clicking the link in the task email, to opening the InfoPath task form, to clicking the link in the subsequent InfoPath library form, and then returning back through these forms on completion. It just gets messy trying to retrofit this non-document-centric sort of workflow into SharePoint.

    I would really appreciate any input on what might be the best way to do this:

    • Store the forms as task forms.
    • Store the forms as library forms and create/link them from the task forms.
    • Store the forms as different InfoPath views and use a forms library; the workflow would set variables that progress the view the InfoPath form shows.
    • Use the same form template for both task forms and a forms library, and when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow.

    Thanks

    Read the article

  • Are there programs that iteratively write new programs?

    - by chris
    For about a year I have been thinking about writing a program that writes programs. This would primarily be a playful exercise that might teach me some new concepts. My inspiration came from negentropy and the ability for order to emerge from chaos and new chaos to arise out of order in infinite succession.

    To be more specific, the program would start by writing a short random string. If the string compiles, the program will log it for later comparison. If the string does not compile, the program will try to rewrite it until it does compile. As more strings (mini 'useless' programs) are logged, they can be parsed for similarities and used to generate a grammar. This grammar can then be drawn on to write more strings that have a higher probability of compilation than purely random strings.

    This is obviously more than a little silly, but I thought it would be fun to try and grow a program like this. And as a byproduct I get a bunch of unique programs that I can visualize and call art. I'll probably write this in Ruby, due to its simple syntax and dynamic compilation, and then I will visualize it in Processing using ruby-processing.

    What I would like to know is:

    • Is there a name for this type of programming?
    • What currently exists in this field?
    • Who are the primary contributors?

    BONUS! - In what ways can I procedurally assign value to output programs beyond compiles (y/n)? I may want to extend the functionality of this program to generate a program based on parameters, but I want the program to define those parameters through running the programs that compile and assigning meaning to their output. This question is probably more involved than reasonable for a bonus, but if you can think of a simple way to get something like this done in less than 23 lines or one hyperlink, please toss it into your response.

    I know that this is not quite metaprogramming, and from the little I know of AI and generative algorithms, they are usually more goal-oriented than what I am thinking. What would be optimal is a program that continually rewrites and improves itself, so I don't have to ^_^
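    As a minimal sketch of just the "does it compile" predicate in Ruby (the generation and grammar-induction parts are the interesting research; this only checks that a candidate string parses, using the 1.9+ bytecode compiler):

        # Returns true when a candidate string parses as valid Ruby,
        # without ever running it.
        def compiles?(code)
          RubyVM::InstructionSequence.compile(code)
          true
        rescue SyntaxError
          false
        end

        candidate = "pu" + ["ts", "sh"].sample + " 1"  # a toy 'random' candidate
        puts compiles?(candidate)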

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing and often I arrive at a chicken-and-egg problem. I would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing async (i.e. writing to the event log), then how do I handle, or sometimes even detect, failures?

    I'm using Akka Actors, so processing is sequential for each event/message. I do not have any database at this time; instead I persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries are all against this model; you can consider it to be a cache.

    Example - creating a new user:

    1. Validate that the user does not exist in the model
    2. Persist the event to the journal
    3. Update the model (in memory)

    If step 3 breaks, I still have persisted my event, so I can replay it at a later date. If step 2 breaks, I can handle that gracefully as well. This is fine, but since step 2 is I/O-bound, I figured that I should do the I/O in a separate actor to free up the first actor for queries:

    Updating a user while allowing queries (A0 = front-end/GUI actor, A1 = processor actor, A2 = I/O actor, E = event bus):

    1. (A0-E-A1) Event is published to update user 'U1'. Validate that the user 'U1' exists in the model
    2. (A1-A2) Persist the event to the journal (separate actor)
    3. (A0-E-A1-A0) Query for user 'U1' profile
    4. (A2-A1) Event is now persisted; continue to update the model
    5. (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data)

    This is appealing, since queries can be processed while the I/O is churning along at its own pace. But now I can cause myself all kinds of problems where two incompatible commands (delete and then update) could be persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and only then update the model.

    My aim is to have simple reasoning around my model (since an actor processes messages sequentially, single-threaded) but not be waiting for I/O-bound updates when querying. I get the feeling I'm modeling a database, which in itself might be the problem. If things are unclear, please write a comment.

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.

    The aim of this solution is to take data input from an external source, churn it, and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    • The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    • The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    • The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1 GB max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients, when a node goes down.
    • Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    • There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    • Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    • The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    • Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

    Read the article

  • How to measure a canvas that has auto height and width

    - by Wymmeroo
    Hi folks,

    I'm a beginner in Silverlight, so I hope I can get an answer that brings me some more light into the measure process of Silverlight. I found an interesting flap-out control, the Silverlight slide control, and now I am trying to use it in my project. For the slide-out to work properly, I have to place the user control on a canvas. The user control then uses for itself the height of its content. I just want to change that behavior so that the height is set to the available space from the parent canvas.

    You can see the uxBorder where the height is set. How can I measure the actual height and set it on the border? I tried it with Height="{Binding ElementName=notificationCanvas, Path=ActualHeight}", but this dependency property has no change callback, so the ActualHeight is never set. What I want to achieve is a user control like the tweetboard, for example, on Jesse Liberty's blog.

    Sorry for my English writing; I hope you understand my question.

        <Canvas x:Name="notificationCanvas" Background="Red">
            <SlideEffectEx:SimpleSlideControl GripWidth="20" GripTitle="Task" GripHeight="100">
                <Border x:Name="uxBorder"
                        BorderThickness="2"
                        CornerRadius="5"
                        BorderBrush="DarkGray"
                        Background="DarkGray"
                        Padding="5"
                        Width="300"
                        Height="700">
                    <StackPanel>
                        <TextBlock Text="Tasks"></TextBlock>
                        <Button x:Name="btn1" Margin="5" Content="{Binding ElementName=MainBorder, Path=Height}"></Button>
                        <Button x:Name="btn2" Margin="5" Content="Second Button"></Button>
                        <Button x:Name="btn3" Margin="5" Content="Third Button"></Button>
                        <Button x:Name="btn1_Copy" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy1" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy2" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy3" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy4" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy5" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy6" Margin="5" Content="First Button"/>
                    </StackPanel>
                </Border>
            </SlideEffectEx:SimpleSlideControl>
        </Canvas>
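    Since ActualHeight offers no change notification in Silverlight, one workable sketch is to listen to the canvas's SizeChanged event in code-behind and push the new height onto the border by hand (assuming the hosting page class is called MainPage):

        // A sketch: SizeChanged fires whenever layout gives the canvas a new
        // size, which is exactly the callback ActualHeight itself lacks.
        public MainPage()
        {
            InitializeComponent();
            notificationCanvas.SizeChanged += (sender, e) =>
            {
                uxBorder.Height = e.NewSize.Height;
            };
        }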

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time!

    I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems, or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!).

    In particular, in the time prior to the y2k problem, it was noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 60s and 70s. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes were actually sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand results in a couple of tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you count in the operating system and data for the program, how many lines of code would have fit into the main memory of half a megabyte?

    Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! In the time-frame of 20 to 30 years this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it?

    Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had its own programming language, PL/I. Above I have assumed that the majority of code was written exclusively in Cobol. However, all other things being equal, I wonder if IBM marketing had really pushed its own development off the market in favor of Cobol on its machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board, in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded its own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.

    Read the article

  • getSymbols and using lapply, Cl, and merge to extract close prices

    - by algotr8der
    I've been messing around with this for some time. I recently started using the quantmod package to perform analytics on stock prices. I have a ticker vector that looks like the following:

        > tickers
         [1] "SPY" "DIA" "IWM" "SMH" "OIH" "XLY" "XLP" "XLE" "XLI" "XLB" "XLK" "XLU" "XLV"
        [14] "QQQ"
        > str(tickers)
         chr [1:14] "SPY" "DIA" "IWM" "SMH" "OIH" "XLY" "XLP" "XLE" ...

    I wrote a function called myX to use in a lapply call to save prices for every stock in the vector tickers. It has the following code:

        myX <- function(tickers, start, end) {
            require(quantmod)
            getSymbols(tickers, from=start, to=end)
        }

    I call lapply by itself:

        library(quantmod)
        > lapply(tickers, myX, start="2001-03-01", end="2011-03-11")
        [[1]]
        [1] "SPY"

        [[2]]
        [1] "DIA"

        [[3]]
        [1] "IWM"

        [[4]]
        [1] "SMH"

        [[5]]
        [1] "OIH"

        [[6]]
        [1] "XLY"

        [[7]]
        [1] "XLP"

        [[8]]
        [1] "XLE"

        [[9]]
        [1] "XLI"

        [[10]]
        [1] "XLB"

        [[11]]
        [1] "XLK"

        [[12]]
        [1] "XLU"

        [[13]]
        [1] "XLV"

        [[14]]
        [1] "QQQ"

    That works fine. Now I want to merge the close prices for every stock into an object that looks like:

        #            BCSI.Close WBSN.Close NTAP.Close FFIV.Close SU.Close
        # 2011-01-03      30.50      20.36      57.41     134.33    38.82
        # 2011-01-04      30.24      19.82      57.38     132.07    38.03
        # 2011-01-05      31.36      19.90      57.87     137.29    38.40
        # 2011-01-06      32.04      19.79      57.49     138.07    37.23
        # 2011-01-07      31.95      19.77      57.20     138.35    37.30
        # 2011-01-10      31.55      19.76      58.22     142.69    37.04

    Someone recommended I try something like the following:

        ClosePrices <- do.call(merge, lapply(tickers, function(x) Cl(get(x))))

    However, I tried various combinations of this without any success. First I tried just calling lapply with Cl(x):

        > lapply(tickers,myX,start="2001-03-01", end="2011-03-11") Cl(x)))
        Error: unexpected symbol in "lapply(tickers,myX,start="2001-03-01", end="2011-03-11") Cl"

        > lapply(tickers,myX(x),start="2001-03-01", end="2011-03-11") Cl(x)))
        Error: unexpected symbol in "lapply(tickers,myX(x),start="2001-03-01", end="2011-03-11") Cl"

        > lapply(tickers,myX(start="2001-03-01", end="2011-03-11") Cl(x)
        Error: unexpected symbol in "lapply(tickers,myX(start="2001-03-01", end="2011-03-11") Cl"

        > lapply(tickers,myX(start="2001-03-01", end="2011-03-11") Cl(x))
        Error: unexpected symbol in "lapply(tickers,myX(start="2001-03-01", end="2011-03-11") Cl"

    Any guidance would be kindly appreciated.
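    For what it's worth, a sketch of why the recommended line works on its own, assuming the symbols have already been loaded into the workspace by getSymbols (so get(x) can look each xts object up by name - no extra tokens can be appended to the lapply call itself):

        library(quantmod)

        tickers <- c("SPY", "DIA")   # shortened example list
        getSymbols(tickers, from = "2001-03-01", to = "2011-03-11")

        # Cl() extracts the Close column from each loaded xts object;
        # do.call(merge, ...) then joins the columns on their date indexes.
        ClosePrices <- do.call(merge, lapply(tickers, function(x) Cl(get(x))))
        head(ClosePrices)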

    Read the article

  • Any tips on reducing wxWidgets application code size?

    - by Billy ONeal
    I have written a minimal wxWidgets application:

    stdafx.h

        #define wxNO_REGEX_LIB
        #define wxNO_XML_LIB
        #define wxNO_NET_LIB
        #define wxNO_EXPAT_LIB
        #define wxNO_JPEG_LIB
        #define wxNO_PNG_LIB
        #define wxNO_TIFF_LIB
        #define wxNO_ZLIB_LIB
        #define wxNO_ADV_LIB
        #define wxNO_HTML_LIB
        #define wxNO_GL_LIB
        #define wxNO_QA_LIB
        #define wxNO_XRC_LIB
        #define wxNO_AUI_LIB
        #define wxNO_PROPGRID_LIB
        #define wxNO_RIBBON_LIB
        #define wxNO_RICHTEXT_LIB
        #define wxNO_MEDIA_LIB
        #define wxNO_STC_LIB
        #include <wx/wxprec.h>

    Minimal.cpp

        #include "stdafx.h"
        #include <memory>
        #include <wx/wx.h>

        class Minimal : public wxApp
        {
        public:
            virtual bool OnInit();
        };

        IMPLEMENT_APP(Minimal)
        DECLARE_APP(Minimal)

        class MinimalFrame : public wxFrame
        {
            DECLARE_EVENT_TABLE()
        public:
            MinimalFrame(const wxString& title);
            void OnQuit(wxCommandEvent& e);
            void OnAbout(wxCommandEvent& e);
        };

        BEGIN_EVENT_TABLE(MinimalFrame, wxFrame)
            EVT_MENU(wxID_ABOUT, MinimalFrame::OnAbout)
            EVT_MENU(wxID_EXIT, MinimalFrame::OnQuit)
        END_EVENT_TABLE()

        MinimalFrame::MinimalFrame(const wxString& title)
            : wxFrame(0, wxID_ANY, title)
        {
            std::auto_ptr<wxMenu> fileMenu(new wxMenu);
            fileMenu->Append(wxID_EXIT, L"E&xit\tAlt-X",
                L"Terminate the Minimal Example.");

            std::auto_ptr<wxMenu> helpMenu(new wxMenu);
            helpMenu->Append(wxID_ABOUT, L"&About\tF1",
                L"Show the about dialog box.");

            std::auto_ptr<wxMenuBar> bar(new wxMenuBar);
            bar->Append(fileMenu.get(), L"&File");
            fileMenu.release();
            bar->Append(helpMenu.get(), L"&Help");
            helpMenu.release();
            SetMenuBar(bar.get());
            bar.release();

            CreateStatusBar(2);
            SetStatusText(L"Welcome to wxWidgets!");
        }

        void MinimalFrame::OnAbout(wxCommandEvent& e)
        {
            wxMessageBox(L"Some text about me!", L"About", wxOK, this);
        }

        void MinimalFrame::OnQuit(wxCommandEvent& e)
        {
            Close();
        }

        bool Minimal::OnInit()
        {
            std::auto_ptr<MinimalFrame> mainFrame(
                new MinimalFrame(L"Minimal wxWidgets Application"));
            mainFrame->Show();
            mainFrame.release();
            return true;
        }

    This minimal program weighs in at 2.4 MB! (Executable compression drops this to half a MB or so, but that's still huge!) I must statically link because this application needs to be single-binary-xcopy-deployed, so both the C runtime and wxWidgets itself are set for static linking.

    Any tips on cutting this down? (I'm using Microsoft Visual Studio 2010.)

    Read the article

  • Issues with LINQ (to Entity) [adding records]

    - by Mario
    I am using LINQ to Entities in a project where I pull a bunch of data (from the database), organize it into a bunch of objects, and save those to the database. I have not had problems writing to the db before using LINQ to Entities, but I have run into a snag with this particular one. Here's the error I get (this is the InnerException; the exception itself is useless!):

        New transaction is not allowed because there are other threads running in the session.

    I have seen that before, when I was trying to save my changes inside a loop. In this case, the loop finishes, and it tries to make that call, only to give me the exception. Here's the current code:

        try
        {
            // finalResult is a list of the keys to match on for the records being pulled
            foreach (int i in finalResult)
            {
                var queryEff = (from eff in dbMRI.MemberEligibility
                                where eff.Member_Key == i && eff.EffDate >= DateTime.Now
                                select eff.EffDate).Min();

                if (queryEff != null)
                {
                    // Add a record to the Process table
                    Process prRecord = new Process();
                    prRecord.GroupData = qa;
                    prRecord.Member_Key = i;
                    prRecord.ProcessDate = DateTime.Now;
                    prRecord.RecordType = "F";
                    prRecord.UsernameMarkedBy = "Autocard";
                    prRecord.GroupsId = qa.GroupsID;
                    prRecord.Quantity = 2;
                    prRecord.EffectiveDate = queryEff;
                    dbMRI.AddObject("Process", prRecord);
                }
            }

            dbMRI.SaveChanges(); // <-- Crashes here

            foreach (int i in finalResult)
            {
                var queryProc = from pro in dbMRI.Process
                                where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                                select pro;

                foreach (var qp in queryProc)
                {
                    Audit aud = new Audit();
                    aud.Member_Key = i;
                    aud.ProcessId = qp.ProcessId;
                    aud.MarkDate = DateTime.Now;
                    aud.MarkedByUsername = "Autocard";
                    aud.GroupData = qa;
                    dbMRI.AddObject("Audit", aud);
                }
            }

            dbMRI.SaveChanges(); // <-- AND here (if the first one is commented out)
        }
        catch (Exception e)
        {
            // Do something here
        }

    Basically, I need it to insert a record, get the identity for that inserted record, and insert a record into another table with the identity from the first record. Given some other constraints, it is not possible to create a FK relationship between the two (I've tried, but some other parts of the app won't allow it, AND my DBA team for whatever reason hates FKs, but that's for a different topic :)).

    Any ideas what might be causing this? Thanks!
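    A hedged guess at the usual cause of this exception: SaveChanges() (or a nested query) running while an earlier query is still streaming rows from an open data reader. Materializing each query with .ToList() before iterating closes the reader first; a sketch against the second loop above:

        // Force the query to execute and close its data reader now, so the
        // later SaveChanges() doesn't run while a reader is still open.
        var processRows = (from pro in dbMRI.Process
                           where pro.Member_Key == i
                              && pro.UsernameMarkedBy == "Autocard"
                           select pro).ToList();

        foreach (var qp in processRows)
        {
            // ... build and AddObject the Audit rows exactly as before ...
        }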

    Read the article

  • Wait 300ms and then slide down menu using jQuery

    - by Derfder
    I tried this JS code:

        <script type="text/javascript">
            $(document).ready(function() {
                var timer;
                $('.top-menu-first-ul li').hover(function() {
                    if (timer) {
                        clearTimeout(timer);
                        timer = null;
                    }
                    timer = setTimeout(function() {
                        $(this).find('ul').slideToggle(100);
                    }, 500);
                },
                // mouse out
                );
            });
        </script>

    with this HTML:

        <ul class="top-menu-first-ul">
            <li>
                <a href="http://google.com">Google</a>
                <ul class="top-menu-second-ul">
                    <li><a href="http://translate.google.com">Google Translate</a></li>
                    <li><a href="http://images.google.com">Google Images</a></li>
                    <li><a href="http://gmail.google.com">Gmail</a></li>
                    <li><a href="http://plus.google.com">Google Plus</a></li>
                </ul>
            </li>
            <li>
                <a href="http://facebook.com">Facebook</a>
            </li>
            <li>
                <a href="http://twitter.com">Twitter</a>
            </li>
        </ul>

    but it is not working. I would like the submenu to show when hovering over e.g. the Google link, appearing only after 300 ms. So I need to prevent the submenu from loading when the user just quickly hovers over the link and doesn't stay on it for at least 300 ms. Then I need to stop executing the code (or slide up) when he leaves the link, because if he goes over and back and over and back a couple of times and then stops hovering, the code still executes itself once for every time he hovered over the link. I need somehow to prevent this from happening.

    EDIT: I need to solve this without plugins like hoverIntent etc., if possible - just plain JavaScript and jQuery.
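    A minimal sketch of how this is often done with plain jQuery. Two things the snippet above is missing: this inside the setTimeout callback no longer points at the hovered li, so it has to be captured first, and hover() needs a second, mouse-out handler to cancel a pending timer:

        $(document).ready(function () {
            var timer = null;
            $('.top-menu-first-ul li').hover(function () {
                var $item = $(this);              // capture: "this" differs inside setTimeout
                timer = setTimeout(function () {
                    $item.find('ul').slideDown(100);
                }, 300);                          // open only after 300 ms of hovering
            }, function () {
                clearTimeout(timer);              // mouse out: cancel a pending open...
                $(this).find('ul').slideUp(100);  // ...and close an open submenu
            });
        });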

    Read the article

  • Passing array into constructor to use on JList

    - by OVERTONE
    I know the title sounds confusing, and that's because it is. It's a bit long, so try to stay with me. This is the layout I have my code designed around: variables, constructor, methods. I'm trying to fill a JList full of names, and I want to get those names using a method. So here goes.

    In my variables I have my JList, called contactList, and I also have an array which stores 5 strings, which are the contacts' names. Here's the code for that anyway:

        String contact1;
        String contact2;
        String contact3;
        String contact4;
        String contact5;
        String[] contactListNames;
        JList contactList;

    Simple enough. Then in my constructor I have the JList defined to fill itself with the contents of the array:

        fillContactList();
        JList contactList = new JList(contactListNames);

    That method fillContactList() is coming up shortly. Notice I don't have the array defined in the constructor. So here's my first question: can I do that - define the array to contain something in the constructor, rather than filling it from the array?

    Now here's where stuff gets balled up. I've created three different methods, none of which have worked. Basically I'm trying to fill the array with all of them. This is the simplest one. It doesn't set the JList, it doesn't do anything complicated; all it tries to do is fill the array one element at a time:

        public void fillContactList()
        {
            for (int i = 0; i < 3; i++)
            {
                try
                {
                    String contact;
                    System.out.println(" please fill the list at index " + i);
                    Scanner in = new Scanner(System.in);
                    contact = in.next();
                    contactListNames[i] = contact;
                    in.nextLine();
                }
                catch (Exception e)
                {
                    e.printStackTrace();
                }
            }
        }

    Unfortunately this doesn't work. I get the printout to fill it at index 0; I input something, and I get a nice big stack trace starting at contactListNames[i] = contact;.

    So my two questions, in short, are: how do I define an array in a constructor, and why can't I fill the array from that method?

    Stack trace by request:

        please fill the list at index 0
        overtone
        java.lang.NullPointerException
            at project.AdminMessages.fillContactList(AdminMessages.java:408)
            at project.AdminMessages.<init>(AdminMessages.java:88)
            at project.AdminUser.createAdminMessages(AdminUser.java:32)
            at project.AdminUser.<init>(AdminUser.java:18)
            at project.AdminUser.main(AdminUser.java:47)
        please fill the list at index 1
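    A sketch of the likely fix: a NullPointerException at contactListNames[i] = contact is what happens when an array field is declared but never allocated, so allocate it in the constructor before filling it. (Note also that "JList contactList = new JList(...)" inside the constructor declares a new local variable rather than assigning the field.)

        // In the constructor, before calling fillContactList():
        contactListNames = new String[5];  // allocate; a declared array field starts as null
        fillContactList();
        contactList = new JList(contactListNames);  // assign the field, not a new local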

    Read the article

  • Mocking a concrete class : templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to test a concrete object with this sort of structure:

        class Database {
        public:
            Database(Server server) : server_(server) {}

            int Query(const char* expression) {
                server_.Connect();
                return server_.ExecuteQuery();
            }

        private:
            Server server_;
        };

    i.e. it has no virtual functions, let alone a well-defined interface. I want a fake database which calls mock services for testing. Even worse, I want the same code to be built against either the real version or the fake, so that the same testing code can both:

    • Test the real Database implementation - for integration tests
    • Test the fake implementation, which calls mock services

    To solve this, I'm using a templated fake, like this:

        #ifndef INTEGRATION_TESTS
        class FakeDatabase {
        public:
            FakeDatabase() : realDb_(mockServer_) {}

            int Query(const char* expression) {
                MOCK_EXPECT_CALL(mockServer_, Query, 3);
                return realDb_.Query();
            }

        private:
            // in non-INTEGRATION_TESTS builds, Server is a mock Server with
            // extra testing methods that allows mocking
            Server mockServer_;
            Database realDb_;
        };
        #endif

        template <class T>
        class TestDatabaseContainer {
        public:
            int Query(const char* expression) {
                int result = database_.Query(expression);
                std::cout << "LOG: " << result << endl;
                return result;
            }

        private:
            T database_;
        };

    Edit: Note the fake Database must call the real Database (but with a mock Server).

    Now to switch between them I'm planning the following test framework:

        class DatabaseTests {
        public:
        #ifdef INTEGRATION_TESTS
            typedef TestDatabaseContainer<Database> TestDatabase;
        #else
            typedef TestDatabaseContainer<FakeDatabase> TestDatabase;
        #endif

            TestDatabase& GetDb() { return _testDatabase; }

        private:
            TestDatabase _testDatabase;
        };

        class QueryTestCase : public DatabaseTests {
        public:
            void TestStep1() {
                ASSERT(GetDb().Query(static_cast<const char *>("")) == 3);
                return;
            }
        };

    I'm not a big fan of that compile-time switching between the real and the fake. So, my question is whether there's a better way of switching between Database and FakeDatabase. For instance, is it possible to do it at runtime in a clean fashion? I like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (the QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.
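    One hedged sketch of a runtime alternative: introduce a small virtual seam that the test container holds by pointer, and wrap each concrete database in a templated adapter, so the real/fake choice becomes a constructor argument instead of an #ifdef (the names here are hypothetical, not from the original code):

        #include <memory>

        struct IDatabase {                          // hypothetical seam
            virtual ~IDatabase() {}
            virtual int Query(const char* expression) = 0;
        };

        template <class T>
        class DatabaseAdapter : public IDatabase {  // wraps a concrete DB unmodified
        public:
            virtual int Query(const char* expression) { return db_.Query(expression); }
        private:
            T db_;
        };

        class RuntimeTestDatabase {                 // runtime-switched container
        public:
            explicit RuntimeTestDatabase(std::auto_ptr<IDatabase> db) : db_(db) {}
            int Query(const char* e) { return db_->Query(e); }
        private:
            std::auto_ptr<IDatabase> db_;
        };

    The test fixture then picks DatabaseAdapter<Database> or DatabaseAdapter<FakeDatabase> at construction time, and the test cases themselves stay template-free.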

    Read the article

  • What benefits are there to storing Javascript in external files vs in the <head>?

    - by RenderIn
    I have an Ajax-enabled CRUD application. If I display a record from my database, it shows that record's values for each column, including its primary key. For the Ajax actions tied to buttons on the page, I am able to set up their calls by printing the ID directly into their onclick functions when rendering the HTML server-side. For example, to save changes to the record I may have a button as follows, with '123' being the primary key of the record:

        <button type="button" onclick="saveRecord('123')">Save</button>

    Sometimes I have pages with Javascript generating HTML and Javascript. In some of these cases the primary key is not naturally available at that place in the code. In those cases I took a shortcut and generate buttons like so, taking the primary key from a place where it happens to be displayed on screen for visual consumption:

        ...
        <td>Primary Key: </td>
        <td><span id="PRIM_KEY">123</span></td>
        ...
        <button type="button" onclick="saveRecord(jQuery('#PRIM_KEY').text())">DoSomething</button>

    This definitely works, but it seems wrong to drive database queries off the value of text whose purpose is user consumption rather than method consumption. I could solve this by adding a series of additional parameters to various methods to usher the primary key along until it is eventually needed, but that also seems clunky. The most natural way for me to solve this problem would be to simply move all the Javascript which currently lives in external files into the <head> of the page. That way I could generate custom Javascript methods without having to pass around as many parameters.

    Other than readability, I'm struggling to see what benefit there is to storing Javascript externally. It seems to make the already weak marriage between the HTML/DOM and Javascript all the more distant. I've seen some people suggest leaving the Javascript external but setting various "custom" variables on the page itself, for example (in PHP):

        <script type="text/javascript">
            var primaryKey = <?php print $primaryKey; ?>;
        </script>
        <script type="text/javascript" src="my-external-js-file-depending-on-primaryKey-being-set.js"></script>

    How is this any better than just putting all the Javascript on the page in the first place? The HTML and Javascript are still strongly dependent on each other.
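    A middle ground worth considering (my sketch, not from the post; it assumes jQuery 1.7+ for the delegated .on handler, and the file name is hypothetical) is to keep the script external and carry the key on the markup itself in a data attribute, so neither inline handlers nor page-generated globals are needed:

        // external file, e.g. record-actions.js
        // One delegated handler covers every record button on the page;
        // each button carries its own key in the markup:
        //   <button type="button" class="save-record" data-pk="123">Save</button>
        jQuery(function ($) {
            $(document).on('click', 'button.save-record', function () {
                var primaryKey = $(this).data('pk'); // read from data-pk
                saveRecord(primaryKey);
            });
        });

    The external file then stays identical (and cacheable) across all pages; only the markup varies per record.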

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me which index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns:

        id: a primary key integer
        world_id: an integer foreign key which acts as a namespace for a subset of tiles
        tileY: the Y-coordinate integer
        tileX: the X-coordinate integer
        value: the contents of this tile, a varchar if it matters

    I have the following indexes:

        "ywot_tile_pkey" PRIMARY KEY, btree (id)
        "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX")
        "ywot_tile_world_id" btree (world_id)

    And this is the query I'm trying to optimize:

        ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile" WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1);
                                                                          QUERY PLAN
        -------------------------------------------------------------------------------------------------------------------------------------------
         Bitmap Heap Scan on ywot_tile  (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1)
           Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
           ->  Bitmap Index Scan on ywot_tile_world_id_key  (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1)
                 Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
         Total runtime: 80.194 ms

    So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant:

        All the tiles for a queried region may or may not be present.
        The height and width of a queried rectangle are typically about 10x10 to 20x20.
        For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles; the worst case is currently around 10,000, and typically there are far fewer.
        New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading, as in the query above.

    The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?
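    One alternative worth benchmarking (a sketch; the ywot=> prompt suggests PostgreSQL, which this assumes, and the index name is mine) is a spatial index: treat each tile's coordinates as a point so the rectangle becomes a single box-containment condition that a GiST index can answer. The btree_gist extension is needed so the plain integer world_id can join the GiST index:

        CREATE EXTENSION IF NOT EXISTS btree_gist;

        -- Composite GiST index: equality on world_id plus a 2-D point
        -- built from the tile coordinates.
        CREATE INDEX ywot_tile_world_point
            ON ywot_tile USING gist (world_id, point("tileX", "tileY"));

        -- The rectangular region is now one containment test instead of
        -- four separate range conditions.
        SELECT *
        FROM ywot_tile
        WHERE world_id = 27685
          AND point("tileX", "tileY") <@ box(point(-2, -1), point(9, 6));

    The intuition: the existing btree can only seek on (world_id, tileY), checking the tileX bounds row by row, whereas a 2-D structure narrows on both axes at once. As always, EXPLAIN ANALYZE on real data should make the final call.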

    Read the article

  • Are there any drawbacks to class-based Javascript injection?

    - by jonathanconway
    A phenomenon I'm seeing more and more of is Javascript code that is tied to a particular element on a particular page, rather than being tied to kinds of elements or UI patterns. For example, say we had a couple of animated menus on a page:

        <ul id="top-navigation"> ... </ul>
        <!-- ... -->
        <ul id="product-list"> ... </ul>

    These two menus might exist on the same page or on different pages, and some pages mightn't have any menus. I'll often see Javascript code like this (for these examples, I'm using jQuery):

        $(document).ready(function() {
            $('ul#top-navigation').dropdownMenu();
            $('ul#product-selector').dropdownMenu();
        });

    Notice the problem? The Javascript is tightly coupled to particular instances of a UI pattern rather than to the UI pattern itself. Now wouldn't it be so much simpler (and cleaner) to do this instead?

        $(document).ready(function() {
            $('ul.dropdown-menu').dropdownMenu();
        });

    Then we can put the 'dropdown-menu' class on our lists like so:

        <ul id="top-navigation" class="dropdown-menu"> ... </ul>
        <!-- ... -->
        <ul id="product-list" class="dropdown-menu"> ... </ul>

    This way of doing things would have the following benefits:

        Simpler Javascript: we only need to attach once, to the class.
        We avoid looking for specific instances that mightn't exist on a given page.
        If we remove an element, we don't need to hunt through the Javascript to find the attach code for that element.

    I believe techniques similar to this were pioneered by certain articles on alistapart.com. I'm amazed these simple techniques still haven't gained widespread adoption, and I still see 'best-practice' code samples and Javascript frameworks referring directly to UI instances rather than UI patterns. Is there any reason for this? Is there some big disadvantage to the technique I just described that I'm unaware of?
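    For what it's worth, the pattern extends naturally to per-instance variation (my sketch; it assumes the hypothetical dropdownMenu plugin accepts an options object, which the post doesn't show): read any per-menu tweaks from data attributes, so one external binding still covers every menu on every page:

        // e.g. <ul class="dropdown-menu" data-delay="300"> ... </ul>
        $(document).ready(function() {
            $('ul.dropdown-menu').each(function() {
                var $menu = $(this);
                // Fall back to a default when no data-delay is supplied.
                $menu.dropdownMenu({ delay: $menu.data('delay') || 150 });
            });
        });

    This keeps the coupling at the level of the pattern while still allowing individual menus to differ.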

    Read the article
