Search Results

Search found 4955 results on 199 pages for 'range'.


  • What objects would get defined in a Bridge scoring app (Javascript)

    - by Alex Mcp
    I'm writing a Bridge (card game) scoring application as practice in javascript, and am looking for suggestions on how to set up my objects. I'm pretty new to OO in general, and would love a survey of how and why people would structure a program to do this (the "survey" begets the CW mark. Additionally, I'll happily close this if it's out of range of typical SO discussion). The platform is going to be a web-app for webkit on the iPhone, so local storage is an option. My basic structure is like this: var Team = { player1: , player2: , vulnerable: , //this is a flag for whether or //not you've lost a game yet, affects scoring scoreAboveLine: , scoreBelowLine: gamesWon: }; var Game = { init: ,//function to get old scores and teams from DB currentBid: , score: , //function to accept whether bid was made and apply to Teams{} save: //auto-run that is called after each game to commit to DB }; So basically I'll instantiate two teams, and then run loops of game.currentBid() and game.score. Functionally the scoring is working fine, but I'm wondering if this is how someone else would choose to break down the scoring of Bridge, and if there are any oversights I've made with regards to OO-ness and how I've abstracted the game. Thanks!
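
    A rough sketch of one possible decomposition in Python (purely illustrative; the names are hypothetical and the scoring is simplified, not Bridge-accurate):

      class Team:
          def __init__(self, player1, player2):
              self.player1 = player1
              self.player2 = player2
              self.vulnerable = False        # set once the team has lost a game
              self.score_above_line = 0
              self.score_below_line = 0
              self.games_won = 0

      class Game:
          def __init__(self, team_a, team_b):
              self.teams = (team_a, team_b)
              self.current_bid = None        # would be set by the currentBid step

          def score(self, bidding_team, bid_made, points):
              # apply the result of the current bid to the bidding team
              if bid_made:
                  bidding_team.score_below_line += points
              else:
                  bidding_team.score_above_line += points   # simplified penalty handling

    The same split (teams hold running scores and vulnerability; the game applies one bid result and persists itself) maps directly onto the JavaScript objects sketched above.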

    Read the article

  • A dynamic array of class "landmark", inside another single class "landmarks"

    - by pinnacler
    I'm working on a robot localization simulator and I created a class called "landmark". The end result is going to be a robot that is always centered and always faces the top of the screen. As it turns, the bird's-eye view map will rotate around the robot. To accomplish this, I'm assuming I can rotate one class and have all elements inside rotate as well. So, the landmark class has properties x, y, label, and radius. This is supposed to simulate a tree location in a forest. To test everything, I need "forest data," and I wrote a script to generate 100 trees in a 100m x 100m area. The script automatically generates values within an acceptable range for x, y, and radius. The generated data is stored in an object called tempForest and is 100x3. Ideally, I want to create a class called "landmarks" (plural) that has 100 landmark instances inside. How would I instantiate 100 instances of landmark in one instance of landmarks using that randomly generated data? Ideally, I'd just type treeBeacons = landmarks(); and it would randomly populate 100 (user-definable, set in a config file) instances with x, y, radius data. I'm not sure how to deal with a dynamic array of class "Landmark" inside another single class "landmarks." Any ideas?
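
    The question is MATLAB-flavoured, but the composition pattern is the same in most languages; here is a minimal Python sketch (hypothetical names and ranges) of a container class that builds N randomly generated Landmark instances and can rotate them all at once:

      import math
      import random

      class Landmark:
          def __init__(self, x, y, radius, label=None):
              self.x, self.y, self.radius, self.label = x, y, radius, label

      class Landmarks:
          """Owns a list of randomly generated Landmark instances."""
          def __init__(self, count=100, area=100.0, min_r=0.1, max_r=1.0):
              self.items = [Landmark(random.uniform(0, area),
                                     random.uniform(0, area),
                                     random.uniform(min_r, max_r),
                                     label="tree %d" % i)
                            for i in range(count)]

          def rotate(self, angle, cx=0.0, cy=0.0):
              # rotating the container just rotates every landmark it owns about (cx, cy)
              c, s = math.cos(angle), math.sin(angle)
              for lm in self.items:
                  dx, dy = lm.x - cx, lm.y - cy
                  lm.x, lm.y = cx + c * dx - s * dy, cy + s * dx + a_dy if False else cx + c * dx - s * dy, cy + s * dx + c * dy

    In MATLAB the equivalent would be a landmarks class whose constructor loops over the generated tempForest rows and stores an array of landmark objects in a single property.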

    Read the article

  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data to from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well I changed the connection string to reflect ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing cnt.Close End If End Sub

    Read the article

  • What does "active directory integration" mean in your .NET app?

    - by flipdoubt
    Our marketing department comes back with "active directory integration" being a key customer request, but our company does not seem to have the attention span to (1) decide on what functional changes we want to make toward this end, (2) interview a broad range of customer to identify the most requested functional changes, and (3) still have this be the "hot potato" issue next week. To help me get beyond the broad topic of "active directory integration," what does it mean in your .NET app, both ASP.NET and WinForms? Here are some sample changes I have to consider: When creating and managing users in your app, are administrators presented with a list of all AD users or just a group of AD users? When creating new security groups within your app (we call them Departments, like "Human Resources"), should this create new AD groups? Do administrators assign users to security groups within your app or outside via AD? Does it matter? Is the user signed on to your app by virtue of being signed on to Windows? If not, do you track users with your own user table and some kind of foreign key into AD? What foreign key do you use to link app users to AD users? Do you have to prove your login process protects user passwords? What foreign key do you use to link app security groups to AD security groups? If you have a WinForms component to your app (we have both ASP.NET and WinForms), do you use the Membership Provider in your WinForms app? Currently, our Membership and Role management predates the framework's version, so we do not use the Membership Provider. Am I missing any other areas of functional changes? Followup question Do apps that support "active directory integration" have the ability to authenticate users against more than one domain? Not that one user would authenticate to more than one domain but that different users of the same system would authenticate against different domains.

    Read the article

  • Best way to enforce inter-table constraints inside database

    - by FerranB
    I'm looking for the best way to check inter-table constraints, a step beyond foreign keys. For instance, to check that a date value on a child record falls within a date range defined by two columns on the parent row. For instance: Parent table ID DATE_MIN DATE_MAX ----- ---------- ---------- 1 01/01/2009 01/03/2009 ... Child table PARENT_ID DATE ---------- ---------- 1 01/02/2009 1 01/12/2009 <--- HAVE TO FAIL! ... I see two approaches: create materialized views on-commit as shown in this article (or the equivalent on other RDBMSs), or use stored procedures and triggers. Any other approach? Which is the best option? UPDATE: The motivation of this question is not "putting the constraints in the database or in the application". I think that is a tired question and everyone does it the way they prefer. And, sorry to the detractors, I'm developing with constraints in the database. From there, the question is "which is the best option to manage inter-table constraints in the database?". I've added "inside database" to the question title. UPDATE 2: Someone added the "oracle" tag. Of course materialized views are an Oracle feature, but I'm interested in any option, regardless of whether it's Oracle or another RDBMS.

    Read the article

  • MapKit custom annotations being added to map, but are not visible on the map

    - by culov
    I learned a lot from the screencast at http://icodeblog.com/2009/12/21/introduction-to-mapkit-in-iphone-os-3-0/ , and I've been trying to incorporate the way he creates a custom annotation into my iPhone app. He is hardcoding annotations and adding them individually to the map, whereas I want to add them from my data source. So I have an array of objects with a lat and a lng (I've checked that the values are within range and ought to be appearing on the map) that I iterate through and do [mapView addAnnotation:truck]. Once this process is completed, I check the number of annotations on the map with [[mapView annotations] count] and it's equal to the number it ought to be, so all the annotations are getting added to the mapView, but for some reason I can't see any annotations in the simulator. I've compared my code with his in all the pertinent places many times, but nothing seems to stand out as being incorrect. I've also reviewed my code for the last several hours trying to find a point where I do something wrong, but nothing is coming to mind. The images are named just as they are assigned in the custom AnnotationView, the loadAnnotation function is done properly, etc. I don't know what it could be. So I suppose if I could have the answer to one question it would be: what are possible causes for a mapView to contain several annotations, but not show any on the map? Thanks

    Read the article

  • Does NHibernate LINQ support ToLower() in Where() clauses?

    - by Daniel T.
    I have an entity and its mapping: public class Test { public virtual int Id { get; set; } public virtual string Name { get; set; } public virtual string Description { get; set; } } public class TestMap : EntityMap<Test> { public TestMap() { Id(x => x.Id); Map(x => x.Name); Map(x => x.Description); } } I'm trying to run a query on it (to grab it out of the database): var keyword = "test" // this is coming in from the user keyword = keyword.ToLower(); // convert it to all lower-case var results = session.Linq<Test> .Where(x => x.Name.ToLower().Contains(keyword)); results.Count(); // execute the query However, whenever I run this query, I get the following exception: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index Am I right when I say that, currently, Linq to NHibernate does not support ToLower()? And if so, is there an alternative that allows me to search for a string in the middle of another string that Linq to NHibernate is compatible with? For example, if the user searches for kap, I need it to match Kapiolani, Makapuu, and Lapkap.

    Read the article

  • Why does findstr not handle case properly (in some circumstances)?

    - by paxdiablo
    While writing some recent scripts in cmd.exe, I had a need to use findstr with regular expressions - the customer required standard cmd.exe commands (no GnuWin32, Cygwin, VBS or PowerShell). I just wanted to know if a variable contained any upper-case characters and attempted to use: > set myvar=abc > echo %myvar%|findstr /r "[A-Z]" abc > echo %errorlevel% 0 When %myvar% is set to abc, that actually outputs the string and sets errorlevel to 0, saying that a match was found. However, the full-list variant: > echo %myvar%|findstr /r "[ABCDEFGHIJKLMNOPQRSTUVWXYZ]" > echo %errorlevel% 1 does not output the line and it correctly sets errorlevel to 1. In addition: > echo %myvar%|findstr /r "^[A-Z]*$" > echo %errorlevel% 1 also works as expected. I'm obviously missing something here, even if it's only the fact that findstr is somehow broken. Why does the first (range) regex not work in this case? And yet more weirdness: > echo %myvar%|findstr /r "[A-Z]" abc > echo %myvar%|findstr /r "[A-Z][A-Z]" abc > echo %myvar%|findstr /r "[A-Z][A-Z][A-Z]" > echo %myvar%|findstr /r "[A]" The last two above also do not output the string!!

    Read the article

  • Generate number sequences with LINQ

    - by tanascius
    I'm trying to write a LINQ statement which returns all possible combinations of numbers (I need this for a test, and I was inspired by this article by Eric Lippert). The prototype of the method I'm calling looks like: IEnumerable<Collection<int>> AllSequences( int start, int end, int size ); The rules are: all returned collections have a length of size; number values within a collection have to increase; every number between start and end should be used. So calling AllSequences( 1, 5, 3 ) should result in 10 collections, each of size 3: 1 2 3 1 2 4 1 2 5 1 3 4 1 3 5 1 4 5 2 3 4 2 3 5 2 4 5 3 4 5 Now, somehow I'd really like to see a pure LINQ solution. I am able to write a non-LINQ solution on my own, so please put no effort into a solution without LINQ. My attempts so far ended at a point where I have to join a number with the result of a recursive call of my method - something like: return from i in Enumerable.Range( start, end - size + 1 ) select BuildCollection(i, AllSequences( i, end, size -1)); But I can't manage to implement BuildCollection() in LINQ - or even skip this method call. Can you help me here?
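
    For reference, the requested sequences are exactly the size-k combinations of the numbers start..end, and any solution (LINQ or otherwise) has to mirror the same recursive structure; a short Python sketch of that structure, just to make the recursion concrete (not the LINQ answer being asked for):

      def all_sequences(start, end, size):
          """Yield every increasing sequence of `size` numbers drawn from start..end."""
          if size == 0:
              yield []
              return
          # leave room for the remaining size - 1 numbers
          for i in range(start, end - size + 2):
              for rest in all_sequences(i + 1, end, size - 1):
                  yield [i] + rest

      # list(all_sequences(1, 5, 3)) gives the 10 collections listed above;
      # itertools.combinations(range(1, 6), 3) produces the same set.

    The recursive call plays the role of BuildCollection(): each prefix number i is joined onto every sequence produced for the remaining range.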

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published DESC LIMIT 0, 50; This takes about 32 seconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published ASC LIMIT 0, 50; The EXPLAIN is the same for both queries. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts index NULL published 5 NULL 50 Using where I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts range feed_id feed_id 4 \N 759 Using where; Using filesort And here's the table. CREATE TABLE `posts` ( `id` int(20) NOT NULL AUTO_INCREMENT, `feed_id` int(11) NOT NULL, `post_url` varchar(255) NOT NULL, `title` varchar(255) NOT NULL, `content` blob, `author` varchar(255) DEFAULT NULL, `published` int(12) DEFAULT NULL, `updated` datetime NOT NULL, `created` datetime NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `post_url` (`post_url`,`feed_id`), KEY `feed_id` (`feed_id`), KEY `published` (`published`) ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1; Is there a fix for this? Thanks!

    Read the article

  • How to make a custom NSFormatter work correctly on Snow Leopard?

    - by Nathan
    I have a custom NSFormatter attached to several NSTextFields who's only purpose is to uppercase the characters as they are typed into field. The entire code for my formatter is included below. The stringForObjectValue() and getObjectValue() implementations are no-ops and taken pretty much directly out of Apple's documentation. I'm using the isPartialStringValid() method to return an uppercase version of the string. This code works correctly in 10.4 and 10.5. When I run it on 10.6, I get "strange" behaviour where text fields aren't always render the characters that are typed and sometimes are just displaying garbage. I've tried enabling NSZombie detection and running under Instruments but nothing was reported. I see errors like the following in "Console": HIToolbox: ignoring exception '*** -[NSCFString replaceCharactersInRange:withString:]: Range or index out of bounds' that raised inside Carbon event dispatch ( 0 CoreFoundation 0x917ca58a __raiseError + 410 1 libobjc.A.dylib 0x94581f49 objc_exception_throw + 56 2 CoreFoundation 0x917ca2b8 +[NSException raise:format:arguments:] + 136 3 CoreFoundation 0x917ca22a +[NSException raise:format:] + 58 4 Foundation 0x9140f528 mutateError + 218 5 AppKit 0x9563803a -[NSCell textView:shouldChangeTextInRange:replacementString:] + 852 6 AppKit 0x95636cf1 -[NSTextView(NSSharing) shouldChangeTextInRanges:replacementStrings:] + 1276 7 AppKit 0x95635704 -[NSTextView insertText:replacementRange:] + 667 8 AppKit 0x956333bb -[NSTextInputContext handleTSMEvent:] + 2657 9 AppKit 0x95632949 _NSTSMEventHandler + 209 10 HIToolbox 0x93379129 _ZL23DispatchEventToHandlersP14EventTargetRecP14OpaqueEventRefP14HandlerCallRec + 1567 11 HIToolbox 0x933783f0 _ZL30SendEventToEventTargetInternalP14OpaqueEventRefP20OpaqueEventTargetRefP14HandlerCallRec + 411 12 HIToolbox 0x9339aa81 SendEventToEventTarget + 52 13 HIToolbox 0x933fc952 SendTSMEvent + 82 14 HIToolbox 0x933fc2cf SendUnicodeTextAEToUnicodeDoc + 700 15 HIToolbox 0x933fbed9 TSMKeyEvent + 998 16 HIToolbox 0x933ecede TSMProcessRawKeyEvent + 2515 17 AppKit 0x95632228 -[NSTextInputContext handleEvent:] + 1453 18 AppKit 0x9562e088 -[NSView interpretKeyEvents:] + 209 19 AppKit 0x95631b45 -[NSTextView keyDown:] + 751 20 AppKit 0x95563194 -[NSWindow sendEvent:] + 5757 21 AppKit 0x9547bceb -[NSApplication sendEvent:] + 6431 22 AppKit 0x9540f6fb -[NSApplication run] + 917 23 AppKit 0x95407735 NSApplicationMain + 574 24 macsetup 0x00001f9f main + 24 25 macsetup 0x00001b75 start + 53 ) Can anybody shed some light on what is happening? Am I just using NSFormatter incorrectly? -(NSString*) stringForObjectValue:(id)object { if( ![object isKindOfClass: [ NSString class ] ] ) { return nil; } return [ NSString stringWithString: object ]; } -(BOOL)getObjectValue: (id*)object forString: string errorDescription:(NSString**)error { if( object ) { *object = [ NSString stringWithString: string ]; return YES; } return NO; } -(BOOL) isPartialStringValid: (NSString*) cStr newEditingString: (NSString**) nStr errorDescription: (NSString**) error { *nStr = [NSString stringWithString: [cStr uppercaseString]]; return NO; }

    Read the article

  • Catching 'Last Record' in Coldfusion for IE javascript bug

    - by Simon Hume
    I'm using ColdFusion to pull UK postcodes into an array for display on a Google Map. This happens dynamically from a SQL database, so the numbers can range from 1 to 100+. The script works great; however, in IE (groan) it decides to display one point way off the map, over in California somewhere. I fixed this issue in a previous web app; it was due to the comma between each array item still being present at the end. Works fine in Firefox, Safari etc, but not IE. But that one was using a set of 10 records, so it was easy to fix. I just need a little if statement to wrap around my comma to hide it when it hits the last record. I can't seem to get it right. Any tips/suggestions? Here is the line of code in question: var address = [<cfloop query="getApplicant"><cfif getApplicant.dbHomePostCode GT ""><cfoutput>'#getApplicant.dbHomePostCode#',</cfoutput></cfif> </cfloop>]; Hopefully someone can help with this rather simple request. I'm just having a bad day at the office!
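
    The general fix for trailing-delimiter problems is to collect the values first and put the delimiter between items rather than after each one; a language-neutral illustration in Python (the postcode values are hypothetical):

      postcodes = ["SW1A 1AA", "EH1 1YZ", "CF10 1EP"]   # hypothetical values from the query

      # join() places the comma *between* items, so there is no trailing comma to hide
      address_js = "var address = [%s];" % ", ".join("'%s'" % p for p in postcodes)
      print(address_js)   # var address = ['SW1A 1AA', 'EH1 1YZ', 'CF10 1EP'];

    In the CFML itself, the comparable trick is to compare getApplicant.currentRow against getApplicant.recordCount inside the loop and only output the comma when the current row is not the last one.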

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL) TABLE Foo col1 int ,col2 int ,col3 int ,col4 int PRIMARY KEY (col1,col2) INDEX IX_1 col3 INCLUDE col4 Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34 I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1 and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • Planning management slots/sessions

    - by Glide
    I have a planning structure on two tables to store available slots by day, and sessions. A slot is defined by a range of time in the day. CREATE TABLE slot ( `id` int(11) NOT NULL AUTO_INCREMENT , `date` date , `start` time , `end` time ); Sessions can't overlap themselves and must be wrapped in a slot. CREATE TABLE session ( `id` int(11) NOT NULL AUTO_INCREMENT , `date` date , `start` time , `end` time ); I need to generate a list of available blocks of time of a certain duration, in order to create sessions. Example: INSERT INTO slot (date, start, end) VALUES ("2010-01-01", "10:00", "19:00") , ("2010-01-02", "10:00", "15:00") , ("2010-01-02", "16:00", "20:30") ; INSERT INTO slot (date, start, end) VALUES ("2010-01-01", "10:00", "19:00") , ("2010-01-02", "10:00", "15:00") , ("2010-01-02", "16:00", "20:30") ; 2010-01-01 <##><####> <- Sessions ------------------------------------ <- Slots 10 11 12 13 14 15 16 17 18 19 20 2010-01-02 <##########> <########> <- Sessions -------------------- ------------------ <- Slots 10 11 12 13 14 15 16 17 18 19 20 I need to know which spaces of 1 hour I can use: +------------+-------+-------+ | date | start | end | +------------+-------+-------+ | 2010-01-01 | 13:00 | 14:00 | | 2010-01-01 | 14:00 | 15:00 | | 2010-01-01 | 15:00 | 16:00 | | 2010-01-01 | 16:00 | 17:00 | | 2010-01-01 | 17:00 | 18:00 | | 2010-01-01 | 18:00 | 19:00 | | 2010-01-02 | 10:00 | 11:00 | | 2010-01-02 | 11:00 | 12:00 | | 2010-01-02 | 16:00 | 17:00 | +------------+-------+-------+
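
    Whether it ends up as a SQL query or application code, the underlying computation is: step through each slot in one-hour increments and keep the candidate hours that do not overlap any session on the same date. A small Python sketch of that logic (the session rows here are hypothetical, chosen only to show the shape of the output):

      from datetime import date, datetime, timedelta

      slots = [(date(2010, 1, 1), "10:00", "19:00"),     # mirrors the slot table
               (date(2010, 1, 2), "10:00", "15:00"),
               (date(2010, 1, 2), "16:00", "20:30")]
      sessions = [(date(2010, 1, 1), "11:00", "13:00"),  # hypothetical booked sessions
                  (date(2010, 1, 2), "12:00", "15:00")]

      def at(d, hhmm):
          return datetime.combine(d, datetime.strptime(hhmm, "%H:%M").time())

      def free_hours(slots, sessions, duration=timedelta(hours=1)):
          for d, s, e in slots:
              cur, end = at(d, s), at(d, e)
              while cur + duration <= end:
                  overlaps = any(sd == d and at(d, ss) < cur + duration and cur < at(d, se)
                                 for sd, ss, se in sessions)
                  if not overlaps:
                      yield d, cur.time(), (cur + duration).time()
                  cur += duration

      for row in free_hours(slots, sessions):
          print(row)

    A set-based SQL version of the same idea usually needs a numbers/hours helper table to generate the candidate start times before the anti-join against session.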

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQLSERVER 2005, I'm using table-valued function as a convenient way to perform arbitrary aggregation on subset data from large table (passing date range or such parameters). I'm using theses inside larger queries as joined computations and I'm wondering if the query plan optimizer work well with them in every condition or if I'm better to unnest such computation in my larger queries. Does query plan optimizer unnest table-valued functions if it make sense? If it doesn't, what do you recommend to avoid code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan? code sample: create table dbo.customers ( [key] uniqueidentifier , constraint pk_dbo_customers primary key ([key]) ) go /* assume large amount of data */ create table dbo.point_of_sales ( [key] uniqueidentifier , customer_key uniqueidentifier , constraint pk_dbo_point_of_sales primary key ([key]) ) go create table dbo.product_ranges ( [key] uniqueidentifier , constraint pk_dbo_product_ranges primary key ([key]) ) go create table dbo.products ( [key] uniqueidentifier , product_range_key uniqueidentifier , release_date datetime , constraint pk_dbo_products primary key ([key]) , constraint fk_dbo_products_product_range_key foreign key (product_range_key) references dbo.product_ranges ([key]) ) go . /* assume large amount of data */ create table dbo.sales_history ( [key] uniqueidentifier , product_key uniqueidentifier , point_of_sale_key uniqueidentifier , accounting_date datetime , amount money , quantity int , constraint pk_dbo_sales_history primary key ([key]) , constraint fk_dbo_sales_history_product_key foreign key (product_key) references dbo.products ([key]) , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key) references dbo.point_of_sales ([key]) ) go create function dbo.f_sales_history_..snip.._date_range ( @accountingdatelowerbound datetime, @accountingdateupperbound datetime ) returns table as return ( select pos.customer_key , sh.product_key , sum(sh.amount) amount , sum(sh.quantity) quantity from dbo.point_of_sales pos inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key] where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound group by pos.customer_key , sh.product_key ) go -- TODO: insert some data -- this is a table containing a selection of product ranges declare @selectedproductranges table([key] uniqueidentifier) -- this is a table containing a selection of customers declare @selectedcustomers table([key] uniqueidentifier) declare @low datetime , @up datetime -- TODO: set top query parameters . select saleshistory.customer_key , saleshistory.product_key , saleshistory.amount , saleshistory.quantity from dbo.products p inner join @selectedproductranges productrangeselection on p.product_range_key = productrangeselection.[key] inner join @selectedcustomers customerselection on 1 = 1 inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory on saleshistory.product_key = p.[key] and saleshistory.customer_key = customerselection.[key] I hope the sample makes sense. Much thanks for your help!

    Read the article

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using: Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here) with mutation rates tested between 1% and 25%. Selection: Roulette Wheel Selection. Fitness function: 1 / distance of tour. Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations. Stop condition: 2500 generations. With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple Hill-Climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple Hill-Climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?
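
    One thing worth ruling out is the crossover operator itself silently producing invalid tours; here is a minimal sketch of an Ordered Crossover (OX) style operator in Python, with a check that the child is still a permutation of the cities (an operator that drops or duplicates cities typically drags results down in exactly this way):

      import random

      def ordered_crossover(parent1, parent2):
          """Copy a slice of parent1, then fill the gaps in parent2's order."""
          size = len(parent1)
          a, b = sorted(random.sample(range(size), 2))
          child = [None] * size
          child[a:b + 1] = parent1[a:b + 1]                 # keep a contiguous slice of parent1
          fill = [c for c in parent2 if c not in child]     # remaining cities, in parent2's order
          for i in range(size):
              if child[i] is None:
                  child[i] = fill.pop(0)
          return child

      p1 = list(range(26))
      p2 = random.sample(range(26), 26)
      child = ordered_crossover(p1, p2)
      assert sorted(child) == list(range(26))               # child is still a valid tour

    Elitism (always carrying the best tour unchanged into the next generation) is another common missing piece when a GA with roulette-wheel selection loses ground to plain hill climbing.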

    Read the article

  • REST API error return good practices

    - by Remus Rusanu
    I'm looking for guidance on good practices when it comes to returning errors from a REST API. I'm working on a new API, so I can take it in any direction right now. My content type is XML at the moment, but I plan to support JSON in the future. I am now adding some error cases, like for instance a client attempting to add a new resource when it has exceeded its storage quota. I am already handling certain error cases with HTTP status codes (401 for authentication, 403 for authorization and 404 for plain bad request URIs). I looked over the blessed HTTP error codes, but none of the 400-417 range seems right for reporting application-specific errors. So at first I was tempted to return my application error with 200 OK and a specific XML payload (i.e. "Pay us more and you'll get the storage you need!"), but I stopped to think about it and it seems too SOAPy (/shrug in horror). Besides, it feels like I'm splitting the error responses into distinct cases, as some are HTTP status code driven and others are content driven. So what is the SO crowd's recommendation? Good practices (please explain why!) and also, from a client point of view, what kind of error handling in the REST API makes life easier for the client code?

    Read the article

  • Python unicode problem

    - by Somebody still uses you MS-DOS
    I'm receiving some data from a ZODB (Zope Object Database). I receive a mybrains object. Then I do: o = mybrains.getObject() and I receive a "Person" object in my project. Then, I can do b = o.name and doing print b on my class I get: José Carlos and print b.name.__class__ <type 'unicode'> I have a lot of "Person" objects. They are added to a list. names = [o.nome, o1.nome, o2.nome] Then I try to create a text file with this data. delimiter = ';' all = delimiter.join(names) + '\n' No problem. Now, when I do a print all I have: José Carlos;Jonas;Natália Juan;John But when I try to create a file of it: f = open("/tmp/test.txt", "w") f.write(all) I get an error like this (the positions aren't exactly the same, since I changed the names): UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 84: ordinal not in range(128) If I can already print it in the "correct" form for display, why can't I write it to a file? Which encode/decode method should I use to write a file with this data? I'm using Python 2.4.5 (can't upgrade it).
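
    For background: print works because the terminal's encoding can represent the accented characters, while file.write() on a plain file object falls back to an implicit ASCII conversion of the unicode string. A small sketch of the usual fixes, both valid on Python 2.4:

      # -*- coding: utf-8 -*-
      import codecs

      names = [u"José Carlos", u"Jonas", u"Natália Juan", u"John"]
      line = u";".join(names) + u"\n"

      # Option 1: encode the unicode string explicitly before writing
      f = open("/tmp/test.txt", "w")
      f.write(line.encode("utf-8"))
      f.close()

      # Option 2: let a codecs-wrapped file object do the encoding for you
      f = codecs.open("/tmp/test2.txt", "w", encoding="utf-8")
      f.write(line)
      f.close()

    Whichever encoding is chosen (UTF-8, latin-1, ...), the point is to pick it explicitly instead of letting the default ASCII codec be applied.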

    Read the article

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.) Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason for it to do so. Anyone have any ideas of either how to fix that or a better way to handle this? Should I be exporting that query to a fixture first and then re-importing it? from django.core.management.base import LabelCommand from django.db.utils import IntegrityError from django.db import models from django.conf import settings LIVE_DATABASE_KEY = 'live' class Command(LabelCommand): help = ("Synchronizes the data between the local machine and the live server") args = "APP_NAME" label = 'application name' requires_model_validation = False can_import_settings = True def handle_label(self, label, **options): # Make sure we're running the command on a developer machine and that we've got the right settings db_settings = getattr(settings, 'DATABASES', {}) if not LIVE_DATABASE_KEY in db_settings: print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY return if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY): print 'Data cannot synchronize with self. This command must be run on a non-production server.' return # Fetch all models for the given app try: app = models.get_app(label) app_models = models.get_models(app) except: print "The app '%s' could not be found or models could not be loaded for it." % label for model in app_models: print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name) # Query each model from the live site qs = model.objects.all().using(LIVE_DATABASE_KEY) # ...and save it to the local database for record in qs: try: record.save(using='default') except IntegrityError: # Skip as the record probably already exists pass
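
    save() only copies the instance's own columns, so the many-to-many rows have to be carried over as a separate step once the row exists in the target database. A rough, hedged sketch of how the inner loop could be extended (this assumes the related models are synced too and that primary keys match across the two databases):

      for record in qs:
          # capture the m2m relations while the instance is still bound to the live db
          m2m_values = {}
          for field in model._meta.many_to_many:
              m2m_values[field.name] = list(
                  getattr(record, field.name).all().values_list('pk', flat=True))
          try:
              record.save(using='default')
          except IntegrityError:
              pass  # the row probably already exists locally
          # re-attach the same related rows on the local copy
          for field_name, pks in m2m_values.items():
              manager = getattr(record, field_name)
              manager.clear()
              manager.add(*pks)

    The alternative mentioned in the question (dumping the queryset to a fixture and loading it with loaddata) also works, since serialized fixtures do include many-to-many data.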

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
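
    As the update notes, a binary heap fits this access pattern exactly: pop the minimum, push zero to two larger items, both in O(log n), with far less bookkeeping than a balanced tree. A small Python illustration using heapq and Fraction for the big rationals (the work step and the stopping bound are hypothetical, just to make the loop terminate):

      import heapq
      from fractions import Fraction

      LIMIT = Fraction(4)            # hypothetical bound so the demo finishes
      heap = [Fraction(1)]           # the collection starts with a single item

      processed = 0
      while heap:
          smallest = heapq.heappop(heap)            # remove the minimum, O(log n)
          processed += 1
          # ... do some work with `smallest`, then add 0-2 strictly larger items ...
          for child in (smallest + Fraction(1, 3), smallest * 2):
              if child < LIMIT:
                  heapq.heappush(heap, child)       # O(log n)

      print(processed, "items processed")

    Because only the minimum ever needs to be located, the heap's lack of ordered iteration costs nothing here, and its flat array layout keeps memory overhead low even at tens of millions of items.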

    Read the article

  • ArgumentOutOfRangeException when reading bytes from stream

    - by user345194
    I'm trying to read the response stream from an HttpWebResponse object. I know the length of the stream (_response.ContentLength), but I keep getting the following exception: Specified argument was out of the range of valid values. Parameter name: size While debugging, I noticed that at the time of the error, the values were as such: length = 15032 //the length of the stream as defined by _response.ContentLength bytesToRead = 7680 //the number of bytes in the stream that still need to be read bytesRead = 7680 //the number of bytes that have been read (offset) body.length = 15032 //the size of the byte[] the stream is being copied to The peculiar thing is that the bytesToRead and bytesRead variables are ALWAYS 7680, regardless of the size of the stream (contained in the length variable). Any ideas? Code: int length = (int)_response.ContentLength; byte[] body = null; if (length > 0) { int bytesToRead = length; int bytesRead = 0; try { body = new byte[length]; using (Stream stream = _response.GetResponseStream()) { while (bytesToRead > 0) { // Read may return anything from 0 to length. int n = stream.Read(body, bytesRead, length); // The end of the file is reached. if (n == 0) break; bytesRead += n; bytesToRead -= n; } stream.Close(); } } catch (Exception exception) { throw; } } else { body = new byte[0]; } _responseBody = body;

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12 000 and 120 000 rows, depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems as SQL Server was re-running the Select statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete whereas the CTE version only took 40 minutes. I believe table variables are slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for making my UDF perform quickly? Thanks a lot in advance.

    Read the article

  • What is the usefulness of W3C's Semantic Data Extractor in semantically correct XHTML/CSS development?

    - by metal-gear-solid
    What is the usefulness of W3C's Semantic Data Extractor? http://www.w3.org/2003/12/semantic-extractor.html This tool, geared by an XSLT stylesheet, tries to extract some information from a HTML semantic rich document. It only uses information available through a good usage of the semantics defined in HTML. The aim is to show that providing a semantically rich HTML gives much more value to your code: using a semantically rich HTML code allows a better use of CSS, makes your HTML intelligible to a wider range of user agents (especially search engines bots). As an aside, it can give clues to user agents developers on some hooks that could be interesting to add in their product. After checking validation for CSS and HTML, should I also run the Semantic Data Extractor tool? What does it do, and how can it improve our coding? Is anyone using it? I checked some sites randomly with it, but with most sites it gives an error: Using org.apache.xerces.parsers.SAXParser Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "`</input>`". org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "`</input>`".

    Read the article

  • What is the usefulness of W3C's "Semantic Data Extractor" in semantically correct XHTML/CSS development?

    - by metal-gear-solid
    What is the usefulness of W3C's Semantic Data Extractor? http://www.w3.org/2003/12/semantic-extractor.html This tool, geared by an XSLT stylesheet, tries to extract some information from a HTML semantic rich document. It only uses information available through a good usage of the semantics defined in HTML. The aim is to show that providing a semantically rich HTML gives much more value to your code: using a semantically rich HTML code allows a better use of CSS, makes your HTML intelligible to a wider range of user agents (especially search engines bots). As an aside, it can give clues to user agents developers on some hooks that could be interesting to add in their product. After checking validation for CSS and HTML, should I also run the Semantic Data Extractor tool? What does it do, and how can it improve our coding? Is anyone using it? I checked some sites randomly with it, but with most sites it gives an error: Using org.apache.xerces.parsers.SAXParser Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "`</input>`". org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "`</input>`". Is it possible for every site to pass with this tool? On one site I got this error: "No top-level heading (h1) found, no outline extracted." Is it necessary to have at least an H1 on every webpage?

    Read the article

  • Setting up ehcache replication - what multicast settings do I need?

    - by Darren Greaves
    I am trying to set up ehcache replication as documented here: http://ehcache.sourceforge.net/EhcacheUserGuide.html#id.s22.2 This is on a Windows machine but will ultimately run on Solaris in production. The instructions say to set up a provider as follows: <cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory" properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1, multicastGroupPort=4446, timeToLive=32"/> And a listener like this: <cacheManagerPeerListenerFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory" properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/> My questions are: Are the multicast IP address and port arbitrary (I know the address has to live within a specific range but do they have to be specific numbers)? Do they need to be set up in some way by our system administrator (I am on an office network)? I want to test it locally so am running two separate tomcat instances with the above config. What do I need to change in each one? I know both the listeners can't listen on the same port - but what about the provider? Also, are the listener ports arbitrary too? I've tried setting it up as above but in my testing the caches don't appear to be replicated - the value added in one tomcat's cache is not present in the other cache. Is there anything I can do to debug this situation (other than packet sniffing)? Thanks in advance for any help, been tearing my hair out over this one!

    Read the article
