Search Results

Search found 2730 results on 110 pages for 'jonathan prior'.


  • How to Global onRouteRequest directly to onBadRequest?

    - by virtualeyes
    EDIT: I came up with this to sanitize URI date params prior to passing off to the Play router:

        val ymdMatcher = "\\d{8}".r // matcher for yyyyMMdd URI param
        val ymdFormat  = org.joda.time.format.DateTimeFormat.forPattern("yyyyMMdd")
        def ymd2Date(ymd: String) = ymdFormat.parseDateTime(ymd)

        override def onRouteRequest(r: RequestHeader): Option[Handler] = {
          import play.api.i18n.Messages
          ymdMatcher.findFirstIn(r.uri) map { ymd =>
            try { ymd2Date(ymd); super.onRouteRequest(r) }
            catch {
              case e: Exception =>
                // kick to "bad" action handler on invalid date
                Some(controllers.Application.bad(Messages("bad.date.format")))
            }
          } getOrElse(super.onRouteRequest(r))
        }

    ORIGINAL: Let's say I want to return a BadRequest result type for all /foo URIs:

        override def onBadRequest(r: RequestHeader, error: String) = {
          BadRequest("Bad Request: " + error)
        }

        override def onRouteRequest(r: RequestHeader): Option[Handler] = {
          if (r.uri.startsWith("/foo")) onBadRequest("go away")
          else super.onRouteRequest(r)
        }

    Of course this does not work, since the expected return type is Option[play.api.mvc.Handler]. What's the idiomatic way to deal with this: create a default Application controller method to handle filtered bad requests? Ideally, since I already know in onRouteRequest that /foo is in fact a bad request, I'd like to call onBadRequest directly. I should note that this is a contrived example; I am actually verifying a yyyyMMdd date param in the URI and returning BadRequest if it does not parse to a JodaTime instance. Basically it is a catch-all filter to sanitize a given date param rather than handling it in every single controller method, not to mention a way to avoid cluttering up the application log with useless stack traces from invalid date parse conversions (several MB of these junk trace entries accrue daily due to users pointlessly manipulating the URI date in attempts to get at paid subscriber content).

  • Db4o Mvc Application Architecture

    - by Mac
    I am currently testing out Db4o for an ASP.NET MVC 2 application idea, but there are a few things I'm not quite sure on the best way to proceed. I want my application to use guessable routes rather than Ids for referencing my entities, but I also think I need Ids of some sort for update scenarios. So, for example, I want /country/usa instead of /country/1 (see the route sketch below). I may want to change the key name though (perhaps not on a country, but on other entities), so I am thinking I need an Id to use as the reference to retrieve the object prior to updating its fields. From other comments it seems the UUID is a bit long to be using, and I would prefer to use my own Ids anyway for a clean separation of concerns. Looking at the KandaAlpha project, I wasn't too keen on some aspects of the design and prefer something more along the lines of S#arp architecture, where they use things like [domainsignature], EntityWithTypedId, IEntityDuplicateChecker, IHasAssignedId, BaseObject and IValidatable in their entities to control insert/update behaviour, which seems cleaner and more extensible, covers validation, and is encapsulated well within the core and base repository classes. So would a port of S#arp architecture to Db4o make sense, or am I still thinking RDBMS in an OODB world? Also, is there a best practice for managing indexes (including unique ones, as above) in Db4o? Should they be model-metadata based and loaded using DI in a bootstrapper, for example, or should they be loaded more like AutoMapper.CreateMap? It's a bit of a rambling question, I know, but any thoughts, ideas or suggested reading material is greatly appreciated. Thanks, Mac
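
    For the guessable-routes part, a minimal ASP.NET MVC 2 route registration along these lines would resolve /country/usa by name rather than by numeric Id. This is only a sketch of the idea; the route, controller and action names are assumptions, not something taken from the question:

        // Global.asax.cs: a slug-style route (names are hypothetical)
        using System.Web.Mvc;
        using System.Web.Routing;

        public static void RegisterRoutes(RouteCollection routes)
        {
            // /country/usa hits CountryController.Details(string name)
            routes.MapRoute(
                "CountryByName",
                "country/{name}",
                new { controller = "Country", action = "Details" });
        }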

  • VS2008 EF and non-CRUD SP usage

    - by SteveO
    Using an edmx version of EF. My returned data is a join between tables that has a COMPOUND filter on the primary table. In essence this query is going to return a SEGMENT of Law codes and descriptions that a user can tie to a Sex Offender report. I have a complex SP because Linq2SQL cannot pass in a BETWEEN statement, or at least that is how I understand the error. The code itself is broken up by '-' marks, e.g. 39-13-504 "Aggravated Sexual Battery". The user wants to run a query with 4 params: 39, 13, 500, 599, i.e. get all codes from Title 39 and Chapter 13 with parts between 500 and 599. I have the SP in place to do the work; is there a way to consume the SP within the EF? I find many blogs about SPs that only cover CRUD operations, and that doesn't fit this need at all. I do not have a single table, but a join to the "prior selections" table that maps the key for the code. Any pointers on how to get a READ with an SP? TIA
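
    Not an answer on the EF-mapping side, but for reference the read itself can always be made against the same stored procedure with plain ADO.NET and the four parameters described above. A minimal sketch; the procedure name, parameter names and result column are assumptions, since the question does not give them:

        // Read-only SP call outside of EF; SP, parameter and column names are hypothetical.
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static List<string> GetCodes(string connectionString)
        {
            var descriptions = new List<string>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.GetCodesInRange", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Title", 39);
                cmd.Parameters.AddWithValue("@Chapter", 13);
                cmd.Parameters.AddWithValue("@PartFrom", 500);
                cmd.Parameters.AddWithValue("@PartTo", 599);

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        descriptions.Add(reader.GetString(reader.GetOrdinal("Description")));
                }
            }
            return descriptions;
        }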

  • Dojo addOnLoad, but is Dojo loaded?

    - by adamrice32
    I've encountered what seems like a chicken-and-egg problem, and have what I think is a logical solution. However, it occurred to me that others must have encountered something similar, so I figured I'd float it out there for the masses. The situation is that I want to use Dojo's addOnLoad function to queue up a number of callbacks which should be executed after the DOM has completed rendering on the client side. So what I'm doing is as follows:

        <html>
          <head>
            <script type="text/javascript" src="dojo.xd.js"></script>
            ...
          </head>
          <body>
            ...
            <script type="text/javascript">
              dojo.addOnLoad( ... );
              dojo.addOnLoad( ... );
              ...
            </script>
          </body>
        </html>

    Now, the issue is that I seem to be calling dojo.addOnLoad before the entire Dojo library has been downloaded by the browser. This makes sense in a way, because the inline SCRIPT contents should be executed before the entire DOM is loaded (and before the normal body onload callback is triggered). My question is this: is my approach sound, or would it make more sense to register a normal/standard body onload JavaScript callback to call a function which does the same work that each of the dojo.addOnLoad calls is doing in the SCRIPT block? Of course, this begs the question: why would you ever then use dojo.addOnLoad if you're not guaranteed that the Dojo library will be loaded prior to using the library? Hopefully this situation makes sense to someone other than me. Seems like someone else may have encountered this situation. Thoughts? Best Regards, Adam Rice

  • Cannot update a single field using Linq to Sql

    - by KallDrexx
    I am having a hard time attempting to update a single field without having to retrieve the whole record prior to saving. For example, in my web application I have an in-place editor for the Name and Description fields of an object. Once you edit either field, it sends the new value (with the object's ID value) to the web server. What I want is for the web server to take that value and ID and only update the one field. There are only two ways Google tells me to do this:

    1) When I get the value I want to change, along with the ID, retrieve the record from the database, update the field in the C# object, and then send it back to the server. I don't like this method because it includes a completely unnecessary database read call (which involves two tables, due to the way my schema is laid out).

    2) Set UpdateCheck for all the fields (except the primary keys) to UpdateCheck.Never. This doesn't work for me (I think) due to my mapping layer between the Linq to Sql classes and my Entity/ViewModel layer. When I convert my entity into the Linq to Sql db object it seems to be updating those fields regardless of the UpdateCheck setting. This might be just because of integers, since not setting an int means it is a zero (and no, I can't use int? instead).

    Are there any other options that I have?
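
    For what it's worth, the usual Linq to Sql pattern for option 2 is to build a stub entity holding only the primary key, attach it, and then set the changed member before SubmitChanges, which writes an UPDATE for just that column when the other members are marked UpdateCheck.Never. A sketch under that assumption; the DataContext, entity and property names here are hypothetical:

        // Stub-entity update: no prior read, only the changed column is written.
        // Assumes non-key members are mapped with UpdateCheck.Never.
        using (var db = new MyDataContext(connectionString))
        {
            var item = new Item { Id = itemId };  // only the primary key is populated
            db.Items.Attach(item);                // attach as unmodified
            item.Name = newName;                  // modification is tracked after Attach
            db.SubmitChanges();                   // UPDATE Item SET Name = ... WHERE Id = ...
        }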

  • Determine whether .NET assemblies were built from the same source

    - by Clayton
    Does anyone know of a way to compare two .NET assemblies to determine whether they were built from the "same" source files? I am aware that there are some differencing utilities available, such as the plugin for Reflector, but I am not interested in viewing differences in a GUI; I just want an automated way to compare a collection of binaries to see whether they were built from the same (or equivalent) source files. I understand that multiple different source files could produce the same IL, and realise that the process would only be sensitive to differences in the IL, not the original source. The main obstacle to just comparing the byte streams for the two assemblies is that .NET includes a field called the "MVID" (Module Version Identifier) in the assembly. This appears to have a different value for every compilation, so if you build the same code twice the assemblies will differ. A related question is: does anyone know how to force the MVID to be the same for each compilation? This would avoid us needing a comparison process that is insensitive to differences in the value of the MVID. A consistent MVID would be preferable, as it would mean that standard checksums could be used. The background behind this is that a third-party company is responsible for independently reviewing and signing off our releases prior to us being permitted to release to Production. This includes reviewing the source code. They want to independently confirm that the source code we give them matches the binaries that we earlier built, tested and currently plan to deploy. We are looking for a process that allows them to independently build the system from the source we supply them with, and then compare the checksums against the checksums for the binaries we have tested. Thanks
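
    As background, the MVID is exposed through reflection, so it is at least straightforward to confirm that it is what differs between two otherwise identical builds, or to exclude it from a comparison. A small sketch for reading it (not a full comparison process):

        // Print the Module Version Identifier of an assembly on disk.
        using System;
        using System.Reflection;

        static void PrintMvid(string assemblyPath)
        {
            Guid mvid = Assembly.ReflectionOnlyLoadFrom(assemblyPath)
                                .ManifestModule
                                .ModuleVersionId;
            Console.WriteLine("{0}: {1}", assemblyPath, mvid);
        }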

  • Obtaining MAC address

    - by rink.attendant.6
    According to "Obtain client MAC address in ASP.NET Application", it is not possible. I am not entirely convinced, because whenever I connect to Tim Hortons WiFi, my MAC address is known. Occasionally the network is slow and I see a URL like this before being redirected to the Connect page:

        http://timhortonswifi.com/cp/tdl3/index.asp
            ?cmd=login
            &switchip=172.30.129.73
            &mac=60:6c:66:17:1a:83
            &ip=10.40.66.229
            &essid=Tim%20Hortons%20WiFi
            &apname=TDL-ON-NEP-02177-WAP1
            &apgroup=02177
            &url=http%3A%2F%2Fweather%2Egc%2Eca%2Fcity%2Fpages%2Fon-72_metric_e%2Ehtml

    So according to this URL, the site knows the IP address of the router, my MAC address, the IP address assigned to my device by the router, the network SSID, some other pieces of information, and the URL I was trying to access prior to connecting. There are two options: Tim Hortons WiFi Basic and Tim Hortons WiFi Plus, where the "Plus" option allows me to connect to any Tim Hortons WiFi access point in Canada automatically with this device. Registration requires an email address, so I'm assuming this is possible by checking the MAC address and storing it in a database that routers ping upon connection. More info here. Judging by the extension of this page, I can safely assume it is ASP. How are they obtaining this information?
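
    The URL above is consistent with the access point appending the client's MAC to the redirect, rather than the web application detecting it: the landing page only has to read it back out of the query string. The page in question is classic ASP, but expressed in ASP.NET terms the idea is simply the following (parameter names taken from the URL above, everything else assumed):

        // Inside a page or handler: read values the wireless gateway put on the redirect URL.
        string mac       = Request.QueryString["mac"];  // e.g. 60:6c:66:17:1a:83
        string clientIp  = Request.QueryString["ip"];   // address the router assigned to the device
        string returnUrl = Request.QueryString["url"];  // page the user originally requested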

  • Collision Attacks, Message Digests and a Possible solution

    - by Dominar
    I've been doing some preliminary research in the area of message digests, specifically collision attacks on cryptographic hash functions such as MD5 and SHA-1, such as the PostScript example and the X.509 certificate duplicate. From what I can tell, in the case of the PostScript attack, specific data was generated and embedded within the header of the PostScript file (which is ignored during rendering) which brought the internal state of the MD5 to a state such that the modified wording of the document would lead to a final MD equivalent to the original. The X.509 attack took a similar approach, whereby data was injected within the comment/whitespace of the certificate. OK, so here is my question, and I can't seem to find anyone asking it: why isn't the length of ONLY the data being consumed added as a final block to the MD calculation? In the case of X.509, why is the whitespace and comments being taken into account as part of the MD? Wouldn't a simple process such as one of the following be enough to resolve the proposed collision attacks?

        MD(M + |M|) = xyz
        MD(M + |M| + |M| * magicseed_0 + ... + |M| * magicseed_n) = xyz

        where:
          M             : the message
          |M|           : the size of the message
          MD            : the message digest function (e.g. md5, sha, whirlpool)
          xyz           : the actual message digest value for the message M
          magicseed_{i} : a set of random values generated with a seed based on the
                          internal state prior to the size being added

    This technique should work, since to date all such collision attacks rely on adding more data to the original message. In short, the level of difficulty involved in generating a collision message such that it not only generates the same MD, but is also comprehensible/parsible/compliant, and is also the same size as the original message, is immensely difficult if not near impossible. Has this approach ever been discussed? Any links to papers etc. would be nice.
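
    Taken literally, the first construction above is easy to prototype, for example with MD5 standing in for the digest; this only illustrates the proposal and says nothing about whether it actually defeats the published attacks:

        // Prototype of MD(M + |M|), with MD5 purely as a stand-in digest.
        using System;
        using System.Security.Cryptography;

        static byte[] DigestWithLength(byte[] message)
        {
            byte[] length = BitConverter.GetBytes((long)message.Length);  // |M| as 8 bytes
            byte[] input = new byte[message.Length + length.Length];
            Buffer.BlockCopy(message, 0, input, 0, message.Length);
            Buffer.BlockCopy(length, 0, input, message.Length, length.Length);

            using (var md5 = MD5.Create())
                return md5.ComputeHash(input);  // MD(M + |M|)
        }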

  • Creating a shim Stream

    - by spender
    A decompression API that I am using has the following signature: Decode(Stream inStream, Stream outStream). I'd like to create a wrapper around this API, such that I can create my own Stream class which offers up the decoded data:

        Stream decodedStream = new BlaDecodeStream(inStream);

    so that I can then use this stream as a parameter to the XmlReader constructor in the same way one might use System.IO.Compression.GZipStream. As far as I can tell, the only other option is to set the outStream to a MemoryStream or a FileStream and go in two hops. The files I am dealing with are enormous, so neither of these options is particularly attractive. Before I go reinventing the wheel, is there any prior art that I might be able to draw from, or something in the BCL I might have missed? The CircularStream implementation here would go some of the way to helping, but I'm really looking for something similar that would block (as opposed to over/underrun) when the Stream's internal buffer is 'empty' when reading from it, and block when the internal buffer is full when writing to it. In this way it could serve as the outStream parameter and simultaneously (i.e. from another thread) be read from by the XmlReader.
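
    One possible shape for this, sketched rather than production-ready: run Decode on a worker thread writing into one end of an anonymous pipe, and hand the reading end to XmlReader. Pipe writes block when the pipe buffer is full, which gives the back-pressure described above. Decode stands for the decompression API from the question; everything else here is an assumption:

        // Sketch: bridge Decode(inStream, outStream) to a readable Stream via an anonymous pipe.
        using System.IO;
        using System.IO.Pipes;
        using System.Threading;
        using System.Xml;

        static XmlReader CreateDecodedReader(Stream inStream)
        {
            var writeEnd = new AnonymousPipeServerStream(PipeDirection.Out);
            var readEnd  = new AnonymousPipeClientStream(PipeDirection.In,
                                                         writeEnd.ClientSafePipeHandle);

            ThreadPool.QueueUserWorkItem(_ =>
            {
                try { Decode(inStream, writeEnd); }  // the decompression API from the question
                finally { writeEnd.Dispose(); }      // closing the write end signals end-of-stream
            });

            return XmlReader.Create(readEnd);        // the reader pulls as the decoder pushes
        }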

  • Advice needed on best and most efficient practices with developing google apps application...

    - by Ali
    Hi guys, I'm getting my feet wet with developing my order management application for integration with Google Apps. However, there are certain aspects I need to take into consideration prior to proceeding any further. My application is such that it would upload documents to Google Documents and store contacts in Google Contacts. It requires that a single order can have a number of uploaded documents associated with it, as well as some contacts associated with it. My question, however, is what would be the most efficient way to implement this. I could keep key tables for both contacts and documents which would contain just an ID and a link to the document/contact, or their respective identification ID on Google. Or I could maintain an exact replica of the information in my own database as well as a link to the contact on Google; however, won't that be too redundant? I don't want my application to be really slow, as I'm afraid that every time I make a call to Google Docs to retrieve a list of documents, or to Google Contacts, it would be really slow on my application. Or am I getting worried for no reason? Any advice would be most appreciated.

  • New to Linux Kernel/Driver development...

    - by CVS-2600Hertz-wordpress-com
    Recently, I began developing a driver for an embedded device running Linux. Until now I have only read about Linux internals. Having no prior experience in driver development, I am finding it a tad difficult to land my first step. So far:

    - I have downloaded the kernel source code (v2.6.32).
    - I have read (skimmed) Linux Device Drivers (3e).
    - I read a few related posts here on StackOverflow.
    - I understand that Linux has a "monolithic" approach.
    - I have built the kernel (included an existing driver in menuconfig, etc.).
    - I know the basics of the Kconfig and Makefile files, so that should not be a problem.

    Can someone describe the structure (i.e. the inter-links) of the various folders in the kernel source code? In other words, given a source code file, which other files would it refer to for related code (the "#include"s provide a partial idea)? Could someone please help me in getting a better idea? Any help will be greatly appreciated. Thank you.

  • Problem reintegrating a branch into the trunk in Subversion 1.5

    - by pako
    I'm trying to reintegrate a development branch into the trunk in my Subversion 1.5 repository. I merged all the changes from the trunk to the development branch prior to this operation. Now when I try to reintegrate the changes from the branch I get the following error message:

        Command: Reintegrate merge https://dev/svn/branches/devel into C:\trunk
        Error: Reintegrate can only be used if revisions 280 through 325 were previously
        Error: merged from https://dev/svn/trunk to the reintegrate
        Error: source, but this is not the case:
        Error: branches/devel/images/test
        Error: Missing ranges: /trunk/images/test:280-324
        ...

    The message then goes on complaining about some folders in my project. But when I try to merge the changes from the trunk to the development branch again, TortoiseSVN tells me that there's nothing to merge (as I already merged all the changes before):

        Command: Merging revisions 1-HEAD of https://dev/svn/trunk into C:\devel, respecting ancestry
        Completed: C:\devel

    I'm trying to follow the instructions from here: http://svnbook.red-bean.com/en/1.5/svn.branchmerge.basicmerging.html, but there's nothing about solving such a problem. Any ideas? Perhaps I should just delete the trunk and then make a copy of my branch? But I'm not really sure if it's safe.

  • In need of help with setting up the open source library JFreeChart

    - by ssbellows
    I am having trouble with setting up the open source library JFreeChart for creating charts using Java. This is the process I have followed so far in trying to set it up:

    - I downloaded the latest version from their download page http://sourceforge.net/projects/jfreechart/files/.
    - I then unpacked the jfreechart-1.0.13.zip in the directory C:\JFreeChart\jfreechart-1.0.13\ on my system drive. In the unpacked directory there is a folder entitled "lib" which contains the packaged .jar files specified as necessary to use JFreeChart.
    - I added the following directory to my classpath: C:\JFreeChart\jfreechart-1.0.13\lib\
    - I then created a simple program and added the line "import org.jfree.chart.*;" to see if it would compile with a package imported from JFreeChart.
    - I navigated to the folder in which my sample program was contained and compiled with the following command: "javac -classpath C:\ Program.java"
    - I was given the following error: "package org.jfree.chart does not exist"

    Could someone please give me some input as to what I have done incorrectly in this setup process? This is the first time I've tried using an open source library, so I don't have any prior experience to go on myself. Thank you very much in advance.

  • ASP.Net MVC Object Reference in Edit View when using DropDownListFor()

    - by hermiod
    This question is related to another I asked recently; it can be found here for some background information. Here is the code in the Edit ActionResult:

        public virtual ActionResult Edit(int id)
        {
            // Set data for DropDownLists.
            ViewData["MethodList"] = tr.ListMethods();
            ViewData["GenderList"] = tr.ListGenders();
            ViewData["FocusAreaList"] = tr.ListFocusAreas();
            ViewData["SiteList"] = tr.ListSites();
            ViewData["TypeList"] = tr.ListTalkbackTypes();
            ViewData["CategoryList"] = tr.ListCategories();

            return View(tr.GetTalkback(id));
        }

    I add lists to the ViewData to use in the dropdown lists; these are all IEnumerable and are all returning values. GetTalkback() returns an Entity Framework object of type Talkback, which is generated from the Talkback table. The DropDownListFor code is:

        <%: Html.DropDownListFor(model => model.method_id,
                new SelectList(ViewData["MethodList"] as IEnumerable<SelectListItem>,
                               "Value", "Text", Model.method_id)) %>

    The record I am viewing has values in all fields. When I click submit on the view, I get an "Object reference not set to an instance of an object" error on the above line. There are a number of standard fields in the form prior to this, so the error is only occurring on dropdown lists, and it is occurring on all of them. Any ideas? This is my first foray into MVC, C#, and Entity, so I am completely lost!
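
    For context, a very common cause of this particular null reference is that the [HttpPost] Edit action renders the view again (for example when validation fails) without repopulating ViewData, so the lists the view expects are null on the round trip. A sketch of what that repopulation might look like, reusing the repository calls from the question but otherwise assumed:

        [HttpPost]
        public virtual ActionResult Edit(int id, FormCollection form)
        {
            // Repopulate the lists before this view is rendered again;
            // otherwise ViewData["MethodList"] etc. are null on postback.
            ViewData["MethodList"] = tr.ListMethods();
            ViewData["GenderList"] = tr.ListGenders();
            ViewData["FocusAreaList"] = tr.ListFocusAreas();
            ViewData["SiteList"] = tr.ListSites();
            ViewData["TypeList"] = tr.ListTalkbackTypes();
            ViewData["CategoryList"] = tr.ListCategories();

            var talkback = tr.GetTalkback(id);
            // ... bind the posted values and save (details omitted) ...
            return View(talkback);
        }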

  • Creating a UIImage from a rotated UIImageView...

    - by Magic Bullet Dave
    I have a UIImageView with an image in it. I have rotated the image prior to display by setting the transform property of the UIImageView to CGAffineTransformMakeRotation(angle), where angle is the angle in radians. I want to be able to create another UIImage that corresponds to the rotated version that I can see in my view. I am almost there; by rotating the image context I get a rotated image:

        - (UIImage *)rotatedImageFromImageView:(UIImageView *)imageView
        {
            UIImage *rotatedImage;

            // Get the width and height of the bounding rectangle
            CGRect boundingRect = [self getBoundingRectAfterRotation:imageView.bounds byAngle:angle];

            // Create a graphics context the size of the bounding rectangle
            UIGraphicsBeginImageContext(boundingRect.size);
            CGContextRef context = UIGraphicsGetCurrentContext();

            // Rotate and translate the context
            CGAffineTransform ourTransform = CGAffineTransformIdentity;
            ourTransform = CGAffineTransformConcat(ourTransform, CGAffineTransformMakeRotation(angle));
            CGContextConcatCTM(context, ourTransform);

            // Draw the image into the context
            CGContextDrawImage(context,
                               CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height),
                               imageView.image.CGImage);

            // Get an image from the context
            rotatedImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];

            // Clean up
            UIGraphicsEndImageContext();

            return rotatedImage;
        }

    However, the image is not rotated about its centre. I have tried all kinds of transforms concatenated with my rotation to get it to rotate around the centre, but to no avail. Am I missing a trick? Is this even possible, since I am rotating the context, not the image? Getting desperate to make this work now, so any help would be appreciated. Dave

  • Using jQuery to grab the content from CKEditor's iframe

    - by miCRoSCoPiC_eaRthLinG
    Hey guys, I have this custom-written CMS that uses CKEditor (FCKeditor v3) for editing content. Along with that I'm using the jQuery Validation plugin to check all fields for errors prior to AJAX-based submission. For passing on the data to the PHP backend, I'm using the serialize() function. Problem is, serialize manages to grab all other fields correctly except for the actual content typed in CKEditor. Like every other WYSIWYG editor, this one too overlays an iframe over an existing textbox, and serialize ignores the iframe and looks only into the textbox for content... which of course it doesn't find, thus returning a blank content body. My approach to this is to create a hook onto the onchange event of CKEditor and concurrently update the textbox (CKEDITOR.instances[textboxname].getData() returns the content) or some other hidden field with any changes made in the editor. However, since CKEditor is still in its beta stage and severely lacks documentation, I cannot seem to find a suitable API call that'll enable me to do so. Does anyone have any idea on how to go about this? Or maybe suggest even a better method?

  • Does HDC use alpha channel?

    - by Crend King
    Hello. Is there a way I can determine if an HDC uses an alpha channel? I read Question 333559 and Question 685684, but their questions are about BITMAPs. Apparently, some HDCs have an alpha channel (though they may not use it; call this "Type 1") while others do not ("Type 2"). I know this by doing the following:

    - Given an HDC, create a compatible DC and create a DIB section. Select the created HBITMAP into the compatible DC. BitBlt the source HDC to the compatible DC. Now examine the DIB section bits. For Type 2 HDCs, after every 3 bytes there is a byte that is always 0 (like 255 255 255 0); for Type 1, these bytes are usually 255 (like 250 240 230 255). To avoid false positives, I memset the bits to all 0x80 prior to the calls.
    - Use GetDIBits directly on the source HDC, specifying the HBITMAP as GetCurrentObject(hdc, OBJ_BITMAP). For both types of HDC, the 4th bytes are always 0.
    - Change the DC bitmap by calling ExtTextOut. For Type 2, ExtTextOut always sets the 4th bytes to 0. For Type 1, ExtTextOut always leaves them untouched.

    I also noticed that source HDCs created by APIs (CreateCompatibleDC(), BeginPaint(), ...) are always Type 2. Type 1 HDCs come from standard controls (like menu text). Even the HDC I CreateCompatibleDC from a Type 1 becomes a Type 2. So, on one hand, I'm frustrated that Microsoft does not provide equal information to developers (another example may be that you cannot know the direction of an HBITMAP after it is created); on the other hand, I'm still wondering whether there is a way to distinguish these HDCs. Thanks for the help.

  • Hiding a UINavigationController's UIToolbar during viewWillDisappear:

    - by Nathan de Vries
    I've got an iPhone application with a UITableView menu. When a row in the table is selected, the appropriate view controller is pushed onto the application's UINavigationController stack. My issue is that the MenuViewController does not need a toolbar, but the UIViewControllers which are pushed onto the stack do. Each UIViewController that gets pushed calls setToolbarHidden:animated: in viewDidAppear:. To hide the toolbar, I call setToolbarHidden:animated: in viewWillDisappear:. Showing the toolbar works, such that when the pushed view appears the toolbar slides up and the view resizes correctly. However, when the back button is pressed the toolbar slides down but the view does not resize. This means that there's a black strip along the bottom of the view as the other view transitions in. I've tried adding the toolbar's height to the height of the view prior to hiding the toolbar, but this causes the view to be animated during the transition so that there's still a black bar. I realise I can manage my own UIToolbar, but I'd like to use UINavigationController's built-in UIToolbar for convenience. This forum post mentions the same issue, but no workaround is given.

  • Multiple undo managers for a text view

    - by Rui Pacheco
    Hi, I've a text view that gets its content from an attributed string stored in a model object. I list several of these model objects in a drawer, and when the user clicks on one the text view swaps its content. I now need to also swap the undo manager for the text view. I initialise an undo manager on my model object and use undoManagerForTextView to return it to the text view, but something's not quite right. Strategically placed logging statements show me that everything's working as planned: on startup a new model object is initialised correctly and a non-null undo manager is always pulled by the text view. But when it comes to actually doing undo, I just can't get the behaviour I want. I open a window, type something and press cmd+z, and undo works. I open a window, type something, select a new model in the table, type something, go back to the first model and try to undo, and all I get is a beep. Something in the documentation made me raise an eyebrow, as it would mean that I can't have undo with several model objects: "The default undo and redo behavior applies to text fields and text in cells as long as the field or cell is the first responder (that is, the focus of keyboard actions). Once the insertion point leaves the field or cell, prior operations cannot be undone."

  • ASP.NET Sql Timeout

    - by Petoj
    Well, we have this ASP.NET application that we installed at a customer, but now we sometimes get a SqlException that says "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." Now the weird thing is that the exception comes instantly when I press the button, and it does not happen every time I press the button, so it's random. Any idea what I could try to pinpoint the problem? We are using the Enterprise Library Database block, if that matters... Stack trace:

        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
        at System.Data.SqlClient.SqlDataReader.get_MetaData()
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
        at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoExecuteReader(DbCommand command, CommandBehavior cmdBehavior)
        at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteReader(DbCommand command)

  • "Work stealing" vs. "Work shrugging (tm)"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging(tm)" as a load-balancing strategy? I am surprised, because work stealing seems to me to have an inherent weakness when implementing efficient fine-grained load balancing, viz.: relying on consumer processors to implement distribution (by actively stealing) begs the question of what these processors do when they find no work. None of the work-stealing references and implementations I have come across so far address this issue satisfactorily for me. They either:

    1) Manage not to disclose what they do with idle processors! [Cilk] (does anyone know?)
    2) Have all idle processors sleep and wake periodically and scatter messages to the four winds to see if any work has arrived [e.g. JAWS] (= way too latent and inefficient for me).
    3) Assume that it is acceptable to have processors "spinning" looking for work (= non-starter for me!).

    Unless anyone thinks there is a solution for this, I will move on to consider a "Work Shrugging(tm)" strategy: having the task-producing processor distribute excess load seems to me inherently capable of a much more efficient implementation. However, a quick Google didn't show up anything under the heading of "Work Shrugging", so any pointers to prior art would be welcome. tx

    Tags I would have added if I was allowed to: [work-stealing]

  • How would you answer Joel's sample programming questions?

    - by Khorkrak
    I recently interviewed a candidate for a new position here. I wish, though, that I'd read Joel's Guerrilla Guide to Interviewing prior to that interview; naturally I happened upon it the night afterwards :P http://www.joelonsoftware.com/articles/GuerrillaInterviewing3.html So I tried answering the easy questions myself. Yeah, I used the Python interpreter to type stuff in and tested the results a bit; I didn't look up any solutions beforehand, though, and I also thought about how long it took me to come up with answers for each one and what I'd look for the next time I interview someone. I'd let them type stuff into the interpreter and see how they used Python's introspection capabilities to find out things like which re module method builds a regex, etc. Here are my answers; these are in Python of course. What are yours in your favourite language? Do you see any issues with the answers I came up with, i.e. how could they be improved upon, and what did I miss? Joel's example questions:

    Write a function that determines if a string starts with an upper-case letter A-Z.

        import re

        upper_regex = re.compile("^[A-Z]")

        def starts_with_upper(text):
            return upper_regex.match(text) is not None

    Write a function that determines the area of a circle given the radius.

        from math import pi

        def area(radius):
            return pi * radius**2

    Add up all the values in an array.

        sum([1, 2, 3, 4, 5])

    Harder question: write an example of a recursive function, so how about the classic factorial one:

        def factorial(num):
            if num > 1:
                return num * factorial(num - 1)
            else:
                return 1

  • Problem with XML encoding of database contents with Latin characters

    - by user89691
    I have an ASP Access database that contains strings in various European languages. The database was populated previously by agents in the respective countries. It contains entries with accented etc. characters, as you would expect. If I open the database with MS Access these characters show up fine. For example, the German equivalent of "Open" shows as "Öffnen" (hopefully you can see an "O" with 2 dots above it!). I have ASP code that reads the database and returns records in XML. The text is passed to XMLEncode to construct the XML, but that only seems to deal with the 5 specials like "<", "&", etc. If I dump the XML, the accented characters are unchanged:

        <English>Open</English>
        <German>Öffnen</German>

    If I look at the raw packets with Wireshark I see that the "Ö" byte is hex D6, which appears to be its decimal Unicode and ISO 8859-1 value. The problem starts when I try to parse the XML in client-side JS. I get "An invalid character was found in text content" from IE. FF and Chrome happily accept the XML without hiccup, but the browser shows the "Ö" character as a diamond with a question mark inside. http://www.validome.org/xml/validate/ reports "encoding error." http://www.w3schools.com/dom/dom_validate.asp thinks it is fine. The XML declares UTF-8 encoding. What do I need to do to have IE accept my XML without complaint? What do I need to do to have browsers display the stuff correctly?

  • When does MSBuild set the $(ProjectName) property?

    - by bwerks
    I'm fairly new to MSBuild, and I've done some customization on a WPF project file that I'm building both in VS2010 and TFS2010. I've customized the output path as follows:

        <OutputPath Condition=" '$(TeamBuildOutDir)' == '' ">$(SolutionDir)build\binaries\$(ProjectName)\$(Configuration)\$(Platform)</OutputPath>
        <OutputPath Condition=" '$(TeamBuildOutDir)' != '' ">$(TeamBuildOutDir)binaries\$(ProjectName)\$(Configuration)\$(Platform)</OutputPath>

    This allows me to build to a centralized binaries directory when building on the desktop, and allows TFS to find the binaries when CI builds are running. However, it seems that in both cases the $(ProjectName) property evaluates to '' at build time, which creates strange results. Doing some debugging, it appears as if $(ProjectName) is set by the time BeforeBuild executes, but that my OutputPath property is evaluating it prior to that point.

        <ProjectNameUsedTooEarly Condition=" '$(ProjectName)' == '' ">true</ProjectNameUsedTooEarly>

    The preceding property is in the same property group as my OutputPath property. In the BeforeBuild target, $(ProjectNameUsedTooEarly) evaluates to true, but $(ProjectName) evaluates to the project name as normal by that point. What can I do to ensure that $(ProjectName) has got a value when I use it? Edit: I just used Attrice's MSBuild Sidekick to debug through my build file, and in the very first target available for a breakpoint (_CheckForInvalidConfigurationAndPlatform) all the properties seem to be set already. ProjectName is already set correctly, but my OutputPath property has already been set using the blank value of ProjectName.

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see about page on site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
