Search Results

Search found 5312 results on 213 pages for 'hand e food'.

  • Attempting to find a formula for tessellating rectangles onto a board, where the middle square can't be used

    - by timemirror
    I'm working on a spatial stacking problem. At the moment I'm trying to solve it in 2D, but I will eventually have to make this work in 3D. I divide space up into n x n squares around a central block (so n is always odd), and I'm trying to find the number of locations at which a rectangle of any dimension up to n x n (e.g. 1x1, 1x2, 2x2, etc.) can be placed, given that the middle square is not available. So far I've got this:

        total number of rectangles = ((n^2 + n)^2) / 4
        total number of squares    = (n (n+1) (2n+1)) / 6

    However, I'm stuck on a formula for how many of those locations are impossible because the middle square would be occupied. So for example, a 3x3 board:

        [ ][ ][ ]
        [ ][x][ ]
        [ ][ ][ ]

    ...has 8 squares available for storing stuff, as the mid square is in use. I can use 1x1 shapes, 1x2, 2x1, 3x1, etc. The formula gives me the number of rectangles as (9+3)^2 / 4 = 144/4 = 36 stacking locations. However, as the middle square is unoccupiable, these cannot all be realized. By hand I can see that these are the impossible options:

        1x1 shapes = 1 impossible (the mid square itself)
        2x1 shapes = 4 impossible (anything which uses the mid square)
        3x1 shapes = 2 impossible
        2x2 shapes = 4 impossible
        etc.

        Total impossible combinations = 16

    Therefore the solution I'm after is 36 - 16 = 20 possible rectangular stacking locations on a 3x3 board. I've coded this in C# to solve it through trial and error, but I'm really after a formula because I want to solve for massive values of n, and also to eventually make this 3D. Can anyone point me to formulas for this kind of spatial/tessellation problem? Any idea on how to take the total-rectangle formula into 3D is also very welcome. Thanks!
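
    For what it's worth, the 2D case does seem to admit a closed form; a sketch of the counting argument (mine, not the original poster's, so verify before relying on it): a placement is a choice of a row interval and a column interval. An interval [r1, r2] contains the centre row m = (n+1)/2 exactly when r1 <= m <= r2, which gives m choices for r1 and (n+1-m) choices for r2; the same holds for columns. In LaTeX:

        m = \frac{n+1}{2}, \qquad
        \text{blocked}(n) = \bigl[\, m\,(n+1-m) \,\bigr]^2 = \left(\frac{n+1}{2}\right)^4

        \text{free}(n) = \left(\frac{n(n+1)}{2}\right)^2 - \left(\frac{n+1}{2}\right)^4

    Check for n = 3: 6^2 - 2^4 = 36 - 16 = 20, matching the hand count above. The same argument extends to 3D, since the three axes are independent: total boxes = (n(n+1)/2)^3 and blocked placements = ((n+1)/2)^6.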

  • Java complex validation in Dropwizard?

    - by miku
    I'd like to accept JSON on a REST endpoint and convert it to the correct type for immediate validation. The endpoint looks like this:

        @POST
        public Response createCar(@Valid Car car) {
            // persist to DB
            // ...
        }

    But there are many subclasses of Car, e.g. Van, SelfDrivingCar, RaceCar, etc. How can I accept the different JSON representations on the endpoint while keeping the validation code in the Resource as concise as @Valid Car car? Again: I send in JSON like this (here, the representation of a subclass of Car, namely SelfDrivingCar):

        {
            "id" : "t1",                        // every Car has an Id
            "kind" : "selfdriving",             // every Car has a type-hint
            "max_speed" : "200 mph",            // some attribute
            "ai_provider" : "fastcarsai ltd."   // this is SelfDrivingCar-specific
        }

    and I'd like the validation machinery to look into the kind attribute, create an instance of the appropriate subclass (here, SelfDrivingCar) and perform validation. I know I could create different endpoints for each kind of car, but that does not seem DRY. And I know that I could use a real Validator instead of the annotation and do it by hand, so I'm just asking if there's an elegant shortcut for this problem.
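
    Jackson, which Dropwizard uses for (de)serialization, has built-in support for exactly this kind of type-hint dispatch. A minimal sketch, assuming a Jackson 2.x classpath; the "van" hint value and the Van mapping are invented to match the pattern above:

        import com.fasterxml.jackson.annotation.JsonProperty;
        import com.fasterxml.jackson.annotation.JsonSubTypes;
        import com.fasterxml.jackson.annotation.JsonTypeInfo;
        import org.hibernate.validator.constraints.NotEmpty;

        // The "kind" property selects which subclass Jackson instantiates.
        @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
                      include = JsonTypeInfo.As.PROPERTY,
                      property = "kind")
        @JsonSubTypes({
            @JsonSubTypes.Type(value = SelfDrivingCar.class, name = "selfdriving"),
            @JsonSubTypes.Type(value = Van.class, name = "van")
        })
        public abstract class Car {
            @NotEmpty
            public String id;
        }

        public class SelfDrivingCar extends Car {
            @JsonProperty("max_speed")
            public String maxSpeed;

            @JsonProperty("ai_provider")
            @NotEmpty
            public String aiProvider;
        }

    With that in place the endpoint can stay exactly as written: @Valid Car car arrives as the concrete subclass, and bean validation then runs against that subclass's constraints.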

  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit the dereferencing arrow at the second and deeper levels of nesting. In other words, the following two syntaxes are identical:

        my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [31, 32] };
        my $elem1 = $hash_ref->{1}->[1];
        my $elem2 = $hash_ref->{1}[1];    # exactly the same as above

    Now, my question is, is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive.

    perldoc merely says "you are free to omit the pointer dereferencing arrow".

    Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but it appears to apply only to dereferencing the main reference, not the optional arrows at the second level of nested data structures.

    "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice."

    Personally, I'm on the side of "always put the arrows in, since it's more readable and obvious you're dealing with a reference".

  • Front End Developer v/s PHP-MySQL Engineer

    - by user301943
    Hello, I want to decide which of these would be the more viable career option. I am ready to quit my current job and hence am looking for a new opportunity; the current job is maintenance, with no more active development. My current role is PHP/MySQL Developer. I understand web programming very well and am comfortable with RoR/Sinatra/Zend MVC/jQuery/JSON manipulation, etc. I understand the MySQL InnoDB/MyISAM engines and how one differs from the other. Basically, I can manage the deployment of a web application end-to-end, including configuration of Apache/Nginx servers, memcache, etc.

    On the other hand, I am being offered a Sr. Front End Web Developer role that would require me to extensively write cross-browser/cross-platform compliant HTML/CSS. I understand XHTML/CSS/the box model very well, and I would be working on Drupal for the management of websites.

    While I understand that continuing to work on server-side technologies would always be a good career path, how would the role of core front-end developer turn out? If I take this opportunity, will I eventually get a chance to focus on UCD, HCI, Information Architecture, etc.? Are those kinds of roles possible if I focus on front-end development? No offense to the front-end developers; I just want to understand if this is something I want to gain mastery over. I have 2 years of industry experience after graduating with an MS in Computer Science. Although I have a CS degree, if I were to take up a serious front-end role, I could probably go back and take some design/HCI/UI courses. Please advise.

  • Source Control Manager Backend

    - by Gabriel Parenza
    Hi Friends, what do you think is the better backend approach for a source control manager? I am weighing the file system vs. a hosted Subversion service (my company already has another group taking care of the latter).

    Hosted Subversion advantages:

    - Zero maintenance on our end
    - Auto-backup and recovery
    - Reliability through auto-backup and file redundancy
    - File history view built in, file merge, file diff

    On the other hand, while the file system does not have the features mentioned above, it is much simpler. Moreover, if files are hosted on a Linux machine which is backed up, that takes care of file-system crash issues. Subversion will need working copies, which are going to be on this same Linux machine, hence the wish to not have an extra layer.

    Folks, I am looking for stronger reasons why I should take Subversion instead of keeping things simple and going with the file system. Let me know your opinions. Many thanks in advance, Gabriel.

    PS: I have explored a few commercial source control managers, and have decided to go this route as it better suits our needs.

  • Getting started with SVG graphics objects in JSF 2.0 pages.

    - by AlanObject
    What I want to do is create web pages with interactive SVG content. I had this working as a Java desktop application, using Batik to render my SVG and collect UI events like mouse clicks. Now I want to use those SVG graphics files in my JSF (PrimeFaces) web application in the same way. Trying to get started, I found this didn't work:

        <h:graphicImage id="gloob" value="images/sprinkverks.svg" alt="Graphic Goes Here"/>

    I don't mind doing some reading to get up the learning curve; it was just a bit surprising that some Google searches didn't turn up anything useful. What I did find suggested that I would have to do this with the f:verbatim tag, as if I were hand-coding the HTML, and then add some script to capture the SVG events and feed them back into the AJAX code. If I have to do all that I will, but I was hoping there would be an easier, more automated way. So the questions are:

    - How do I get the image to render in the first place?
    - How do I get the DOM events from the SVG portion of the page back to the backing beans?

    Much thanks for any pointers.
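
    For the rendering half, one common approach outside of the JSF component set is the plain HTML object element, which (unlike an img-style tag) keeps the SVG's own DOM and scripts live. A sketch, reusing the same path; the width/height are invented:

        <object data="images/sprinkverks.svg" type="image/svg+xml"
                width="400" height="300">
            Graphic Goes Here <!-- fallback if SVG is unsupported -->
        </object>

    Script inside the SVG can then attach listeners (e.g. onclick on shapes) and call back into the hosting page, which is where a hand-rolled AJAX hook to the backing bean would live. Treat this as a starting point rather than the JSF-blessed route.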

  • Javascript :(….. Oh!! So its jquery? Now what?? I’m a C# Guy

    - by Shekhar_Pro
    Hi guys, I want you to guide me here. The other day I was working out some AJAX for my ASP.NET website, and handling the client-side code in JavaScript was taking the hell out of me. Then I got my hands on the book jQuery in Action, 2nd Edition, and solved my problem with the help of example code in the book. As I checked the contents, I got an overview that whatever I had ever thought of doing can be done with jQuery so easily and quite cleanly.

    I am actually pretty new to web development (about 4 months) and come from the C# world, where we have cool libraries and a simple and elegant coding style (yeah, including those generics, IEnumerable, lambdas, chained statements... you got it...) and you know what you're doing when writing some code. We have great IntelliSense to care for us, and above all we have everything strongly typed. But in JavaScript everything is so messy (and I don't know why pages are not properly indented... see any page source).

    Now tell me what I should do: go straight to jQuery, or first learn JavaScript (like a disciplined boy... I even have a book for that too, got as a gift :) ...). I have seen "Is it a good idea to learn JavaScript before learning jQuery?" but remember, I already have a project on my hands...

  • Evaluation of environment variables in command run by Java's Runtime.exec()

    - by Tom Duckering
    Hi, I have a scenario where I have a Java "agent" that runs on a couple of platforms (specifically Windows, Solaris & AIX). I'd like to factor out the differences in filesystem structure by using environment variables in the command line I execute. As far as I can tell, there is no way to get the Runtime.exec() method to resolve/evaluate environment variables referenced in the command String (or array of Strings).

    I know that, if push comes to shove, I can write some code to pre-process the command String(s) and resolve environment variables by hand (using System.getenv() etc.). However, I'm wondering if there is a smarter way to do this, since I'm sure I'm not the only person wanting it, and I'm sure there are pitfalls in "knocking up" my own implementation. Your guidance and suggestions are most welcome.

    Edit: I would like to refer to environment variables in the command string using some consistent notation such as $VAR and/or %VAR%. Not fussed which.

    Edit: To be clear, I'd like to be able to execute a command such as:

        perl $SCRIPT_ROOT/somePerlScript.pl args

    on Windows and Unix hosts using Runtime.exec(). I specify the command in a config file that describes a list of jobs to run, and it has to work cross-platform, hence my thought that an environment variable would be useful to factor out the filesystem differences (/home/username/scripts vs C:\foo\scripts). Hope that helps clarify it. Thanks, Tom
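
    For the "by hand" route, a minimal sketch of the pre-processing step (the class name is mine, and unset variables expand to an empty string here, which may or may not be the right policy):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public final class EnvExpander {
            // Matches $NAME or %NAME% style references.
            private static final Pattern VAR = Pattern.compile("\\$(\\w+)|%(\\w+)%");

            /** Replaces each $VAR / %VAR% with its value from System.getenv(). */
            public static String expand(String command) {
                Matcher m = VAR.matcher(command);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    String name = (m.group(1) != null) ? m.group(1) : m.group(2);
                    String value = System.getenv(name);
                    m.appendReplacement(out,
                            Matcher.quoteReplacement(value == null ? "" : value));
                }
                m.appendTail(out);
                return out.toString();
            }
        }

        // e.g. Runtime.getRuntime().exec(EnvExpander.expand(
        //          "perl $SCRIPT_ROOT/somePerlScript.pl args"));

    Expanding each element of a String[] command separately avoids the usual quoting and tokenizing pitfalls of the single-string exec form.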

  • What makes static initialization functions good, bad, or otherwise?

    - by Richard Levasseur
    Suppose you had code like this:

        _READERS = None
        _WRITERS = None

        def Init(num_readers, reader_params, num_writers, writer_params, *args):
            global _READERS, _WRITERS
            # ...logic...
            _READERS = ReaderPool(num_readers, reader_params)
            _WRITERS = WriterPool(num_writers, writer_params)
            # ...more logic...

        class Doer:
            def __init__(self, *args):
                ...

            def Read(self, *args):
                c = _READERS.get()
                try:
                    ...  # work with conn
                finally:
                    _READERS.put(c)

            def Write(self, *args):
                ...  # similar to Read()

    To me, this is a bad pattern to follow. Some cons:

    - Doers can be created without their preconditions being satisfied.
    - The code isn't easily testable because ConnPool can't be directly mocked out.
    - Init has to be called right the first time. If it's changed so it can be called multiple times, extra logic has to be added to check whether the variables are already defined, and lots of NULL values have to be passed around to skip re-initializing.
    - In the presence of threads, the above becomes more complicated by the need for locking.
    - Globals are being used to communicate state (which isn't strictly bad, but a code smell).

    On the other hand, some pros:

    - It's very convenient to call Init(5, "user/pass", 2, "user/pass").
    - It's simple and "clean".

    Personally, I think the cons outweigh the pros; that is, testability and assured preconditions outweigh simplicity and convenience.
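
    For contrast, a sketch of the injected shape the cons point toward (names mine): the pools are built once at the composition root and handed to each Doer, so construction itself enforces the precondition and tests can pass in fakes.

        class Doer:
            def __init__(self, readers, writers):
                # A Doer cannot exist without its pools.
                self._readers = readers
                self._writers = writers

            def read(self):
                c = self._readers.get()
                try:
                    return c.fetch()  # stand-in for the real work
                finally:
                    self._readers.put(c)

        # Wiring stays nearly as convenient as Init(...):
        # doer = Doer(ReaderPool(5, "user/pass"), WriterPool(2, "user/pass"))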

  • ASP.NET MVC: what mechanic returns ViewModel objects?

    - by Dr. Zim
    As I understand it, domain models are classes that only describe the data (aggregate roots). They are POCOs and do not reference outside libraries (nothing special). View models, on the other hand, are classes that contain domain model objects as well as all the interface-specific objects like SelectList; a view model's file includes "using System.Web.Mvc;". A repository pulls data out of a database and feeds it to us through domain model objects.

    What mechanic or device creates the view model objects, populating them from a database? Would it be a factory that has database access? Would you bleed view-specific classes like those in System.Web.Mvc into the repository? Something else? For example, if you have a drop-down list of cities, you would reference a SelectList object in the root of your view model object, right next to your domain model reference:

        public class CustomerForm
        {
            public CustomerAddress address { get; set; }
            public SelectList cities { get; set; }
        }

    The cities should come from a database and be in the form of a SelectList object. The hope is that you don't have to create a special repository method to extract just the distinct cities, and then create a redundant second SelectList object, only so you have the right data types.
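
    One common answer to "what mechanic" is a small builder (sometimes called a mapping service) that lives in the web layer, so it may reference System.Web.Mvc while the repository stays UI-free. A sketch with invented names:

        // Web layer: allowed to know about SelectList.
        public class CustomerFormBuilder
        {
            private readonly ICustomerRepository repo;

            public CustomerFormBuilder(ICustomerRepository repo)
            {
                this.repo = repo;
            }

            public CustomerForm Build(int customerId)
            {
                return new CustomerForm
                {
                    address = repo.GetAddress(customerId),        // domain object
                    cities = new SelectList(repo.GetCityNames())  // UI wrapper
                };
            }
        }

    The repository still has to hand back the distinct cities somehow, but it returns plain data (e.g. IEnumerable<string>), and only the builder wraps it in a SelectList.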

  • toggling proximity sensor on iPhone loses an event

    - by slugolicious
    I'm using setProximitySensingEnabled and have implemented proximityStateChanged in my UIApplication subclass. It looks like, if sensing is toggled, the first "off" event is lost. My UIApplication class is pretty basic:

        - (void)proximityStateChanged:(BOOL)state {
            NSLog(state ? @"ON" : @"OFF");
        }

    In my application delegate, I have a UISwitch that enables/disables the proximity sensor ("prox" is my UISwitch):

        - (IBAction)toggleProxy:(id)sender {
            [UIApplication sharedApplication].proximitySensingEnabled = prox.on;
        }

    The test works fine when it first starts. I tap the switch to turn it on, put my hand over the sensor for a second, then move it away, and get:

        2009-03-11 12:43:00.465 Proximity[324:20b] ON
        2009-03-11 12:43:02.514 Proximity[324:20b] OFF
        2009-03-11 12:43:04.046 Proximity[324:20b] ON
        2009-03-11 12:43:05.621 Proximity[324:20b] OFF

    I then tap the switch to turn it off, then tap again to turn it on. Now I get:

        2009-03-11 12:43:12.005 Proximity[324:20b] ON
        2009-03-11 12:43:14.789 Proximity[324:20b] ON
        2009-03-11 12:43:16.467 Proximity[324:20b] OFF
        2009-03-11 12:43:17.516 Proximity[324:20b] ON
        2009-03-11 12:43:19.077 Proximity[324:20b] OFF

    Notice I get two ONs before an OFF; the OFF is lost somewhere. I can't replicate this behavior in Google's mobile app, so I'm wondering if they're resetting something in between enabling proximity sensing. They don't have the proximity sensor on all the time, because if you cover the sensor the screen doesn't go blank: you have to tilt the phone up and angle it back (to simulate the position it would be in at your ear), and then covering the sensor works.

    Anyone else playing with the sensor? In my particular app, I'm recording a voice message, and when you move the phone away from your ear I want to pause the recording (when I get an OFF). The first time I move the phone away from my ear, the recording is not paused. However, if I put it to my ear and move it away again, it is paused.

  • Binding a member signal to a function

    - by the_drow
    This line of code compiles without a problem:

        boost::bind(boost::ref(connected_),
                    boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
                    boost::asio::placeholders::error);

    However, when assigning it to a boost::function, or passing it as a callback like this:

        socket_->async_connect(connection_->remote_endpoint(),
            boost::bind(boost::ref(connected_),
                        boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
                        boost::asio::placeholders::error));

    I get a whole bunch of incomprehensible errors (linked elsewhere, since the output is too long to fit here). On the other hand, I have succeeded in binding a free-standing signal to a callback like this:

        void print(const boost::system::error_code& error) {
            cout << "session connected";
        }

        int main() {
            boost::signal<void(const boost::system::error_code &)> connected_;
            connected_.connect(boost::bind(&print, boost::asio::placeholders::error));

            // shared_ptr of a tcp socket
            client<>::connection_t::socket_ptr socket_(
                new client<>::connection_t::socket_t(conn->service()));

            socket_->async_connect(conn->remote_endpoint(),
                boost::bind(boost::ref(connected_), boost::asio::placeholders::error));

            conn->service().run();  // io_service.run()
            return 0;
        }

    This works and prints "session connected" correctly. What am I doing wrong here?

  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open-source NoSQL data stores include:

    - Cassandra (tabular, written in Java; used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit)
    - CouchDB (document, written in Erlang; used by Engine Yard and BBC)
    - Dynomite (key-value, written in C++; used by Powerset)
    - HBase (key-value, written in Java; used by Bing)
    - Hypertable (tabular, written in C++; used by Baidu)
    - Kai (key-value, written in Erlang)
    - MemcacheDB (key-value, written in C; used by Reddit)
    - MongoDB (document, written in C++; used by Sourceforge, Github, Electronic Arts and NY Times)
    - Neo4j (graph, written in Java; used by Swedish universities)
    - Project Voldemort (key-value, written in Java; used by LinkedIn)
    - Redis (key-value, written in C; used by Engine Yard, Github and Craigslist)
    - Riak (key-value, written in Erlang; used by Comcast and Mochi Media)
    - Ringo (key-value, written in Erlang; used by Nokia)
    - Scalaris (key-value, written in Erlang; used by OnScale)
    - ThruDB (document, written in C++; used by JunkDepot.com)
    - Tokyo Cabinet/Tokyo Tyrant (key-value, written in C; used by Mixi.jp, a Japanese social networking site)

    I'd like to know about specific problems you, the SO reader, have solved using such data stores, and which NoSQL data store you used. Questions:

    - What scalability problems have you used NoSQL data stores to solve?
    - What NoSQL data store did you use?
    - What database did you use before switching to a NoSQL data store?

    I'm looking for first-hand experiences, so please do not answer unless you have them.

  • Authlogic and password and password confirmation attributes - inaccessible?

    - by adam
    I'm trying to test that a new user is successfully created after login (using Authlogic). I've added a couple of new fields to the user, so I just want to make sure the user is saved properly. The problem is that, despite creating a valid user factory, whenever I try to grab its attributes to post to the create method, password and password_confirmation are omitted. I presume this is a security measure that Authlogic performs in the background. This results in validations failing and the test failing. I'm wondering how I get round this problem; I could just type the attributes out by hand, but that doesn't seem very DRY.

        context "on POST to :create" do
          context "on posting a valid user" do
            setup do
              @user = Factory.build(:user)
              post :create, :user => @user.attributes
            end

            should "be valid" do
              assert @user.valid?
            end

            should_redirect_to("users sentences index page") { sentences_path() }

            should "add user to the db" do
              assert User.find_by_username(@user.username)
            end
          end
        end

        ## User factory
        Factory.define :user do |f|
          f.username { Factory.next(:username) }
          f.email { Factory.next(:email) }
          f.password_confirmation "password"
          f.password "password"
          f.native_language { |nl| nl.association(:language) }
          f.second_language { |nl| nl.association(:language) }
        end
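
    One way around the scrubbing, for what it's worth: build the params hash straight from the factory definition instead of from a built instance, so the passwords never pass through attribute filtering. A sketch, assuming factory_girl's attributes_for (note it does not populate associations, so the two language associations would need to be merged in by hand):

        context "on POST to :create" do
          setup do
            @attrs = Factory.attributes_for(:user)  # still includes password fields
            post :create, :user => @attrs
          end

          should "add user to the db" do
            assert User.find_by_username(@attrs[:username])
          end
        end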

  • Table design issues - should I create separate fields or store as a blob

    - by Ali
    Hi guys, I'm working on my web-based ordering system, and we would like to maintain a kind of task history for each of our orders: a log of who did what on an order. Let's say an order has been entered; we would like to know whether the order was acknowledged, for example, or whether somebody followed up on it, etc. Considering that there are numerous situations like this for each order, would it be wise to create a schema along these lines?

        Orders
        ------
        ID - title - description - date - is_ack - is_follow - ack_by .....

    That amounts to a lot of fields. On the other hand, I could have one LongText field called 'history' and fill it with a serialised object holding all the information. However, in the latter case I can't run a query to, let's say, retrieve all orders that have not been acknowledged, and stuff like that. With time, requirements will change, and I will be required to modify the schema to allow for more detailed tracking; that is why I need to set up something that is feasible to scale, yet I don't want to be too restricted on the SQL side.
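
    A third option worth weighing (a sketch; the table and column names are invented): a separate event table keeps the history queryable, and new kinds of actions become new rows rather than new columns, so no schema change is needed as requirements grow.

        CREATE TABLE order_events (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            order_id   INT UNSIGNED NOT NULL,
            action     VARCHAR(32)  NOT NULL,  -- 'acknowledged', 'followed_up', ...
            actor      VARCHAR(64)  NOT NULL,  -- who did it
            created_at DATETIME     NOT NULL,
            KEY idx_order (order_id),
            KEY idx_action (action)
        );

        -- "all orders that have not been acknowledged":
        SELECT o.id
        FROM orders o
        LEFT JOIN order_events e
               ON e.order_id = o.id AND e.action = 'acknowledged'
        WHERE e.id IS NULL;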

  • 'Must Override a Superclass Method' Errors after importing a project into Eclipse

    - by Tim H
    Any time I have to re-import my projects into Eclipse (if I reinstalled Eclipse or changed the location of the projects), almost all of my overridden methods are flagged as incorrect, causing the error 'The method ?????????? must override a superclass method'. It may be worth mentioning that these are Android projects: for whatever reason, the method argument names are not always populated, so I have to populate them myself. For instance, this:

        list.setOnCreateContextMenuListener(new OnCreateContextMenuListener() {
            public void onCreateContextMenu(ContextMenu menu, View v,
                                            ContextMenuInfo menuInfo) {
                // These arguments have their correct names
            }
        });

    will initially be populated like this:

        list.setOnCreateContextMenuListener(new OnCreateContextMenuListener() {
            public void onCreateContextMenu(ContextMenu arg1, View arg2,
                                            ContextMenuInfo arg3) {
                // This method's arguments were not automatically provided
            }
        });

    The odd thing is, if I remove my code and have Eclipse recreate the method, it uses the same argument names I already had, so I don't really know where the problem is, other than in how it auto-formats the method for me. It becomes quite a pain having to manually recreate ALL my overridden methods by hand. If anyone can explain why this happens or how to fix it, I would be very happy. Maybe it is due to the way I am formatting the methods, which are inside an argument of another method?

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business-rule validations), the persistence layer (NHibernate mapping file) and the DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15-developer, 3-year project.

    I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with the Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and generate the edmx files from the class model? That's the inverse of the common path, which is to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties, so I tend to view it as the better starting point.

  • In C# should I reuse a function / property parameter to compute a temp result, or create a temporary variable?

    - by Hamish Grubijan
    The example below may not be problematic as is, but it should be enough to illustrate the point. Imagine that there is a lot more work than trimming going on.

        public string Thingy
        {
            set
            {
                // I guess we can throw a null reference exception here on null.
                value = value.Trim();  // Well, imagine that there is so much processing to do
                this.thingy = value;   // that this.thingy = value.Trim() would not fit on one line ...
            }
        }

    So, if the assignment has to take two lines, then I either have to abuse/reuse the parameter or create a temporary variable. I am not a big fan of temporary variables; on the other hand, I am not a fan of convoluted code. I did not include an example where a function is involved, but I am sure you can imagine it.

    One concern I have: if a function accepted a string and the parameter was "abused", and then someone changed the signature to ref in both places, this ought to mess things up. But who would knowingly make such a change if it already worked without a ref? It seems like it is their responsibility in that case. If I mess with the value of value, am I doing something non-trivial under the hood? If you think that both approaches are acceptable, which do you prefer, and why? Thanks.
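
    On the "under the hood" worry: absent ref/out, a parameter (including a setter's value) is passed by value, so reassigning it only rebinds a local slot and the caller never notices. A small sketch:

        using System;

        class Demo
        {
            // 's' is passed by value: reassigning it never affects the caller.
            static void Shout(string s)
            {
                s = s.ToUpper();       // rebinds the local parameter only
                Console.WriteLine(s);  // HELLO
            }

            // With 'ref', the reassignment would flow back to the caller; but
            // call sites must also opt in with 'ref', so it cannot happen
            // silently behind anyone's back.
            static void ShoutRef(ref string s)
            {
                s = s.ToUpper();
            }

            static void Main()
            {
                string msg = "hello";
                Shout(msg);
                Console.WriteLine(msg);  // still "hello"

                ShoutRef(ref msg);
                Console.WriteLine(msg);  // now "HELLO"
            }
        }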

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is said to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;

            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK: enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that creates the need for an additional system call. Thus, the use of MSG_MORE seems more appropriate to me; I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets will be flushed automatically if they are large enough:

        If an application sets that option on a socket, the kernel will not
        send out short packets. Instead, it will wait until enough data has
        shown up to fill a maximum-size packet, then send it. When TCP_CORK
        is turned off, any remaining data will go out on the wire.

    But that passage only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities (see the sketch after this question):

    - Call send() with an empty buffer and without MSG_MORE set
    - Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected; obviously, running the server through `strace' is not an option. So would the simplest way be to use `netcat' and then look at its `strace' output? Or will the kernel handle traffic transmitted over a loopback interface differently?
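
    For reference, a sketch of the two flush options listed above (Linux-specific, error handling elided; the function names are mine):

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        /* Option 1: send the final chunk WITHOUT MSG_MORE; the flag is meant
         * to be set on every call except the last one in a batch. */
        static ssize_t send_last(int fd, const void *buf, size_t len) {
            return send(fd, buf, len, MSG_NOSIGNAL);  /* no MSG_MORE here */
        }

        /* Option 2: toggle TCP_CORK; clearing it pushes out whatever the
         * kernel is still holding back. */
        static void uncork(int fd) {
            int off = 0;
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
        }

    As for observing the result, a packet capture (tcpdump or Wireshark, which both work on the loopback interface) shows the actual segment boundaries on the wire, which strace on either end cannot.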

  • WPF Grid Row / Column Sizing in Proportion to DesiredSize?

    - by sinibar
    I have two user controls arranged vertically in a grid, both of which can expand to be taller than the grid can accommodate. I've put each of them in a ScrollViewer, which functionally works. What I want, though, is to give them space in proportion to the amount that they want at run time. So if there are 500 units of height available, and the upper control wants 400 and the lower 600, the upper control would get 200 and the lower 300. I have no idea at design time how much space each will want in proportion to the other, so using 1*, 2*, etc. for row heights won't work for me. I can hand-code run-time proportional sizing, but am I missing a simple trick in XAML that would get me what I want? The context is as follows (trimmed for brevity):

        <Grid>
            <TabControl>
                <TabItem>
                    <Grid>
                        <Grid>
                            <Grid.RowDefinitions>
                                <RowDefinition Height="Auto"/>
                                <RowDefinition Height="*"/>
                                <RowDefinition Height="*"/>
                            </Grid.RowDefinitions>
                            <GroupBox Grid.Row="0" Header="Title Area" />
                            <ScrollViewer Grid.Row="1" VerticalScrollBarVisibility="Auto">
                                <UserControl />
                            </ScrollViewer>
                            <ScrollViewer Grid.Row="2" VerticalScrollBarVisibility="Auto">
                                <UserControl />
                            </ScrollViewer>
                        </Grid>
                    </Grid>
                </TabItem>
            </TabControl>
        </Grid>
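
    If the hand-coded route ends up being necessary, it can stay small, because star heights are relative and accept any positive number. A sketch (it assumes x:Name="upperRow"/"lowerRow" on the two starred RowDefinitions and names on the ScrollViewers' contents; those names are mine):

        // Measure both children unconstrained to learn what they want...
        var infinite = new Size(double.PositiveInfinity, double.PositiveInfinity);
        upperControl.Measure(infinite);
        lowerControl.Measure(infinite);

        // ...then hand out star weights in that ratio. 400* : 600* splits
        // 500 units of available height as 200 / 300 automatically.
        upperRow.Height = new GridLength(upperControl.DesiredSize.Height,
                                         GridUnitType.Star);
        lowerRow.Height = new GridLength(lowerControl.DesiredSize.Height,
                                         GridUnitType.Star);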

  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about a rather unexpected/undesired behavior of a package we use. Although there is an easy fix (or at least a workaround) on our end, without any apparent side effects, he strongly suggested extending the relevant code by hard-patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact, we maintain patches against specific versions of several packages, applied automatically on each new build. His main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch.

    On the other hand, I favor practicality over purity, and my general rule of thumb is "no patch" > "monkey patch" > "hard patch", at least for anything other than a (critical) bug fix.

    So I'm wondering if there is a consensus on when it's better to hard-patch, monkey-patch, or just work around a third-party package that doesn't do exactly what one would like. Does it mainly depend on the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature)? On the given package (size, complexity, maturity, developer responsiveness)? On something else? Or are there no general rules, and one should decide on a case-by-case basis?

  • Is there a more useful explanation for UITableViewStylePlain?

    - by mystify
    From the docs:

        In the plain style, section headers and footers float above the content
        if the part of a complete section is visible. A table view can have an
        index that appears as a bar on the right hand side of the table (for
        example, "a" through "z"). You can touch a particular label to jump to
        the target section.

    I find that very hard to grasp. First, this phrase: "if the part of a complete section is visible". What do they mean by this? It reads like a paradox. Which one is it?

    A) The table must be exactly the height of the section. If I have 5 rows, each 50px high, I must make the table 5*50 high; the full section must be visible on the screen. Otherwise, if I have 100 rows but my table view is only 400 high, this will not apply and nothing will float above my content. Sounds wrong.

    B) It doesn't matter how high my table view actually is; headers and footers float above the content and I can scroll the section. Makes more sense, but it's completely at odds with that nonsensical phrase "if the part of a complete section is visible".

    Can anyone explain it better than the docs did?

  • What is the proper way to handle non-tracking self tracking entities?

    - by Will
    Self-tracking entities. Awesome. Except when you do something like:

        return Db.Users;

    ...none of the self-tracking entities are tracking (until, possibly, they are deserialized). Fine. So we have to recognize the possibility that an entity returning to us does not have tracking enabled. Now what??? Things I have tried, for the given method body:

        using (var db = new Database())
        {
            if (update.ChangeTracker.ChangeTrackingEnabled)
                db.Configurations.ApplyChanges(update);
            else
                FigureItOut(update, db);

            db.SaveChanges();
            update.AcceptChanges();
        }

    The following implementations of FigureItOut all fail:

        db.Configurations.Attach(update);
        db.DetectChanges();

    ...nor does:

        db.Configurations.Attach(update);
        db.Configurations.ApplyCurrentValues(update);

    ...nor:

        db.Configurations.Attach(update);
        db.Configurations.ApplyOriginalValues(update);

    ...nor:

        db.Configurations.Attach(update);
        db.Configurations.ApplyChanges(update);

    ...nor about anything else I can figure to throw at it, other than:

    - getting the original entity from the database,
    - comparing each property by hand, and
    - updating properties as needed.

    What, exactly, am I supposed to do with self-tracking entities that aren't tracking themselves??

  • Problem with response.redirect sending incorrect HTTPMethod

    - by Andy Macnaughton-Jones
    Hi, I've got a strange problem with a Response.Redirect. I'm using VB.NET with the .NET 2 framework (so VS2005 & SP1). I've got a page that I submit (a proper form with method="POST" hard-coded onto the page), and it properly posts the page data back, which is then processed; so Request.HttpMethod = "POST". As part of that processing, the system determines whether we need to be sent to another URL once processing is complete. If the "GotoPage" parameter has a URL specified, we then do a Response.Redirect(url, False) (False because we want page processing to complete, in order to write some timing logs etc.).

    The page correctly redirects, but on the redirected request, Request.HttpMethod is "POST" instead of the expected "GET"!

    Now, we're using our own custom framework, so we use the HttpMethod of the HttpRequest to determine whether a page has been posted back or is being GETted; the "IsPagePostBack" property doesn't work for us (that only works when you're using the normal .NET controls and form submissions). In all other instances our code works happily, but what might be causing Request.HttpMethod to not be set correctly? I've tried doing a Response.Clear before the redirect in case headers are being written out beforehand, but to no avail. Any clues?! Thanks, Andy

  • Correct use of WSDL-generated sources

    - by John K
    How can I easily convert between manually written classes and their WSDL-generated equivalents? I have a Java SE 6 thick client that calls a web service to get and store data. The client has a DAO that works with my entity classes, calls Entity.toDto() to convert them to DTOs, and sends/receives that data with the web service. My issue stems from the fact that the entity classes live on both sides of the service interface: client and server. Each entity has a constructor from the DTO and a toDto function:

        public class EntityClass {
            public EntityClass(EntityClassDto dto) { ... }
            public EntityClassDto toDto() { ... }
            ...
        }

    This means I have a hand-written DTO class that the client and server both use. However, the service interface expects the WSDL-generated classes. I have tried writing conversion code between the hand-written DTO and the WSDL-generated DTO, and it is tedious and error-prone. What is a reasonable alternative?

    Some back-story: the thick client should be able to have a configurable backend, either direct to the DB or through this web service. The aforementioned DAO is the web-service-based implementation; another implementation, which is JPA-based, also exists.
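
    One way to take the tedium out of the DTO-to-DTO hop is a reflection-based bean mapper that matches properties by name; Dozer is one such library. A sketch (the wsdl package name for the generated classes and the bridge class itself are assumptions):

        import org.dozer.DozerBeanMapper;
        import org.dozer.Mapper;

        public class DtoBridge {
            private final Mapper mapper = new DozerBeanMapper();

            /** Hand-written DTO -> WSDL-generated DTO, matched by field name. */
            public wsdl.EntityClassDto toWire(EntityClassDto dto) {
                return mapper.map(dto, wsdl.EntityClassDto.class);
            }

            /** ...and back again. */
            public EntityClassDto fromWire(wsdl.EntityClassDto wire) {
                return mapper.map(wire, EntityClassDto.class);
            }
        }

    Fields whose names differ between the two shapes can be declared once in a Dozer mapping file, which keeps the error-prone part of the conversion in one place.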
