Search Results

Search found 2886 results on 116 pages for 'behaviour'.

Page 96/116 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • confusing fork system call

    - by benjamin button
    Hi, I was just checking the behaviour of the fork system call and I found it very confusing. I saw on a website that Unix will make an exact copy of the parent's address space and give it to the child; therefore, the parent and child processes have separate address spaces.

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>

        int main(void)
        {
            pid_t pid;
            char y = 'Y';
            char *ptr;

            ptr = &y;
            pid = fork();
            if (pid == 0) {
                y = 'Z';
                printf(" *** Child process ***\n");
                printf(" Address is %p\n", ptr);
                printf(" char value is %c\n", y);
                sleep(5);
            } else {
                sleep(5);
                printf("\n *** Parent process ***\n");
                printf(" Address is %p\n", ptr);
                printf(" char value is %c\n", y);
            }
        }

    The output of the above program is:

        *** Child process ***
        Address is 69002894
        char value is Z

        *** Parent process ***
        Address is 69002894
        char value is Y

    So from the above-mentioned statement it seems that child and parent have separate address spaces, and this is the reason why the char value is printed differently for each. Why, then, am I seeing the same address for the variable in both the child and parent processes? Please help me understand this!
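    For what it's worth, a minimal C++/POSIX sketch (an illustration written for this listing, not taken from the question) that makes the same point with the PIDs printed: the printed address is a virtual address, private to each process, so after fork's copy-on-write the same numeric address refers to different physical memory in parent and child.

        // Sketch: same virtual address, different physical pages after fork (POSIX).
        #include <cstdio>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main() {
            int value = 1;
            pid_t pid = fork();
            if (pid == 0) {
                value = 2;  // copy-on-write gives the child its own page
                std::printf("child  pid=%d addr=%p value=%d\n",
                            (int)getpid(), (void *)&value, value);
                return 0;
            }
            wait(NULL);  // let the child print first
            // Same virtual address, but it is resolved through this process's
            // own page tables, so the parent still sees its own copy of the data.
            std::printf("parent pid=%d addr=%p value=%d\n",
                        (int)getpid(), (void *)&value, value);
            return 0;
        }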

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down, since the profiler keeps crashing (HotSpot error). Before I go too deep into figuring it out, I'd like to know if I really have a problem or not :-)

    I have a few thread pools created via:

        Executors.newFixedThreadPool(10);

    The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later call Future.get() to get the result, it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level.

    So my question is along these lines: is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?

    Read the article

  • Having an issue with Nullable MySQL columns in SubSonic 3.0 templates

    - by omegawkd
    Looking at this routine in the Settings.ttinclude:

        string CheckNullable(Column col) {
            string result = "";
            if (col.IsNullable && col.SysType != "byte[]" && col.SysType != "string")
                result = "?";
            return result;
        }

    It determines whether the column is nullable based on requirements and returns either "" or "?" to the generated code. Now I'm not too familiar with the ? nullable type modifier, but from what I can see a cast is required. For instance, if I have a nullable integer MySQL column and I generate the code using the default template files, it emits a line similar to this:

        int? _User_ID;

    When trying to compile the project I get the error:

        Cannot implicitly convert type 'int?' to 'int'. An explicit conversion exists (are you missing a cast?)

    I checked the Settings files for the other database types and they all seem to have the same routine. So my question is: is this behaviour expected, or is it a bug? I need to solve it one way or the other before I can proceed. Thanks for your help.

    Read the article

  • Multiple Concurrent Postbacks when using UpdatePanels

    - by d4nt
    Here's an example app that I built to demonstrate my problem. A single aspx page with the following on it:

        <form id="form1" runat="server">
            <asp:ScriptManager runat="server" />
            <asp:Button runat="server" ID="btnGo" Text="Go" OnClick="btnGo_Click" />
            <asp:UpdatePanel runat="server">
                <ContentTemplate>
                    <asp:TextBox runat="server" ID="txtVal1" />
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Then, in the code-behind, we have the following:

        protected void btnGo_Click(object sender, EventArgs e)
        {
            Thread.Sleep(5000);
            Debug.WriteLine(string.Format("{0}: {1}",
                DateTime.Now.ToString("HH:mm:ss.fffffff"), txtVal1.Text));
            txtVal1.Text = "";
        }

    If you run this and click the "Go" button multiple times, you will see multiple debug statements in the Output window, showing that multiple requests have been processed. This appears to contradict the documented behaviour of update panels (i.e. if you make a request while one is processing, the first request gets terminated and the current one is processed). Anyway, the point is I want to fix it. The obvious option would be to use JavaScript to disable the button after the first press, but that strikes me as hard to maintain: we potentially have the same issue on a lot of screens, and it could easily be broken if someone renames a button. Do you have any suggestions? Perhaps there is something I could do in BeginRequest in Global.asax to detect a duplicate request? Is there some setting or feature on the UpdatePanel to stop it doing this, or maybe something in the AjaxControlToolkit that will prevent it?

    Read the article

  • NOT A DUPLICATE! VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant}First I'd like to say that this IS NOT A DUPLICATE. I've asked this question previously, but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010, and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate, how about you read the question carefully and try the answer for yourself to see if it actually works. Apologies for the rant, but there is no obvious way to contact the SO police that closed the issue or to get it reopened.{/rant}

    At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010? What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work), as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file-level) or as soon as a project fails to build (project-level). Answers for VS 2010 only, please. If the macros do work, then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.

    Read the article

  • Associating an object with another object for GC cleanup

    - by thecoop
    Is there any way of associating an object instance (object A) with a second object (object B) in a generalised way, so that when B gets collected A becomes eligible for collection? This is the same behaviour that would happen if B had an instance variable pointing to A, but without explicitly changing the class definition of B, and done in a dynamic way. The same sort of effect could be achieved by using the Component.Disposed event in a funky way, but I don't want to make B disposable.

    EDIT: I'm basically creating a cache of objects that are associated with a single 'root' object, and I don't want the cache to be static, as there can be lots of root objects using different caches, so a lot of memory would be used up when a root object is no longer used but the cached objects are still around. So, I want a collection of cached objects to be associated with each 'root' object, without changing the root object definition. Sort of like metadata of an extra object reference attached to each root object instance. That way, each collection will get collected when its root object is collected, and not hang around like it would if a static cache were used.

    Read the article

  • UIDatePicker date method is picking wrong date: iPhone Dev

    - by prd
    Hi, I am getting very strange behaviour from UIDatePicker. I have a view with a date picker declared in the .h file as

        IBOutlet UIDatePicker *datePicker;

    with a nonatomic, retain property. datePicker is properly linked in the IB file. In the code I set the minimum, maximum and initial dates, and the action to call for UIControlEventValueChanged, using the following code:

        if (!currentDate) {
            initialDate = [NSDate date];
        } else {
            initialDate = currentDate;
        }
        [datePicker setMinimumDate:[NSDate date]];
        [datePicker setMaximumDate:[[NSDate date] addTimeInterval:5 * 365.25 * 24 * 60 * 60]]; // to get up to 5 years
        [datePicker setDate:initialDate animated:YES];
        [datePicker addTarget:self action:@selector(getDatePickerValue:) forControlEvents:UIControlEventValueChanged];

    In getDatePickerValue: I get the new date using datePicker.date. When the view is closed (using a Done button), I get the current value of the date using datePicker.date. Now, if the view is opened with no currentDate, the picker should return today's date, and this is what happens the first time my picker view is called. But each subsequent call to the view with no current date gives me a different, later date than today. So the first time I get today's date, say 9 Jun 2010; the second time datePicker.date returns 10 Jun 2010; the third time 11 Jun 2010; and so on. It's not always incremental, but mostly it is. I have put in NSLogs and verified that the initial date is set correctly. The problem occurs only on the device (on OS 3.0); the issue does not reproduce on the simulator. I can't find what I have done wrong. I hope somebody else has come across a similar problem and can help me resolve this.

    Read the article

  • How do I make the manifest available during a Maven/Surefire unit test run "mvn test"?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the source repository with the Maven project:

        http://github.com/znerd/logdoc

    My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

        Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file).

    Now my challenge is that the manifest file is not available when I run the unit tests with "mvn test". Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check for the JAR, I find that it has not even been generated: Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run?

    Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.

    Read the article

  • Long-held TCP sessions in an ASMX client

    - by John
    Hi, I have an ASP.NET application which talks to a third-party SOAP web service. My application uses an ASMX client proxy (i.e. System.Web.Services.Protocols.SoapHttpClientProtocol). The third-party service uses WCF, although I don't expect that makes much difference. I should note that we're using .NET 3.5 SP1.

    We haven't customised the proxy or done anything unusual - we're just making standard web service requests and getting back the results. We have encapsulated the proxy reference within a using block so it will get disposed after the response is received.

    We've been told that our application is behaving strangely in its use of TCP sessions. Instead of opening a new TCP session for each request from a new proxy instance (which is what I would have expected it to do), it's apparently keeping several connections alive and re-using them. This is causing some issues at the third-party end, as they are expecting us to be using multiple sessions. Is this a known behaviour for the SoapHttpClientProtocol client proxy? If so, is there any way we can override it so that each request results in a new TCP session?

    Thanks, John

    Read the article

  • Is extending a base class with a non-virtual destructor dangerous in C++?

    - by Akusete
    Take the following code:

        class A { };
        class B : public A { };
        class C : public A { int x; };

        int main(int argc, char **argv)
        {
            A *b = new B();
            A *c = new C();
            // in both cases, only ~A() is called, not ~B() or ~C()
            delete b; // is this ok?
            delete c; // does this line leak memory?
            return 0;
        }

    When calling delete on a class with a non-virtual destructor that has data members (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no data members and no explicit destructor behaviour (like class B), is everything OK?

    I ask this because I wanted to create a class to extend std::string (which I know is not recommended, but for the sake of the discussion just bear with it) and overload the += and + operators. -Weffc++ gives me a warning because std::string has a non-virtual destructor, but does it matter if the subclass has no members and does not need to do anything in its destructor?

    FYI, the += overload was to do proper file path formatting, so the path class could be used like:

        class path : public std::string {
            // ... overload +=, +
            // ... add last_path_component, remove_path_component, ext, etc...
        };

        path foo = "/some/file/path";
        foo = foo + "filename.txt";
        // and so on...

    I just wanted to make sure that someone doing this:

        path *foo = new path();
        std::string *bar = foo;
        delete bar;

    would not cause any problems with memory allocation.
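    As background (not part of the question): formally, deleting a derived object through a base-class pointer whose destructor is non-virtual is undefined behaviour, even when the derived class adds no members, although with trivial destructors it often appears to work in practice. A minimal sketch of the conventional fix, a virtual destructor in the base class:

        #include <cstdio>

        class Base {
        public:
            virtual ~Base() { std::puts("~Base"); }  // virtual: safe to delete via Base*
        };

        class Derived : public Base {
        public:
            ~Derived() { std::puts("~Derived"); }
        private:
            int x;  // extra state is destroyed (and deallocated) correctly too
        };

        int main() {
            Base *p = new Derived();
            delete p;  // prints "~Derived" then "~Base": the full object is destroyed
            return 0;
        }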

    Read the article

  • Should I enforce a realloc check if the new block size is smaller than the initial?

    - by nomemory
    Can realloc fail in this case?

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int *a = NULL;
            a = calloc(100, sizeof(*a));
            printf("1.ptr: %p \n", (void *)a);
            a = realloc(a, 50 * sizeof(*a));
            printf("2.ptr: %p \n", (void *)a);
            if (a == NULL) {
                printf("Is it possible?");
            }
            return (0);
        }

    The output in my case is:

        1.ptr: 4072560
        2.ptr: 4072560

    So 'a' points to the same address. So should I enforce a realloc check?

    Later edit: Using the MinGW compiler under Windows XP. Is the behaviour similar with gcc on Linux?

    Later edit 2: Is it OK to check this way?

        int *a = NULL, *b = NULL;
        a = calloc(100, sizeof(*a));
        b = realloc(a, 50 * sizeof(*a));
        if (b == NULL) {
            return a;
        }
        a = b;
        return a;
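    As a side note: the C standard allows realloc to return NULL even when the new size is smaller, and on failure the original block is left valid and unchanged, so the check is worth keeping, and the temporary-pointer idiom from the second edit is the usual way to avoid losing the original block. A minimal sketch of that idiom (written in C++ here for consistency with the other examples; the idiom is identical in C):

        #include <cstdio>
        #include <cstdlib>

        // Shrink an allocation without losing it: only overwrite the original
        // pointer when realloc reports success.
        static int *shrink_buffer(int *buf, std::size_t new_count) {
            void *tmp = std::realloc(buf, new_count * sizeof *buf);
            if (tmp == NULL) {
                return buf;  // failure: the old, larger block is still valid
            }
            return static_cast<int *>(tmp);  // success: address may or may not change
        }

        int main() {
            int *a = static_cast<int *>(std::calloc(100, sizeof *a));
            if (a == NULL) {
                return 1;
            }
            a = shrink_buffer(a, 50);
            std::printf("ptr after shrink: %p\n", static_cast<void *>(a));
            std::free(a);
            return 0;
        }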

    Read the article

  • Difficulty in appending images dynamically in a custom ListView

    - by ganesh
    Hi, I have written a custom ListView adapter which binds images according to the result from a feed:

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            ViewHolder holder;
            if (convertView == null) {
                convertView = mInflater.inflate(R.layout.row_for_stopnames, null);
                holder = new ViewHolder();
                holder.name = (TextView) convertView.findViewById(R.id.stop_name);
                holder.dis = (TextView) convertView.findViewById(R.id.distance);
                holder.route_one = (ImageView) convertView.findViewById(R.id.one);
                holder.route_two = (ImageView) convertView.findViewById(R.id.two);
                holder.route_three = (ImageView) convertView.findViewById(R.id.three);
                convertView.setTag(holder);
            } else {
                holder = (ViewHolder) convertView.getTag();
            }
            holder.name.setText(elements.get(position).get("stop_name"));
            holder.dis.setText(elements.get(position).get("distance"));
            String[] route_txt = elements.get(position).get("route_name").split(",");
            for (int i = 0; i < route_txt.length; i++) {
                if (i == 0) {
                    holder.route_one.setBackgroundResource(Utils.getRouteImage().get(route_txt[0]));
                } else if (i == 1) {
                    holder.route_two.setBackgroundResource(Utils.getRouteImage().get(route_txt[1]));
                } else if (i == 2) {
                    holder.route_three.setBackgroundResource(Utils.getRouteImage().get(route_txt[2]));
                }
            }
            convertView.setOnClickListener(new OnItemClickListener(position, elements));
            return convertView;
        }

        class ViewHolder {
            TextView name;
            TextView dis;
            ImageView route_one;
            ImageView route_two;
            ImageView route_three;
        }

    For every stop name there may be route names, with a maximum of three routes, and I have to bind images according to the number of route names. This is what I tried to do in the code above. It works fine until I start scrolling up and down; when I do, the route images get displayed where they should not be, and this behaviour is unpredictable. I would be glad if someone could explain why this happens, and the best way to do this. The getRouteImage method of the Utils class returns a HashMap with String keys and drawable resource values.

    Read the article

  • rich:editor ruins html?

    - by Ben
    Hi, strange behaviour. I use rich:editor with these attributes (irrelevant data removed):

        HtmlEditor editor = new HtmlEditor();
        editor.setValueExpression("value", ve);
        editor.setTheme("advanced");
        editor.setValueExpression("viewMode", viewModeValueExpression);
        panel.getChildren().add(editor);

    Now my problem is that whenever I load a ready-made HTML text such as this (in source mode):

        <html lang="en" xml:lang="en">
        <head>
            <title>Done</title>
        </head>
        <body style="direction: ltr; font-size: medium; color: #0000FF;">
            <p>When the menu loads, navigate to and open Image Editor.</p>
        </body>
        </html>

    then change to visual mode and back to source mode, I see that the editor has removed all of my HTML structure, and the source mode now shows only:

        <p>When the menu loads, navigate to and open Chul Muzal.</p>

    Anyone know why this happens? Thanks!!

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have ISBN, authors, title, pages, etc., as well as DVDs with play time, directors, artists, etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most.

    The three commonly used approaches in an RDBMS would be:

    1. EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products, etc.).
    2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views, etc.). In the app I'll use BookBehaviour / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties.

    My questions now:

    - Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with different behaviour wrappers make sense?

    Read the article

  • Is it right that Strophe.addHandler reads only the first node from the response?

    - by markcial
    I'm starting to learn Strophe library usage, and when I use addHandler to parse the response, it seems to read only the first node of the XML response. So when I receive an XML like this:

        <body xmlns='http://jabber.org/protocol/httpbind'>
          <presence xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='avaliable' id='5593:sendIQ'>
            <status/>
          </presence>
          <presence xmlns='jabber:client' from='test@localhost' to='test2@localhost' xml:lang='en'>
            <status />
          </presence>
          <iq xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='result'>
            <query xmlns='jabber:iq:roster'>
              <item subscription='both' name='test' jid='test@localhost'>
                <group>test group</group>
              </item>
            </query>
          </iq>
        </body>

    with the handler testHandler used like this:

        connection.addHandler(testHandler, null, "presence");

        function testHandler(stanza) {
            console.log(stanza);
        }

    it only logs:

        <presence xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='avaliable' id='5593:sendIQ'>
          <status/>
        </presence>

    What am I missing? Is this the right behaviour? Should I add more handlers to get the other stanzas? Thanks in advance.

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things.

    Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production.

    Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.

    Read the article

  • Why compiling in Flash IDE I cannot access stage in a Sprite constructor before addChild while if I

    - by yuri
    I've created this simple class (omitting the package directive and imports):

        public class Viewer extends Sprite {
            public function Viewer():void {
                trace(stage);
            }
        }

    Then in the Flash IDE I import it in the first frame with this ActionScript:

        import Viewer;

        var viewer = new Viewer();
        stage.addChild(viewer);
        trace(viewer.stage);

    This works as I expected: the first trace, called in the constructor, says stage is null, because I haven't yet added viewer to a DisplayObjectContainer. The second one outputs the stage object. Then I created a project using the AXDT Eclipse plugin, recreated and compiled only the first class (and trashed the AS init script used in the Flash IDE, because it is not needed), and on the first trace... wow... stage is already filled with the stage object. It seems to me that the compiler used by AXDT (the open-source Flex 4 SDK) adds the class to a DisplayObjectContainer already attached to a Stage before constructing it (!?). I want to understand how I can reproduce this behaviour using the compiler in the Flash IDE, so I can directly access the Stage during construction.

    Read the article

  • multiple ajax requests with jquery

    - by Emil
    I've got problems with the async nature of JavaScript / jQuery. Let's say the following (latency is not accounted for, in order to keep it simple): I've got three buttons (A, B, C) on a page; each of the buttons adds an item into a shopping cart with one ajax request each. If I put an intentional delay of 5 seconds in the server-side script (PHP) and push the buttons 1 second apart, I want the result to be the following:

        Request A, 5 seconds
        Request B, 6 seconds
        Request C, 7 seconds

    However, the result is like this:

        Request A, 5 seconds
        Request B, 10 seconds
        Request C, 15 seconds

    This has to mean that the requests are queued and not run simultaneously, right? Isn't this the opposite of what async means? I also tried to add a random GET parameter to the URL in order to force some uniqueness on the request; no luck, though. I did read a little about this: if you avoid using the same "request object (?)" this problem won't occur. Is it possible to force this behaviour in jQuery? This is the code that I am using:

        $.ajax({
            url: strAjaxUrl + '?random=' + Math.floor(Math.random() * 9999999999),
            data: 'ajax=add-to-cart&product=' + product,
            type: 'GET',
            success: function(responseData) {
                // update ui
            },
            error: function(responseData) {
                // show error
            }
        });

    I also tried both GET and POST; no difference. I want the requests to be sent right when the button is clicked, not when the previous request has finished. I want the requests to run simultaneously, not in a queue.

    Read the article

  • Strange behavior with complex Q object filter queries in Django

    - by HWM-Rocker
    Hi, I am trying to write a tagging system for Django, but today I encountered strange behaviour in filter or the Q object (django.db.models.Q). I wrote a function that converts a search string into a Q object. The next step would be to filter the TaggedObject with this query. But unfortunately I get strange behaviour.

    When I search (id=20), i.e. Q: (AND: ('tags__tag__id', 20)), it returns 2 tagged objects with the IDs 1127 and 132. When I search (id=4), i.e. Q: (AND: ('tags__tag__id', 4)), it also returns 2 objects, but this time 1180 and 1127. Up to here everything is fine, but when I make a slightly more complex query like (id=4) or (id=20), i.e. Q: (OR: ('tags__tag__id', 4), ('tags__tag__id', 20)), it returns 4(!) objects: 1180, 1127, 1127, 132. The object with the ID 1127 is returned twice, but that's not the behaviour I want. Do I have to live with it and uniquify that list, or can I do something different? The representation of the Q object looks fine to me.

    But the worst is now: when I search for (id=20) and (id=4), i.e. Q: (AND: ('tags__tag__id', 20), ('tags__tag__id', 4)), it returns no objects at all. But why? The representation should be OK, and the object with the id 1127 is tagged by both. What am I missing?

    Here are also the relevant parts of the classes that are involved:

        class TaggedObject(models.Model):
            """ class that represents a tagged object """
            tags = generic.GenericRelation('ObjectTagBridge', blank=True, null=True)

        class ObjectTagBridge(models.Model):
            """ Helps to connect a generic object to a Tag. """
            # pylint: disable-msg=W0232,R0903
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            content_object = generic.GenericForeignKey('content_type', 'object_id')
            tag = models.ForeignKey('Tag')

        class Tag(models.Model):
            ...

    Thanks for your help.

    Read the article

  • Why is partial specialization of a nested class template allowed, while complete isn't?

    - by drhirsch
    template<int x>
    struct A {
        template<int y> struct B {};
        template<int y, int unused> struct C {};
    };

    template<int x>
    template<>
    struct A<x>::B<x> {};  // error: enclosing class templates are not explicitly specialized

    template<int x>
    template<int unused>
    struct A<x>::C<x, unused> {};  // ok

    So why is the explicit specialization of an inner, nested class (or function) not allowed if the outer class isn't specialized too? Strangely enough, I can work around this behaviour if I only partially specialize the inner class, simply by adding a dummy template parameter. It makes things uglier and more complex, but it works.

    Note: I need this feature for recursive templates of the inner class for a set of the outer class. To make things even more complicated, in reality I only need a template function instead of the inner class. But partial specialization of functions is generally disallowed somewhere else in the standard ^^
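    For reference, a small compilable sketch (not from the question) of the form the standard does allow: a member template may be explicitly specialized once the enclosing class template's arguments are themselves explicitly fixed with their own template<> header (the rule in [temp.expl.spec] that the error message refers to).

        #include <iostream>

        template<int x>
        struct A {
            template<int y> struct B { enum { id = 0 }; };
        };

        // Allowed: the enclosing template's arguments are explicitly fixed
        // (A<1>), so its member template B may be explicitly specialized.
        template<>
        template<>
        struct A<1>::B<1> { enum { id = 11 }; };

        int main() {
            std::cout << A<1>::B<1>::id << '\n';  // 11: the explicit specialization
            std::cout << A<1>::B<2>::id << '\n';  // 0: primary member template
            std::cout << A<2>::B<1>::id << '\n';  // 0: A<2> was never specialized
            return 0;
        }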

    Read the article

  • jquery - problem with executing animation of two separate objects, one after another

    - by MoreThanChaos
    Hello, I have a problem putting together the animations of two separate objects so that the second one starts when the first one ends. I tried to use a callback, but it seems that I make some syntax mistakes which crash jQuery or cause some unexpected behaviour. It seems that I'm stuck, so I'd like to ask you for the best way to put these animations together to act the way I want:

        on mouseenter: first the .pp grows, second the .tt fades in
        on mouseleave: first the .tt fades out, second the .pp shrinks

    It's also relevant that animations don't pile up; I mean here that an animation called by one event doesn't wait until another animation in progress has ended. In general, exactly what is below, but animated one after another, not simultaneously:

        $('.pp').bind({
            mouseenter: function() {
                $(this).animate({
                    width: $(this).children(".tt").outerWidth(),
                    height: $(this).children(".tt").outerHeight()
                }, { duration: 1000, queue: false });
                $(this).children(".tt").animate({ opacity: 1.0 }, { duration: 1000, queue: false });
            },
            mouseleave: function() {
                $(this).children(".tt").animate({ opacity: 0 }, { duration: 1000, queue: false });
                $(this).animate({ width: 17, height: 17 }, { duration: 1000, queue: false });
            }
        });

    Read the article

  • Binding a Value from a View-Model to the View-Model of a child User Control in Silverlight?

    - by andrej351
    Hi there, so I have a UserControl for one of my Views, and another 'child' UserControl inside that. The outer 'parent' UserControl has a Collection on its View-Model and a Grid control on it to display a list of Items. I want to place another UserControl inside this UserControl to display a form representing the details of one Item. The outer / parent UserControl's View-Model already has a property on it to hold the currently selected Item, and I would like to bind this to a DependencyProperty on the inner / child UserControl. I would then like to bind that DependencyProperty to a property on the child UserControl's View-Model. I can then set the DependencyProperty once in XAML with a binding expression and have the child UserControl do all its work in its View-Model, like it should. The code I have looks like this...

    Parent UserControl:

        <UserControl x:Class="ItemsListView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel}">
            <!-- Grid Control here... -->
            <ItemDetailsView Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel.SelectedItem}" />
        </UserControl>

    Child UserControl:

        <UserControl x:Class="ItemDetailsView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel}"
                     ItemDetailsView.Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel.Item, Mode=TwoWay}">
            <!-- Form controls here... -->
        </UserControl>

    The selected Item is bound to the DependencyProperty fine; however, the binding from the DependencyProperty to the child View-Model is not working. I've used this sort of approach in a WPF app without problems. It appears to be a situation where there are two concurrent bindings that need to work, but with the same target for two sources. Why won't the second binding (in the child UserControl) work?? Is there a way to achieve the behaviour I'm after?? Cheers.

    Read the article

  • How to use the keyword this in the right context in a mouse wrapper in JavaScript?

    - by MartyIX
    Hi, I'm trying to write a simple wrapper for mouse behaviour. This is my current code:

        function MouseWrapper() {
            this.mouseDown = 0;
            this.OnMouseDownEvent = null;
            this.OnMouseUpEvent = null;
            document.body.onmousedown = this.OnMouseDown;
            document.body.onmouseup = this.OnMouseUp;
        }

        MouseWrapper.prototype.Subscribe = function (eventName, fn) {
            // Subscribe a function to the event
            if (eventName == 'MouseDown') {
                this.OnMouseDownEvent = fn;
            } else if (eventName == 'MouseUp') {
                this.OnMouseUpEvent = fn;
            } else {
                alert('Subscribe: Unknown event.');
            }
        }

        MouseWrapper.prototype.OnMouseDown = function () {
            this.mouseDown = 1;
            // Fire event
            $.dump(this.OnMouseDownEvent);
            if (this.OnMouseDownEvent != null) {
                alert('test');
                this.OnMouseDownEvent();
            }
        }

        MouseWrapper.prototype.OnMouseUp = function () {
            this.mouseDown = 0;
            // Fire event
            if (this.OnMouseUpEvent != null) {
                this.OnMouseUpEvent();
            }
        }

    From what I gathered, it seems that in MouseWrapper.prototype.OnMouseUp and MouseWrapper.prototype.OnMouseDown the keyword "this" doesn't mean the current instance of MouseWrapper but something else. It makes sense that it doesn't point to my instance, but how do I solve the problem? I want to solve the problem properly; I don't want to use something dirty. My thinking:

    * use a singleton pattern (the mouse is only one, after all)
    * somehow pass my instance to OnMouseDown/Up - how?

    Thank you for help!

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32-ing the outputs).

    When using the "-server" option with Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass, starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup.

    The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (which feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only there it's 15% instead of 12%). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Why does Microsoft advise against readonly fields with mutable values?

    - by Weeble
    In the Design Guidelines for Developing Class Libraries, Microsoft say:

        Do not assign instances of mutable types to read-only fields.
        The objects created using a mutable type can be modified after they are
        created. For example, arrays and most collections are mutable types
        while Int32, Uri, and String are immutable types. For fields that hold
        a mutable reference type, the read-only modifier prevents the field
        value from being overwritten but does not protect the mutable type
        from modification.

    This simply restates the behaviour of readonly without explaining why it's bad to use readonly. The implication appears to be that many people do not understand what "readonly" does and will wrongly expect readonly fields to be deeply immutable. In effect it advises using "readonly" as code documentation indicating deep immutability - despite the fact that the compiler has no way to enforce this - and disallows its use for its normal function: to ensure that the value of the field doesn't change after the object has been constructed.

    I feel uneasy with this recommendation to use "readonly" to indicate something other than its normal meaning understood by the compiler. I feel that it encourages people to misunderstand the meaning of "readonly", and furthermore to expect it to mean something that the author of the code might not intend. I feel that it precludes using it in places it could be useful - e.g. to show that some relationship between two mutable objects remains unchanged for the lifetime of one of those objects. The notion of assuming that readers do not understand the meaning of "readonly" also appears to be in contradiction to other advice from Microsoft, such as FxCop's "Do not initialize unnecessarily" rule, which assumes readers of your code to be experts in the language and should know that (for example) bool fields are automatically initialised to false, and stops you from providing the redundancy that shows "yes, this has been consciously set to false; I didn't just forget to initialize it".

    So, first and foremost, why do Microsoft advise against use of readonly for references to mutable types? I'd also be interested to know:

    - Do you follow this Design Guideline in all your code?
    - What do you expect when you see "readonly" in a piece of code you didn't write?

    Read the article
