Search Results

Search found 33468 results on 1339 pages for 'behaviour change'.

Page 459/1339 | < Previous Page | 455 456 457 458 459 460 461 462 463 464 465 466  | Next Page >

  • TextArea being used as an itemEditor misbehaves when the enter key is pressed

    - by ChrisInCambo
    Hi, I have a TextArea inside an itemEditor component. The problem is that when typing in the TextArea, if the enter key is pressed the itemEditor resets itself rather than moving the caret to the next line as expected:

      <mx:List width="100%" editable="true" >
        <mx:dataProvider>
          <mx:ArrayCollection>
            <mx:Array>
              <mx:Object title="Stairway to Heaven" />
            </mx:Array>
          </mx:ArrayCollection>
        </mx:dataProvider>
        <mx:itemRenderer>
          <mx:Component>
            <mx:Text height="100" text="{data.title}"/>
          </mx:Component>
        </mx:itemRenderer>
        <mx:itemEditor>
          <mx:Component>
            <mx:TextArea height="100" text="{data.title}"/>
          </mx:Component>
        </mx:itemEditor>
      </mx:List>
      </mx:Application>

    Could anyone advise how I can get around this strange behaviour and make the enter key behave as expected? Thanks, Chris
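    A hedged sketch of one common workaround, assuming the standard Flex list-editing events apply here: pressing Enter ends the edit session and the List dispatches itemEditEnd with reason NEW_ROW, which can be vetoed so the editor stays open. Untested against this exact markup.

      <!-- add to the <mx:List> tag: itemEditEnd="onItemEditEnd(event)" -->
      <mx:Script>
        <![CDATA[
          import mx.events.ListEvent;
          import mx.events.ListEventReason;

          private function onItemEditEnd(event:ListEvent):void {
              // Enter commits the editor with reason NEW_ROW; cancel that so the
              // TextArea keeps focus instead of the editor being torn down.
              if (event.reason == ListEventReason.NEW_ROW) {
                  event.preventDefault();
              }
          }
        ]]>
      </mx:Script>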

    Read the article

  • DataGrid Column names don't seem to be binding

    - by Jason
    Sort of a Flex newbie here so bear with me. I've got a DataGrid defined as follows:

      <mx:Script>
        ...
        private function getColumns(names:ArrayCollection):Array {
            var ret:Array = new Array();
            for each (var name:String in names) {
                var column:DataGridColumn = new DataGridColumn(name);
                ret.push(column);
            }
            return ret;
        }
      </mx:Script>

      <mx:DataGrid id="grid" width="100%" height="100%" paddingTop="0"
                   columns="{getColumns(_dataSetLoader.columnNames)}"
                   horizontalScrollPolicy="auto" labelFunction="labelFunction"
                   dataProvider="{_dataSetLoader.data}" />

    ...where _dataSetLoader is an instance of an object that looks like:

      [Bindable]
      public class DataSetLoader extends EventDispatcher {
          ...
          private var _data:ArrayCollection = new ArrayCollection();
          private var _columnNames:ArrayCollection = new ArrayCollection();
          ...
          public function reset():void {
              _status = NOTLOADED;
              _data.removeAll();
              _columnNames.removeAll();
          }
          ...

    When reset() is called on the dataSetLoader instance, the DataGrid empties the data in the cells, as expected, but leaves the column names, even though reset() calls _columnNames.removeAll(). Shouldn't the change in the collection trigger a change in the DataGrid?
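    For what it's worth, a binding to a function call like getColumns(...) only re-executes when the bound property itself dispatches a change event; removeAll() mutates the existing ArrayCollection without replacing it, so the columns binding never re-fires. A hedged sketch of one way to force the rebuild (all names taken from the code above):

      import mx.events.CollectionEvent;

      // re-build the columns whenever the column-name collection mutates
      _dataSetLoader.columnNames.addEventListener(
          CollectionEvent.COLLECTION_CHANGE,
          function(event:CollectionEvent):void {
              grid.columns = getColumns(_dataSetLoader.columnNames);
          });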

    Read the article

  • [MFC] Creating multiple dialogs in an MFC app with no main Window, they become children of each other

    - by John
    (title updated) Following on from this question, now I have a clearer picture of what's going on... I have an MFC application with no main window, which exposes an API to create dialogs. When I call some of these methods repeatedly, the dialogs created are parented to each other instead of all being parented to the desktop... I have no idea why. But anyway, even after creation, I am unable to change the parent back to NULL or CWnd::GetDesktopWindow()... if I call SetParent followed by GetParent, nothing has changed. So apart from the really weird question of why Windows is magically parenting each dialog to the last one created, is there anything I'm missing to be able to set these windows as children of the desktop?

    UPDATED: I have found the reason for all this, but not the solution. From my dialog constructor, we end up in:

      BOOL CDialog::CreateIndirect(LPCDLGTEMPLATE lpDialogTemplate, CWnd* pParentWnd,
          void* lpDialogInit, HINSTANCE hInst)
      {
          ASSERT(lpDialogTemplate != NULL);
          if (pParentWnd == NULL)
              pParentWnd = AfxGetMainWnd();
          m_lpDialogInit = lpDialogInit;
          return CreateDlgIndirect(lpDialogTemplate, pParentWnd, hInst);
      }

    Note: if (pParentWnd == NULL) pParentWnd = AfxGetMainWnd();

    The call stack from my dialog constructor looks like this:

      mfc80d.dll!CDialog::CreateIndirect(const DLGTEMPLATE * lpDialogTemplate=0x005931a8, CWnd * pParentWnd=0x00000000, void * lpDialogInit=0x00000000, HINSTANCE__ * hInst=0x00400000)
      mfc80d.dll!CDialog::CreateIndirect(void * hDialogTemplate=0x005931a8, CWnd * pParentWnd=0x00000000, HINSTANCE__ * hInst=0x00400000)
      mfc80d.dll!CDialog::Create(const char * lpszTemplateName=0x0000009d, CWnd * pParentWnd=0x00000000)
      mfc80d.dll!CDialog::Create(unsigned int nIDTemplate=157, CWnd * pParentWnd=0x00000000)
      MyApp.exe!CMyDlg::CMyDlg(CWnd * pParent=0x00000000)

    Running in the debugger, if I manually change pParentWnd back to 0 in CDialog::CreateIndirect, everything works fine... but how do I stop it happening in the first place?
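    One possible workaround, sketched under the assumption that the substitution above is the only culprit: CDialog::CreateIndirect only swaps in AfxGetMainWnd() when the parent is NULL, so passing the desktop window explicitly avoids it. IDD_MY_DLG is a hypothetical resource ID standing in for the template shown in the stack.

      // untested sketch: never let the parent reach CreateIndirect as NULL
      CMyDlg* pDlg = new CMyDlg(CWnd::GetDesktopWindow());   // ctor parent, as in the stack above
      pDlg->Create(IDD_MY_DLG, CWnd::GetDesktopWindow());    // explicit parent for the modeless Create
      pDlg->ShowWindow(SW_SHOW);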

    Read the article

  • Interrupting Prototype handler, alert() vs event.stop()

    - by lxs
    Here's the test page I'm using. This version works fine, forwarding to #success:

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html><head>
        <script type="text/javascript" src="prototype.js"></script>
      </head><body>
        <form id='form' method='POST' action='#fail'>
          <button id='button'>Oh my giddy aunt!</button>
          <script type="text/javascript">
            var fn = function() {
                $('form').action = "#success";
                $('form').submit();
            }
            $('button').observe('mousedown', fn);
          </script>
        </form>
      </body></html>

    If I empty the handler:

      var fn = function() { }

    The form is submitted, but of course we are sent to #fail this time. With an alert in the handler:

      var fn = function() { alert("omg!"); }

    The form is not submitted. This is awfully curious. With event.stop(), which is supposed to prevent the browser taking the default action:

      var fn = function(event) { event.stop(); }

    We are sent to #fail. So alert() is more effective at preventing a submission than event.stop(). What gives? I'm using Firefox 3.6.3 and Prototype 1.6.0.3. This behaviour also appears in Prototype 1.6.1.
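    A hedged aside that may explain the result: a <button> inside a form defaults to type="submit", and the native submission is triggered by the click, not the mousedown, so stopping the mousedown's default action changes nothing (while alert() steals focus mid-gesture and the click never completes). A minimal sketch of intercepting the event that actually submits:

      var form = $('form');
      form.observe('submit', function(event) {
          event.stop();              // Prototype's cross-browser preventDefault()
          form.action = "#success";
          form.submit();             // programmatic submit() does not re-fire the submit event
      });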

    Read the article

  • Client Web Browser Behavior When Handling 301 Redirect

    - by Jon Swanson
    The RFC seems to suggest that the client should permanently cache the response: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

      10.3.2 301 Moved Permanently

      The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

      The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

      If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

      Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

    I'm having a hard time finding concrete browser documentation for any major browser that states how they handle these. I've started digging through the source code of Firefox, but quickly got lost. Is the following scenario true for which (if any) browsers, and is there definitive documentation for either Firefox or IE that states as much?

    First time around:
      1.1: User enters a link to Site A, or clicks on a link directed at Site A.
      1.2: Browser interprets the link at Site A; first time, no cache. Sends GET to Site A.
      1.3: Site A responds with a 301 redirect to Site B.
      1.4: Browser sends GET to Site B.

    Any subsequent time around:
      2.1: User clicks on a link directed at Site A.
      2.2: Browser sees that, due to a past 301 redirect, Site A should now be Site B.
      2.3: Without initiating any request whatsoever at Site A, the browser initiates GET at Site B.

    Read the article

  • Multiple Concurrent Postbacks when using UpdatePanels

    - by d4nt
    Here's an example app that I built to demonstrate my problem. A single aspx page with the following on it:

      <form id="form1" runat="server">
        <asp:ScriptManager runat="server" />
        <asp:Button runat="server" ID="btnGo" Text="Go" OnClick="btnGo_Click" />
        <asp:UpdatePanel runat="server">
          <ContentTemplate>
            <asp:TextBox runat="server" ID="txtVal1" />
          </ContentTemplate>
        </asp:UpdatePanel>
      </form>

    Then, in code behind, we have the following:

      protected void btnGo_Click(object sender, EventArgs e)
      {
          Thread.Sleep(5000);
          Debug.WriteLine(string.Format("{0}: {1}", DateTime.Now.ToString("HH:MM:ss.fffffff"), txtVal1.Text));
          txtVal1.Text = "";
      }

    If you run this and click on the "Go" button multiple times you will see multiple debug statements in the "Output" window, showing that multiple requests have been processed. This appears to contradict the documented behaviour of update panels (i.e. if you make a request while one is processing, the first request gets terminated and the current one is processed). Anyway, the point is I want to fix it. The obvious option would be to use Javascript to disable the button after the first press, but that strikes me as hard to maintain; we potentially have the same issue on a lot of screens and it could easily be broken if someone renames a button. Do you have any suggestions? Perhaps there is something I could do in BeginRequest in Global.asax to detect a duplicate request? Is there some setting or feature on the UpdatePanel to stop it doing this, or maybe something in the AjaxControlToolkit that will prevent it?
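    One approach worth sketching: the ASP.NET AJAX client runtime exposes a PageRequestManager whose initializeRequest event fires before each async postback, so duplicates can be cancelled in one central place instead of wiring up every button. A hedged sketch (place the script after the ScriptManager, e.g. at the bottom of the form):

      <script type="text/javascript">
          // Drop any new async postback while one is still in flight.
          Sys.WebForms.PageRequestManager.getInstance().add_initializeRequest(
              function (sender, args) {
                  if (sender.get_isInAsyncPostBack()) {
                      args.set_cancel(true);
                  }
              });
      </script>

    Because it hangs off the ScriptManager rather than any particular control, renaming a button doesn't break it; the snippet just has to be emitted on pages that contain an UpdatePanel.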

    Read the article

  • How to use an Audio Unit on the iPhone

    - by CodeToaster
    I'm looking for a way to change the pitch of recorded audio as it is saved to disk, or played back (in real time). I understand Audio Units can be used for this. The iPhone offers limited support for Audio Units (for example it's not possible to create/use custom audio units, as far as I can tell), but several out-of-the-box audio units are available, one of which is AUPitch. How exactly would I use an audio unit (specifically AUPitch)? Do you hook it into an audio queue somehow? Is it possible to chain audio units together (for example, to simultaneously add an echo effect and a change in pitch)? EDIT: After inspecting the iPhone SDK headers (I think AudioUnit.h, I'm not in front of a Mac at the moment), I noticed that AUPitch is commented out. So it doesn't look like AUPitch is available on the iPhone after all. weep weep Apple seems to have better organized their iPhone SDK documentation at developer.apple.com of late - now it's more difficult to find references to AUPitch, etc. That said, I'm still interested in quality answers on using Audio Units (in general) on the iPhone.

    Read the article

  • JSF Float Conversion

    - by Phill Sacre
    I'm using JSF 1.2 with IceFaces 1.8 in a project here. I have a page which is basically a big edit grid for a whole bunch of floating-point number fields. This is implemented with inputText fields on the page pointing at a value object with primitive float types.

    Now, as a new requirement sees some of the fields become nullable, I wanted to change the value object to use Float objects rather than primitive types. I didn't think I'd need to do anything to the page to accommodate this. However, when I make the change I get the following error:

      /pages/page.xhtml @79,14 value="#{row.targetValue}": java.lang.IllegalArgumentException: argument type mismatch

    And

      /pages/page.xhtml @79,14 value="#{row.targetValue}": java.lang.IllegalArgumentException: java.lang.ClassCastException@1449aa1

    The page looks like this:

      <ice:inputText value="#{row.targetValue}" size="4">
        <f:convertNumber pattern="###.#" />
      </ice:inputText>

    I've also tried adding in <f:convert convertId="javax.faces.Float" /> in there as well, but that doesn't seem to work either! Neither does changing the value object types to Double. I'm sure I'm probably missing something really simple, but I've been staring at this for a while now and no answers are immediately obvious!
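    A hedged observation (untested against IceFaces 1.8 specifically): f:convertNumber hands back a java.lang.Long or java.lang.Double, which does not drop into a Float property, and the tag mentioned in the last paragraph is slightly misspelled — the core tag is f:converter with a converterId attribute. A sketch of wiring up the registered Float converter instead:

      <ice:inputText value="#{row.targetValue}" size="4">
          <f:converter converterId="javax.faces.Float" />
      </ice:inputText>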

    Read the article

  • c multithreading

    - by coubeatczech
    Hi, there's a weird behaviour in my threads:

      void * foo(void * nic){ printf("foo"); }

      void * threadF(void * ukazatel){
          printf("1\n");
          pthread_t threadT;
          pthread_attr_t attr;
          pthread_attr_init (&attr);
          pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
          pthread_create(&threadT,NULL,&foo,(void*)NULL);
          printf("2\n");
          while (!feof(stdin)){
              int id, in, out;
              fscanf(stdin,"%d:%d:%d",&id,&in,&out);
          }
      }

      int main(){
          pthread_attr_t attr;
          pthread_attr_init (&attr);
          pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
          pthread_t threadT;
          pthread_create(&vlaknoVstupu,&attr,&threadF,(void*)&data);
          pthread_join(threadT,NULL);
          pthread_attr_destroy (&attr);
      }
      // I skipped irrelevant parts of code...

    The thing is that sometimes the output is 12foo, but usually just 12. Then the function waits for input. I would expect it to always be 12foo. Does anybody know why my expectations are wrong?
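    A hedged reading of what may be happening: nothing in the code orders the foo thread relative to threadF, and "foo" has no trailing newline, so it can sit in the shared stdout buffer while the program blocks on fscanf; whether it appears is scheduling luck. A minimal sketch that at least guarantees foo's output shows up (here between "1" and "2"):

      #include <stdio.h>
      #include <pthread.h>

      void *foo(void *nic) {
          printf("foo");
          fflush(stdout);              /* no '\n', so flush explicitly */
          return NULL;
      }

      void *threadF(void *ukazatel) {
          printf("1\n");
          pthread_t inner;
          pthread_create(&inner, NULL, &foo, NULL);
          pthread_join(inner, NULL);   /* wait for foo before continuing */
          printf("2\n");
          /* ... fscanf loop as in the original ... */
          return NULL;
      }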

    Read the article

  • What can I do about ambiguous wildcard patterns in Struts?

    - by Hanno Fietz
    I have a problem finding the right wildcard pattern to extract parts of my URL into action parameters in Struts. This is how I set up the action. The intent of the pattern is to capture the last two path elements and then everything that might precede them.

      <action name="**/*/*" class="com.example.ObjectAction">
        <param name="filter">{1}</param>
        <param name="type">{2}</param>
        <param name="id">{3}</param>
      </action>

    Calling it with the URL channels/123/transmissions/456 I get the following result (the action just sets the input parameters on a POJO and returns that as XML):

      <result>
        <filter>channels/123/transmissions</filter>
        <id/>
        <type>456</type>
      </result>

    It should be:

      <result>
        <filter>channels/123</filter>
        <id>456</id>
        <type>transmissions</type>
      </result>

    Now, because ** matches all characters including the slash, I guess my pattern allows more than one way to match the URL, and Struts happens to pick one that leaves the id empty. Is the behaviour for multiple possible matches defined somewhere? Can I make the pattern less ambiguous? Are there alternative ways of doing this? I'm running Struts 2.0.8. Upgrading to 2.1.9 would give me regex matching, but I got into trouble with Struts' dependencies and my OSGi environment when I went past 2.0.8, so I'd like to stick to that version for now.

    Read the article

  • Problem using Lazy<T> from within a generic abstract class

    - by James Black
    I have a generic class that all my DAO classes derive from, which is defined below. I also have a base class for all my entities, but that is not generic. The method GetIdOrSave is going to be a different type than how I defined SabaAbstractDAO, as I am trying to get the primary key to fulfill the foreign key relationships, so this function goes out to either get the primary key or save the entity and then get the primary key. The last code snippet has a solution for how it will work if I get rid of the generic part, so I think this can be solved by using variance, but I can't figure out how to write an interface that will compile.

      public abstract class SabaAbstractDAO<T>
      {
          ...
          public K GetIdOrSave<K>(K item, Lazy<SabaAbstractDAO<BaseModel>> lazyitemdao) where K : BaseModel
          {
              if (item != null && item.Id.IsNull())
              {
                  var itemdao = lazyitemdao.Value;
                  item.Id = itemdao.retrieveID(item);
                  if (item.Id.IsNull())
                  {
                      return itemdao.SaveData(item);
                  }
              }
              return item;
          }
      }

    I am getting this error when I try to compile:

      Argument 2: cannot convert from 'System.Lazy<ORNL.HRD.LMS.Dao.SabaCourseDAO>' to 'System.Lazy<ORNL.HRD.LMS.Dao.SabaAbstractDAO<ORNL.HRD.LMS.Models.BaseModel>>'

    I am trying to call it this way:

      GetIdOrSave(input.OfferingTemplate,
          new Lazy<SabaCourseDAO>(() =>
          {
              return new SabaCourseDAO() { Dao = Dao };
          })
      );

    If I change the definition to this, it works:

      public K GetIdOrSave<K>(K item, Lazy<SabaCourseDAO> lazyitemdao) where K : BaseModel
      {

    So, how can I get this to compile using variance (if needed) and generics, so I can have a very general method that will only work with BaseModel and AbstractDAO<BaseModel>? I expect I should only need to make the change in the method and perhaps the abstract class definition; the usage should be fine.
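    A hedged aside: Lazy<T> is a class, and classes are always invariant in C#, so Lazy<SabaCourseDAO> can never convert to Lazy<SabaAbstractDAO<BaseModel>> no matter how the DAO hierarchy is annotated. One sketch of a way out, assuming SabaCourseDAO derives from SabaAbstractDAO<Course> and OfferingTemplate is a Course (both assumptions, since those types aren't shown): make the DAO's model type a parameter of the method, and spell out the base type argument at the call site so the conversion happens on the delegate's return value, where it is allowed.

      public K GetIdOrSave<K>(K item, Lazy<SabaAbstractDAO<K>> lazyitemdao) where K : BaseModel
      {
          // body unchanged from the question
      }

      // call site: the lambda's SabaCourseDAO result converts to SabaAbstractDAO<Course>,
      // so Lazy<T> itself never needs to be variant
      GetIdOrSave(input.OfferingTemplate,
          new Lazy<SabaAbstractDAO<Course>>(() => new SabaCourseDAO { Dao = Dao }));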

    Read the article

  • How to prevent git merge to merge a specific file from trunk into a branch and vice versa

    - by svenn
    Hi, I am using git while developing VHDL code. I am doing development on a component in a git branch: comp_dev. The component interface does not change, just the code inside the component. Now, this component already exists in the master branch, but in a more stable version, enough for other developers to be able to use the component. The other developers also have branches for their work, and when their code is good they merge their branches back to master. At this stage I need to be able to merge all the changes from master back to my comp_dev branch, which is basically no problem, but sometimes the stable version of the component I am working on does change as part of other designers' work, but not the interface. I have to do a manual git merge -s ours on that particular file every time I want to merge, otherwise I get a conflict that I need to solve manually, throwing out their work. The same happens if I want to merge changes in other files back to master. If I forget to do git merge -s ours src/rx/state_machine.vhd comp_dev before I do a git merge, then I end up with either a manual merge, or I accidentally merge an unstable version of the state machine on top of the stable one. Is there a way to temporarily exclude one file from merges?
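    One technique worth sketching (hedged; adjust the path as needed): a per-path merge driver can tell git to always keep the current branch's copy of that one file, so whichever branch runs the merge keeps its own state_machine.vhd while the rest of the merge proceeds normally.

      # one-time setup: define an "ours" merge driver that simply keeps the current version
      git config merge.ours.driver true

      # .gitattributes (commit it on the branch whose copy should win)
      src/rx/state_machine.vhd merge=ours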

    Read the article

  • Having an issue with Nullable MySQL columns in SubSonic 3.0 templates

    - by omegawkd
    Looking at this line in the Settings.ttinclude:

      string CheckNullable(Column col){
          string result="";
          if(col.IsNullable && col.SysType !="byte[]" && col.SysType !="string")
              result="?";
          return result;
      }

    It describes how it determines whether the column is nullable based on requirements, and returns either "" or "?" to the generated code. Now I'm not too familiar with the ? nullable type operator, but from what I can see a cast is required. For instance, if I have a nullable integer MySQL column and I generate the code using the default template files, it returns a line similar to this:

      int? _User_ID;

    When trying to compile the project I get the error:

      Cannot implicitly convert type 'int?' to 'int'. An explicit conversion exists (are you missing a cast?)

    I checked the Settings files for the other database types and they all seem to have the same routine. So my question is, is this behaviour expected or is this a bug? I need to solve it one way or the other before I can proceed. Thanks for your help.

    Read the article

  • DataGrid: dynamic DataTemplate for dynamic DataGridTemplateColumn

    - by Lukas Cenovsky
    I want to show data in a datagrid where the data is a collection of:

      public class Thing
      {
          public string Foo { get; set; }
          public string Bar { get; set; }
          public List<Candidate> Candidates { get; set; }
      }

      public class Candidate
      {
          public string FirstName { get; set; }
          public string LastName { get; set; }
          ...
      }

    where the number of candidates in the Candidates list varies at runtime. The desired grid layout looks like this:

      Foo | Bar | Candidate 1 | Candidate 2 | ... | Candidate N

    I'd like to have a DataTemplate for each Candidate as I plan on changing it at runtime - the user can choose what info about a candidate is displayed in different columns (candidate is just an example; I have a different object). That means I also want to change the column templates at runtime, although this can be achieved by one big template and collapsing its parts. I know about two ways to achieve my goals (both quite similar):

      1. Use the AutoGeneratingColumn event and create the Candidates columns
      2. Add Columns manually

    In both cases I need to load the DataTemplate from a string with XamlReader. Before that I have to edit the string to change the binding to the wanted Candidate. Is there a better way to create a DataGrid with an unknown number of DataGridTemplateColumn?

    Note: This question is based on dynamic datatemplate with valueconverter
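    For what it's worth, a hedged sketch of the XamlReader route mentioned above - building each column's CellTemplate from a string with the candidate index baked into an indexed binding path (the property and header names are illustrative):

      using System.Windows.Controls;
      using System.Windows.Markup;

      static DataGridTemplateColumn BuildCandidateColumn(int i, string property)
      {
          // indexed binding: Candidates[0].LastName, Candidates[1].LastName, ...
          string xaml =
              "<DataTemplate xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\">" +
              "<TextBlock Text=\"{Binding Candidates[" + i + "]." + property + "}\" />" +
              "</DataTemplate>";

          return new DataGridTemplateColumn
          {
              Header = "Candidate " + (i + 1),
              CellTemplate = (DataTemplate)XamlReader.Parse(xaml)
          };
      }

      // usage (grid and maxCandidates assumed to exist):
      // for (int i = 0; i < maxCandidates; i++)
      //     grid.Columns.Add(BuildCandidateColumn(i, "LastName"));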

    Read the article

  • How do I ensure only the headers are shown for the first item in an ItemsControl in WPF?

    - by Dan Ryan
    I am using MVVM, binding an ObservableCollection of children to an ItemsControl. The ItemsControl contains a UserControl used to style the UI for the children.

      <ItemsControl ItemsSource="{Binding Documents}">
        <ItemsControl.ItemTemplate>
          <DataTemplate>
            <View:DocumentView Margin="0, 10, 0, 0" />
          </DataTemplate>
        </ItemsControl.ItemTemplate>
      </ItemsControl>

    I want to show a header row for the contents of the ItemsControl, but only want to show this once at the top (not for every child). How can I implement this behaviour in the DocumentView user control? FYI, I am using a Grid layout to style the child rows:

      <Grid>
        <Grid.ColumnDefinitions>
          <ColumnDefinition Width="34"/>
          <ColumnDefinition Width="100"/>
          <ColumnDefinition Width="*" />
          <ColumnDefinition Width="60" />
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
          <RowDefinition />
          <RowDefinition />
        </Grid.RowDefinitions>
        <TextBlock Grid.ColumnSpan="4" Grid.Row="0" Text="Should only show this at the top"></TextBlock>
        <Image Grid.Column="0" Grid.Row="1" Height="24" Width="24" Source="/Beazley.Documents.Presentation;component/Icons/error.png"></Image>
        <ComboBox Grid.Column="1" Grid.Row="1" Name="ContentTypes" ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type View:MainView}}, Path=DataContext.ContentTypes}" SelectedValue="{Binding ContentType}"/>
        <TextBox Grid.Column="2" Grid.Row="1" Text="{Binding Path=FileName}"/>
        <Button Grid.Column="3" Grid.Row="1" Command="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type View:MainView}}, Path=DataContext.RemoveFile}" CommandParameter="{Binding}">Remove</Button>
      </Grid>
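    A hedged sketch of the simplest arrangement: keep the header out of the per-item DocumentView entirely and render it once in the view that hosts the ItemsControl, mirroring the same fixed column widths so it lines up with the rows. DocumentView then drops its first RowDefinition and the header TextBlock.

      <StackPanel>
          <!-- header drawn once, columns matching DocumentView's widths -->
          <Grid>
              <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="34"/>
                  <ColumnDefinition Width="100"/>
                  <ColumnDefinition Width="*"/>
                  <ColumnDefinition Width="60"/>
              </Grid.ColumnDefinitions>
              <TextBlock Grid.ColumnSpan="4" Text="Should only show this at the top"/>
          </Grid>

          <ItemsControl ItemsSource="{Binding Documents}">
              <ItemsControl.ItemTemplate>
                  <DataTemplate>
                      <View:DocumentView Margin="0, 10, 0, 0"/>
                  </DataTemplate>
              </ItemsControl.ItemTemplate>
          </ItemsControl>
      </StackPanel>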

    Read the article

  • How to simulate OutOfMemory exception

    - by Gacek
    I need to refactor my project in order to make it immune to the OutOfMemory exception. There are huge collections used in my project, and by changing one parameter I can make my program more accurate or use less of the memory... OK, that's the background. What I would like to do is run the routines in a loop:

      1. Run the subroutines with the default parameter.
      2. Catch the OutOfMemory exception, change the parameter and try to run it again.
      3. Repeat step 2 until the parameters allow the subroutines to run without throwing the exception (usually only one change will be needed).

    Now, I would like to test it. I know that I can throw the OutOfMemory exception on my own, but I would like to simulate a real situation. So the main question is: is there a way of setting some kind of memory limit for my program so that, once it is reached, the OutOfMemory exception is thrown automatically? For example, I would like to set a limit, let's say 400MB of memory, for my whole program to simulate the situation when there is such an amount of memory available in the system. Can it be done?
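    One crude way to exercise the retry loop, offered as a hedged sketch rather than a real memory cap (an actual process limit would need something like a Windows Job Object): allocate and hold a large ballast up front so the code under test runs out of space at roughly the budget being simulated. RunRoutinesWithRetry is a hypothetical entry point; the sizes assume a 32-bit process with about 2 GB of usable address space.

      var ballast = new List<byte[]>();
      const int chunk = 64 * 1024 * 1024;              // 64 MB per allocation
      const long ballastTarget = 1500L * 1024 * 1024;  // leave roughly 400 MB free

      for (long allocated = 0; allocated < ballastTarget; allocated += chunk)
          ballast.Add(new byte[chunk]);

      RunRoutinesWithRetry();   // the loop described above, hitting OOM much earlier than usual
      GC.KeepAlive(ballast);    // keep the ballast referenced for the whole test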

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module and should use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability etc ...). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about Product tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many bad practices and things I would normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause, and performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decides to change the way it does things, or for some reason modifies the schema? It means some other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.
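    As an aside on the performance half of the trade-off (hedged; table and column names here are invented): the cost usually comes from issuing one accessor query per row, and a batched accessor keeps the module boundary while collapsing the loop into a single round trip.

      -- instead of get_product_info(id) called 100 times in a loop,
      -- expose a bulk accessor the ContactVendor module can call once
      SELECT id, title
      FROM   products
      WHERE  id IN (101, 102, 103 /* ... the product_ids gathered from the conversations */);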

    Read the article

  • Grails, how do I get an object NOT to save

    - by user350325
    Hello, I am new to Grails and trying to create a form which allows a user to change the email address associated with his/her account for a site I am creating. It asks the user for their current password and for the new email address they want to use. If the user enters the wrong password or an invalid email address then it should reject them with an appropriate error message. Now the email validation can be done through constraints in Grails, but the password change has to match their current password. I have implemented this check as a method on a service class. See the code below:

      def saveEmail = {
          def client = ClientUser.get(session.clientUserID)
          client.email = params.email
          if (clientUserService.checkPassword(session.clientUserID, params.password) == false) {
              flash.message = "Incorrect Password"
              client.discard()
              redirect(action: 'changeEmail')
          }
          else if (!client.validate()) {
              flash.message = "Invalid Email Address"
              redirect(action: 'changeEmail')
          }
          else {
              client.save()
              session.clientUserID = null
              flash.message = "Your email address has been changed, please login again"
              redirect(controller: 'clientLogin', action: 'index')
          }
      }

    Now what I noticed, which was odd, is that if I entered an invalid email then it would not save the changes (as expected), BUT if I entered the wrong password and a valid email then it would save the changes and even write them back to the database, even though it would give the correct "invalid password" error message. I was puzzled, so I set breakpoints in all the if/else if/else blocks and found that it was hitting the first if statement as expected and not hitting the others, so it would never come across a call to the save() method, yet it was saved anyway. After a little research I came across documentation for the discard() method, which you can see used in the code above. So I added this, but still no avail. I even tried using discard and then reloading the client object from the DB again, but still no dice. This is very frustrating and I would be grateful for any help, since I think that this should surely not be a complicated requirement!
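    A hedged note, since the rest of the controller isn't shown: GORM/Hibernate will flush a modified persistent instance when the session closes even without an explicit save(), so the usual trick is simply not to dirty the object until every check has passed (or to load it with ClientUser.read(), which returns a read-only instance). A sketch of the reordered action:

      def saveEmail = {
          if (!clientUserService.checkPassword(session.clientUserID, params.password)) {
              flash.message = "Incorrect Password"
              redirect(action: 'changeEmail')
              return
          }

          def client = ClientUser.get(session.clientUserID)
          client.email = params.email          // only touched once the password is known to be good

          if (!client.save()) {                // save() validates first; returns null on failure
              flash.message = "Invalid Email Address"
              client.discard()
              redirect(action: 'changeEmail')
              return
          }

          session.clientUserID = null
          flash.message = "Your email address has been changed, please login again"
          redirect(controller: 'clientLogin', action: 'index')
      }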

    Read the article

  • Windows 7 Aero Theme Progress Bar Bug?

    - by jmatthias
    I have run into what I consider to be a progress bar bug on Windows 7. To demonstrate the bug I created a WinForms application with a button and a progress bar. In the button's 'on-click' handler I have the following code:

      private void buttonGo_Click(object sender, EventArgs e)
      {
          this.progressBar.Minimum = 0;
          this.progressBar.Maximum = 100;
          this.buttonGo.Text = "Busy";
          this.buttonGo.Update();

          for (int i = 0; i <= 100; ++i)
          {
              this.progressBar.Value = i;
              this.Update();
              System.Threading.Thread.Sleep(10);
          }

          this.buttonGo.Text = "Ready";
      }

    The expected behavior is for the progress bar to advance to 100% and then the button text to change to 'Ready'. However, when developing this code on Windows 7, I noticed that the progress bar would rise to about 75% and then the button text would change to 'Ready'. Assuming the code is synchronous, this should not happen! On further testing I found that the exact same code running on Windows Server 2003 produced the expected results. Furthermore, choosing a non-Aero theme on Windows 7 produces the expected results. In my mind, this seems like a bug. Often it is very hard to make a progress bar accurate when the long operation involves complex code, but in my particular case it was very straightforward, so I was a little disappointed when I found the progress control did not accurately represent the progress. Has anybody else noticed this behavior? Has anybody found a workaround?
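    For what it's worth, the Aero progress bar animates smoothly towards each new value but repaints immediately when the value moves backwards, so a commonly used (if slightly ugly) workaround is to overshoot by one and step back. A hedged sketch, untested against this exact repro:

      static void SetProgressImmediately(ProgressBar bar, int value)
      {
          // Moving forward is animated under Aero; moving backward is painted at once.
          if (value < bar.Maximum)
          {
              bar.Value = value + 1;   // overshoot
              bar.Value = value;       // step back - rendered without the animation lag
          }
          else
          {
              bar.Value = value;       // the final tick still animates
          }
      }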

    Read the article

  • How do I make the manifest available during a Maven/Surefire unittest run "mvn test" ?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the project source repository with the Maven project: http://github.com/znerd/logdoc

    My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

      Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file).

    Now my challenge is that the manifest file is not available when I run the unit tests:

      mvn test

    Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check for the JAR, I find that it has not even been generated. Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run?

    Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.
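    A hedged suggestion rather than a definitive fix: Surefire tests target/classes, where no manifest with an Implementation-Version exists, so a common workaround is to read the version from a Maven-filtered properties file as a fallback. The file name is an invented example: put a one-line file at src/main/resources/library.properties containing version=${project.version}, turn on filtering, and have Library.getVersion() read that resource when getImplementationVersion() returns null. (Binding the jar goal earlier and testing against the JAR is possible too, but it fights the Maven lifecycle.)

      <!-- pom.xml: filter resources so ${project.version} is substituted before Surefire runs -->
      <build>
        <resources>
          <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
          </resource>
        </resources>
      </build>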

    Read the article

  • NOT A DUPLICATE! VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant}First I'd like to say that this IS NOT A DUPLICATE. I've asked this question previously but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010 and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate how about you read the question carefully and try the answer for yourself and see if it actually works. Apologies for the rant but there is no obvious way to contact the SO police that closed the issue or get it reopened. {/rant} At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010? What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work) as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file-level) or as soon as a project fails to build (project-level). Answers for VS 2010 only please. If the macros do work then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.

    Read the article

  • UIDatePicker date method is picking wrong date: iPhone Dev

    - by prd
    Hi, I am getting very strange behaviour from UIDatePicker. I have a view with a date picker declared in the .h file as IBOutlet UIDatePicker *datePicker; with a nonatomic, retain property. datePicker is properly linked in the IB file. In the code I am setting the minimum, maximum and initial dates, and the action to call for UIControlEventValueChanged, using the following code:

      if (!currentDate) {
          initialDate = [NSDate date];
      } else {
          initialDate = currentDate;
      }
      [datePicker setMinimumDate:[NSDate date]];
      [datePicker setMaximumDate:[[NSDate date] addTimeInterval:5 * 365.25 * 24 * 60 * 60]]; // to get up to 5 years
      [datePicker setDate:initialDate animated:YES];
      [datePicker addTarget:self action:@selector(getDatePickerValue:) forControlEvents:UIControlEventValueChanged];

    In getDatePickerValue, I get the new date using datePicker.date. When the view is closed (using a done button), I get the current value of the date using datePicker.date. Now if the view is called with no 'currentDate', the picker returns today's date. This is what happens the first time my pickerView is called. Each subsequent call to the view with no current date gives me a different and later date than today. So the first time I get today's date, say 9 Jun 2010; the second time datePicker.date returns 10 Jun 2010; the third time 11 Jun 2010; and so on. It is not always incremental, but mostly it is. I have put in NSLogs and verified the initial date is set correctly. The problem is only on the device (on OS 3.0); the issue is not replicated on the simulator. I can't find what I have done wrong. I hope somebody else has come across a similar problem and can help me resolve this.

    Read the article

  • Is extending a base class with non-virtual destructor dangerous in C++

    - by Akusete
    Take the following code:

      class A { };
      class B : public A { };
      class C : public A { int x; };

      int main (int argc, char** argv)
      {
          A* b = new B();
          A* c = new C();

          // in both cases, only ~A() is called, not ~B() or ~C()
          delete b; // is this ok?
          delete c; // does this line leak memory?

          return 0;
      }

    When calling delete on a class with a non-virtual destructor with members (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no members and no explicit destructor behaviour (like class B), is everything ok?

    I ask this because I wanted to create a class to extend std::string (which I know is not recommended, but for the sake of the discussion just bear with it) and overload the +=, + operators. -Weffc++ gives me a warning because std::string has a non-virtual destructor, but does it matter if the subclass has no members and does not need to do anything in its destructor?

    FYI the += overload was to do proper file path formatting, so the path class could be used like:

      class path : public std::string {
          //... overload +=, +
          //... add last_path_component, remove_path_component, ext, etc...
      };

      path foo = "/some/file/path";
      foo = foo + "filename.txt";
      //and so on...

    I just wanted to make sure someone doing this:

      path* foo = new path();
      std::string* bar = foo;
      delete bar;

    would not cause any problems with memory allocation.
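    A hedged alternative that sidesteps the destructor question entirely: keep std::string as the storage type and add the path behaviour as free functions (names invented for illustration), so there is never a path* being deleted through a std::string*.

      #include <string>

      // join two path components, inserting a separator only when needed
      std::string join_path(const std::string& dir, const std::string& leaf)
      {
          if (dir.empty() || dir[dir.size() - 1] == '/')
              return dir + leaf;
          return dir + "/" + leaf;
      }

      // usage: std::string foo = join_path("/some/file/path", "filename.txt");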

    Read the article

  • confusing fork system call

    - by benjamin button
    Hi, I was just checking the behaviour of the fork system call and I found it very confusing. I saw on a website that Unix will make an exact copy of the parent's address space and give it to the child; therefore, the parent and child processes have separate address spaces.

      #include <stdio.h>
      #include <sys/types.h>

      int main(void)
      {
          pid_t pid;
          char y = 'Y';
          char *ptr;
          ptr = &y;
          pid = fork();
          if (pid == 0) {
              y = 'Z';
              printf(" *** Child process ***\n");
              printf(" Address is %p\n", ptr);
              printf(" char value is %c\n", y);
              sleep(5);
          } else {
              sleep(5);
              printf("\n ***parent process ***\n", &y);
              printf(" Address is %p\n", ptr);
              printf(" char value is %c\n", y);
          }
      }

    The output of the above program is:

      *** Child process ***
      Address is 69002894
      char value is Z

      ***parent process ***
      Address is 69002894
      char value is Y

    So from the above-mentioned statement it seems that the child and parent have separate address spaces; this is the reason why the char value is printed differently. But then why am I seeing the same address for the variable in both the child and parent processes? Please help me understand this!

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (hotspot error). Before I go too deep into figuring it out, I'd like to know if I really have a problem or not :-) I have a few thread pools created via:

      Executors.newFixedThreadPool(10);

    The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later call Future.get() to get the result, it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point in time that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level. So my question is along the lines of: is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?
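    A hedged sketch of the kind of housekeeping that usually matters here (futures, handle, Result and log are illustrative names): each retained Future keeps its result or wrapped exception - stack trace and all - reachable, so dropping the references once get() has been called lets that memory be collected. Note that "top" still won't show the heap shrinking, since the JVM rarely returns freed heap to the OS.

      // drain and discard completed futures so their results/exceptions can be GC'd
      for (Iterator<Future<Result>> it = futures.iterator(); it.hasNext(); ) {
          Future<Result> future = it.next();
          try {
              handle(future.get());
          } catch (ExecutionException e) {
              log.warn("task failed", e.getCause());   // the original connection failure
          } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
          } finally {
              it.remove();   // no need to call cancel() on a task that has already finished
          }
      }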

    Read the article
