Search Results

Search found 31165 results on 1247 pages for 'coin change'.


  • Why am I unable to use the up/down arrows in my BIOS?

    - by nexus
    My BIOS loads, but when I want to change anything the up/down arrows don't work. What might be the problem? I want to change the boot options, but I'm unable to select the boot-from-CD option. My BIOS version is F.23. Do I need to upgrade it? I need my system running continuously for 2 more months and I'm afraid there might be some problem with upgrading. But anyway, what might be the cause of this behavior? For your info, I have the Pavilion dv6 series laptop, BIOS version: F.23, processor: Intel(R) Core(TM) i3 CPU M 370 @ 2.40 GHz

    Read the article

  • Multiple variables for the same command

    - by Lowzenza
    I'm new to cmd and was wondering if there's an easier way of retyping a variable in a command. For example, I have to run two commands for a set of 96 files, and each time I hit the up arrow key, get my old commands back, and change a variable from 1 to 2, then 2 to 3, and so forth. i.e.: Desktop\InitialProcess_230 Process230input.fasta -output Process230.fasta Then each time I want to do the next file, which would be InitialProcess_231 and so on, I change that in the command by scrolling along, removing the 0 and putting in a 1. Doing that for almost 100 files seems like a hassle.
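
    If the only thing that changes between runs is the counter, a cmd FOR /L loop can generate all 96 command lines instead of editing them by hand. A minimal sketch, assuming the command really is the (possibly truncated) one quoted above and the counter runs from 230 to 325; in a .bat file the loop variable is written %%N, at an interactive prompt it would be a single %N:

        @echo off
        rem run the processing step once per counter value, 230 through 325
        for /L %%N in (230,1,325) do (
            Desktop\InitialProcess_%%N Process%%Ninput.fasta -output Process%%N.fasta
        )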

    Read the article

  • Excel: Find a specific cell and paste the value from a control cell into it

    - by G-Edinburgh
    I have two columns: one containing the room number, e.g. B-CL102, the other containing a varying integer. I want to enter a different, manually determined integer in a third column. Whether by macro or native Excel, is there a way to use two control cells at the top of the sheet, typing the room number into one and the new integer for that room into the other, and have that value filled into the third column on the matching row? I have minimal experience with macros, essentially just the basics. I tried using a VLOOKUP formula to look at the two control cells (Range) and then fill in the new column; however, I don't know how to then fix that value so that it doesn't change when I change the values in the control cells.

    Read the article

  • Why is the word PERSONAL still relevant in the term PC? [closed]

    - by Bill
    I have spent half an hour trying to change an icon on my Win-7-64 machine (Why Can't I Change the Icon). One reasonable suggestion (reasonable in terms of having a solution, not reasonable in terms of having to jump through these hoops for such a basic requirement) was to delete the old icon from %userprofile%\Local Settings...; however, when I click on this folder in Windows Explorer I am told the folder is not accessible - Access Denied. Well! It's my PERSONAL computer, isn't it? Isn't that what PC stands for? It's MY computer - why can't I get access to that folder? It's about time we started calling these machines MCs (Microsoft Computer) or WCs (Windows Computer) - because they sure as hell ain't PERSONAL damn computers!!!!

    Read the article

  • Roughly what percentage of users will reach a changed DNS record?

    - by user3722246
    If my main server goes offline for some reason for more than an hour, I'm planning to make a DNS change so users will access a secondary server. It is not a perfect solution for reducing downtime, but it is simple and would work. I'm just not sure how useful it is, so I have a question: if I make a DNS change to an A record for my domain (changing from one IP to another), roughly what percentage of users will have moved over to the new address within 2 hours? I know this is a vague question and there are lots of variables, but any input is welcome, because I have had painful downtime experiences and don't want to repeat them. Thanks

    Read the article

  • How do I synchronize two folders in Windows 7 in real time?

    - by acme
    I want Windows 7 to synchronize two folders in real time (maybe by running a service that monitors a folder). Basically, I want to monitor a folder and synchronize each change (new files, changed files, deleted files) to another drive. It has to be in real time, so everything is synchronized instantly when a change happens. One-direction synchronization is enough. I tried Microsoft's SyncToy, but it only syncs manually or on a schedule. Can this be achieved with Windows 7 itself, or does anyone know a freeware application for this?
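
    One thing worth noting: Windows 7 does ship with a near-real-time option, since robocopy can monitor a folder and copy again whenever it sees changes. It is not instant on a per-file basis, but it needs no extra software. A minimal sketch, assuming C:\Watched is the source and D:\Backup the destination (both hypothetical paths):

        rem /MIR mirrors the tree (including deletions in the destination),
        rem /MON:1 keeps robocopy running and re-copies when changes are detected
        robocopy C:\Watched D:\Backup /MIR /MON:1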

    Read the article

  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
    I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question. Say that there are NSMO's called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine. The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all the future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted," in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are: 1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, esp. if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess. 2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold those just those details. 3) Recognize that Play objects are actually fulfilling 2 functions: one to be a template for a Down, and the other to record what template was used. These 2 functions have a different relationship with a Down. The first (template) has a to-many relationship. But the second (record) has a one-to-one relationship. This would mean creating a second object, something like "Play-Template" which would retain the to-many relationship with Downs. Play objects would get reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store what template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem. Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However if anyone has a better idea or even just a confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game, it's just faster/easier to use a metaphor everyone understands.) Thanks.

    Read the article

  • jQuery $.getJSON only works once in Internet Explorer. Help please!

    - by JasperS
    I have a php function which inserts a searchbar into each page on a website. The site checks to see if the user has javascript enabled and if they do it inserts some jquery ajax stuff to link select boxes (instead of using it's fallback onchange="form.submit()"). $.getJSON works perfectly for me in other browsers except in IE, if I do a full page refresh (ctrl+F5) in IE my ajax works flawlessly until I navigate to a new page (or the same page with $PHP_SELF) either by submiting the form or clicking a link the jquery onchange function fires but then jquery throws an error: Webpage error details Message: Object doesn't support this property or method Line: 123 Char: 183 Code: 0 URI: http://~#UNABLE~TO~DISCLOSE#~/jquery-1.4.2.min.js It seems like jquery function $.getJSON() is gone??? This seems to be some kind of caching issue as it happens on the second page load but I think i've go all the caching prevention in place anyways, here's a snipet of the code that ads the jquery functions: if (isset($_SESSION['NO_SCRIPT']) == true && $_SESSION['NO_SCRIPT'] == false) { $html .= '<script type="text/javascript" charset="utf-8">'; $html .= '$.ajaxSetup({ cache: false });'; $html .= '$.ajaxSetup({"error":function(XMLHttpRequest,textStatus, errorThrown) { alert(textStatus); alert(errorThrown); alert(XMLHttpRequest.responseText); }});'; $html .= '</script>'; $html .= '<script type="text/javascript" charset="utf-8">'; $html .= '$(function(){ $("select#searchtype").change(function() { '; $html .= 'alert("change fired!"); '; $html .= '$.getJSON("ajaxgetcategories.php", {id: $(this).val()}, function(j) { '; $html .= 'alert("ajax returned!"); '; $html .= 'var options = \'\'; '; $html .= 'options += \'<option value="0" >--\' + j[0].all + \'--</option>\'; '; $html .= 'for (var i = 0; i < j.length; i++) { options += \'<option value="\' + j[i].id + \'">\' + j[i].name + \'</option>\'; } '; $html .= '$("select#searchcategory").html(options); }) }) }) '; $html .= '</script> '; $html .= '<script type="text/javascript" charset="utf-8"> '; $html .= '$(function(){ $("select#searchregion").change(function() { '; $html .= 'alert("change fired!"); '; $html .= '$.getJSON("ajaxgetcountries.php", {id: $(this).val()}, function(j) { '; $html .= 'alert("ajax returned!"); '; $html .= 'var options = \'\'; '; $html .= 'options += \'<option value="0" >--\' + j[0].all + \'--</option>\'; '; $html .= 'for (var i = 0; i < j.length; i++) { options += \'<option value="\' + j[i].id + \'">\' + j[i].name + \'</option>\'; } '; $html .= '$("select#searchcountry").html(options); }) }) }) '; $html .= '</script> '; }; return $html; remember, this is part of a php funtion that inserts a script into the html and sorry if it looks a bit messy, I'm new to PHP and Javascript and I'm a bit untidy too :) Please also remember that this works perfectly in IE on the first visit but after any navigation I get the error. Thanks guys

    Read the article

  • UIButtons in UIScrollView stop it scrolling, how to fix?

    - by Mark McFarlane
    Hi all, I have a problem with adding a set of UIButtons to a UIScrollView. With the buttons added it seems that the scroll command is not being passed to the scroll view as the buttons cover the whole surface. I've read various posts on this but still can't figure it out. I'm pretty new to iPhone programming so there's probably something obvious I've missed. I've tried things like canCancelContentTouches and delaysContentTouches to TRUE and FALSE. Here is the code I use to add the buttons to the scrollview. The scrollview was created in IB and is passed in to the function: -(void)drawCharacters:(NSMutableArray *)chars:(UIScrollView *)target_view { //NSLog(@"CLEARING PREVIOUS BUTTONS"); //clear previous buttons for parts/kanji if(target_view == viewParts){ for (UIButton *btn in parts_buttons) { [btn removeFromSuperview]; } [parts_buttons removeAllObjects]; } else if(target_view == viewKanji){ for (UIButton *btn in kanji_buttons) { [btn removeFromSuperview]; } [kanji_buttons removeAllObjects]; } //display options int chars_per_line = 9; //change this to change the number of chars displayed on each line if(target_view == viewKanji){ chars_per_line = 8; } int char_gap = 0; //change this to change the margin between chars int char_dimensions = (320/chars_per_line)-(char_gap*2); //set starting x, y coords int x = 0, y = 0; //increment y coord y += char_gap; //NSLog(@"ABOUT TO DRAW FIRST BUTTON"); for(NSMutableArray *char_arr in chars){ //increment x coord x += char_gap; //draw at x and y UIButton *myButton = [UIButton buttonWithType:UIButtonTypeCustom]; myButton.frame = CGRectMake(x, y, char_dimensions, char_dimensions); // position in the parent view and set the size of the button [myButton setTitle:[char_arr objectAtIndex:0] forState:UIControlStateNormal]; [myButton setTitleColor:[UIColor darkGrayColor] forState:UIControlStateNormal]; [myButton setTitleColor:[UIColor colorWithRed:0.8 green:0.8 blue:0.8 alpha:1.0] forState:UIControlStateDisabled]; myButton.titleLabel.font = [UIFont boldSystemFontOfSize:22]; if(target_view == viewKanji){ myButton.titleLabel.font = [UIFont boldSystemFontOfSize:30]; } // add targets and actions if(target_view == viewParts){ [myButton addTarget:self action:@selector(partSelected:) forControlEvents:UIControlEventTouchUpInside]; } else if(target_view == viewKanji){ [myButton addTarget:self action:@selector(kanjiSelected:) forControlEvents:UIControlEventTouchUpInside]; [myButton setTag:(int)[char_arr objectAtIndex:1]]; } //if the part isnt in the current list of kanji, disable and dim it if(target_view == viewParts){ if([kanji count] > 0){ [myButton setEnabled:NO]; } bool do_break = NO; for(NSMutableArray *arr in kanji){ //NSLog([NSString stringWithFormat:@"CHECKING PARTS AGAINST %d KANJI", [kanji count]]); for(NSString *str in [[arr objectAtIndex:2] componentsSeparatedByString:@" "]){ if(([myButton.titleLabel.text isEqualToString:str])){ //NSLog(@"--------------MATCH!!!-----------------"); [myButton setEnabled:YES]; for(NSString *str1 in parts_selected){ if(([myButton.titleLabel.text isEqualToString:str1])){ [myButton setEnabled:NO]; break; } } do_break = YES; break; } if(do_break) break; } if(do_break) break; } } // add to a view [target_view addSubview:myButton]; //update coords of next button x += char_dimensions+ char_gap; if (x > (320-char_dimensions)) { x = 0; y += char_dimensions + (char_gap*2); } //add button to global array to be removed from view next update if(target_view == viewParts){ [parts_buttons addObject:myButton]; } else if(target_view == 
viewKanji){ [kanji_buttons addObject:myButton]; } } //NSLog(@"FINISHED DRAWING ALL BUTTONS"); } Any help on this would be greatly appreciated. I just need to fix this to finish my app.

    Read the article

  • How to use jQuery to set the value of 2 HTML form select elements depending on the value of another

    - by Chris Stevenson
    My Javascript and JQuery skills are poor at best and this is ** I have the following three elements in a form : <select name="event_time_start_hours"> <option value="blank" disabled="disabled">Hours</option> <option value="blank" disabled="disabled">&nbsp;</option> <option value="01">1</option> <option value="02">2</option> <option value="03">3</option> <option value="04">4</option> <option value="05">5</option> <option value="06">6</option> <option value="07">7</option> <option value="08">8</option> <option value="09">9</option> <option value="10">10</option> <option value="11">11</option> <option value="12">12</option> <option value="midnight">Midnight</option> <option value="midday">Midday</option> </select> <select name="event_time_start_minutes"> <option value="blank" disabled="disabled">Minutes</option> <option value="blank" disabled="disabled">&nbsp;</option> <option value="00">00</option> <option value="15">15</option> <option value="30">30</option> <option value="45">45</option> </select> <select name="event_time_start_ampm"> <option value="blank" disabled="disabled">AM / PM</option> <option value="blank" disabled="disabled">&nbsp;</option> <option value="am">AM</option> <option value="pm">PM</option> </select> Quite simply, when either 'midnight' or 'midday' is selected in "event_time_start_hours", I want the values of "event_time_start_minutes" and "event_time_start_ampm" to change to "00" and "am" respectively. My VERY poor piece of Javascript says this so far : $(document).ready(function() { $('#event_time_start_hours').change(function() { if($('#event_time_start_hours').val('midnight')) { $('#event_time_start_minutes').val('00'); } }); }); ... and whilst I'm not terribly surprised it doesn't work, I'm at a loss as to what to do next. I want to do this purely for visual reasons for the user as when the form submits I ignore the "minutes" and "am/pm". I'm trying to decide whether it would be best to change the selected values, change the selected values and then disable the element or hide them altogether. However, without any success in getting anything to happen at all I haven't been able to try the different approaches to see what feels right. I've ruled out the obvious things like a duplicate element ID or simply not linking to JQuery. Thank you.
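
    Two things in the snippet above stand out: the select elements are declared with name attributes rather than ids, so the #event_time_start_hours selector matches nothing, and .val('midnight') sets the value rather than reading it. A minimal sketch of one way this is commonly written, assuming jQuery is loaded and selecting by name instead:

        $(document).ready(function () {
            $('select[name="event_time_start_hours"]').change(function () {
                var hours = $(this).val();
                if (hours === 'midnight' || hours === 'midday') {
                    // force the other two selects to 00 and AM for the special values
                    $('select[name="event_time_start_minutes"]').val('00');
                    $('select[name="event_time_start_ampm"]').val('am');
                }
            });
        });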

    Read the article

  • Font goes back to default size

    - by Bladimir Ruiz
    Every time I change the font, it goes back to the default size, which is 12, even if I change it before with the "Tamano" menu, it only goes back to 12 every time, my guess would be the way I change the size with deriveFont(), but don't I now any other way to change it. public static class cambiar extends JFrame { public cambiar() { final Font aryal = new Font("Comic Sans MS", Font.PLAIN, 12); JFrame ventana = new JFrame("Cambios en el Texto!"); JPanel adentro = new JPanel(); final JLabel texto = new JLabel("Texto a Cambiar!"); texto.setFont(aryal); JMenuBar menu = new JMenuBar(); JMenu fuentes = new JMenu("Fuentes"); /* Elementos de Fuentes */ JMenuItem arial = new JMenuItem("Arial"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { Font arrrial = new Font("Arial", Font.PLAIN, 12); float tam = (float) texto.getFont().getSize(); String hola = String.valueOf(tam); texto.setFont(arrrial); texto.setFont(texto.getFont().deriveFont(tam)); } }); fuentes.add(arial); /* FIN Fuentes */ JMenu tamano = new JMenu("Tamano"); /* Elementos de Tamano */ JMenuItem font13 = new JMenuItem("13"); font13.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(texto.getFont().deriveFont(23.0f)); } }); JMenuItem font14 = new JMenuItem("14"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); JMenuItem font15 = new JMenuItem("15"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); JMenuItem font16 = new JMenuItem("16"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); JMenuItem font17 = new JMenuItem("17"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); JMenuItem font18 = new JMenuItem("18"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); JMenuItem font19 = new JMenuItem("19"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); JMenuItem font20 = new JMenuItem("20"); arial.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { texto.setFont(aryal); } }); tamano.add(font13); /* FIN tanano */ JMenu tipo = new JMenu("Tipo"); /* Elementos de tipo */ /* FIN tipo */ /* Elementos del JMENU */ menu.add(fuentes); menu.add(tamano); menu.add(tipo); /* FIN JMENU */ /* Elementos del JPanel */ adentro.add(menu); adentro.add(texto); /* FIN JPanel */ /* Elementos del JFRAME */ ventana.add(adentro); ventana.setVisible(true); ventana.setSize(250, 250); /* FIN JFRAME */ } } Thanks in Advance!
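
    The reset happens because every family listener installs a Font built with a hard-coded size of 12 (aryal is likewise fixed at 12), so whatever size the "Tamano" menu applied is discarded. Reading the current size from the label before switching families keeps it. A sketch of one listener only, assuming the class's existing imports (java.awt.Font, java.awt.event.*):

        arial.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                // change only the family; keep whatever style and size the label has now
                Font current = texto.getFont();
                texto.setFont(new Font("Arial", current.getStyle(), current.getSize()));
            }
        });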

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it but I would like your opinion The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhitespace(email)) throw new DomainException(“...”); if(!email.IsEmail()) throw new DomainException(); if(email.Contains(“@mailinator.com”)) throw new DomainException(); } } I actually do not like this validation because even when I am encapsulating the validation logic in the correct entity, this is violating the Open/Close principle (Open for extension but Close for modification) and I have found that violating this principle, code maintenance becomes a real pain when the application grows up in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, hard to maintain but the real reason why I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example but in RL the validation could be more complex So following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realize that in order to follow this approach I had to mutate my entities first in order to pass the value being valdiated, in this case the email, but mutating them would cause my domain events being fired which I wouldn’t like to happen until the new email is valid So after considering these approaches, I came out with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException(“...”); } } Well that’s the main idea, the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it. Now the dilemma, I am happy with a design like this because my validation is encapsulated in individual objects which brings many advantages: easy unit test, easy to maintain, domain invariants are explicitly expressed using the Ubiquitous Language, easy to extend, validation logic is centralized and validators can be used together to enforce complex domain rules. And even when I know I am placing the validation of my entities outside of them (You could argue a code smell - Anemic Domain) but I think the trade-off is acceptable But there is one thing that I have not figured out how to implement it in a clean way. How should I use this components... 
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options: Pass the validators to each method of my entity Validate my objects externally (from the command handler) I am not happy with the option 1 so I would explain how I would do it with the option 2 class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { public void Execute(ChangeEmailCommand command) { private IEnumerable<IDomainInvariantValidator> validators; // here I would get the validators required for this command injected, and in here I would validate them, something like this using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x =. x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well this is it. Can you give me your thoughts about this or share your experiences with Domain entities validation EDIT I think it is not clear from my question, but the real problem is: Hiding the domain rules has serious implications in the future maintainability of the application, and also domain rules change often during the life-cycle of the app. Hence implementing them with this in mind would let us extend them easily. Now imagine in the future a rules engine is implemented, if the rules are encapsulated outside of the domain entities, this change would be easier to implement

    Read the article

  • Public class DiscoLight help

    - by luvthug
    Hi All, If some one can point me in the right direction for this code for my assigment I would really appreciate it. I have pasted the whole code that I need to complete but I need help with the following method public void changeColour(Circle aCircle) which is meant to allow to change the colour of the circle randomly, if 0 comes the light of the circle sgould change to red, 1 for green and 2 for purple. public class DiscoLight { /* instance variables */ private Circle light; // simulates a circular disco light in the Shapes window private Random randomNumberGenerator; /** * Default constructor for objects of class DiscoLight */ public DiscoLight() { super(); this.randomNumberGenerator = new Random(); } /** * Returns a randomly generated int between 0 (inclusive) * and number (exclusive). For example if number is 6, * the method will return one of 0, 1, 2, 3, 4, or 5. */ public int getRandomInt(int number) { return this.randomNumberGenerator.nextInt(number); } /** * student to write code and comment here for setLight(Circle) for Q4(i) */ public void setLight(Circle aCircle) { this.light = aCircle; } /** * student to write code and comment here for getLight() for Q4(i) */ public Circle getLight() { return this.light; } /** * Sets the argument to have a diameter of 50, an xPos * of 122, a yPos of 162 and the colour GREEN. * The method then sets the receiver's instance variable * light, to the argument aCircle. */ public void addLight(Circle aCircle) { //Student to write code here, Q4(ii) this.light = aCircle; this.light.setDiameter(50); this.light.setXPos(122); this.light.setYPos(162); this.light.setColour(OUColour.GREEN); } /** * Randomly sets the colour of the instance variable * light to red, green, or purple. */ public void changeColour(Circle aCircle) { //student to write code here, Q4(iii) if (getRandomInt() == 0) { this.light.setColour(OUColour.RED); } if (this.getRandomInt().equals(1)) { this.light.setColour(OUColour.GREEN); } else if (this.getRandomInt().equals(2)) { this.light.setColour(OUColour.PURPLE); } } /** * Grows the diameter of the circle referenced by the * receiver's instance variable light, to the argument size. * The diameter is incremented in steps of 2, * the xPos and yPos are decremented in steps of 1 until the * diameter reaches the value given by size. * Between each step there is a random colour change. The message * delay(anInt) is used to slow down the graphical interface, as required. */ public void grow(int size) { //student to write code here, Q4(iv) } /** * Shrinks the diameter of the circle referenced by the * receiver's instance variable light, to the argument size. * The diameter is decremented in steps of 2, * the xPos and yPos are incremented in steps of 1 until the * diameter reaches the value given by size. * Between each step there is a random colour change. The message * delay(anInt) is used to slow down the graphical interface, as required. */ public void shrink(int size) { //student to write code here, Q4(v) } /** * Expands the diameter of the light by the amount given by * sizeIncrease (changing colour as it grows). * * The method then contracts the light until it reaches its * original size (changing colour as it shrinks). */ public void lightCycle(int sizeIncrease) { //student to write code here, Q4(vi) } /** * Prompts the user for number of growing and shrinking * cycles. Then prompts the user for the number of units * by which to increase the diameter of light. * Method then performs the requested growing and * shrinking cycles. 
*/ public void runLight() { //student to write code here, Q4(vii) } /** * Causes execution to pause by time number of milliseconds */ private void delay(int time) { try { Thread.sleep(time); } catch (Exception e) { System.out.println(e); } } }
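
    As written, changeColour calls getRandomInt() with no argument and uses equals() on an int, neither of which compiles, and each comparison would draw a fresh random number anyway. One common shape for the fix is sketched below, drawing the number once and branching on it; the course materials may expect a particular style, so treat this as a sketch only:

        public void changeColour(Circle aCircle)
        {
            int colour = this.getRandomInt(3);   // 0, 1 or 2
            if (colour == 0) {
                this.light.setColour(OUColour.RED);
            } else if (colour == 1) {
                this.light.setColour(OUColour.GREEN);
            } else {
                this.light.setColour(OUColour.PURPLE);
            }
        }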

    Read the article

  • QGraphicsView and custom Cursors

    - by Etienne de Martel
    I am trying to make use of a mix of custom cursors and preset cursors for my QGraphicsView. In my implementation we have created a notion of "modes" for the view. Meaning that depending on what "mode" the user is in, different things will happen on the left-click, or left-click drag. Anyway, none of that is the problem, just the context. The problem arises when I try to change the cursor for each mode. For instance, for mode 1 we want to show the regular Arrow cursor, but for mode 2, we want to use a custom pixmap. Seemingly simple we call graphicsview->viewport()->setCursor(Qt::QArrowCursor)  when we are switching to mode 1, and graphicsview->viewport()->setCursor(our custom cursor) for mode 2. Except it doesn't work at all. Firstly, the cursor does not change to the custom cursor. That is the first problem. However, if through another operation the drag mode of the graphics view gets set to ScrollHandDrag, the cursor will switch to the custom cursor once the drag operation is complete. Weird. But the plot thickens... Once we switch to the custom cursor, it can never be changed back to the ArrorCursor no matter how many times we call setCursor(Qt::QArrowCursor). it also doesn't seem to matter whether I call setCursor on the viewport or the graphics view itself. So, just for fun, I added a call to graphicsview->unsetCursor() just before we want to change the cursor, and that at least rectifies the second problem. The cursor changes just fine so long as we do a little HandDragging in between. Better, but certainly not optimal. However it should be noted, that doing the unsetCursor on the viewport doesn't work. it must absolutely be done on the graphicsview - regardless of the fact that we are setting the cursor on the viewport. To completely patch over the problem I have added these two lines after I set the cursor: graphicsview->setDragMode(QGraphicsView::ScrollHandDrag); graphicsview->setDragMode(QGraphicsView::NoDrag); Which works, but ye gads!! So something magical is happening inside these two methods that fixes the problem, but glancing at the code I don't see what. Something to do with the fact that the drag mode is changing the cursor I imagine. Just for completeness, I should also mention that the thing that triggers the mode change, is a QPushButton that has been added to the scene using QGraphicsScene->addWidget(). I don't know if that has anything to do with it, but you never know. I am hoping that either someone could clarify why I need to make these seemingly random calls. I don't think I am doing anything wrong anywhere. Thanks in advance for any help. EDIT: Here is an actual code example with the cursor patches as described above. You can look at and/or download them from the link below. It was a little long to paste here. I included the framework around which the cursors are changed, because I have a funny feeling that that is important somehow. https://gist.github.com/712654 The code where the problem lies is in MyGraphicsView.cpp starting at line 104. This is where the cursor is set in the graphics view. It is exactly as described above. Keep in mind, with the very ugly patches in place the cursors do work - more or less. Without those lines you will see very clearly the problems listed in the post above. Also included in the link, is all the code for a mainWindow that uses the view, etc... the only thing missing are the images I am using. But the images themselves don't matter, any 16x16 pngs will do.

    Read the article

  • "didChangeSection:" NSfetchedResultsController delegate method not being called

    - by robenk
    I have a standard split view controller, with a detail view and a table view. Pressing a button in the detail view can cause the an object to change its placement in the table view's ordering. This works fine, as long as the resulting ordering change doesn't result in a section being added or removed. I.e. an object can change it's ordering in a section or switch from one section to another. Those ordering changes work correctly without problems. But, if the object tries to move to a section that doesn't exist yet, or is the last object to leave a section (therefore requiring the section its leaving to be removed), then the application crashes. NSFetchedResultsControllerDelegate has methods to handle sections being added and removed that should be called in those cases. But those delegate methods aren't being called for some reason. The code in question, is boilerplate: - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller { NSLog(@"willChangeContent"); [self.tableView beginUpdates]; } - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type { NSLog(@"didChangeSection"); switch(type) { case NSFetchedResultsChangeInsert: [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath { NSLog(@"didChangeObject"); UITableView *tableView = self.tableView; switch(type) { case NSFetchedResultsChangeInsert: [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeUpdate: [self configureCell:[tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath]; break; case NSFetchedResultsChangeMove: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { NSLog(@"didChangeContent"); [self.tableView endUpdates]; [detailViewController.reminderView update]; } Starting the application, and then causing the last object to leave a section results in the following output: 2011-01-08 23:40:18.910 Reminders[54647:207] willChangeContent 2011-01-08 23:40:18.912 Reminders[54647:207] didChangeObject 2011-01-08 23:40:18.914 Reminders[54647:207] didChangeContent 2011-01-08 23:40:18.915 Reminders[54647:207] *** Assertion failure in -[UITableView _endCellAnimationsWithContext:], /SourceCache/UIKit_Sim/UIKit-1145.66/UITableView.m:825 2011-01-08 23:40:18.917 Reminders[54647:207] Serious application error. Exception was caught during Core Data change processing: Invalid update: invalid number of sections. 
The number of sections contained in the table view after the update (5) must be equal to the number of sections contained in the table view before the update (6), plus or minus the number of sections inserted or deleted (0 inserted, 0 deleted). with userInfo (null) As you can see, "willChangeContent", "didChangeObject" (moving the object in question), and "didChangeContent" were all called properly. Based on the Apple's NSFetchedResultsControllerDelegate documentation "didChangeSection" should have been called before "didChangeObject", which would have prevented the exception causing the crash. So I guess the question is how do I assure that didChangeSection gets called? Thanks in advance for any help!

    Read the article

  • Accessing Layout Items from inside Widget AppWidgetProvider

    - by cam4mav
    I am starting to go insane trying to figure this out. It seems like it should be very easy, I'm starting to wonder if it's possible. What I am trying to do is create a home screen widget, that only contains an ImageButton. When it is pressed, the idea is to change some setting (like the wi-fi toggle) and then change the Buttons image. I have the ImageButton declared like this in my main.xml <ImageButton android:id="@+id/buttonOne" android:src="@drawable/button_normal_ringer" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" /> my AppWidgetProvider class, named ButtonWidget * note that the RemoteViews class is a locally stored variable. this allowed me to get access to the RViews layout elements... or so I thought. @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { remoteViews = new RemoteViews(context.getPackageName(), R.layout.main); Intent active = new Intent(context, ButtonWidget.class); active.setAction(VIBRATE_UPDATE); active.putExtra("msg","TESTING"); PendingIntent actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0); remoteViews.setOnClickPendingIntent(R.id.buttonOne, actionPendingIntent); appWidgetManager.updateAppWidget(appWidgetIds, remoteViews); } @Override public void onReceive(Context context, Intent intent) { // v1.5 fix that doesn't call onDelete Action final String action = intent.getAction(); Log.d("onReceive",action); if (AppWidgetManager.ACTION_APPWIDGET_DELETED.equals(action)) { final int appWidgetId = intent.getExtras().getInt( AppWidgetManager.EXTRA_APPWIDGET_ID, AppWidgetManager.INVALID_APPWIDGET_ID); if (appWidgetId != AppWidgetManager.INVALID_APPWIDGET_ID) { this.onDeleted(context, new int[] { appWidgetId }); } } else { // check, if our Action was called if (intent.getAction().equals(VIBRATE_UPDATE)) { String msg = "null"; try { msg = intent.getStringExtra("msg"); } catch (NullPointerException e) { Log.e("Error", "msg = null"); } Log.d("onReceive",msg); if(remoteViews != null){ Log.d("onReceive",""+remoteViews.getLayoutId()); remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer); Log.d("onReceive", "tried to switch"); } else{ Log.d("F!", "--naughty language used here!!!--"); } } super.onReceive(context, intent); } } so, I've been testing this and the onReceive method works great, I'm able to send notifications and all sorts of stuff (removed from code for ease of reading) the one thing I can't do is change any properties of the view elements. To try and fix this, I made RemoteViews a local and static private variable. Using log's I was able to see that When multiple instances of the app are on screen, they all refer to the one instance of RemoteViews. perfect for what I'm trying to do The trouble is in trying to change the image of the ImageButton. I can do this from within the onUpdate method using this. remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer); that doesn't do me any good though once the widget is created. For some reason, even though its inside the same class, being inside the onReceive method makes that line not work. That line used to throw a Null pointer as a matter of fact, until I changed the variable to static. now it passes the null test, refers to the same layoutId as it did at the start, reads the line, but it does nothing. Its like the code isn't even there, just keeps chugging along. SO...... 
Is there any way to modify layout elements from within a widget after the widget has been created!? I want to do this based on the environment, not with a configuration activity launch. I've been looking at various questions and this seems to be an issue that really hasn't been solved, such as link text and link text oh and for anyone who finds this and wants a good starting tutorial for widgets, this is easy to follow (though a bit old, it gets you comfortable with widgets) .pdf link text hopefully someone can help here. I kinda have the feeling that this is illegal and there is a different way to go about this. I would LOVE to be told another approach!!!! Thanks
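
    A RemoteViews object is only a description of the widget's layout; modifying it has no visible effect until it is pushed back to the widget host. So the setImageViewResource call in onReceive likely runs, but nothing re-delivers the views. A minimal sketch of the missing step, assuming it runs inside ButtonWidget.onReceive and that the class imports android.content.ComponentName:

        // after modifying remoteViews in onReceive:
        remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);

        // push the updated RemoteViews back to every instance of this widget
        AppWidgetManager manager = AppWidgetManager.getInstance(context);
        ComponentName widget = new ComponentName(context, ButtonWidget.class);
        manager.updateAppWidget(widget, remoteViews);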

    Read the article

  • July 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m super excited to announce the July 2013 release of the Ajax Control Toolkit. You can download the new version of the Ajax Control Toolkit from CodePlex (http://ajaxControlToolkit.CodePlex.com) or install the Ajax Control Toolkit from NuGet: With this release, we have completely rewritten the way the Ajax Control Toolkit combines, minifies, gzips, and caches JavaScript files. The goal of this release was to improve the performance of the Ajax Control Toolkit and make it easier to create custom Ajax Control Toolkit controls. Improving Ajax Control Toolkit Performance Previous releases of the Ajax Control Toolkit optimized performance for a single page but not multiple pages. When you visited each page in an app, the Ajax Control Toolkit would combine all of the JavaScript files required by the controls in the page into a new JavaScript file. So, even if every page in your app used the exact same controls, visitors would need to download a new combined Ajax Control Toolkit JavaScript file for each page visited. Downloading new scripts for each page that you visit does not lead to good performance. In general, you want to make as few requests for JavaScript files as possible and take maximum advantage of caching. For most apps, you would get much better performance if you could specify all of the Ajax Control Toolkit controls that you need for your entire app and create a single JavaScript file which could be used across your entire app. What a great idea! Introducing Control Bundles With this release of the Ajax Control Toolkit, we introduce the concept of Control Bundles. You define a Control Bundle to indicate the set of Ajax Control Toolkit controls that you want to use in your app. You define Control Bundles in a file located in the root of your application named AjaxControlToolkit.config. For example, the following AjaxControlToolkit.config file defines two Control Bundles: <ajaxControlToolkit> <controlBundles> <controlBundle> <control name="CalendarExtender" /> <control name="ComboBox" /> </controlBundle> <controlBundle name="CalendarBundle"> <control name="CalendarExtender"></control> </controlBundle> </controlBundles> </ajaxControlToolkit> The first Control Bundle in the file above does not have a name. When a Control Bundle does not have a name then it becomes the default Control Bundle for your entire application. The default Control Bundle is used by the ToolkitScriptManager by default. For example, the default Control Bundle is used when you declare the ToolkitScriptManager like this:  <ajaxToolkit:ToolkitScriptManager runat=”server” /> The default Control Bundle defined in the file above includes all of the scripts required for the CalendarExtender and ComboBox controls. All of the scripts required for both of these controls are combined, minified, gzipped, and cached automatically. The AjaxControlToolkit.config file above also defines a second Control Bundle with the name CalendarBundle. Here’s how you would use the CalendarBundle with the ToolkitScriptManager: <ajaxToolkit:ToolkitScriptManager runat="server"> <ControlBundles> <ajaxToolkit:ControlBundle Name="CalendarBundle" /> </ControlBundles> </ajaxToolkit:ToolkitScriptManager> In this case, only the JavaScript files required by the CalendarExtender control, and not the ComboBox, would be downloaded because the CalendarBundle lists only the CalendarExtender control. You can use multiple named control bundles with the ToolkitScriptManager and you will get all of the scripts from both bundles. 
Support for ControlBundles is a new feature of the ToolkitScriptManager that we introduced with this release. We extended the ToolkitScriptManager to support the Control Bundles that you can define in the AjaxControlToolkit.config file. Let me be explicit about the rules for Control Bundles: 1. If you do not create an AjaxControlToolkit.config file then the ToolkitScriptManager will download all of the JavaScript files required for all of the controls in the Ajax Control Toolkit. This is the easy but low performance option. 2. If you create an AjaxControlToolkit.config file and create a ControlBundle without a name then the ToolkitScriptManager uses that Control Bundle by default. For example, if you plan to use only the CalendarExtender and ComboBox controls in your application then you should create a default bundle that lists only these two controls. 3. If you create an AjaxControlToolkit.config file and create one or more named Control Bundles then you can use these named Control Bundles with the ToolkitScriptManager. For example, you might want to use different subsets of the Ajax Control Toolkit controls in different sections of your app. I should also mention that you can use the AjaxControlToolkit.config file with custom Ajax Control Toolkit controls – new controls that you write. For example, here is how you would register a set of custom controls from an assembly named MyAssembly: <ajaxControlToolkit> <controlBundles> <controlBundle name="CustomBundle"> <control name="MyAssembly.MyControl1" assembly="MyAssembly" /> <control name="MyAssembly.MyControl2" assembly="MyAssembly" /> </controlBundle> </ajaxControlToolkit> What about ASP.NET Bundling and Minification? The idea of Control Bundles is similar to the idea of Script Bundles used in ASP.NET Bundling and Minification. You might be wondering why we didn’t simply use Script Bundles with the Ajax Control Toolkit. There were several reasons. First, ASP.NET Bundling does not work with scripts embedded in an assembly. Because all of the scripts used by the Ajax Control Toolkit are embedded in the AjaxControlToolkit.dll assembly, ASP.NET Bundling was not an option. Second, Web Forms developers typically think at the level of controls and not at the level of individual scripts. We believe that it makes more sense for a Web Forms developer to specify the controls that they need in an app (CalendarExtender, ToggleButton) instead of the individual scripts that they need in an app (the 15 or so scripts required by the CalenderExtender). Finally, ASP.NET Bundling does not work with older versions of ASP.NET. The Ajax Control Toolkit needs to support ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Therefore, using ASP.NET Bundling was not an option. There is nothing wrong with using Control Bundles and Script Bundles side-by-side. The ASP.NET 4.0 and 4.5 ToolkitScriptManager supports both approaches to bundling scripts. Using the AjaxControlToolkit.CombineScriptsHandler Browsers cache JavaScript files by URL. For example, if you request the exact same JavaScript file from two different URLs then the exact same JavaScript file must be downloaded twice. However, if you request the same JavaScript file from the same URL more than once then it only needs to be downloaded once. With this release of the Ajax Control Toolkit, we have introduced a new HTTP Handler named the AjaxControlToolkit.CombineScriptsHandler. 
If you register this handler in your web.config file then the Ajax Control Toolkit can cache your JavaScript files for up to one year in the future automatically. You should register the handler in two places in your web.config file: in the <httpHandlers> section and the <system.webServer> section (don’t forget to register the handler for the AjaxFileUpload while you are there!). <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" /> <add verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" /> </httpHandlers> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" /> <add name="CombineScriptsHandler" verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" /> </handlers> <system.webServer> The handler is only used in release mode and not in debug mode. You can enable release mode in your web.config file like this: <compilation debug=”false”> You also can override the web.config setting with the ToolkitScriptManager like this: <act:ToolkitScriptManager ScriptMode=”Release” runat=”server”/> In release mode, scripts are combined, minified, gzipped, and cached with a far future cache header automatically. When the handler is not registered, scripts are requested from the page that contains the ToolkitScriptManager: When the handler is registered in the web.config file, scripts are requested from the handler: If you want the best performance, always register the handler. That way, the Ajax Control Toolkit can cache the bundled scripts across page requests with a far future cache header. If you don’t register the handler then a new JavaScript file must be downloaded whenever you travel to a new page. Dynamic Bundling and Minification Previous releases of the Ajax Control Toolkit used a Visual Studio build task to minify the JavaScript files used by the Ajax Control Toolkit controls. The disadvantage of this approach to minification is that it made it difficult to create custom Ajax Control Toolkit controls. Starting with this release of the Ajax Control Toolkit, we support dynamic minification. The JavaScript files in the Ajax Control Toolkit are minified at runtime instead of at build time. Scripts are minified only when in release mode. You can specify release mode with the web.config file or with the ToolkitScriptManager ScriptMode property. Because of this change, the Ajax Control Toolkit now depends on the Ajax Minifier. You must include a reference to AjaxMin.dll in your Visual Studio project or you cannot take advantage of runtime minification. If you install the Ajax Control Toolkit from NuGet then AjaxMin.dll is added to your project as a NuGet dependency automatically. If you download the Ajax Control Toolkit from CodePlex then the AjaxMin.dll is included in the download. This change means that you no longer need to do anything special to create a custom Ajax Control Toolkit. As an open source project, we hope more people will contribute to the Ajax Control Toolkit (Yes, I am looking at you.) We have been working hard on making it much easier to create new custom controls. More on this subject with the next release of the Ajax Control Toolkit. 
A Single Visual Studio Solution We also made substantial changes to the Visual Studio solution and projects used by the Ajax Control Toolkit with this release. This change will matter to you only if you need to work directly with the Ajax Control Toolkit source code. In previous releases of the Ajax Control Toolkit, we maintained separate solution and project files for ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Starting with this release, we now support a single Visual Studio 2012 solution that takes advantage of multi-targeting to build ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5 versions of the toolkit. This change means that you need Visual Studio 2012 to open the Ajax Control Toolkit project downloaded from CodePlex. For details on how we setup multi-targeting, please see Budi Adiono’s blog post: http://www.budiadiono.com/2013/07/25/visual-studio-2012-multi-targeting-framework-project/ Summary You can take advantage of this release of the Ajax Control Toolkit to significantly improve the performance of your website. You need to do two things: 1) You need to create an AjaxControlToolkit.config file which lists the controls used in your app and 2) You need to register the AjaxControlToolkit.CombineScriptsHandler in the web.config file. We made substantial changes to the Ajax Control Toolkit with this release. We think these changes will result in much better performance for multipage apps and make the process of building custom controls much easier. As always, we look forward to hearing your feedback.

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).Feature Triage TeamA subset of the feature crew, the triage team (which has representations from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or other frequency) dependent on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug considering the evidence and make a decision of whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix this) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure or further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.Bug Opened = Not Yet AssignedSomeone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, if a test case exists or needs to be created etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned."Issue" versus "Fix for issue"The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem).Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of a bug usually hides a spec defect or a new feature request.The person opening the bug should focus on describing the issue, rather than the solution. The person indicates what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.Not Yet Assigned - Not Yet Assigned (on someone else's plate)If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate including potentially changing the repro steps. 
In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team.Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changed status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.Not Yet Assigned - ResolvedThis state transition can only be made by the Feature Triage Team.0. Sometimes the bug description is not clear and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.3. If the bug does not repro on latest bits, it is resolved as "No Repro"4. The most painful: If it is decided that we cannot fix it for this release it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, is it a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, does the fix impact performance goals, and last but not least, severity of bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.Not Yet Assigned - AssignedIf the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing.Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug and when the developer is done they notify the triage team for final decision.Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0,1,2,3 bugs are fixed.From a testing perspective, if the bug was found through ad-hoc testing or an external team, the decision is made whether test cases should be added to avoid future regressions. 
    This is communicated to the QA team.

    Assigned - Resolved
    When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else).
    The developer needs to ensure that when the status is changed to Resolved that it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??
    In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review.
    It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate and after a quick review, the PM would re-assign to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user) in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

    Other links on bug triage
    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?
    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • Autopostback select lists in ASP.NET MVC using jQuery

    - by rajbk
    This tiny snippet of code shows you how to have your select lists autopostback their containing form when the selected value changes. When the DOM is fully loaded, we get all select nodes that have an attribute of “data-autopostback” with a value of “true”. We wire up the “change” JavaScript event to all these select nodes. This event is fired as soon as the user changes their selection with the mouse. When the event is fired, we find the closest form tag for the select node that raised the event and submit the form.

    $(document).ready(function () {
        $("select:[data-autopostback=true]").change(function () {
            $(this).closest("form").submit();
        });
    });

    A select tag with autopostback enabled will look like this:

    <select id="selCategory" name="Category" data-autopostback="true">
        <option value='1'>Electronics</option>
        <option value='2'>Books</option>
    </select>

    The reason I am using the “data-” prefix in the attribute is to be HTML5 compliant. A custom data attribute is an attribute in no namespace whose name starts with the string "data-", has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z). The snippet can be used with any HTML page.
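    One thing to keep in mind if you reuse this snippet today: newer versions of jQuery are stricter about selector syntax and may reject the stray colon in "select:[...]", so the attribute selector has to be written without it. The following is a hedged variant of the same idea, not from the original post, that drops the colon and uses a delegated handler so select lists injected into the page after load still post back; the data-autopostback attribute name is the same assumption as above.

    $(document).ready(function () {
        // Delegate the change handler from the document so dynamically added
        // select elements with data-autopostback="true" are also covered.
        $(document).on("change", "select[data-autopostback='true']", function () {
            // Submit the nearest enclosing form of the select that changed.
            $(this).closest("form").submit();
        });
    });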

    Read the article

  • Windows Keys Extender – tool for XP/Vista users to use hotkeys Win+[Left|Right|Up|Down]

    - by outcoldman
    In Windows 7, I really liked the ability to change the position of windows by pressing the hotkeys Win + (Left | Right | Up | Down):
    Win + Left - window attached to the left side
    Win + Right - window attached to the right side
    Win + Up - window is maximized
    Win + Down - window restored to the normal state
    These hotkeys are really useful and make working with windows comfortable. But not everyone can use Windows 7 right now. When Windows 7 was in its beta and RC stages I really wanted to use these features in the Windows version I had at the time (Vista). So I spent my time and wrote this tool. In addition, the tool has functionality to change the position of windows (useful for laptops, since you can move windows with hotkeys), and of course it can move windows between monitors. Hotkeys can be customized. The interface is in English. I don’t plan to add new functionality now, because I’m using Windows 7, which has all the functions this tool provides. I wrote this tool in C# with .NET 3.5. You can use its source code to learn how to work with hotkeys in C#. I first placed the source code on Google Code and then on CodePlex too, so you can download it from either one. I would be glad if someone uses it. :)

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspx

    If we are using SignalR, the connection lifecycle is handled very well by itself. For example, when we connect to a SignalR service from the browser through the SignalR JavaScript client, the connection will be established; and if we refresh the page, close the tab or browser, or navigate to another URL, the connection will be closed automatically. This behavior is well documented: "In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method."

    But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address change event and, based on the user-defined route table, launches the proper view and controller. Hence in AngularJS the address changes but the web page is still there; all changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when it works with AngularJS. If we dig into the source code of the SignalR JavaScript client we will find the code below. It monitors the browser page "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR will not receive them and will not stop the connection.

    // wire the stop handler for when the user leaves the page
    _pageWindow.bind("unload", function () {
        connection.log("Window unloading, stopping the connection.");
        connection.stop(asyncAbort);
    });

    if (isFirefox11OrGreater) {
        // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
        // #2400
        _pageWindow.bind("beforeunload", function () {
            // If connection.stop() runs in beforeunload and fails, it will also fail
            // in unload unless connection.stop() runs after a timeout.
            window.setTimeout(function () {
                connection.stop(asyncAbort);
            }, 0);
        });
    }

    Problem Reproduce

    In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server side code.

    public class GreetingHub : Hub
    {
        public override Task OnConnected()
        {
            Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
            return base.OnConnected();
        }

        public override Task OnDisconnected()
        {
            Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
            return base.OnDisconnected();
        }

        public void Hello(string user)
        {
            Clients.All.hello(string.Format("Hello, {0}!", user));
        }
    }

    Below is the configuration code which hosts the SignalR hub in an ASP.NET Web API project with IIS Express.
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.Map("/signalr", map =>
            {
                map.UseCors(CorsOptions.AllowAll);
                map.RunSignalR(new HubConfiguration()
                {
                    EnableJavaScriptProxies = false
                });
            });
        }
    }

    Since we will host the AngularJS application in Node.js in another process and port, the SignalR connection will be cross domain, so I need to enable CORS above. On the client side I have a Node.js file to host the AngularJS application as a web server. You can use any web server you like, such as IIS, Apache, etc. Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS, the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

    <html data-ng-app="demo">
    <head>
        <script type="text/javascript" src="jquery-2.1.0.js"></script>
        <script type="text/javascript" src="angular.js"></script>
        <script type="text/javascript" src="angular-ui-router.js"></script>
        <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
        <script type="text/javascript" src="app.js"></script>
    </head>
    <body>
        <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
        <div>
            <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
            <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
        </div>
        <div data-ui-view></div>
    </body>
    </html>

    Below is "app.js". My SignalR logic is in the "View1" page and it will connect to the server once the controller is executed. The user can specify a user name and send it to the server; all clients located on this page will receive the server side greeting message through SignalR.

    'use strict';

    var app = angular.module('demo', ['ui.router']);

    app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
        $stateProvider.state('view1', {
            url: '/view1',
            templateUrl: 'view1.html',
            controller: 'View1Ctrl' });

        $stateProvider.state('view2', {
            url: '/view2',
            templateUrl: 'view2.html',
            controller: 'View2Ctrl' });

        $locationProvider.html5Mode(true);
    }]);

    app.value('$', $);
    app.value('endpoint', 'http://localhost:60448');
    app.value('hub', 'GreetingHub');

    app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
        $scope.user = '';
        $scope.response = '';

        $scope.greeting = function () {
            proxy.invoke('Hello', $scope.user)
                .done(function () {})
                .fail(function (error) {
                    console.log(error);
                });
        };

        var connection = $.hubConnection(endpoint);
        var proxy = connection.createHubProxy(hub);
        proxy.on('hello', function (response) {
            $scope.$apply(function () {
                $scope.response = response;
            });
        });
        connection.start()
            .done(function () {
                console.log('signlar connection established');
            })
            .fail(function (error) {
                console.log(error);
            });
    });

    app.controller('View2Ctrl', function ($scope, $) {
    });

    When we go to View1, the server side "OnConnected" method will be invoked as below, and when we send the message to the server from any page, all clients get the response. If we close one of the clients, the server side "OnDisconnected" method will be invoked, which is correct. But if we click the "View 2" link in the page, "OnDisconnected" will not be invoked even though the content and the browser address have changed.
    This might cause many SignalR connections to remain open between the client and server. Below is what happened after I clicked the "View 1" and "View 2" links four times: there are 4 live connections.

    Solution

    Since the cause of this issue is that AngularJS hijacks the page events that SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart" and invoked the "stop" method of "connection" if its state was not "disconnected".

    var connection;
    app.run(['$rootScope', function ($rootScope) {
        $rootScope.$on('$stateChangeStart', function () {
            if (connection && connection.state && connection.state !== 4 /* disconnected */) {
                console.log('signlar connection abort');
                connection.stop();
            }
        });
    }]);

    Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link the content will change and the SignalR connection will be closed automatically.

    Summary

    In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution I described above is to move the SignalR connection to a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here. Moving the SignalR connection to a global variable might not be the best solution; it is just easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory, as sketched below. Since an AngularJS factory is a singleton object, we can safely put the connection variable in the factory function scope.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
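    The factory approach mentioned in the summary is only described, not shown, in the post. The following is a minimal sketch, under the same assumptions as the sample above (the '$', 'endpoint' and 'hub' values and the state value 4 for "disconnected"); the service name signalRService is made up for illustration.

    // A hedged sketch: wrap the SignalR connection in a singleton AngularJS factory.
    app.factory('signalRService', ['$', 'endpoint', 'hub', function ($, endpoint, hub) {
        var connection = $.hubConnection(endpoint);
        var proxy = connection.createHubProxy(hub);

        return {
            on: function (eventName, callback) {
                // Register a client-side callback for a hub event.
                proxy.on(eventName, callback);
            },
            invoke: function (methodName, arg) {
                // Call a server-side hub method.
                return proxy.invoke(methodName, arg);
            },
            start: function () {
                return connection.start();
            },
            stop: function () {
                // Only stop if the connection is not already disconnected (state 4).
                if (connection.state && connection.state !== 4) {
                    connection.stop();
                }
            }
        };
    }]);

    // Controllers depend on the factory instead of creating their own connection,
    // and the state-change handler calls its stop() method.
    app.run(['$rootScope', 'signalRService', function ($rootScope, signalRService) {
        $rootScope.$on('$stateChangeStart', function () {
            signalRService.stop();
        });
    }]);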

    Read the article

  • Platform Builder: Cloning – the Linker is your Friend

    - by Bruce Eitman
    I was tasked this week with making a minor change to NetMsgBox() behavior. NetMsgBox() is a little function in NETUI that handles MessageBox() for the Network User Interface. The obvious solution is to clone the entire NETUI directory from Public\Common\Oak\Drivers (see Platform Builder: Clone Public Code for more on cloning). If you haven’t already, take a minute to look in that folder. There are a lot of files in the folder, but I only needed to modify one function in one of those files. There must be a better way. Enter the linker. Instead of cloning the entire folder, here is what I did:
    1. Create a new folder in my Platform named NETUI (but the name isn’t important)
    2. Copy the C file that I needed to modify to the new folder, in this case netui.c
    3. Copy a makefile from one of the other folders (really they are all the same)
    4. Run Sysgen_capture:
       - Open a build window (see Platform Builder: Build Tools, Opening a Build Window)
       - Change directories to the new folder
       - Run “Sysgen_capture netui”
    5. Rename sources.netui to sources
    6. Add the C file to sources as SOURCES=netui.c
    7. Modify the code
    8. Build the code
    Done. That is it: the functions from my new folder now replace the functions from the Public code and link with the rest to create NETUI.dll. There is one catch: if you remove any of the functions from the C file, linking will fail because the remaining functions will be found twice.

    Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • Database version control resources

    - by Wes McClure
    In the process of creating my own DB VCS tool (tsqlmigrations.codeplex.com) I ran into several good resources to help guide me along the way in reviewing existing offerings and in concepts that would be needed in a good DB VCS. This is my list of helpful links that others can use to understand some of the concepts and some of the tools in existence. In the next few posts I will try to explain how I used these to create TSqlMigrations.

    Blog entries:
    Three rules for database work - K. Scott Allen: http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx
    Versioning databases - the baseline: http://odetocode.com/blogs/scott/archive/2008/01/31/versioning-databases-the-baseline.aspx
    Versioning databases - change scripts: http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx
    Versioning databases - views, stored procedures and the like: http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-views-stored-procedures-and-the-like.aspx
    Versioning databases - branching and merging: http://odetocode.com/blogs/scott/archive/2008/02/03/versioning-databases-branching-and-merging.aspx
    Evolutionary Database Design - Martin Fowler: http://martinfowler.com/articles/evodb.html
    Are database migration frameworks worth the effort? - Good challenges: http://www.ridgway.co.za/archive/2009/01/03/are-database-migration-frameworks-worth-the-effort.aspx
    Continuous Integration (in general): http://martinfowler.com/articles/continuousIntegration.html and http://martinfowler.com/articles/originalContinuousIntegration.html
    Is Your Database Under Version Control?: http://www.codinghorror.com/blog/archives/000743.html
    11 Tools for Database Versioning: http://secretgeek.net/dbcontrol.asp
    How to do database source control and builds: http://mikehadlow.blogspot.com/2006/09/how-to-do-database-source-control-and.html
    .Net Database Migration Tool Roundup: http://flux88.com/blog/net-database-migration-tool-roundup/

    Books:
    Refactoring Databases: Evolutionary Database Design - Martin Fowler signature series on refactoring databases. Book site: http://databaserefactoring.com/
    Recipes for Continuous Database Integration: Evolutionary Database Development (Digital Short Cut) - A good question/answer layout of common problems and solutions with database version control. http://www.informit.com/store/product.aspx?isbn=032150206X

    Read the article

  • Database platform migration from Windows-32bit to Linux-64bit

    - by [email protected]
    We have a customer which has all their core business databases on RAC over the Windows OS. Last year they were affected by a virus that destroyed the registry and all their RAC environments were "OUT OF ORDER"; the result: a thousand people on vacation for a day. They were distrustful of Linux, but eventually agreed to migrate their Enterprise Manager from Windows to Linux (OMS and Repository). Here is how we demonstrated how powerful and easy RMAN is for migrating databases across platforms.

    First, check that the target platform is available from the source:

    SQL> select platform_name from v$db_transportable_platform;
    PLATFORM_NAME
    -----------------------------------------------------------
    Microsoft Windows IA (32-bit)
    Linux IA (32-bit)
    HP Tru64 UNIX
    Linux IA (64-bit)
    HP Open VMS
    Microsoft Windows IA (64-bit)
    Linux 64-bit for AMD
    Microsoft Windows 64-bit for AMD
    Solaris Operating System (x86)

    Check database objects such as directories that can change across platforms, and also check external tables. Start up the source database in read only mode and run the following RMAN script:

    RMAN> connect target /
    RMAN> convert database on target platform
        convert script 'c:/temp/convert_grid.rman'
        transport script 'c:/TEMP/transporta_grid.sql'
        new database 'gridbd'
        format 'c:/temp/gridmydb%U'
        db_file_name_convert 'C:\oracle\oradata\grid','/oracle/gridbd/data2/data';

    (Notice the path change on db_file_name_convert.)

    Move from source to target: pfile, new scripts, external table files, bfiles, data files. Then:
    - Check the pfile, and ensure that the paths are OK
    - Create a temporary control file to connect RMAN
    - Execute the RMAN script:
      RMAN> connect target /
      RMAN> @/home/oracle/pboixeda/win2lnx.rman
    - Shut down the instance and remove the temporary control files
    - Recreate the controlfile(s), taking care about the used paths
    - Execute the transport script, transporta_grid.sql

    Because we were moving from a 32-bit architecture to a 64-bit architecture, there is a bug reported in note 386990.1 and we had to recreate OLAP; check the note for more details. Alter or recreate all necessary objects and launch utlrp. After this experience with Linux they are on the way to migrate all their RAC from 10gR2 on Windows to 11gR2 on Linux 64-bit. Hope it helps.

    Read the article
