Search Results

Search found 654 results on 27 pages for 'falcon eyes'.

Page 24/27 | < Previous Page | 20 21 22 23 24 25 26 27  | Next Page >

  • Becoming a professional programmer / software engineer

    - by Matt
    This isn't strictly about programming, more about being a programmer, so I'm sorry if its not the right kind of question to ask on this forum (mod, please delete if it isn't) I'm a computer tech in the US Army, and once I'm out I'll have eight years on the job. I'm about to start a degree through an online school (the only way I can get the army to pay for it while I'm still in), and I'm seriously looking at getting a computer science degree. I'm great with computers. I can take one apart and put it back together with my eyes closed. I'm A+ and Network+ certified and I'm getting a couple other CompTIA certs before I get out. I can work Windows as well as anyone on this planet and I'm not terrible with Linux. A job in computers is something I've always wanted. But, aside from being a computer technician, it seems that every job in the field requires programming ability. I like programming as a hobby. I programmed TI BASIC in high school and I'm teaching myself Python, but that's as far as my experience goes. That sort of brings me to my questions: I've always heard that the first language is the most difficult, and once you learn it well then all the others sort of fall into place for you. Is that true? Like, if I spend the next eight months mastering Python, will I pretty much be able to pick up at least fair proficiency in any other OO language within a month of studying it or whatever? How easy is it to burn out? the biggest thing I'm afraid of is just burning out on programming. I can go all day long if I'm programming strictly for my own personal desire, but I can imagine it being really easy to burn out after a few years of programming to deadlines and certain specifications. Especially if its a big project involving a dozen different designers. From what I told you about myself, would I already be qualified to work as a regular technician (geek squad type or maybe running a computer repair shop). Is Python a good base to learn from? I've heard that it makes you hate other languages because they feel more convoluted when learning, but also that its a great beginner language. If you're a professional programmer, did you have any of the same fears? Would you recommend that I stick to computer repair and Python rather than try to get into corporate programming? (just from what you've read in this thread, anyway) Thanks for taking the time to read all this and answer (if you did)

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a Column from a Row that was UPDATEd... WITHOUT using Triggers (nor Stored Procedures, nor any other extra, non-SQL/-query entities). The query I have is like this: UPDATE my_table SET processing_by = our_id_info -- unique to this instance WHERE trans_nbr IN ( SELECT trans_nbr FROM my_table GROUP BY trans_nbr HAVING COUNT(trans_nbr) > 1 LIMIT our_limit_to_have_single_process_grab ) RETURNING row_id If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be devine (and fix my other question/problem). But, that won't work: can't have this AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-) former processing_by values. I've tried doing like: UPDATE my_table SET processing_by = our_id_info -- unique to this instance FROM my_table old_my_table JOIN ( SELECT trans_nbr FROM my_table GROUP BY trans_nbr HAVING COUNT(trans_nbr) > 1 LIMIT our_limit_to_have_single_process_grab ) sub_my_table ON old_my_table.trans_nbr = sub_my_table.trans_nbr WHERE my_table.trans_nbr = sub_my_table.trans_nbr AND my_table.processing_by = old_my_table.processing_by RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by But that can't work; "old_my_table" is not viewable outside of the join; the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid disappear... UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic code of the above that I wrote "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would, I re-did it (after a night's sleep, refreshed eyes, and a banana for bfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now... : What I now have (that works) is like this: UPDATE my_table SET processing_by = our_id_info -- unique to this instance FROM my_table AS old_my_table WHERE trans_nbr IN ( SELECT trans_nbr FROM my_table GROUP BY trans_nbr HAVING COUNT(*) > 1 LIMIT our_limit_to_have_single_process_grab ) AND my_table.row_id = old_my_table.row_id RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])
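
    For anyone who wants to run the final query from application code rather than psql, here is a minimal Python sketch using psycopg2 (the driver choice, connection string and the our_id_info / limit values are assumptions, not part of the original question; the table and column names are taken from it). The only tweak is that trans_nbr is qualified with the target table name to avoid any ambiguity with the old_my_table alias.

      import psycopg2  # assumed PostgreSQL driver; any DB-API driver works similarly

      DSN = "dbname=mydb user=myuser"   # hypothetical connection string
      OUR_ID_INFO = "worker-42"         # id unique to this process instance
      GRAB_LIMIT = 100                  # our_limit_to_have_single_process_grab

      # Self-join so RETURNING can see the pre-UPDATE value of processing_by.
      CLAIM_SQL = """
          UPDATE my_table
          SET    processing_by = %(worker)s
          FROM   my_table AS old_my_table
          WHERE  my_table.trans_nbr IN (
                     SELECT   trans_nbr
                     FROM     my_table
                     GROUP BY trans_nbr
                     HAVING   COUNT(*) > 1
                     LIMIT    %(grab)s
                 )
            AND  my_table.row_id = old_my_table.row_id
          RETURNING my_table.row_id,
                    my_table.processing_by,
                    old_my_table.processing_by AS old_processing_by
      """

      with psycopg2.connect(DSN) as conn:
          with conn.cursor() as cur:
              cur.execute(CLAIM_SQL, {"worker": OUR_ID_INFO, "grab": GRAB_LIMIT})
              for row_id, new_owner, old_owner in cur.fetchall():
                  print(f"row {row_id}: processing_by {old_owner!r} -> {new_owner!r}")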

    Read the article

  • Objective-C: properties not being saved or passed

    - by Gerald Yeo
    Hi, i'm a newbie to iphone development. I'm doing a navigation-based app, and I'm having trouble passing values to a new view. @interface RootViewController : UITableViewController { NSString *imgurl; NSMutableArray *galleryArray; } @property (nonatomic, retain) NSString *imgurl; @property (nonatomic, retain) NSMutableArray *galleryArray; - (void)showAll; @end #import "RootViewController.h" #import "ScrollView.h" #import "Model.h" #import "JSON/JSON.h" @implementation RootViewController @synthesize galleryArray, imgurl; - (void)viewDidLoad { UIBarButtonItem *showButton = [[[UIBarButtonItem alloc] initWithTitle:NSLocalizedString(@"Show All", @"") style:UIBarButtonItemStyleBordered target:self action:@selector(showAll)] autorelease]; self.navigationItem.rightBarButtonItem = showButton; NSString *jsonString = [[Model sharedInstance] jsonFromURLString:@"http://www.ddbstaging.com/gerald/gallery.php"]; NSDictionary *resultDictionary = [jsonString JSONValue]; if (resultDictionary == nil) { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Webservice Down" message:@"The webservice you are accessing is currently down. Please try again later." delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert show]; [alert release]; } else { galleryArray = [[NSMutableArray alloc] init]; galleryArray = [resultDictionary valueForKey:@"gallery"]; imgurl = (NSString *)[galleryArray objectAtIndex:0]; NSLog(@" -> %@", galleryArray); NSLog(imgurl); } } - (void)showAll { NSLog(@" -> %@", galleryArray); NSLog(imgurl); ScrollView *controller = [[ScrollView alloc] initWithJSON:galleryArray]; [self.navigationController pushViewController:controller animated:YES]; } The RootViewController startup and the json data loads up fine. I can see it from the first console trace. However, once I click on the Show All button, the app crashes. It doesn't even trace the galleryArray and imgurl properyly. Maybe additional pairs of eyes can spot my mistakes. Any help is greatly appreciated! [Session started at 2010-05-08 16:16:07 +0800.] 2010-05-08 16:16:07.242 Photos[5892:20b] -> ( ) GNU gdb 6.3.50-20050815 (Apple version gdb-967) (Tue Jul 14 02:11:58 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 5892. (gdb)

    Read the article

  • Bullet indents in PowerPoint 2007 compatibility mode via .NET interop issue

    - by L. Shaydariv
    Hello. I've got a really difficult bug and I can't see the fix. The subject drives me insane for real for a long time. Let's consider the following scenario: 1) There is a PowerPoint 2003 presentation. It contains the only slide and the only shape, but the shape contains a text frame including a bulleted list with a random textual representation structure. 2) There is a requirement to get bullet indents for every bulletted paragraph using PowerPoint 2007. I can satisfy the requirement opening the presentation in the compatibility mode and applying the following VBA script: With ActivePresentation Dim sl As Slide: Set sl = .Slides(1) Dim sh As Shape: Set sh = sl.Shapes(1) Dim i As Integer For i = 1 To sh.TextFrame.TextRange.Paragraphs.Count Dim para As TextRange: Set para = sh.TextFrame.TextRange.Paragraphs(i, 1) Debug.Print para.Text; para.indentLevel, sh.TextFrame.Ruler.Levels(para.indentLevel).FirstMargin Next i End With that produces the following output: A 1 0 B 1 0 C 2 24 D 3 60 E 5 132 Obviously, everything is perfect indeed: it has shown the proper list item text, list item level and its bullet indent. But I can't see the way of how I can reach the same result using C#. Let's add a COM-reference to Microsoft.Office.Interop.PowerPoint 2.9.0.0 (taken from MSPPT.OLB, MS Office 12): // presentation = ...("presentation.ppt")... // a PowerPoint 2003 presentation Slide slide = presentation.Slides[1]; Shape shape = slide.Shapes[1]; for (int i = 1; i<=shape.TextFrame.TextRange.Paragraphs(-1, -1).Count; i++) { TextRange paragraph = shape.TextFrame.TextRange.Paragraphs(i, 1); Console.WriteLine("{0} {1} {2}", paragraph.Text, paragraph.IndentLevel, shape.TextFrame.Ruler.Levels[paragraph.IndentLevel].FirstMargin); } Oh, man... What's it? I've got problems here. First, the paragraph.Text value is trimmed until the '\r' character is found (however paragraph.Text[0] really returns the first character O_o). But it's ok, I can shut my eyes to this. But... But, second, I can't understand why the first margins are always zero and it does not matter which level they belong to. They are always zero in the compatibility mode... It's hard to believe it... :) So is there any way to fix it or just to find a workaround? I'd like to accept any help regarding to the solution of the subject. I can't even find any article related to the issue. :( Probably you have ever been face to face with it... Or is it just a bug with no fix and must it be reported to Microsoft? Thanks you.
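
    For what it's worth, the same object-model calls can also be scripted from Python through COM automation with pywin32 (an assumption; the question itself uses the C# interop assemblies, and the file path below is a placeholder). The sketch simply mirrors the VBA above (Paragraphs(i, 1), IndentLevel and Ruler.Levels(n).FirstMargin), so whether FirstMargin comes back as zero in compatibility mode remains exactly the open question.

      import win32com.client  # pywin32; assumes PowerPoint is installed locally

      PPT_PATH = r"C:\temp\presentation.ppt"  # hypothetical path to the 2003 .ppt file

      app = win32com.client.Dispatch("PowerPoint.Application")
      app.Visible = True  # PowerPoint automation generally needs a visible instance
      pres = app.Presentations.Open(PPT_PATH)
      try:
          shape = pres.Slides(1).Shapes(1)
          text_range = shape.TextFrame.TextRange
          ruler = shape.TextFrame.Ruler

          count = text_range.Paragraphs(-1, -1).Count  # same call as the C# snippet
          for i in range(1, count + 1):
              para = text_range.Paragraphs(i, 1)        # i-th paragraph, length 1
              level = para.IndentLevel
              margin = ruler.Levels(level).FirstMargin  # bullet indent, in points
              print(repr(para.Text), level, margin)
      finally:
          pres.Close()
          app.Quit()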

    Read the article

  • Annoying Captcha >> How to program a form that can SMELL the difference between a human and a robot?

    - by Sam
    Hi folks. On the comment that my old form needs a CAPTCHA, I felt I should share my problem; perhaps you recognize it and agree it is time we had better solutions. THE ACTUAL PROBLEM: I know most of my clients (typically aged 40-60) hate CAPTCHA widgets. I myself always feel like a robot when I have to squeeze my eyes and type in the strange letters from the CAPTCHA... Sometimes I fail! Go back, try again, etc. A real turn-off. I mean, come on, it's 2011; shouldn't forms have better A.I. by now? MY NEW IDEA (please don't laugh): my idea is to tell humans and robots apart by giving credibility points, where 100 points = human and 0 = robot:
      - require real human mouse movements
      - require mouse movements that don't follow any mathematical pattern
      - require a non-instantaneous reading delay between page load and the first input in the form
      - when typing in the form, measure the delays between letters and words
      - approve as human when typical human behaviour is measured (deleting, rephrasing, etc.)
      - don't allow instant pasting of all fields
      - give points for real keyboard pressure
      - retract credibility points when there are hyperlinks in the form
      - test whether a fake email field (invisible to humans) is populated (suggested by Tomalak)
      - when credibility is above 75%, treat the sender as human and allow the form to be sent without a CAPTCHA
      - when credibility is below 25%, force a CAPTCHA puzzle to be sure
    Could we write such an A.I. in PHP that replaces the human-annoying CAPTCHAs while still stopping most spam servers from filling in the data? Not only for the fun of it, but to provide an alternative that is 99% better than CAPTCHAs. Imagine the user-friendliness of your forms! Your site would distinguish itself from others by showing your audience that it knows the difference between a robot and a human. Imagine the advantage. I am trying to capture the essence of that distinguishing edge. PROGRAMMING QUESTIONS: 1) Are such things possible to program? 2) If so, how would you start such a program? 3) Are there already good working solutions available elsewhere? 4) If it isn't too hard, you are welcome to share your answers/solutions below. 5) Once hints and new ideas come in, could this page be the start of a new A.I. CAPTCHA, or should I forget the whole A.I. dream, go with the flow, and use CAPTCHAs like everyone else?
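
    To make the scoring idea concrete, here is a minimal server-side sketch in Python. Everything in it is hypothetical: the signal names, weights and thresholds are illustrative only, a front-end script would still have to collect the timing and mouse data and post it with the form, and none of it is hardened against a bot that simply fakes the measurements.

      from dataclasses import dataclass

      @dataclass
      class FormSignals:
          """Measurements a hypothetical front-end script would post with the form."""
          seconds_to_first_input: float    # delay between page load and first keystroke
          avg_seconds_between_keys: float  # typing cadence
          mouse_path_points: int           # number of recorded mouse positions
          used_paste_for_all_fields: bool  # was everything pasted at once?
          honeypot_filled: bool            # hidden field no human should ever see
          link_count_in_message: int       # hyperlinks in the message body

      def credibility_score(s: FormSignals) -> int:
          """Return 0 (robot-like) .. 100 (human-like). Thresholds are made up."""
          score = 50
          if s.honeypot_filled:
              return 0                     # invisible field filled: almost certainly a bot
          if s.seconds_to_first_input > 3:
              score += 20                  # humans need time to read the form
          if 0.05 < s.avg_seconds_between_keys < 2.0:
              score += 15                  # plausible typing rhythm
          if s.mouse_path_points > 20:
              score += 15                  # real, non-instant mouse movement
          if s.used_paste_for_all_fields:
              score -= 25
          score -= 10 * s.link_count_in_message
          return max(0, min(100, score))

      # The policy from the question: above 75 send without CAPTCHA, below 25 force one.
      signals = FormSignals(8.2, 0.35, 140, False, False, 0)
      score = credibility_score(signals)
      action = "accept" if score > 75 else "captcha" if score < 25 else "review"
      print(score, action)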

    Read the article

  • Choosing the right .NET architecture. WCF? WPF/Forms, ASP.NET (MVC)?

    - by Tommy Jakobsen
    I’m in the situation that I have to design and implement a rather big system from the bottom. I’m having some (a lot actually) questions about the architecture that I would like your comments and thoughts on. I don’t hope that I’ve written too much here, but I wanted to give you all an idea of what the system is. Quick info about the applications, read if you want: I can’t share much detail about the project, but basically it’s a system where we offer our customer a service to manage their users. We have a hotline where the users call and our hotline uses an (windows) application (intranet) to manage the user’s data, etc. The customer also has a web application where they can see reports, information about their business and users, and the ability to modify their data. Modifying data is not just user data like address and so, but also information about the products/services the user has, which can be complicated. The applications will be built on Microsoft .NET Framework 4, with a MS SQL Server 2008 database. There will be a few applications that have to access this database, such as: Intranet application (used by us and our hotline) Customer web application type 1 Customer web application type 2 Customer web application type n different applications) … Now my big problem is what .NET parts I should use for such a system. For the “backend” I’ve considered using Windows Communication Foundation: Would WCF be a good choice? The intranet application will be an application that has to edit lots of records in the database. It has to be easy to navigate using the keyboard (fast to work with). Has a feature such as “find customer, find that, lookup this, choose this and update that”. What would be the best choice to develop this application in? Would it be WPF or good old Windows Forms? I don’t need all of the fancy graphics features in WPF, like 3D, but the application has to look nice (maybe something like the new Visual Studio/Office tools). And the same question goes for the web pages. They have much the same work to do, but not as many features as the intranet application, and not the same amount of data (much less). That is my questions for now. I’m hoping to get a discussion going that will open my eyes to some of these technologies, helping me decide architecture to go with. I would like to say thanks in advance, and let you all know that any thoughts will be much appreciated.

    Read the article

  • asp.net menu control css for child items

    - by Andres
    I have an asp.net menu control which the child items(submenu) width is tied to its parent's width, I was wondering is there a work around? because some of the titles for the submenu are longer than the title of the parent so it looks all smooshed together and just horrible on the eyes. Any help is much appreciated. :) .net control: <asp:Menu ID="navigation" runat="server" Orientation="Horizontal" CssClass="topmenu" MaximumDynamicDisplayLevels="20" IncludeStyleBlock="false"> <DynamicSelectedStyle /> <DynamicMenuItemStyle /> <DynamicHoverStyle /> <DynamicMenuStyle /> <StaticMenuItemStyle /> <StaticSelectedStyle /> <StaticHoverStyle /> </asp:Menu> html rendered: <div class="topmenu" id="navigation"> <ul class="level1"> <li><a class="popout level1" href="dashboard.aspx?option=1">Seguridad</a> <ul class="level2"> <li><a class="level2" href="security/users.aspx?option=15">Usuarios</a></li> <li><a class="level2" href="security/profiles.aspx?option=16">Perfiles</a></li> <li><a class="level2" href="security/options.aspx?option=17">Opciones</a></li> <li><a class="level2" href="security/actions.aspx?option=18">Acciones</a></li> </ul> </li> </ul> </div> css: div.topmenu{} div.topmenu ul { list-style:none; padding:5px 0; margin:0; background: #0b2e56; } div.topmenu ul li { float:left; padding:10px; color: #fff; height:16px; z-index:9999; margin:0; } div.topmenu ul li a, div.menu ul li a:visited{ color: #fff; } div.topmenu ul li a:hover{ color:#fff; } div.topmenu ul li a:active{color:#fff; } thats what I have and the styling works i just need help in getting submenus to expand if they are bigger than main title. Thanks in advance!

    Read the article

  • Flash AS3 button eventlistener array bug

    - by Tyler Pepper
    Hi there, this is my first time posting a question here. I have an array of 12 buttons on a timeline that when first visiting that part of the timeline, get a CLICK eventlistener added to them using a for loop. All of them work perfectly at that point. When you click one it plays a frame label inside the specific movieClip and reveals a bio on the corresponding person with a close button and removes the CLICK eventlisteners for each button, again using a for loop. The close button plays a closing animation, and then the timeline goes back to the first frame (the one with the 12 buttons on it) and the CLICK eventlisteners are re-added, but now only the first 9 buttons of the array work. There are no output errors and the code to re-add the eventlisteners is exactly the same as the first time that works. I am completely at a loss and am wondering if anyone else has run into this problem. All of my buttons are named correctly, there are absolutely no output errors (I've used the debug module) and I made sure the array with the buttons in it is outputting all 12 at the moment the close button is clicked to add the eventlisteners back. for (var q = 0; q < ackBoDBtnArray.length; q++){ contentArea_mc.acknowledgements_mc.BoD_mc[ackBoDBtnArray[q]].addEventListener(MouseEvent.CLICK, showBio); } private function showBio(eo:MouseEvent):void { trace("show the bio"); bodVar = ackBoDBtnArray.getIndex(eo.target.name); contentArea_mc.acknowledgements_mc.BoD_mc.gotoAndPlay(ackBoDPgArray[bodVar]); contentArea_mc.acknowledgements_mc.BoD_mc.closeBio_btn.addEventListener(MouseEvent.CLICK, hideBio); for (var r = 0; r < ackBoDBtnArray.length; r++){ contentArea_mc.acknowledgements_mc.BoD_mc[ackBoDBtnArray[r]].mouseEnabled = false; contentArea_mc.acknowledgements_mc.BoD_mc[ackBoDBtnArray[r]].removeEventListener(MouseEvent.CLICK, showBio); } } private function hideBio(eo:MouseEvent):void { trace("hide it!"); contentArea_mc.acknowledgements_mc.BoD_mc.closeBio_btn.removeEventListener(MouseEvent.CLICK, hideBio); contentArea_mc.acknowledgements_mc.BoD_mc.gotoAndPlay(ackBoDClosePgArray[bodVar]); for (var s = 0; s < ackBoDBtnArray.length; s++){ trace(ackBoDBtnArray[s]); contentArea_mc.acknowledgements_mc.BoD_mc[ackBoDBtnArray[s]].mouseEnabled = true; contentArea_mc.acknowledgements_mc.BoD_mc[ackBoDBtnArray[s]].addEventListener(MouseEvent.CLICK, showBio); } Thanks in advance for any help and insight you can provide...I have a slight feeling that its something that may be obvious to another set of eyes...haha.

    Read the article

  • Calling a function that resides in the main page from a plugin?

    - by Justin Lee
    I want to call a function from within plugin, but the function is on the main page and not the plugin's .js file. EDIT I have jQuery parsing a very large XML file and building, subsequently, a large list (1.1 MB HTML file when dynamic content is copied, pasted, then saved) that has expand/collapse functionality through a plugin. The overall performance on IE is super slow and doggy, assuming since the page/DOM is so big. I am currently trying to save the collapsed content in the event.data when it is collapsed and remove it from the DOM, then bring it back when it is told to expand... the issue that I am having is that when I bring the content back, obviously the "click" and "hover" events are gone. I'm trying to re-assign them, currently doing so inside the plugin after the plugin expands the content. The issue then though is that is says the function that I declare within the .click() is not defined. Also the hover event doesn't seem to be re-assigning either.... if ($(event.data.trigger).attr('class').indexOf('collapsed') != -1 ) { // if expanding // console.log(event.data.targetContent); $(event.data.trigger).after(event.data.targetContent); $(event.data.target).hide(); /* This Line --->*/ $(event.data.target + 'a.addButton').click(addResourceToList); $(event.data.target + 'li.resource') .hover( function() { if (!($(this).attr("disabled"))) { $(this).addClass("over"); $(this).find("a").css({'display':'block'}); } }, function () { if (!($(this).attr("disabled"))) { $(this).removeClass("over"); $(this).children("a").css({'display':'none'}); } } ); $(event.data.target).css({ "height": "0px", "padding-top": "0px", "padding-bottom": "0px", "margin-top": "0px", "margin-bottom": "0px"}); $(event.data.target).show(); $(event.data.target).animate({ height: event.data.heightVal + "px", paddingTop: event.data.topPaddingVal + "px", paddingBottom: event.data.bottomPaddingVal + "px", marginTop: event.data.topMarginVal + "px", marginBottom: event.data.bottomMarginVal + "px"}, "normal");//, function(){$(this).hide();}); $(event.data.trigger).removeClass("collapsed"); $.cookies.set('jcollapserSub_' + event.data.target, 'expanded', {hoursToLive: 24 * 365}); } else if ($(event.data.trigger).attr('class').indexOf('collapsed') == -1 ) { // if collapsing $(event.data.target).animate({ height: "0px", paddingTop: "0px", paddingBottom: "0px", marginTop: "0px", marginBottom: "0px"}, "normal", function(){$(this).hide();$(this).remove();}); $(event.data.trigger).addClass("collapsed"); $.cookies.set('jcollapserSub_' + event.data.target, 'collapsed', {hoursToLive: 24 * 365}); } EDIT So, having new eyes truly makes a difference. As I was reviewing the code in this post this morning after being away over the weekend, I found where I had err'd. This: $(event.data.target + 'a.addButton').click(addResourceToList); Should be this (notice the space before a.addbutton): $(event.data.target + ' a.addButton').click(addResourceToList); Same issue with the "li.resource". So it was never pointing to the right elements... Thank you, Rene, for your help!!

    Read the article

  • jquery use of :last and val()

    - by dole doug
    I'm trying to run the code from http://jsfiddle.net/ddole/AC5mP/13/ on my machine and the approach I've use is below or here. Do you know why that code doesn't work on my machine. Firebug doesn't help me and I can't solve the problem. I think that I need another pair of eyes :((( <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>jQuery UI Dialog - Modal form</title> <link type="text/css" href="css/ui-lightness/jquery-ui-1.8.21.custom.css" rel="stylesheet" /> <script type="text/javascript" src="js/jquery-1.7.2.min.js"></script> <script type="text/javascript" src="js/jquery-ui-1.8.21.custom.min.js"></script> <script type="text/javascript" > jQuery(function($) { $('.helpDialog').hide(); $('.helpButton').each(function() { $.data(this, 'dialog', $(this).next('.helpDialog').dialog({ autoOpen: false, modal: true, width: 300, height: 250, buttons: { "Save": function() { alert($('.helpText:last').val()); $(this).dialog( "close" ); }, Cancel: function() { $(this).dialog( "close" ); } } }) ); }).click(function() { $.data(this, 'dialog').dialog('open'); return false; }); }); </script> </head> <body> <span class="helpButton">Button</span> <div class="helpDialog"> <input type="text" class="helpText" /> </div> <span class="helpButton">Button 2</span> <div class="helpDialog"> <input type="text" class="helpText" /> </div> <span class="helpButton">Button 3</span> <div class="helpDialog"> <input type="text" class="helpText" /> </div> <span class="helpButton">Button 4</span> <div class="helpDialog"> <input type="text" class="helpText" /> </div> <span class="helpButton">Button 5</span> <div class="helpDialog"> <input type="text" class="helpText" /> </div> </body>

    Read the article

  • Rails controller not rendering correct view when form is force-submitted by Javascript

    - by whazzmaster
    I'm using Rails with jQuery, and I'm working on a page for a simple site that prints each record to a table. The only editable field for each record is a checkbox. My goal is that every time a checkbox is changed, an ajax request updates that boolean attribute for the record (i.e., no submit button). My view code: <td> <% form_remote_tag :url => admin_update_path, :html => { :id => "form#{lead.id}" } do %> <%= hidden_field :lead, :id, :value => lead.id %> <%= check_box :lead, :contacted, :id => "checkbox"+lead.id.to_s, :checked => lead.contacted, :onchange => "$('#form#{lead.id}').submit();" %> <% end %> </td> In my routes.rb, admin_update_path is defined by map.admin_update 'update', :controller => "admin", :action => "update", :method => :post I also have an RJS template to render back an update. The contents of this file is currently just for testing (I just wanted to see if it worked, this will not be the ultimate functionality on a successful save)... page << "$('#checkbox#{@lead.id}').hide();" When clicked, the ajax request is successfully sent, with the correct params, and the action on the controller can retrieve the record and update it just fine. The problem is that it doesn't send back the JS; it changes the page in the browser and renders the generated Javascript as plain text rather than executing it in-place. Rails does some behind-the-scenes stuff to figure out if the incoming request is an ajax call, and I can't figure out why it's interpreting the incoming request as a regular web request as opposed to an ajax request. I may be missing something extremely simple here, but I've kind-of burned myself out looking so I thought I'd ask for another pair of eyes. Thanks in advance for any info!

    Read the article

  • C# Different class objects in one list

    - by jeah_wicer
    I have looked around some now to find a solution to this problem. I found several ways that could solve it but to be honest I didn't realize which of the ways that would be considered the "right" C# or OOP way of solving it. My goal is not only to solve the problems but also to develop a good set of code standards and I'm fairly sure there's a standard way to handle this problem. Let's say I have 2 types of printer hardwares with their respective classes and ways of communicating: PrinterType1, PrinterType2. I would also like to be able to later on add another type if neccessary. One step up in abstraction those have much in common. It should be possible to send a string to each one of them as an example. They both have variables in common and variables unique to each class. (One for instance communicates via COM-port and has such an object, while the other one communicates via TCP and has such an object). I would however like to just implement a List of all those printers and be able to go through the list and perform things as "Send(string message)" on all Printers regardless of type. I would also like to access variables like "PrinterList[0].Name" that are the same for both objects, however I would also at some places like to access data that is specific to the object itself (For instance in the settings window of the application where the COM-port name is set for one object and the IP/port number for another). So, in short something like: In common: Name Send() Specific to PrinterType1: Port Specific to PrinterType2: IP And I wish to, for instance, do Send() on all objects regardless of type and the number of objects present. I've read about polymorphism, Generics, interfaces and such, but I would like to know how this, in my eyes basic, problem typically would be dealt with in C# (and/or OOP in general). I actually did try to make a base class, but it didn't quite seem right to me. For instance I have no use of a "string Send(string Message)" function in the base class itself. So why would I define one there that needs to be overridden in the derived classes when I would never use the function in the base class ever in the first place? I'm really thankful for any answers. People around here seem very knowledgeable and this place has provided me with many solutions earlier. Now I finally have an account to answer and vote with too. EDIT: To additionally explain, I would also like to be able to access the objects of the actual printertype. For instance the Port variable in PrinterType1 which is a SerialPort object. I would like to access it like: PrinterList[0].Port.Open() and have access to the full range of functionality of the underlaying port. At the same time I would like to call generic functions that work in the same way for the different objects (but with different implementations): foreach (printer in Printers) printer.Send(message)
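
    Since the design question is language-agnostic, here is a sketch of the usual answer (an abstract base class, or an interface in C#, for the shared members, with the transport-specific members living on the concrete classes), written in Python purely for illustration; the SerialPrinter and NetworkPrinter names are made up. Declaring Send as abstract also answers the "why define it in the base class" worry: the base never implements or calls it, it only guarantees that every concrete printer provides it, which is what makes the loop over the mixed list possible.

      from abc import ABC, abstractmethod

      class Printer(ABC):
          """What every printer type has in common."""
          def __init__(self, name: str):
              self.name = name

          @abstractmethod
          def send(self, message: str) -> None:
              """Each concrete printer implements its own transport."""

      class SerialPrinter(Printer):
          def __init__(self, name: str, port: str):
              super().__init__(name)
              self.port = port                  # COM-port specific detail

          def send(self, message: str) -> None:
              print(f"{self.name}: writing {message!r} to {self.port}")

      class NetworkPrinter(Printer):
          def __init__(self, name: str, ip: str, tcp_port: int = 9100):
              super().__init__(name)
              self.ip = ip                      # TCP specific detail
              self.tcp_port = tcp_port

          def send(self, message: str) -> None:
              print(f"{self.name}: sending {message!r} to {self.ip}:{self.tcp_port}")

      printers = [
          SerialPrinter("Front desk", "COM3"),
          NetworkPrinter("Warehouse", "10.0.0.17"),
      ]

      # Shared behaviour: works for every printer regardless of concrete type.
      for p in printers:
          p.send("hello")

      # Type-specific access, e.g. a settings dialog that needs the COM port.
      for p in printers:
          if isinstance(p, SerialPrinter):
              print(f"{p.name} is on {p.port}")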

    Read the article

  • Dhcpd Daemon is trying to lease itself?

    - by tommieb75
    I have a Slackware Linux 13.0 box with two interfaces, eth0 and eth1. I have set this box up to be on the 192.168.1.0/24 network, with subnet mask of 255.255.255.0. I am trying to run a dhcpd server on this box to service two interfaces above, so I subnetted the 192.168.1.0/24 network into two subnets. For eth0 192.168.1.1, subnet mask 255.255.255.128, broadcast mask 192.168.1.127. For eth1 192.168.1.129, subnet mask 255.255.255.128, broadcast mask 192.168.1.255. Both the interfaces are assigned manually. eth0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 inet addr:192.168.1.1 Bcast:192.168.1.127 Mask:255.255.255.128 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:39 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:1404 (1.3 KiB) Interrupt:11 Base address:0x8000 Memory:faffc000-faffcfff eth1 Link encap:Ethernet HWaddr 00:00:00:00:00:00 inet addr:192.168.1.128 Bcast:192.168.1.255 Mask:255.255.255.128 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10003 errors:0 dropped:0 overruns:0 frame:0 TX packets:13286 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1589229 (1.5 MiB) TX bytes:9900005 (9.4 MiB) Interrupt:11 Here is the dhcpd.conf set up authoritative; ddns-update-style interim; ignore client-updates; subnet 192.168.1.0 netmask 255.255.255.128 { range 192.168.1.2 192.168.1.126; default-lease-time 86400; max-lease-time 86400; option routers 192.168.1.1; option ip-forwarding off; option domain-name-servers 208.67.222.222, 208.67.220.220; option broadcast-address 192.168.1.127; option subnet-mask 255.255.255.128; } subnet 192.168.1.128 netmask 255.255.255.128 { range 192.168.1.129 192.168.1.254; default-lease-time 86400; max-lease-time 86400; option routers 192.168.1.1; option ip-forwarding off; option domain-name-servers 208.67.222.222, 208.67.220.220; option broadcast-address 192.168.1.255; option subnet-mask 255.255.255.128; } This is what is showing in the log Apr 10 18:09:58 inspiron8600 dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 (inspiron8600) via eth1 Apr 10 18:09:58 inspiron8600 dhcpd: DHCPOFFER on 192.168.1.131 to 00:00:00:00:00:00 (inspiron8600) via eth1 Apr 10 18:10:01 inspiron8600 dhcpcd[3832]: eth1: adding IP address 169.254.153.6/16 This is happening spuriously, and the log gets filled up with nonsense..so my question is this: How do I stop this from happening? And why would it be trying to give itself a lease? I am sure I have missed something but cannot see it and would appreciate a pair of eyes from the community to spot the obvious flaw!
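
    As a quick sanity check on the subnetting described above, Python's standard ipaddress module can print the boundaries of the two /25s that 192.168.1.0/24 splits into; note, purely as an observation from that output, that 192.168.1.128 (the address ifconfig shows on eth1) is the network address of the second /25 rather than a usable host address.

      import ipaddress

      # Split 192.168.1.0/24 into the two /25 subnets used in dhcpd.conf.
      for net in ipaddress.ip_network("192.168.1.0/24").subnets(new_prefix=25):
          hosts = list(net.hosts())
          print(f"{net}: network={net.network_address}, "
                f"hosts={hosts[0]}..{hosts[-1]}, broadcast={net.broadcast_address}")

      # Prints:
      # 192.168.1.0/25: network=192.168.1.0, hosts=192.168.1.1..192.168.1.126, broadcast=192.168.1.127
      # 192.168.1.128/25: network=192.168.1.128, hosts=192.168.1.129..192.168.1.254, broadcast=192.168.1.255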

    Read the article

  • Scan a Windows PC for Viruses from a Ubuntu Live CD

    - by Trevor Bekolay
    Getting a virus is bad. Getting a virus that causes your computer to crash when you reboot is even worse. We’ll show you how to clean viruses from your computer even if you can’t boot into Windows by using a virus scanner in a Ubuntu Live CD. There are a number of virus scanners available for Ubuntu, but we’ve found that avast! is the best choice, with great detection rates and usability. Unfortunately, avast! does not have a proper 64-bit version, and forcing the install does not work properly. If you want to use avast! to scan for viruses, then ensure that you have a 32-bit Ubuntu Live CD. If you currently have a 64-bit Ubuntu Live CD on a bootable flash drive, it does not take long to wipe your flash drive and go through our guide again and select normal (32-bit) Ubuntu 9.10 instead of the x64 edition. For the purposes of fixing your Windows installation, the 64-bit Live CD will not provide any benefits. Once Ubuntu 9.10 boots up, open up Firefox by clicking on its icon in the top panel. Navigate to http://www.avast.com/linux-home-edition. Click on the Download tab, and then click on the link to download the DEB package. Save it to the default location. While avast! is downloading, click on the link to the registration form on the download page. Fill in the registration form if you do not already have a trial license for avast!. By the time you’ve filled out the registration form, avast! will hopefully be finished downloading. Open a terminal window by clicking on Applications in the top-left corner of the screen, then expanding the Accessories menu and clicking on Terminal. In the terminal window, type in the following commands, pressing enter after each line. cd Downloadssudo dpkg –i avast* This will install avast! on the live Ubuntu environment. To ensure that you can use the latest virus database, while still in the terminal window, type in the following command: sudo sysctl –w kernel.shmmax=128000000 Now we’re ready to open avast!. Click on Applications on the top-left corner of the screen, expand the Accessories folder, and click on the new avast! Antivirus item. You will first be greeted with a window that asks for your license key. Hopefully you’ve received it in your email by now; open the email that avast! sends you, copy the license key, and paste it in the Registration window. avast! Antivirus will open. You’ll notice that the virus database is outdated. Click on the Update database button and avast! will start downloading the latest virus database. To scan your Windows hard drive, you will need to “mount” it. While the virus database is downloading, click on Places on the top-left of your screen, and click on your Windows hard drive, if you can tell which one it is by its size. If you can’t tell which is the correct hard drive, then click on Computer and check out each hard drive until you find the right one. When you find it, make a note of the drive’s label, which appears in the menu bar of the file browser. Also note that your hard drive will now appear on your desktop. By now, your virus database should be updated. At the time this article was written, the most recent version was 100404-0. In the main avast! window, click on the radio button next to Selected folders and then click on the “+” button to the right of the list box. It will open up a dialog box to browse to a location. To find your Windows hard drive, click on the “>” next to the computer icon. In the expanded list, find the folder labelled “media” and click on the “>” next to it to expand it. 
In this list, you should be able to find the label that corresponds to your Windows hard drive. If you want to scan a certain folder, then you can go further into this hierarchy and select that folder. However, we will scan the entire hard drive, so we’ll just press OK. Click on Start scan and avast! will start scanning your hard drive. If a virus is found, you’ll be prompted to select an action. If you know that the file is a virus, then you can Delete it, but there is the possibility of false positives, so you can also choose Move to chest to quarantine it. When avast! is done scanning, it will summarize what it found on your hard drive. You can take different actions on those files at this time by right-clicking on them and selecting the appropriate action. When you’re done, click Close. Your Windows PC is now free of viruses, in the eyes of avast!. Reboot your computer and with any luck it will now boot up! Alternatives to avast! If avast! and a liberal amount of Googling doesn’t fix your problem, it’s possible that a different virus scanner will fix your obscure issue. Here is a list of other virus scanners available for Ubuntu that are either free or offer free trials. See their support forums for help on installing these virus scanners. Avira AntiVir Personal for Linux / Solaris Panda Antivirus for Linux Installation and usage guide from Ubuntu F-PROT Antivirus for Linux ClamAV installation and usage guide from Ubuntu NOD32 Antivirus for Linux Kaspersky Anti-Virus 2010 Bitdefender Antivirus for Unices Conclusion Running avast! from a Ubuntu Live CD can clean the vast majority of viruses from your Windows PC. This is another reason to always have a Ubuntu Live CD ready just in case something happens to your Windows installation!

    Read the article

  • Week in Geek: 4chan Falls Victim to DDoS Attack Edition

    - by Asian Angel
    This week we learned how to tweak the low battery action on a Windows 7 laptop, access an eBook collection anywhere in the world, “extend iPad battery life, batch resize photos, & sync massive music collections”, went on a reign of destruction with Snow Crusher, and had fun decorating our desktops with abstract icon collections. Photo by pasukaru76. Random Geek Links We have included extra news article goodness to help you catch up on any developments that you may have missed during the holiday break this past week. Note: The three 27C3 articles listed here represent three different presentations at the 27th Chaos Communication Congress hacker conference. 4chan victim of DDoS as FBI investigates role in PayPal attack Users of 4chan may have gotten a taste of their own medicine after the site was knocked offline by a DDoS attack from an unknown origin early Thursday morning. Report: FBI seizes server in probe of WikiLeaks attacks The FBI has seized a server in Texas as part of its hunt for the groups behind the pro-WikiLeaks denial-of-service attacks launched in December against PayPal, Visa, MasterCard, and others. Mozilla exposes older user-account database Mozilla has disabled 44,000 older user accounts for its Firefox add-ons site after a security researcher found part of a database of the account information on a publicly available server. Data breach affects 4.9 million Honda customers Japanese automaker Honda has put some 2.2 million customers in the United States on a security breach alert after a database containing information on the owners and their cars was hacked. Chinese Trojan discovered in Android games An Android-based Trojan called “Geinimi” has been discovered in the wild and the Trojan is capable of sending personal information to remote servers and exhibits botnet-like behavior. 27C3 presentation claims many mobiles vulnerable to SMS attacks According to security experts, an ‘SMS of death’ threatens to disable many current Sony Ericsson, Samsung, Motorola, Micromax and LG mobiles. 27C3: GSM cell phones even easier to tap Security researchers have demonstrated how open source software on a number of revamped, entry-level cell phones can decrypt and record mobile phone calls in the GSM network. 27C3: danger lurks in PDF documents Security researcher Julia Wolf has pointed out numerous, previously hardly known, security problems in connection with Adobe’s PDF standard. Critical update for WordPress A critical update has been made available for WordPress in the form of version 3.0.4. The update fixes a security bug in WordPress’s KSES library. McAfee Labs Predicts Geolocation, Mobile Devices and Apple Will Top the List of Targets for Emerging Threats in 2011 The list comprises 2010’s most buzzed about platforms and services, including Google’s Android, Apple’s iPhone, foursquare, Google TV and the Mac OS X platform, which are all expected to become major targets for cybercriminals. McAfee Labs also predicts that politically motivated attacks will be on the rise. Windows Phone 7 piracy materializes with FreeMarketplace A proof-of-concept application, FreeMarketplace, that allows any Windows Phone 7 application to be downloaded and installed free of charge has been developed. Empty email accounts, and some bad buzz for Hotmail In the past few days, a number of Hotmail users have been complaining about a rather disconcerting issue: their Hotmail accounts, some up to 10 years old, appear completely empty.  
No emails, no folders, nothing, just what appears to be a new account. Reports: Nintendo warns of 3DS risk for kids Nintendo has reportedly issued a warning that the 3DS, its eagerly awaited glasses-free 3D portable gaming device, should not be used by children under 6 when the gadget is in 3D-viewing mode. Google eyes ‘cloaking’ as next antispam target Google plans to take a closer look at the practice of “cloaking,” or presenting one look to a Googlebot crawling one’s site while presenting another look to users. Facebook, Twitter stock trading drawing SEC eye? The high degree of investor interest in shares of hot Silicon Valley companies that aren’t yet publicly traded–like Facebook, Twitter, LinkedIn, and Zynga–may be leading to scrutiny from the U.S. Securities and Exchange Commission (SEC). Random TinyHacker Links Photo by jcraveiro. Exciting Software Set for Release in 2011 A few bloggers from great websites such as How-To Geek, Guiding Tech and 7 Tutorials took the time to sit down and talk about their software wishes for 2011. Take the time to read it and share… Wikileaks Infopr0n An infographic detailing the quest to plug WikiLeaks. The New York Times Guide to Mobile Apps A growing collection of all mobile app coverage by the New York Times as well as lists of favorite apps from Times writers. 7,000,000,000 (Video) A fascinating look at the world’s population via National Geographic Magazine. Super User Questions Check out the great answers to these hot questions from Super User. How to use a Personal computer as a Linux web server for development purposes? How to link processing power of old computers together? Free virtualization tool for testing suspicious files? Why do some actions not work with Remote Desktop? What is the simplest way to send a large batch of pictures to a distant friend or colleague? How-To Geek Weekly Article Recap Had a busy week and need to get caught up on your HTG reading? Then sit back and relax while enjoying these hot posts full of how-to roundup goodness. The 50 Best How-To Geek Windows Articles of 2010 The 20 Best How-To Geek Explainer Topics for 2010 The 20 Best How-To Geek Linux Articles of 2010 How to Search Just the Site You’re Viewing Using Google Search Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud One Year Ago on How-To Geek Need more how-to geekiness for your weekend? Then look through this great batch of articles from one year ago that focus on dual-booting and O.S. installation goodness. Dual Boot Your Pre-Installed Windows 7 Computer with Vista Dual Boot Your Pre-Installed Windows 7 Computer with XP How To Setup a USB Flash Drive to Install Windows 7 Dual Boot Your Pre-Installed Windows 7 Computer with Ubuntu Easily Install Ubuntu Linux with Windows Using the Wubi Installer The Geek Note We hope that you and your families have had a terrific holiday break as everyone prepares to return to work and school this week. Remember to keep those great tips coming in to us at [email protected]! Photo by pjbeardsley. 

    Read the article

  • Scott Guthrie in Glasgow

    - by Martin Hinshelwood
    Last week Scott Guthrie was in Glasgow for his new Guathon tour, which was a roaring success. Scott did talks on the new features in Visual Studio 2010, Silverlight 4, ASP.NET MVC 2 and Windows Phone 7. Scott talked from 10am till 4pm, so this can only contain what I remember and I am sure lots of things he discussed just went in one ear and out another, however I have tried to capture at least all of my Ohh’s and Ahh’s. Visual Studio 2010 Right now you can download and install Visual Studio 2010 Candidate Release, but soon we will have the final product in our hands. With it there are some amazing improvements, and not just in the IDE. New versions of VB and C# come out of the box as well as Silverlight 4 and SharePoint 2010 integration. The new Intellisense features allow inline support for Types and Dictionaries as well as being able to type just part of a name and have the list filter accordingly. Even better, and my personal favourite is one that Scott did not mention, and that is that it is not case sensitive so I can actually find things in C# with its reasonless case sensitivity (Scott, can we please have an option to turn that off.) Another nice feature is the Routing engine that was created for ASP.NET MVC is now available for WebForms which is good news for all those that just imported the MVC DLL’s to get at it anyway. Another fantastic feature that will need some exploring is the ability to add validation rules to your entities and have them validated automatically on the front end. This removes the need to add your own validators and means that you can control an objects validation rules from a single location, the object. A simple command “GridView.EnableDynamicData(gettype(product))“ will enable this feature on controls. What was not clear was wither there would be support for this in WPF and WinForms as well. If there is, we can write our validation rules once and use everywhere. I was disappointed to here that there would be no inbuilt support for the Dynamic Language Runtime (DLR) with VS2010, but I think it will be there for .vNext. Because I have been concentrating on the Visual Studio ALM enhancements to VS2010 I found this section invaluable as I now know at least some of what I missed. Silverlight 4 I am not a big fan of Silverlight. There I said it, and I will probably get lynched for it. My big problem with Silverlight is that most of the really useful things I leaned from WPF do not work. I am only going to mention one thing and that is “x:Type”. If you are a WPF developer you will know how much power these 6 little letters provide; the ability to target templates at object types being the the most magical and useful. But, and this is a massive but, if you are developing applications that MUST run on platforms other than windows then Silverlight is your only choice (well that and Flash, but lets just not go there). And Silverlight has a huge install base as well.. 60% of all internet connected devices have Silverlight. Can Adobe say that? Even though I am not a fan of it my current project is a Silverlight one. If you start your XAML experience with Silverlight you will not be disappointed and neither will the users of the applications you build. Scott showed us a fantastic application called “Silverface” that is a Silverlight 4 Out of Browser application. I have looked for a link and can’t find one, but true to form, here is a fantastic WPF version called Fish Bowl from Microsoft. 
ASP.NET MVC 2 ASP.NET MVC is something I have played with but never used in anger. It is definitely the way forward, but WebForms is not dead yet. there are still circumstances when WebForms are better. If you are starting from greenfield and you are using TDD, then MVC is ultimately the only way you can go. New in version 2 are Dynamic Scaffolding helpers that let you control how data is presented in the UI from the Entities. Adding validation rules and other options that make sense there can help improve the overall ease of developing the UI. Also the Microsoft team have heard the cries of help from the larger site builders and provided “Areas” which allow a level of categorisation to your Controllers and Views. These work just like add-ins and have their own folder, but also have sub Controllers and Views. Areas are totally pluggable and can be dropped onto existing sites giving the ability to have boxed products in MVC, although what you do with all of those views is anyone's guess. They have been listening to everyone again with the new option to encapsulate UI using the Html.Action or Html.ActionRender. This uses the existing  .ascx functionality in ASP.NET to render partial views to the screen in certain areas. While this was possible before, it makes the method official thereby opening it up to the masses and making it a standard. At the end of the session Scott pulled out some IIS goodies including the IIS SEO Toolkit which can be used to verify your own site is “good” for search engine consumption. Better yet he suggested that you run it against your friends sites and shame them with how bad they are. note: make sure you have fixed yours first. Windows Phone 7 Series I had already seen the new UI for WP7 and heard about the developer story, but Scott brought that home by building a twitter application in about 20 minutes using the emulator. Scott’s only mistake was loading @plip’s tweets into the app… And guess what, it was written in Silverlight. When Windows Phone 7 launches you will be able to use about 90% of the codebase of your existing Silverlight application and use it on the phone! There are two downsides to the new WP7 architecture: No, your existing application WILL NOT work without being converted to either a Silverlight or XNA UI. NO, you will not be able to get your applications onto the phone any other way but through the Marketplace. Do I think these are problems? No, not even slightly. This phone is aimed at consumers who have probably never tried to install an application directly onto a device. There will be support for enterprise apps in the future, but for now enterprises should stay on Windows Phone 6.5.x devices. Post Event drinks At the after event drinks gathering Scott was checking out my HTC HD2 (released to the US this month on T-Mobile) and liked the Windows Phone 6.5.5 build I have on it. We discussed why Microsoft were not going to allow Windows Phone 7 Series onto it with my understanding being that it had 5 buttons and not 3, while Scott was sure that there was more to it from a hardware standpoint. I think he is right, and although the HTC HD2 has a DX9 compatible processor, it was never built with WP7 in mind. However, as if by magic Saturday brought fantastic news for all those that have already bought an HD2: Yes, this appears to be Windows Phone 7 running on a HTC HD2. The HD2 itself won't be getting an official upgrade to Windows Phone 7 Series, so all eyes are on the ROM chefs at the moment. 
The rather massive photos have been posted by Tom Codon on HTCPedia and they've apparently got WiFi, GPS, Bluetooth and other bits working. The ROM isn't online yet but according to the post there's a beta version coming soon. Leigh Geary - http://www.coolsmartphone.com/news5648.html  What was Scott working on on his flight back to the US? Follow: @CAMURPHY, @ColinMackay, @plip and of course @ScottGu

    Read the article

  • HDMI video connection cuts top and bottom borders of screen

    - by Luis Alvarado
    Ok this is an extension of another problem I had with a VGA connection and an Nvidia Geforce GT 440 card. Here is goes the explanation of this particular problem: I have a Soneview 32' TV. This TV has many connections including VGA (First reason I bought it), HDMI (Second reason but did not have a HDMI cable at that time) and DVI. I have had this TV for little over a month now, actually I had it to celebrate the release of Ubuntu 11.10 and started using it exactly on that date (I know too much fan there but hey, I like geek stuff). I started using it with the VGA cable. After 2 weeks I bought an Nvidia GT440 card. The previous 9500GT was working correctly with no problems whatsoever. I installed the GT440 and the first problem that I encountered using this latest card is mentioned here: Nvidia GT 440 black screen problem when loading lightdm greeter. The solution to this problem was to actually disconnect then connect again the VGA cable. This would result in the screen showing me the lightdm screen for my login. If I did not disconnect then connect the cable I could be there forever thinking that there is no video signal. I got tired of looking for answers that did not work and for solutions that made me literally have to install Ubuntu again. I just went and bought a HDMI cable and changed the VGA one for that one. It worked and I did not have to disconnect/connect the cable but now I have this problem when using any resolution. My normal resolution is 1920x1080 (This TV is 1080HD) so in VGA I could use this resolution with no problem, but on HDMI am getting the borders cut out. Here is a pic: As you can see from the PIC, the Launcher icons only show less than 50% of their witdh. Forget about the top and bottom parts, I can access them with the mouse but I can not visualize them in the screen. It is like it's outside of the TVs view. Basically there is like 20 to 30 pixels gone from all sides. I searched around and came to running xrand --verbose to see what it could detect from the TV. 
I got this: cyrex@cyrex:~$ xrandr --verbose xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 175, current 1920 x 1080, maximum 1920 x 1080 default connected 1920x1080+0+0 (0x164) normal (normal) 0mm x 0mm Identifier: 0x163 Timestamp: 465485 Subpixel: unknown Clones: CRTC: 0 CRTCs: 0 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: 1920x1080 (0x164) 103.7MHz *current h: width 1920 start 0 end 0 total 1920 skew 0 clock 54.0KHz v: height 1080 start 0 end 0 total 1080 clock 50.0Hz 1920x1080 (0x165) 105.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 55.1KHz v: height 1080 start 0 end 0 total 1080 clock 51.0Hz 1920x1080 (0x166) 107.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 56.2KHz v: height 1080 start 0 end 0 total 1080 clock 52.0Hz 1920x1080 (0x167) 109.9MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 57.2KHz v: height 1080 start 0 end 0 total 1080 clock 53.0Hz 1920x1080 (0x168) 112.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 58.3KHz v: height 1080 start 0 end 0 total 1080 clock 54.0Hz 1920x1080 (0x169) 114.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 59.4KHz v: height 1080 start 0 end 0 total 1080 clock 55.0Hz 1680x1050 (0x16a) 98.8MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 58.8KHz v: height 1050 start 0 end 0 total 1050 clock 56.0Hz 1680x1050 (0x16b) 100.5MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 59.9KHz v: height 1050 start 0 end 0 total 1050 clock 57.0Hz 1600x1024 (0x16c) 95.0MHz h: width 1600 start 0 end 0 total 1600 skew 0 clock 59.4KHz v: height 1024 start 0 end 0 total 1024 clock 58.0Hz 1440x900 (0x16d) 76.5MHz h: width 1440 start 0 end 0 total 1440 skew 0 clock 53.1KHz v: height 900 start 0 end 0 total 900 clock 59.0Hz 1360x768 (0x171) 65.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 48.4KHz v: height 768 start 0 end 0 total 768 clock 63.0Hz 1360x768 (0x172) 66.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 49.2KHz v: height 768 start 0 end 0 total 768 clock 64.0Hz 1280x1024 (0x173) 85.2MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.6KHz v: height 1024 start 0 end 0 total 1024 clock 65.0Hz 1280x960 (0x176) 83.6MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 65.3KHz v: height 960 start 0 end 0 total 960 clock 68.0Hz 1280x960 (0x177) 84.8MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.2KHz v: height 960 start 0 end 0 total 960 clock 69.0Hz 1280x720 (0x178) 64.5MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 50.4KHz v: height 720 start 0 end 0 total 720 clock 70.0Hz 1280x720 (0x179) 65.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.1KHz v: height 720 start 0 end 0 total 720 clock 71.0Hz 1280x720 (0x17a) 66.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.8KHz v: height 720 start 0 end 0 total 720 clock 72.0Hz 1152x864 (0x17b) 72.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.1KHz v: height 864 start 0 end 0 total 864 clock 73.0Hz 1152x864 (0x17c) 73.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.9KHz v: height 864 start 0 end 0 total 864 clock 74.0Hz ....Many Resolutions later... 
320x200 (0x1d1) 10.2MHz h: width 320 start 0 end 0 total 320 skew 0 clock 31.8KHz v: height 200 start 0 end 0 total 200 clock 159.0Hz 320x175 (0x1d2) 9.0MHz h: width 320 start 0 end 0 total 320 skew 0 clock 28.0KHz v: height 175 start 0 end 0 total 175 clock 160.0Hz 1920x1080 (0x1dd) 333.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 173.9KHz v: height 1080 start 0 end 0 total 1080 clock 161.0Hz If it helps, the refresh rate at 1920x1080 is 60. There is a flickering effect at this resolution when using HDMI but not VGA, which I imagine is related to the cut-off borders issue I am asking about here. I have also done the following, but this will only solve the problem at resolutions lower than 1920x1080 or on other TVs (my father has a Sony TV where this problem is also solved): NVIDIA WAY Go to Nvidia-Settings and there will be an option that will have more features if an HDMI cable is connected. In the next pic the option is DFP-1 (CNDLCD), but this name changes depending on what device the PC is connected to: Uncheck Force Full GPU Scaling. What this will do for resolutions LOWER than 1920x1080 (at least in my case) is solve the flickering problem and fix the borders cut by the monitor. Save the changes to the xorg.conf file after switching to a resolution acceptable to your eyes. TV WAY If your TV has an OSD menu and this menu has options for scanning the screen resolution or auto-adjusting to it, disable them. Specifically the option about SCAN. If you have an option for AV Mode, disable it. Basically, disable any option that needs to scan and scale the resolution. Test one by one. In the case of my father's TV this did it. In my case, the Nvidia option solved it for lower resolutions. NOTE: If this is not solved in the next couple of weeks I will add this as the answer, but take into consideration that the issue is still active at 1920x1080.

    Read the article

  • The hidden cost of interrupting knowledge workers

    - by Piet
    The November issue of pragpub has an interesting article on interruptions. The article is written by Brian Tarbox, who also mentions the article on his blog. I like the subtitle: ‘Simple Strategies for Avoiding Dumping Your Mental Stack’. Brian talks about the effective cost of interrupting a ‘knowledge worker’, often with trivial questions or distractions. In the eyes of the interruptor, the interruption only costs the time the interrupted had to listen to the question and give an answer. However, depending on what the interrupted was doing at the time, getting fully immersed in their task again might take up to 15-20 minutes. Enough interruptions might even cause a knowledge worker to mentally call it a day. According to this article interruptions can consume about 28% of a knowledge worker’s time, translating into a $588 billion loss for US companies each year. Looking for a new developer to join your team? Ever thought about optimizing your team’s environment and the way they work instead? Making non-knowledge workers aware You can’t. Well, I haven’t succeeded yet. And believe me: I’ve tried. When you’ve got a simple way to really increase your productivity (’give me 2 hours of uninterrupted time a day’) it wouldn’t be right not to tell your boss or team-leader about it. The problem is: only productive knowledge workers seem to understand this. People who don’t fall into this category just seem to think you’re joking, being arrogant or anti-social when you tell them the interruptions can really have an impact on your productivity. Also, knowledge workers often work in a very concentrated mental state which is described here as: It is the same mindfulness as ecstatic lovemaking, the merging of two into a fluidly harmonious one. The hallmark of flow is a feeling of spontaneous joy, even rapture, while performing a task. Yes, coding can be addictive and if you’re interrupting a programmer at the wrong moment, you’re effectively bringing down a junkie from his high in just a few seconds. This can result in seemingly arrogant, almost aggressive reactions. How to make people aware of the production-cost they’re inflicting: I’ve often pondered that question myself. The article suggests that solutions based on that question never seem to work. To be honest: I’ve never even been able to find a half decent solution for this question. People who are not in this situation just don’t understand the issue, no matter how you try to explain it. Fun (?) thing I’ve noticed: Programmers or IT people in general who don’t get this are often the kind of people who just don’t get anything done. Interrupt handling (interruption management?) IRL Have non-urgent questions handled in a non-interruptive way It helps a bit to educate people into using non-interruptive ways to ask questions: “duh, I have no idea, but I’m a bit busy here now could you put it in an email so I don’t forget?”. Eventually, a considerable number of people will skip interrupting you and just send an email right away. Some stubborn-headed people, however, will continue to just interrupt you, saying “you’re 10 meters from my desk, why can’t we just talk?”. Just remember to disable your email notifications; it can be hard to resist opening your email client when you know a new email just arrived. Use Do Not Disturb signals When working in a group of programmers, often the unofficial sign that you can only be interrupted for something important is to put on headphones. 
And when the environment is quiet enough, often people aren’t even listening to music. Otherwise music can help to block the indirect distractions (someone else talking on the phone or tapping their feet). You might get a “they’re all just surfing and listening to music”-reaction from outsiders though. Peopleware talks about a team where the no-interruption sign was placing a shawl on the desk (if I remember correctly; I am unable to locate my copy of this really excellent must-read book). If you have all standardized on the same IM tool, maybe that tool has a ‘do not disturb’ setting. Also some phone-systems have a ‘DND’ (do not disturb) setting. Hide Brian offers a number of good suggestions, some obvious like: hide away somewhere they can’t find you. Not sure how long it’ll be till someone thinks you’re just taking a nap somewhere though. Also, this often isn’t possible or your boss might not understand this. And if you really get caught taking a nap, make sure to explain that you were powernapping. Counter-act interruptions Another suggestion he offers is, when you’re being interrupted, to just hold up your hand, blocking the interruption, and at least giving you time to finish your sentence or your block/line of code. The last suggestion works more as a way to make it obvious to the interruptor that they really are interrupting your work and to offload some of the cost on the interruptor. In practice, this can also help you cool down a bit so you don’t start saying nasty things to the interruptor. Unfortunately I’ve sometimes been confronted with people who just ignore this signal and keep talking, as if they’re sure that whatever they’ve got to say is really worth listening to and without a doubt more important than anything you might be doing. This behaviour usually leaves me speechless (not good when someone just asked a question). I’ve noticed that these people are usually also the first to complain when being interrupted themselves. They’re generally not very liked as colleagues, so try not to imitate their behaviour. TDD as a way to minimize recovery time I don’t like Test Driven Development. Mainly for only one reason: it interrupts flow. At least, that’s what it does for me, but maybe I just haven’t grown used to TDD yet. BUT a positive effect TDD has on me is that when I have to work in an interruptive environment and can’t really get into the ‘flow’ (also supposedly called ‘the zone’ by software developers, although I’ve never heard it 1st hand), it helps me to concentrate on the tasks at hand and to get back to work after an interruption. I feel that when using TDD, I can get by without the need for being totally ‘in’ the project and I can be reasonably productive without obtaining ‘flow’. Do you have a suggestion on how to make people aware of the concept of ‘flow’ and the cost of interruptions? (without looking like an arrogant ass or a weirdo)

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is the 39th episode of the Notes from the Field series. What is the best solution you have when you encounter a disaster in your organization? Now many of you would answer that in this scenario you would have another standby machine or alternative which you will plug in. Now let me ask a second question – What would you do if you as an individual face disaster?  In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication (http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/). Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog. For now, though, onto how I almost got my vehicle hit by a train… The Train Crash That Almost Was…. My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small Stop Sign controlled railroad intersection. There are only two small freight trains a day passing there. Actually it is the same train, making a journey south and then back north. That’s it. This is a small rural road; there is barely ever a second car driving in the neighborhood when I am there. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I rolled through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over. And that is because most of the time I was going there long after the train was done for the day or in between the runs. This habit became pretty well established after a couple of years of driving the route. The behavior was reinforced a bit by the success ratio. I saw others doing it as well from the neighborhood when I would happen to be there around the time another car was there. Well. You already know where this ends up by the title and backstory here. 
A few months ago I came to that little crossing, and I started to do the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into.  For some reason I looked and heard and saw the train slowly approaching and slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but I sat there thinking about lessons for IT professionals from the situation once I started breathing again and watched the cars loaded with sand and propane slowly labored down the tracks… Here are Those Lessons… It’s easy to get stuck into a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time that you will most likely get stuck in a bad situation. Don’t let yourself get complacent, don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • When OneTug Just Isn’t Enough…

    - by onefloridacoder
    I stole that from the back of a T-shirt I saw at the Orlando Code Camp 2010.  This was my first code camp and my first time volunteering for an event like this as well.  It was an awesome day.  I cannot begin to count the “aaahh” and “I did not know I could do that” moments in the crowd and for myself.  I think it was a great day of learning for everyone at all levels.  All of the presenters were different and provided great insights into the topics they were presenting.  Here’s a list of the ones that I attended. KodeFuGuru, “Pirates vs. Ninjas” He touched on many good topics about relaxing some of the ways we think when we are writing code while still keeping it good looking, readable, etc.  As he pointed out in all of his examples, we might not always realize everything that’s going on under the covers.  He exposed a bug in his own code, and verbalized the mental gymnastics he went through when he knew there was something wrong with one of his IEnumerable implementations.  For me, it was great to hear that someone else labors over these gut reactions to code quickly snapped together, to the point that we rush to the refactor stage to fix what’s bothering us – and learn.  He has some content on extension methods that was very interesting.  My “that is so cool” moment was when he swapped out the AddEntity method on an entity class and used a With extension method instead.  Some of the LINQ scales fell off my eyes at that moment, and I realized my own code could be a lot more powerful (and readable) if I incorporate a few of these examples at the appropriate times.  And he cautioned as well… “don’t go crazy with this stuff”, there’s a place and time for everything.  One of his examples demo’d toward the end of the talk is on his site where he’s chaining methods together, cool stuff. Quotes I liked: “Extension Methods - Extension methods to put features back on the model type, without impacting the type.” “Favor Declarative Code” – Check out the ? and ?? operators if you’re not already using them. “Favor Fluent Code” “Avoid Pirate Ninja Zombies!  If you see one run!” I’m definitely going to be looking at “Extract Projection” when I get into VS2010. BDD 101 – Sean Chambers http://github.com/schambers This guy had a whole host of gremlins against him, final score Sean 5, Gremlins 1.  He ran the code samples from his github repo in the github code viewer since the PC the school gave him to use didn’t have VS installed. He did a great job of converting the grammar between BDD and TDD, and how this style of development can be used in integration tests as well as the different types of gated builds on a CI box – he didn’t go into a discussion around CI, but we could infer that it could work. Like when we use WSSF, it does cause a class explosion to happen; however, the amount of code per class is limited to just covering the concern at hand – no more, no less.  As in “When I as a <Role>, expect {something} to happen, because {}”  This keeps us (the developers) from gold plating our solutions and creating waste.  He basically keeps the code that proves out the requirement to two lines.  Nice. He uses SpecUnit to merge this grammar into his .NET projects and gave an overview on how this ties into writing his own BDD tests.  
Some folks were familiar with Given / When / Then as story acceptance criteria and here’s how he mapped it: “Given <Context>  When <Something Happens> Then <I expect...>”  There are a few base classes and overrides in the SpecUnit framework that help with setting up the context for each test, which looked very handy. Successfully Running Your Own Coding Business The speaker ran through a list of items that sounded like common sense stuff: LLC, banking, separating expenses, etc.  Then moved into role playing with business owners and an ISV.  That was pretty good stuff; it pays to be a good listener all of the time, even if your client is sitting on the other side of the phone tearing your head off – but that’s all it is, and get used to it, it’s par for the course.  Oh yeah, always answering the phone was one simple thing that you can do to move your business forward.  But like Cory Foy tweeted this week, “If you owe me a lot of money, don’t have a message that says your away for five weeks skiing in Colorado.”  Lots of food for thought that’s on my list of “todo’s and to-don’ts”. Speaker Idol Next, I had the pleasure of helping Russ Fustino tape this part of Code Camp as my primary volunteer opportunity that day.  You remember Russ, “know the code” from the awesome Russ’ Tool Shed series.  He did a great job orchestrating and capturing the Speaker Idol finals.  So I didn’t actually miss any sessions, but was able to see three back to back in one setting.  The idol finalists each gave a 10 minute talk on very deep subjects, but with different styles of talks.  No one walked away empty handed for jobs very well done.  Russ has details on his site.  The pictures and video captured are supposed to be published on Channel 9 at a later date.  It was also a valuable experience to see what makes technical speakers effective in their talks.  I picked up quite a few speaking tips from what I heard from the judges and contestants. Design For Developers – Diane Leeper If you are a great developer, you’re probably a lousy designer.  Diane didn’t come to poke holes in what we think we can do with UI layout and design, but she provided some tools we can use to figure out metaphors for visualizing data.  If you need help with that check out Silverlight Pivot – that’s what she was getting at.  I was first introduced to her at one of John Papa’s talks last year at a Lakeland User Group meeting and she’s very passionate about design.  She was able to discuss different elements of Pivot that, to a developer, just looked cool. I believe she was providing the deck from her talk to folks after her talk, so send her an email if you’re interested.  She says she can talk about design for hours and hours – we all left that session believing her.  Rinse and Repeat Orlando Code Camp 2010 was awesome, and I would totally do it again.  There were lots of folks from my shop there, and some that have left my shop to go elsewhere.  So it was a reunion of sorts and a great celebration for the simple fact that it’s great to be a developer and there’s a community that supports and recognizes it as well.  The sponsors were generous and the organizers were very tired, namely Esteban Garcia and Will Strohl who were responsible for making a lot of this magic happen.  And if you don’t believe me, check out the chatter on Twitter.
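To make the With-style extension method and the declarative ? / ?? operators mentioned in the KodeFuGuru notes above concrete, here is a minimal C# sketch. The Order type, its properties, and the With helper are illustrative assumptions, not the presenter's actual code:

using System;

static class EntityExtensions
{
    // Apply an action to an object and return the same object,
    // so property assignments can be chained fluently.
    public static T With<T>(this T entity, Action<T> apply)
    {
        apply(entity);
        return entity;
    }
}

class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

class Demo
{
    static void Main()
    {
        string requestedName = null; // imagine this came from user input

        var order = new Order()
            .With(o => o.Customer = requestedName ?? "Unknown") // ?? supplies a declarative fallback
            .With(o => o.Total = 42.50m);

        Console.WriteLine("{0}: {1}", order.Customer, order.Total);
    }
}

As cautioned in the talk, this style reads well for small object setups; it is easy to overdo, so keep the chains short.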

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for asp.net MVC.  It was introduced in the “WebMatrix” tool and has now been released as part of the asp.net MVC 3 preview 1.  Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML.  Additionally, it provides some really nice features for master page type scenarios and you don’t lose access to any of the features you are currently familiar with, such as HTML helper methods. First, download and install the ASP.NET MVC Preview 1.  You can find this at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en. Now, follow these steps to create your first asp.net mvc project using Razor: 1. Open Visual Studio 2010 2. Create a new project.  Select File->New->Project (Shift Control N) 3. You will see the list of project types which should look similar to what’s shown:   4. Select “ASP.NET MVC 3 Web Application (Razor).”  Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard asp.net view engine and template, which isn’t what you want. 5. For this tutorial, and ONLY for this tutorial, select “No, do not create a unit test project.”  In general, you should create and use a unit test project.  Code without unit tests is kind of like diet ice cream.  It just isn’t very good. Now, once we have this done, our brand new project will be created.  In all likelihood, Visual Studio will leave you looking at the “HomeController.cs” class, as shown below: Immediately, you should notice one difference.  The Index action used to look like: public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.Net MVC!"; return View(); } While this will still compile and run just fine, ASP.Net MVC 3 has a much nicer way of doing this: public ActionResult Index() { ViewModel.Message = "Welcome to ASP.Net MVC!"; return View(); } Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic data typing of .Net 4.0 to allow us to express ourselves much more cleanly.  This isn’t a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor. What comes in the box? When we create a project using the ASP.Net MVC 3 Template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0 but with some differences.  Instead of seeing “.aspx” view files and “.ascx” files, we see files with the “.cshtml” extension, which is the default razor extension.  Before we discuss the details of a razor file, one thing to keep in mind is that since this is an extremely early preview, intellisense is not currently enabled with the razor view engine.  This is promised as an update before the final release.  Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word “Controller” still stands.  Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory.  Remember, in asp.net MVC, convention over configuration is key to successful development! The initial template organizes views in the following folders, located in the project under Views: - Account – The default account management views used by the Account controller.  Each file represents a distinct view. - Home – Views corresponding to the appropriate actions within the home controller. 
- Shared – This contains common view objects used by multiple views.  Within here, master pages are stored, as well as partial page views (user controls).  By convention, these partial views are named “_XXXPartial.cshtml” where XXX is the appropriate name, such as _LogonPartial.cshtml.  Additionally, display templates are stored under here. With this in mind, let us take a look at the index.cshtml file under the home view directory.  When you open up index.cshtml you should see 1:   @inherits System.Web.Mvc.WebViewPage 2:  @{ 3:          View.Title = "Home Page"; 4:       LayoutPage = "~/Views/Shared/_Layout.cshtml"; 5:   } 6:  <h2>@View.Message</h2> 7:  <p> 8:     To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC     9:    Website">http://asp.net/mvc</a>. 10:  </p> So looking through this, we observe the following facts: Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage.  Note that this is different than System.Web.MVC.ViewPage which is used by asp.net MVC 2.0 Also note that instead of the <% %> syntax, we use the very simple ‘@’ sign.  The View Engine contains enough context sensitive logic that it can even distinguish between @ in code and @ in an email.  It’s a very clean markup.  Line 2 introduces the idea of a code block in razor.  A code block is a scoping mechanism just like it is in a normal C# class.  It is designated by @{… }  and any C# code can be placed in between.  Note that this is all server side code just like it is when using the aspx engine and <% %>.  Line 3 allows us to set the page title in the client page’s file.  This is a new feature which I’ll talk more about when we get to master pages, but it is another of the nice things razor brings to asp.net mvc development. Line 4 is where we specify our “master” page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located.  A Layout Page is similar to a master page, but it gains a bit when it comes to flexibility.  Again, we’ll come back to this in a later installment.  Line 6 and beyond is where we display the contents of our view.  No more using <%: %> intermixed with code.  Instead, we get to use very clean syntax such as @View.Message.  This is a lot easier to read than <%:@View.Message%> especially when intermixed with html.  For example: <p> My name is @View.Name and I live at @View.Address </p> Compare this to the equivalent using the aspx view engine <p> My name is <%:View.Name %> and I live at <%: View.Address %> </p> While not an earth shaking simplification, it is easier on the eyes.  As  we explore other features, this clean markup will become more and more valuable.
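For reference, here is a minimal controller sketch showing how the Name and Address values used in the closing markup example might be supplied through the dynamic ViewModel property described above. The About action name and the literal values are hypothetical, not part of the template:

using System.Web.Mvc;

public class HomeController : Controller
{
    // Hypothetical action: the dynamic ViewModel members feed the
    // @View.Name and @View.Address expressions in the Razor markup.
    public ActionResult About()
    {
        ViewModel.Name = "John Doe";          // dynamic member access, no string keys
        ViewModel.Address = "1 Razor Street";
        return View();                        // renders Views/Home/About.cshtml by convention
    }
}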

    Read the article

  • Week in Geek: IPv6 Capable Smartphones Compromise User Privacy Edition

    - by Asian Angel
    This week we learned how to “clone a disk, resize static windows, and create system function shortcuts”, use 45 different services, sites, and apps to help read favorite sites, add MP3 support to Audacity (for saving in MP3 format), install a Wii game loader for easy backups and fast load times, create a Blue Screen of Death in any color, and more. Photo by legofenris. Weekly News Links Photo by The H Security. IPv6: Smartphones compromise users’ privacy Since version 4 of the iOS operating system, Apple’s iPhones, iPads and iPods have been capable of handling IPv6, and most Android devices have been capable since version 2.1. However, the operating systems transfer an ID that discloses information about their users. Dumb phones can be attacked too Much of the discussion of security threats to mobile phones revolves around smartphones, but researchers have found that less advanced “feature phones,” still used by the majority of people around the world, also are vulnerable to attack. SCADA exploit – the dragon awakes The recent publication of an exploit for KingView, a software package for visualising industrial process control systems, appears to be having an effect. Threatpost reports that both the Chinese vendor Wellintech and Chinese CERT (CN-CERT) have now reacted. Sophos: Spam to get more malicious Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos. Global spam traffic rebounds as Rustock wakes Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec. Cracking WPA keys in the cloud At the forthcoming Black Hat conference, blogger Thomas Roth plans to demonstrate how weak WPA PSKs can be cracked quickly and easily using Amazon’s Elastic Compute Cloud (EC2) service. Microsoft Security Advisory: Vulnerability in Internet Explorer could allow remote code execution Provides a link to more details about the vulnerability and shows a work-around/fix for the problem. Adobe plans to make it easier to delete Flash cookies in web browsers The new API, NPAPI:ClearSiteData, will allow Flash cookies – also known as Local Shared Objects (LSO) – to be deleted directly in the browser’s settings. Firefox beta getting new database standard The ninth beta version of Firefox is set to get support for a standard called IndexedDB that provides a database interface useful for offline data storage and other tasks needing information on a browser’s computer. MetroPCS accused of blocking certain Net content MetroPCS is violating the FCC’s recently approved Net neutrality rules by blocking certain Internet content, say several public interest groups. Server and Tools chief Muglia to leave Microsoft in summer 2011 Microsoft veteran and Server & Tools Business (STB) President Bob Muglia is leaving Microsoft, according to an email that CEO Steve Ballmer sent to employees on January 10. Report: DOJ nearing decision on Google-ITA The U.S. Department of Justice is gearing up for a possible formal antitrust investigation into whether or not Google should be allowed to purchase travel software company ITA Software, according to a report. South Korea says Google Street View broke law Police in South Korea reportedly say Google broke the country’s law when its Street View service captured personal data from unsecure Wi-Fi networks. 
The backlash over Google’s HTML5 video bet Choosing strategies based on what you believe to be long-term benefits is generally a good idea when running a business, but if you manage to alienate the world in the process, the long term may become irrelevant. Google answers critics on HTML5 Web video move Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video will hold the Web hostage. Random TinyHacker Links A Special GiveAway: a Great Book & Great Security Software The team from 7 Tutorials has a special giveaway running during the month of January. Signed copies of their latest book, full 1-year licenses of BitDefender Internet Security 2011 and free 3-month trials for everyone willing to participate. One Click Rooting For Android Phones Here’s a nice tool that helps you root your Android phone effortlessly. New Angry Birds Free version 1.0 Available in the App Store. Google Code University Learn programming at Google Code University. Capture and Share Your Favorite Part Of a YouTube Video SnipSnip.it lets you share only the part of the video that you like. Super User Questions More great questions and answers from this past week’s popular topics at Super User. What are the Windows A: and B: drives used for? Does OS X support linux-like features? What is the easiest way to make a backup of an entire hard disk? Will shifting from Wireless to Wired network result in better performance? Is it legal to install Windows 7 Home Premium Retail inside VMware virtual machine? How-To Geek Weekly Article Recap Enjoy reading through our hottest articles from this past week. The 50 Best Ways to Disable Built-in Windows Features You Don’t Want The Best of CES (Consumer Electronics Show) in 2011 How to Upgrade Windows 7 Easily (And Understand Whether You Should) The Worst of CES (Consumer Electronics Show) in 2011 The How-To Geek Guide to Audio Editing: Basic Noise Removal One Year Ago on How-To Geek More great articles from one year ago filled with helpful geeky goodness for you to enjoy. Share Text & Images the Easy Way with JustPaste.it Start Portable Firefox in Safe Mode Firefox 3.6 Release Candidate Available, Here’s How to Fix Your Incompatible Extensions Protect Your Computer from “Little Hands” with KidSafe Lock Prying Eyes Out of Your Minimized Windows Custom Crocheted Cylon-Cthulhu Hybrid What happens when you let your Cylon Centurion figure and your crocheted Cthulhu spend too many lonely nights together? A Cylon-Cthulhu hybrid, of course! You can get your own from the Cthulhu Chick store over on Etsy. Note: This is not an ad…Ruth is a friend of ours, and this Cylon-Cthulhu hybrid makes the perfect guard for the new MVP trophy in our office. The Geek Note Whether it is a geeky indoor project or just getting outside, we hope that you and your families have a terrific fun-filled weekend! Remember to keep sending those great tips in to us at [email protected]. Photo by qwrrty. 

    Read the article

  • My raycaster is putting out strange results, how do I fix it?

    - by JamesK89
    I'm working on a raycaster in ActionScript 3.0 for the fun of it, and as a learning experience. I've got it up and running and it's displaying output as expected; however, I'm getting this strange bug where rays go through corners of blocks and the edges of blocks appear through walls. Maybe somebody with more experience can point out what I'm doing wrong, or maybe a fresh pair of eyes can spot a tiny bug I haven't noticed. Thank you so much for your help! Screenshots: http://i55.tinypic.com/25koebm.jpg http://i51.tinypic.com/zx5jq9.jpg Relevant code:

function drawScene()
{
    rays.graphics.clear();
    rays.graphics.lineStyle(1, rgba(0x00,0x66,0x00));
    var halfFov = (player.fov/2);
    var numRays:int = ( stage.stageWidth / COLUMN_SIZE );
    var prjDist = ( stage.stageWidth / 2 ) / Math.tan(toRad( halfFov ));
    var angStep = ( player.fov / numRays );
    for( var i:int = 0; i < numRays; i++ )
    {
        var rAng = ( ( player.angle - halfFov ) + ( angStep * i ) ) % 360;
        if( rAng < 0 ) rAng += 360;
        var ray:Object = castRay(player.position, rAng);
        drawRaySlice(i*COLUMN_SIZE, prjDist, player.angle, ray);
    }
}

function drawRaySlice(sx:int, prjDist, angle, ray:Object)
{
    if( ray.distance >= MAX_DIST ) return;
    var height:int = int(( TILE_SIZE / (ray.distance * Math.cos(toRad(angle-ray.angle))) ) * prjDist);
    if( !height ) return;
    var yTop = int(( stage.stageHeight / 2 ) - ( height / 2 ));
    if( yTop < 0 ) yTop = 0;
    var yBot = int(( stage.stageHeight / 2 ) + ( height / 2 ));
    if( yBot > stage.stageHeight ) yBot = stage.stageHeight;
    rays.graphics.moveTo( (ray.origin.x / TILE_SIZE) * MINI_SIZE, (ray.origin.y / TILE_SIZE) * MINI_SIZE );
    rays.graphics.lineTo( (ray.hit.x / TILE_SIZE) * MINI_SIZE, (ray.hit.y / TILE_SIZE) * MINI_SIZE );
    for( var x:int = 0; x < COLUMN_SIZE; x++ )
    {
        for( var y:int = yTop; y < yBot; y++ )
        {
            buffer.setPixel(sx+x, y, clrTable[ray.tile-1] >> ( ray.horz ? 1 : 0 ));
        }
    }
}

function castRay(origin:Point, angle):Object
{
    // Return values
    var rTexel = 0;
    var rHorz = false;
    var rTile = 0;
    var rDist = MAX_DIST + 1;
    var rMap:Point = new Point();
    var rHit:Point = new Point();
    // Ray angle and slope
    var ra = toRad(angle) % ANGLE_360;
    if( ra < ANGLE_0 ) ra += ANGLE_360;
    var rs = Math.tan(ra);
    var rUp = ( ra > ANGLE_0 && ra < ANGLE_180 );
    var rRight = ( ra < ANGLE_90 || ra > ANGLE_270 );
    // Ray position
    var rx = 0;
    var ry = 0;
    // Ray step values
    var xa = 0;
    var ya = 0;
    // Ray position, in map coordinates
    var mx:int = 0;
    var my:int = 0;
    var mt:int = 0;
    // Distance
    var dx = 0;
    var dy = 0;
    var ds = MAX_DIST + 1;
    // Horizontal intersection
    if( ra != ANGLE_180 && ra != ANGLE_0 && ra != ANGLE_360 )
    {
        ya = ( rUp ? TILE_SIZE : -TILE_SIZE );
        xa = ya / rs;
        ry = int( origin.y / TILE_SIZE ) * ( TILE_SIZE ) + ( rUp ? TILE_SIZE : -1 );
        rx = origin.x + ( ry - origin.y ) / rs;
        mx = 0;
        my = 0;
        while( mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y )
        {
            mx = int( rx / TILE_SIZE );
            my = int( ry / TILE_SIZE );
            mt = getMapTile(mx,my);
            if( mt > 0 && mt < 9 )
            {
                dx = rx - origin.x;
                dy = ry - origin.y;
                ds = ( dx * dx ) + ( dy * dy );
                if( rDist >= MAX_DIST || ds < rDist )
                {
                    rDist = ds;
                    rTile = mt;
                    rMap.x = mx;
                    rMap.y = my;
                    rHit.x = rx;
                    rHit.y = ry;
                    rHorz = true;
                    rTexel = int(rx % TILE_SIZE);
                }
                break;
            }
            rx += xa;
            ry += ya;
        }
    }
    // Vertical intersection
    if( ra != ANGLE_90 && ra != ANGLE_270 )
    {
        xa = ( rRight ? TILE_SIZE : -TILE_SIZE );
        ya = xa * rs;
        rx = int( origin.x / TILE_SIZE ) * ( TILE_SIZE ) + ( rRight ? TILE_SIZE : -1 );
        ry = origin.y + ( rx - origin.x ) * rs;
        mx = 0;
        my = 0;
        while( mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y )
        {
            mx = int( rx / TILE_SIZE );
            my = int( ry / TILE_SIZE );
            mt = getMapTile(mx,my);
            if( mt > 0 && mt < 9 )
            {
                dx = rx - origin.x;
                dy = ry - origin.y;
                ds = ( dx * dx ) + ( dy * dy );
                if( rDist >= MAX_DIST || ds < rDist )
                {
                    rDist = ds;
                    rTile = mt;
                    rMap.x = mx;
                    rMap.y = my;
                    rHit.x = rx;
                    rHit.y = ry;
                    rHorz = false;
                    rTexel = int(ry % TILE_SIZE);
                }
                break;
            }
            rx += xa;
            ry += ya;
        }
    }
    return { angle: angle, distance: Math.sqrt(rDist), hit: rHit, map: rMap, tile: rTile, horz: rHorz, origin: origin, texel: rTexel };
}

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now.  Yes, I am not ashamed to admit it, but I'm finally doing something about it. I met her in college, she was new and sexy and amazingly fast -- and I'd never met anything like her before.  Her style and her power captivated me and I couldn't wait to learn more about her.  I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on. Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance.  She promised me she'd be gentle and not hurt me the way C did.  She promised me she'd clean-up after herself better than C did.  She promised me she'd be less enigmatic and easier to keep happy than C was.  But I was deceived.  Oh sure, as far as truth goes, it wasn't a complete lie.  To some extent she was more fun, more powerful, safer, and easier to maintain.  But it just wasn't good enough -- or at least it's not good enough now. I loved C++, some part of me still does, it's my first-love of programming languages and I recognize its raw power, its blazing speed, and its improvements over its predecessor.  But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund. If I ever need to write an operating system or a device driver, then I might need that speed.  But 99% of the time I don't.  I'm a business-type programmer and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for that.   That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical.  There are many times we write programs to handle millions of real-time updates, handle thousands of financial transactions, or track trading algorithms where every second counts.  But if I choose to write my code in C++ purely for speed, chances are I'll never notice the speed increase -- and equally true, chances are it will be far more prone to crash and far less easy to maintain.  Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations.  If I choose to write an O(n²) algorithm when I could have used an O(n) algorithm -- that can kill me.  If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me.  And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes that can also kill me.  But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and more likely than not open up the system to more risk of crashes and resource leaks. And that's when my love for C++ began to die.  When I noticed that I didn't need that speed anymore.  That that speed was really kind of a lie.  Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values.  Sure, I can use an unsigned char instead of an int.  But in the grand scheme of things it doesn't matter as much as you think it does.  The key is maintainability, and that's where C++ failed me.  
I like to tell the other developers I work with that there are two levels of correctness in coding: Is it immediately correct? Will it stay correct? That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct. Some people laugh at me when I say I now prefer maintainability over speed.  But that is exactly the point.  If you focus solely on speed you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry and end up completely scrapping code before its time.  When good code is written well for maintainability, though, it can be correct both now and in the future. And you know the best part is?  My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right.  Her creators have poured hundreds of thousands of hours of time into making her the sexy beast she is today.  They made her easy to understand and not an enigmatic mess.  They made her consistent and not moody and amorphous.  And they made her perform as fast as I care to go by optimizing her both at compile time and at run time. Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back.  She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough so that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need.  C# doesn't ask me to clean up her messes!  She cleans up after herself and she tries to make my life easier for me by taking on most of those optimization tasks C++ asked me to take upon myself.  Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her.  That I alone caused the pain she inflicted on me.  And to some extent, you have a point.  But she was so high maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong term or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake.  But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe. So that's why I'm leaving C++ behind.  She says she's changing for me, but I have no interest in what C++0x may bring.  Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see.  But she's out of my house now.  I have three kids and a dog and a cat, and all require me to clean up after them; why should I have to clean up after my programming language as well?
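As an aside, the "cache unchanging data on first load" macro-optimization mentioned above is cheap to get right in C#. Here is a minimal sketch under stated assumptions: the CountryCache type, its data, and LoadCountriesFromDatabase are hypothetical stand-ins, not anything from the post:

using System;
using System.Collections.Generic;

// Hypothetical cache illustrating "load unchanging data once, reuse it afterwards"
// instead of hitting the database on every request.
public class CountryCache
{
    private static readonly Lazy<IList<string>> _countries =
        new Lazy<IList<string>>(LoadCountriesFromDatabase);

    public static IList<string> Countries
    {
        get { return _countries.Value; } // the load runs only on first access
    }

    private static IList<string> LoadCountriesFromDatabase()
    {
        // Stand-in for a real database call; the point is it executes once.
        return new List<string> { "US", "CA", "MX" };
    }
}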

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing SQLU part 2 - What and why of database refactoring SQLU part 3 - Tools of the trade This is the second part of the series and in it we’ll take a look at what database refactoring is and why we do it. Why refactor a database To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module’s input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: Tests. Automated unit tests are the only guarantee we have that we haven’t broken the input/output behavior before refactoring. If you haven’t, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. By having direct table access we can kiss fast and sweet refactoring goodbye. One more point in favor of having a database abstraction layer. And no, ORMs don’t fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don’t be one of them and resist the lure of the dark side. And it’s a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor. What to refactor in a database Refactoring databases doesn’t happen that often but when it does it can include a lot of stuff. Let us look at a few common cases. Adding or removing database schema objects Adding, removing or changing table columns in any way, adding constraints, keys, etc… All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn’t exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With the proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all). Changing physical structures Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won’t that make users happy? 
But what if that index causes our write operations to crawl to a stop? Again we have to test this. There are a lot of things to think about and have tests for. Without tests we can’t do successful refactoring! Fixing bad code We all have some bad code in our systems. We usually refer to that code as code smells, as they violate good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc… Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table the client using that SELECT * statement won’t have a clue about that until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Tomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it’s informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database. Breaking apart huge stored procedures Have you ever seen a stored procedure that was 2000 lines long? I have. It’s not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I’m willing to bet that 100% of the time they don’t have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That’s the application’s part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure’s input/output behavior before starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we’ve successfully modularized the database code it’s best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn’t always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we’ve actually improved the codebase in some way.  Refactoring is not a popular chore amongst developers or managers. The former don’t like fixing old code, the latter can’t see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I’ve given you some ideas on how to get started. In the last post of the series we’ll take a look at the tools to use and an example of testing and refactoring.
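To make the black box testing idea above concrete, here is a minimal sketch of an input/output test against a stored procedure, written with NUnit and ADO.NET. The procedure, parameter, column names and connection string are hypothetical; the point is only that the contract is pinned down before any refactoring starts:

using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

// Hypothetical black box test: the stored procedure's input/output contract
// must hold before and after any refactoring of its internals.
[TestFixture]
public class GetCustomerOrdersTests
{
    private const string ConnectionString =
        "Server=.;Database=Sales;Integrated Security=true"; // assumed test database

    [Test]
    public void Returns_expected_result_shape_for_known_customer()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("dbo.GetCustomerOrders", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CustomerId", 42);
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                // The contract we refactor against: same columns, same order.
                Assert.AreEqual(2, reader.FieldCount);
                Assert.AreEqual("OrderId", reader.GetName(0));
                Assert.AreEqual("OrderDate", reader.GetName(1));
            }
        }
    }
}

If such a test stays green while the procedure's internals are split apart or re-indexed, the refactoring has, by this definition, not changed its observable behavior.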

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27  | Next Page >