Search Results

Search found 1432 results on 58 pages for 'jeff drew'.


  • Recover data from hard drive with partitions overwritten (but not most data)

    - by Macha
    I have a 500GB hard drive I've been keeping around to recover data from; I removed it from a failing NAS drive that got sort of... erratic at the end. I finally got rid of the NAS when a firmware update removed the partition table. Fast forward to a week ago, when I was building a new PC, and a mixup resulted in me placing the hard drive in question in the new PC and installing Windows XP on the first 100GB. I'm presuming any data on that first 100GB is now gone, but for the rest of it, is there any way I can recover it at home, as professional data recovery is currently too expensive? I have a blank 1TB HDD that I can store any images of that hard drive on. The problem was definitely with the NAS and not the hard drive, as the hard drive took a successful install of Windows when mistakenly placed in the new PC, and there were clearly broken capacitors in the NAS's circuitry. The data I want to recover (in order of priority) is: High: some jpgs of family photos. Medium: some RAW files (there are also jpg versions of all of these). Low: some mp3s, avis and ISOs; I can re-rip most of these if need be, but it'd be handy not to have to. (I don't need a backup lecture, and if you can hold it in from nagging Jeff Atwood for it, you can hold it in from nagging me for it.) In short: the partition tables are gone and overwritten. The data is not overwritten, except for an amount equal to the size of a Windows XP SP3 installation.

    Read the article

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case¹. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turn it on... and it sounds like four professional-grade hair-dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll just be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send this thing back now and get the quietest card in my price range? ¹ One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

    Read the article

  • What can cause two "identical" setups of IE8 and XP to display things differently?

    - by ccornet
    This has been baffling me for a while. Of the machines I use to visit Stack Overflow, two have the same setup: Dell with Windows XP with IE8. Since they were issued to me by the same company (one to use in the office, one to use at home), they have identical configurations as well. But they display certain page elements differently! One is an Optiplex GX620 desktop, the other is an Inspiron 9100 laptop, but somehow the hardware doesn't seem like something that should be overriding how my browser displays things. Nevertheless, the laptop seems to display things differently than expected. Differences have included the following: This issue persisted on the laptop after Jeff fixed it, but was repaired for everyone else and on the desktop. When viewing vote counts on a post, the grey line is left immediately beneath the upvotes, but a number-sized white space sits below that before the downvotes; on the desktop, it displays properly, with the two adjacent and divided by a grey line. Code blocks seem to have a blank line at the end on the laptop. The following image illustrates how the last two elements look on the laptop. So, considering that as far as I can tell these two setups are identical (I have not messed with any settings, and they were both initialized identically as well), what else could be causing the display difference?

    Read the article

  • How do I implement AABB ray cast hit checking for OpenGL ES on the iPhone

    - by Big Fizzy
    Basically, I draw a 3D cube, and I can spin it around, but I want to be able to touch it and know where on my cube's surface the user touched. I'm using the code below for setting up, generating and spinning. It's based on the Molecules code and NeHe tutorial #5. Any help, links, tutorials and code would be greatly appreciated. I have lots of development experience but nothing much in the way of OpenGL and 3D.

        //
        //  GLViewController.h
        //  NeHe Lesson 05
        //
        //  Created by Jeff LaMarche on 12/12/08.
        //  Copyright Jeff LaMarche Consulting 2008. All rights reserved.
        //
        #import "GLViewController.h"
        #import "GLView.h"

        @implementation GLViewController

        - (void)drawBox
        {
            static const GLfloat cubeVertices[] = {
                -1.0f, 1.0f, 1.0f,
                 1.0f, 1.0f, 1.0f,
                 1.0f,-1.0f, 1.0f,
                -1.0f,-1.0f, 1.0f,
                -1.0f, 1.0f,-1.0f,
                 1.0f, 1.0f,-1.0f,
                 1.0f,-1.0f,-1.0f,
                -1.0f,-1.0f,-1.0f
            };

            static const GLubyte cubeNumberOfIndices = 36;

            const GLubyte cubeVertexFaces[] = {
                0, 1, 5, // Half of top face
                0, 5, 4, // Other half of top face
                4, 6, 5, // Half of front face
                4, 6, 7, // Other half of front face
                0, 1, 2, // Half of back face
                0, 3, 2, // Other half of back face
                1, 2, 5, // Half of right face
                2, 5, 6, // Other half of right face
                0, 3, 4, // Half of left face
                7, 4, 3, // Other half of left face
                3, 6, 2, // Half of bottom face
                6, 7, 3, // Other half of bottom face
            };

            const GLubyte cubeFaceColors[] = {
                0, 255, 0, 255,
                255, 125, 0, 255,
                255, 0, 0, 255,
                255, 255, 0, 255,
                0, 0, 255, 255,
                255, 0, 255, 255
            };

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, cubeVertices);

            int colorIndex = 0;
            for(int i = 0; i < cubeNumberOfIndices; i += 3)
            {
                glColor4ub(cubeFaceColors[colorIndex], cubeFaceColors[colorIndex+1],
                           cubeFaceColors[colorIndex+2], cubeFaceColors[colorIndex+3]);
                int face = (i / 3.0);
                if (face%2 != 0.0)
                    colorIndex+=4;
                glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, &cubeVertexFaces[i]);
            }

            glDisableClientState(GL_VERTEX_ARRAY);
        }

        //move this to a data model later!
        - (GLfixed)floatToFixed:(GLfloat)aValue;
        {
            return (GLfixed) (aValue * 65536.0f);
        }

        - (void)drawViewByRotatingAroundX:(float)xRotation rotatingAroundY:(float)yRotation scaling:(float)scaleFactor translationInX:(float)xTranslation translationInY:(float)yTranslation view:(GLView*)view;
        {
            glMatrixMode(GL_MODELVIEW);

            GLfixed currentModelViewMatrix[16] = {
                 45146, 47441,  2485, 0,
                -25149, 26775,-54274, 0,
                -40303, 36435, 36650, 0,
                     0,     0,     0, 65536
            };
            /*
            GLfixed currentModelViewMatrix[16] = {
                0, 0, 0, 0,
                0, 0, 0, 0,
                0, 0, 0, 0,
                0, 0, 0, 65536
            };
            */
            //glLoadIdentity();
            //glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 4.0f);

            // Reset rotation system
            if (isFirstDrawing)
            {
                //glLoadIdentity();
                glMultMatrixx(currentModelViewMatrix);
                [self configureLighting];
                isFirstDrawing = NO;
            }

            // Scale the view to fit current multitouch scaling
            GLfixed fixedPointScaleFactor = [self floatToFixed:scaleFactor];
            glScalex(fixedPointScaleFactor, fixedPointScaleFactor, fixedPointScaleFactor);

            // Perform incremental rotation based on current angles in X and Y
            glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
            GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation);
            glRotatex([self floatToFixed:totalRotation],
                      (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[1] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[0]),
                      (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[5] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[4]),
                      (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[9] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[8]));

            // Translate the model by the accumulated amount
            glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
            float currentScaleFactor = sqrt(pow((GLfloat)currentModelViewMatrix[0] / 65536.0f, 2.0f) +
                                            pow((GLfloat)currentModelViewMatrix[1] / 65536.0f, 2.0f) +
                                            pow((GLfloat)currentModelViewMatrix[2] / 65536.0f, 2.0f));
            xTranslation = xTranslation / (currentScaleFactor * currentScaleFactor);
            yTranslation = yTranslation / (currentScaleFactor * currentScaleFactor);

            // Grab the current model matrix, and use the (0,4,8) components to figure the eye's X axis
            // in the model coordinate system, translate along that
            glTranslatef(xTranslation * (GLfloat)currentModelViewMatrix[0] / 65536.0f,
                         xTranslation * (GLfloat)currentModelViewMatrix[4] / 65536.0f,
                         xTranslation * (GLfloat)currentModelViewMatrix[8] / 65536.0f);

            // Grab the current model matrix, and use the (1,5,9) components to figure the eye's Y axis
            // in the model coordinate system, translate along that
            glTranslatef(yTranslation * (GLfloat)currentModelViewMatrix[1] / 65536.0f,
                         yTranslation * (GLfloat)currentModelViewMatrix[5] / 65536.0f,
                         yTranslation * (GLfloat)currentModelViewMatrix[9] / 65536.0f);

            // Black background, with depth buffer enabled
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            [self drawBox];
        }

        - (void)configureLighting;
        {
            const GLfixed lightAmbient[] = {13107, 13107, 13107, 65535};
            const GLfixed lightDiffuse[] = {65535, 65535, 65535, 65535};
            const GLfixed matAmbient[] = {65535, 65535, 65535, 65535};
            const GLfixed matDiffuse[] = {65535, 65535, 65535, 65535};
            const GLfixed lightPosition[] = {30535, -30535, 0, 0};
            const GLfixed lightShininess = 20;

            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glEnable(GL_COLOR_MATERIAL);
            glMaterialxv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient);
            glMaterialxv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse);
            glMaterialx(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess);
            glLightxv(GL_LIGHT0, GL_AMBIENT, lightAmbient);
            glLightxv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse);
            glLightxv(GL_LIGHT0, GL_POSITION, lightPosition);
            glEnable(GL_DEPTH_TEST);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_NORMALIZE);
        }

        -(void)setupView:(GLView*)view
        {
            const GLfloat zNear = 0.1, zFar = 1000.0, fieldOfView = 60.0;
            GLfloat size;

            glMatrixMode(GL_PROJECTION);
            glEnable(GL_DEPTH_TEST);
            size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
            CGRect rect = view.bounds;
            glFrustumf(-size, size,
                       -size / (rect.size.width / rect.size.height),
                        size / (rect.size.width / rect.size.height), zNear, zFar);
            glViewport(0, 0, rect.size.width, rect.size.height);
            glScissor(0, 0, rect.size.width, rect.size.height);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glTranslatef(0.0f, 0.0f, -6.0f);
            isFirstDrawing = YES;
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
        }

        - (void)dealloc
        {
            [super dealloc];
        }

        @end
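    The actual hit check the question asks about is usually done in two steps: unproject the touch point into a ray in model space (using the current modelview and projection matrices), then intersect that ray with the cube's axis-aligned bounding box. Below is a minimal sketch of the second step, the classic slab method, written in C# for readability (Ray3 and Aabb are hypothetical helper types, not part of any OpenGL ES API); the arithmetic ports directly to C or Objective-C.

        // Slab-method ray vs. axis-aligned bounding box intersection.
        public struct Ray3 { public float Ox, Oy, Oz, Dx, Dy, Dz; }
        public struct Aabb { public float MinX, MinY, MinZ, MaxX, MaxY, MaxZ; }

        public static class RayCast
        {
            // Returns true and the entry distance t when the ray hits the box.
            public static bool Intersect(Ray3 r, Aabb b, out float t)
            {
                float tMin = float.NegativeInfinity, tMax = float.PositiveInfinity;
                // Clip the ray against the min/max plane pair ("slab") of each axis.
                Clip(r.Ox, r.Dx, b.MinX, b.MaxX, ref tMin, ref tMax);
                Clip(r.Oy, r.Dy, b.MinY, b.MaxY, ref tMin, ref tMax);
                Clip(r.Oz, r.Dz, b.MinZ, b.MaxZ, ref tMin, ref tMax);
                t = tMin;
                // Hit when the three intervals overlap in front of the ray origin.
                return tMax >= tMin && tMax >= 0.0f;
            }

            static void Clip(float origin, float dir, float min, float max,
                             ref float tMin, ref float tMax)
            {
                // dir == 0 relies on IEEE infinities; handle an origin lying exactly
                // on a slab plane separately if that edge case matters to you.
                float inv = 1.0f / dir;
                float t0 = (min - origin) * inv;
                float t1 = (max - origin) * inv;
                if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
                if (t0 > tMin) tMin = t0;
                if (t1 < tMax) tMax = t1;
            }
        }

    The touch point on the cube's surface is then origin + t * direction, which you can compare against the box faces to decide which face was hit.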

    Read the article

  • WCF REST on .NET 4.0

    - by AngelEyes
    A simple and straightforward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4

    Drop the Soap: WCF, REST, and Pretty URIs in .NET 4

    Years ago I was working in libraries when the Web 2.0 revolution began. One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications. Those were my first impressions of REST: pretty URIs. Turns out there is a little more to it than that.

    REST is an architectural style that focuses on resources and structured ways to access those resources via the web. REST evolved as an "anti-SOAP" movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is a lot when you don't have frameworks hiding it all). One of the biggest benefits of REST is that browsers can talk to REST services directly, because REST works using URIs, query strings, cookies, SSL, and all those HTTP verbs that we don't have to think about anymore.

    If you are familiar with ASP.NET MVC then you have been exposed to REST at some level. MVC relies heavily on routing to generate consistent and clean URIs. REST for WCF gives you the same type of feel for your services. Let's dive in.

    WCF REST in .NET 3.5 SP1 and .NET 4

    This post will cover WCF REST in .NET 4, which drew heavily from the REST Starter Kit and community feedback. There is basic REST support in .NET 3.5 SP1, and you can also grab the REST Starter Kit to enable some of the features you'll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010.

    Getting Started

    To get started, we'll create a basic WCF REST Service Application using the new on-line templates option in VS 2010 (when you first install a template you are prompted with a confirmation dialog).

    Dude, Where's My .Svc File?

    The WCF REST template shows us the new way we can simply build services. Before we talk about what's there, let's look at what is not there: the .svc file, an interface contract, and dozens of lines of configuration that you have to change to make your service work.

    REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks. With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class. From there, the WCF runtime handles dispatching service calls to the methods based on the UriTemplates.

    global.asax:

        using System;
        using System.ServiceModel.Activation;
        using System.Web;
        using System.Web.Routing;

        namespace Blog.WcfRest.TimeService
        {
            public class Global : HttpApplication
            {
                void Application_Start(object sender, EventArgs e)
                {
                    RegisterRoutes();
                }

                private static void RegisterRoutes()
                {
                    RouteTable.Routes.Add(new ServiceRoute("TimeService",
                        new WebServiceHostFactory(), typeof(TimeService)));
                }
            }
        }

    The web.config contains some new structures to support a configuration-free deployment. Note that this is the default config generated with the template; I did not make any changes to web.config.

    web.config:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true">
              <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,
                   System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            </modules>
          </system.webServer>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
            <standardEndpoints>
              <webHttpEndpoint>
                <!--
                    Configure the WCF REST service base address via the global.asax.cs file and the default endpoint
                    via the attributes on the <standardEndpoint> element below
                -->
                <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
              </webHttpEndpoint>
            </standardEndpoints>
          </system.serviceModel>
        </configuration>

    Building the Time Service

    We'll create a simple "TimeService" that will return the current time. Let's start with the following code:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Web;

        namespace Blog.WcfRest.TimeService
        {
            [ServiceContract]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
            public class TimeService
            {
                [WebGet(UriTemplate = "CurrentTime")]
                public string CurrentTime()
                {
                    return DateTime.Now.ToString();
                }
            }
        }

    The endpoint for this service will be http://[machinename]:[port]/TimeService. To get the current time, http://[machinename]:[port]/TimeService/CurrentTime will do the trick.

    The Results Are In: Remember That Route in global.asax?

    Turns out it is pretty important. When you set the route name, that defines the resource name, starting after the host portion of the Uri.

    Help Pages in WCF 4

    Another feature that came from the starter kit is the help pages. To access the help pages, simply append Help to the end of the service's base Uri.

    Dropping the Soap

    Having dabbled with REST in the past and after using SOAP for the last few years, the WCF 4 REST support is certainly refreshing. I'm currently working on some REST implementations in .NET 3.5 and VS 2008, and am looking forward to working on REST in .NET 4 and VS 2010.
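    Not shown in the article: because the endpoint is plain HTTP, a couple of lines of client code are enough to smoke-test the service. Here is a minimal sketch (the port number is a placeholder for whatever Visual Studio assigns; the default endpoint wraps the string result in a small XML envelope):

        using System;
        using System.Net;

        class TimeServiceClient
        {
            static void Main()
            {
                // Plain GET against the WebGet UriTemplate "CurrentTime" on TimeService.
                // Replace 8080 with the port your project actually runs on.
                using (var client = new WebClient())
                {
                    string body = client.DownloadString(
                        "http://localhost:8080/TimeService/CurrentTime");
                    Console.WriteLine(body);
                }
            }
        }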

    Read the article

  • SOA, Empowerment and Continuous Improvement

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick will be hosting the IT Leader Editorial on a regular basis. I met my twin at OpenWorld. We share backgrounds, experiences and even names. I hosted an invitation-only AppAdvantage Leadership Forum with an over-capacity 85 participants: 55 customers, 15 from the Oracle AppAdvantage team and 15 partners. It was a lively, open and positive discussion of pace-layered architectures and Oracle's AppAdvantage approach to a unified view of Applications and Middleware. Rick Hassman from Pella was one of the customer panelists, and during the pre-event prep Rick and I shared backgrounds and found that we had both been plant managers and led ERP deployments prior to leading IT itself. During the panel conversation I explored this with him, discussing the unique perspectives that this provides to CIOs. He then hit on a point that I wasn't able to fully appreciate until a week later. First though, some background. The week after the Forum, one of the participants emailed me with the following thoughts: "I am 150% behind this concept……but we are struggling with the concept of web services and the potential use of the Oracle Service Bus technology let alone moving into using the full SOA/BPM/BAM software to extend our JD Edwards application to both integrate and support business processes". After thinking a bit I responded this way: While I certainly appreciate the degree of change and effort involved, perhaps I could offer the following: One of the underlying principles behind Oracle AppAdvantage is that more often than not, the choice between changing a business process and invasively customizing ERP represents a Hobson's Choice: neither is acceptable. In this case the third option, moving the process out of ERP, is the only acceptable one. Providing this choice typically requires end-to-end, real-time interoperability across applications and/or services. This real-time interoperability, to be sustainable over time, requires a service oriented architecture. There's just no way around this. SOA adaptation is admittedly tough at the beginning. New skills, new technology and new headaches. But, like any radically new technology, it has a learning curve that drives cost down rather dramatically over time. Tough choices to be sure, but not entirely different than we face with every major technology cycle. Good points of course, but I felt that something was missing.
    The points were convincing, perhaps even a bit insightful, but they didn't get at the heart of what Oracle AppAdvantage is focused upon: how the optimization of technology, applications, processes and relationships can change the very way that organizations operate. And then I thought back to the panel discussion with Rick Hassman at Oracle OpenWorld. Rick stressed that Continuous Improvement is a fundamental business strategy at Pella. I remember Continuous Improvement well, as I suspect does everyone who was in American manufacturing during the 80's. Pioneered by W. Edwards Deming in Japan (and still known alternatively as Kaizen), Continuous Improvement sets in place the business culture that we must not become complacent with success and resistant to the ongoing need for change. Many believe that this single-handedly drove the renaissance in American manufacturing through the last two decades, after it had become complacent during the 70's and early 80's. But what exactly does this have to do with SOA? It was Rick's next point. He drew the connection that moving those business processes that need to continually change over time out of ERP and into edge applications and services enables continuous improvement by empowering people to continually strive for better ways of doing things, rather than being bound by workflows that cannot change. A compelling connection: SOA, and the overall Oracle AppAdvantage framework of which it is an integral part, can empower people towards continuous improvement in business processes and as a result drive business leadership and business excellence. What better case for technology innovation?

    Read the article

  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
    Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than on the addict. So often nowadays we hear the sad stories of Twitter-widows: tales of long lonely evenings spent whilst their partners are engrossed in twittering into their 'mobiles', or indulging in their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, even stooping to complain to their followers how boring the meeting is. No; the best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

  • Superpower Your Touchpad Computer with Scrybe

    - by Matthew Guay
    Are you looking for a way to help your touchpad computer make you more productive? Here's a quick look at Scrybe, a new application from Synaptics that lets you superpower it. Touchpad devices have become increasingly more interesting as they've included support for multi-touch gestures. Scrybe takes it to the next level and lets you use your touchpad as an application launcher. You can launch any application or website, or complete many common commands on your computer, with a simple gesture. Scrybe works with most modern Synaptics touchpads, which are standard on most laptops and netbooks. It is optimized for newer multi-touch touchpads, but can also work with standard single-touch touchpads. It works on Windows 7, Vista, and XP, so chances are it will work with your laptop or netbook.

    Get Started With Scrybe

    Head over to the Scrybe website and download the latest version (link below). You are asked to enter your email address, name, and information about your computer… but you actually only have to enter your email address. Click Download when finished. Run the installer when it's downloaded. It will automatically download the latest Synaptics driver for your touchpad and any other components needed for Scrybe. Note that the Scrybe installer will ask to install the Yahoo! toolbar, so uncheck this to avoid adding this worthless browser toolbar.

    Using Scrybe

    To open an application or website with a gesture, press 3 fingers on your touchpad at once, or, if your touchpad doesn't support multi-touch gestures, press Ctrl+Alt and press 1 finger on your touchpad. This will open the Scrybe input pane; start drawing a gesture, and you'll see it on the grey square. The input pane shows some default gestures you can try. Here we drew an "M", which opens our default music player. As soon as you finish the gesture and lift your finger, Scrybe will open the application or website you selected. A notification balloon will let you know which gesture was performed. When you're entering your gesture, the input pane shows white "ink". The "ink" turns blue if the command is recognized, but turns red if it isn't. If Scrybe doesn't recognize your command, press 3 fingers and try again.

    Scrybe Control Panel

    You can open the Scrybe control panel to enter or change commands by entering a box-like gesture, or by right-clicking the Scrybe icon in your system tray and selecting "Scrybe Control Panel". Scrybe has many pre-configured gestures that you can preview and even practice. All of the gestures in the Popular tab are preset and cannot be changed. However, the ones in the Favorites tab can be edited. Select the gesture you wish to edit, and click the gear icon to change it. Here we changed the email gesture to open Hotmail instead of the default Yahoo Mail. Scrybe can also help you perform many common Windows commands such as Copy and Undo; select the Tools tab to see all of these commands. Scrybe has many settings you may wish to change; select the Preferences button in the control panel to change these. Here are some of the settings we changed: uncheck "Display a message" to turn off the tooltip notifications when you enter a gesture; uncheck "Show symbol hints" to turn off the sidebar on the input pane; and select the search engine you want to open with the search gesture (the default is Yahoo, but you can choose your favorite).

    Adding a New Scrybe Gesture

    The default Scrybe options are useful, but the best part is that you can assign gestures to your own programs or websites.
    Open the Scrybe control panel and click the plus sign in the bottom left corner. Enter a name for your gesture, and then choose whether it is for a website or an application. If you want the gesture to open a website, enter the address in the box. Alternately, if you want your gesture to open an application, select Launch Application and then either enter the path to the application, or click the button beside the Launch field and browse to it. Now click the down arrow on the blue box and choose one of the gestures for your application or website. Your new gesture will show up under the Favorites tab in the Scrybe control panel, and you can use it whenever you want from Scrybe, or practice the gesture by selecting the Practice button.

    Conclusion

    If you enjoy multi-touch gestures, you may find Scrybe very useful on your laptop or netbook. Scrybe recognizes gestures fairly easily, even if you don't enter them perfectly correctly. Just like pinch-to-zoom and two-finger scroll, Scrybe can quickly become something you miss on other laptops. Download Scrybe (registration required)

    Read the article

  • Day 4 - Game Sprites In Action

    - by dapostolov
    Yesterday I drew an image on the screen. Most exciting, but... I spent more time blogging about it than actually coding. So for the next little while I'm going to streamline my game and research and simply post key notes.

    Quick notes on the last session. The most important things I wanted to point out were the following methods:

        spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
        spriteBatch.Draw(sprite, position, Color.White);
        spriteBatch.End();

    The spriteBatch object is used to draw Textures, and a 2D texture is called a Sprite. A texture is generally an image, which is called an Asset in XNA. The Draw method in Game1.cs is looped (until exit) and utilises the spriteBatch object to draw a Scene. To begin drawing a Scene you call the Begin method; to end a Scene you call the End method; and to place an image on the Scene you call the Draw method. The most simple implementation of the draw method is:

        spriteBatch.Draw(sprite, position, Color.White);

    1) sprite - the 2D texture you loaded to draw
    2) position - the 2D vector, a set of x & y coordinates
    3) Color.White - the tint to apply to the texture; in this case, white light = nothing, nada, no tint

    Game Sprites In Action!

    Today I played around with Draw methods to get comfortable with their "quirks". The following is an example of the above draw method, but with more parameters available for us to use. Let's investigate!

        spriteBatch.Draw(sprite, position2, null, Color.White, MathHelper.ToRadians(45.0f),
            new Vector2(sprite.Width / 2, sprite.Height / 2), 1.0F, SpriteEffects.None, 0.0F);

    The parameters (in order):
    1) sprite - the texture to display
    2) position2 - the position on the screen/scene; this can also be a rectangle
    3) null - the portion of the image to display within an image; null = display the full image; this is generally used for animation strips/grids (more on this below)
    4) Color.White - texture tinting; White = no tint
    5) MathHelper.ToRadians(45.0f) - rotation of the object, in this case 45 degrees; rotates from the set plotting point
    6) new Vector2(sprite.Width / 2, sprite.Height / 2) - the plotting point; by default this is the top left corner, so the image rotates from the top left of the texture; in the code above the point is set to the middle of the image
    7) 1.0f - image scaling (1x)
    8) SpriteEffects.None - you can flip the image horizontally or vertically
    9) 0.0f - the z index of the image (0 = closer, 1 behind?)

    And playing around with different combinations I was able to come up with the following whacky display:

    Checking off yesterday's intention list:
    - learn game development terminology (in progress) - We learned sprite, scene, texture, and asset.
    - how to place and position (rotate) a static image on the screen (completed) - The thing to note was that it's in radians, and I found a cool helper method to convert degrees into radians. Also, the image rotates from its specified point.
    - how to layer static images on the screen (completed) - I couldn't seem to get the zIndex working, but one thing's for sure: the order you draw the images in also determines how they are rendered on the screen.
    - understand image scaling (in progress) - I'm not sure I have this fully covered, but for the most part plug a number into the scaling field and the image grows/shrinks accordingly.
    - can we reuse images? (completed) - Yes, I loaded one image and plotted the bugger all over the screen.
    - understand how framerate is handled in XNA (in progress) - I hacked together some code to display the framerate each second. A framerate of 60 appears to be the standard. Interesting to note, the GameTime object does provide you with some cool timing capabilities, such as... is the game running slow? Need to investigate this down the road.
    - how to display text, basic shapes, and colors on the screen (in progress) - I got text rendered on the screen, and I understand containing rectangles; however, I didn't display shapes and colors.
    - how to interact with an image (collision of user input?) (todo)
    - how to animate an image and understand basic animation techniques (in progress) - I was able to create a strip animation of numbers ranging from 1-4; each block was 40 x 40 pixels for a total strip size of 160 x 40. Using the portion (source rectangle) parameter, I limited the display to each section at varying intervals. It was interesting to note that my first implementation animated at rocket speed. I then tried to create a smoother animation by limiting the redraw capacity, which seemed to work. I guess a little more research will have to be put into this for animating characters/scenes. (A sketch of this technique follows at the end of this post.)
    - how to detect colliding images or screen edges (todo) - but the Rectangle object can detect collisions, I believe
    - how to manipulate the image, let's say colors, stretching (in progress) - I haven't figured out how to modify a specific color to be another color, but the tinting parameter definitely could be used. As for stretching, use the rectangle object as the positioning and the image will stretch to fit!
    - how to focus on a segment of an image... like only displaying a frame on a film reel (completed) - as per basic animation techniques
    - what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.) (todo)

    Tomorrow's Intention

    Tomorrow I am going to take a stab at rendering a game menu, and from there I'm going to investigate how I can improve upon the code and techniques. Intention list: render a menu, fancy or not; show the mouse cursor; hook up a click event; a basic animation of some sort; investigate image/menu techniques. D.
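    Since the strip animation note above glosses over the details, here is a minimal sketch of the source-rectangle technique, using the same 4-frame, 40 x 40 strip described above (the class name and the 0.1-second frame time are my own illustration, not code from the post):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Plays a horizontal animation strip by sliding a source rectangle
        // across the texture, one frame at a time.
        public class StripAnimation
        {
            const int FrameWidth = 40, FrameHeight = 40, FrameCount = 4;
            const float FrameTime = 0.1f; // seconds per frame (~10 fps)

            readonly Texture2D strip;     // a 160 x 40 texture: 4 frames side by side
            float elapsed;
            int currentFrame;

            public StripAnimation(Texture2D strip) { this.strip = strip; }

            public void Update(GameTime gameTime)
            {
                elapsed += (float)gameTime.ElapsedGameTime.TotalSeconds;
                if (elapsed >= FrameTime)
                {
                    currentFrame = (currentFrame + 1) % FrameCount;
                    elapsed -= FrameTime;
                }
            }

            public void Draw(SpriteBatch spriteBatch, Vector2 position)
            {
                // The source rectangle is parameter 3 from the list above:
                // only this 40 x 40 window of the strip gets drawn.
                var source = new Rectangle(currentFrame * FrameWidth, 0, FrameWidth, FrameHeight);
                spriteBatch.Draw(strip, position, source, Color.White);
            }
        }

    Throttling the frame advance off GameTime, rather than flipping frames on every Draw call, is what fixes the "rocket speed" problem mentioned above.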

    Read the article

  • Kendo UI Mobile with Knockout for Master-Detail Views

    - by Steve Michelotti
    Lately I've been playing with Kendo UI Mobile to build iPhone apps. It's similar to jQuery Mobile in that they are both HTML5/JavaScript-based frameworks for building mobile apps. The primary thing that drew me to investigate Kendo UI was its innate ability to adaptively render a native-looking app based on detecting the device it's currently running on. In other words, it will render to look like a native iPhone app if it's running on an iPhone, and it will render to look like a native Droid app if it's running on a Droid. This is in contrast to jQuery Mobile, which looks the same on all devices and, therefore, can never quite look native for whatever device it's running on. My first impressions of Kendo UI were great. Using HTML5 data-* attributes to define "roles" for UI elements is easy, the rendering looked great, and the basic navigation was simple and intuitive. However, I ran into major confusion when trying to figure out how to "correctly" build master-detail views. Since I was already very familiar with KnockoutJS, I set out to use that framework in conjunction with Kendo UI Mobile to build the following simple scenario: I wanted to have a simple "Task Manager" application where my first screen just showed a list of tasks, and clicking on a specific task would navigate to a detail screen that would show all details of the specific task that was selected. Basic navigation between views in Kendo UI is simple. The href of an <a> tag just needs to specify a hash tag followed by the ID of the view to navigate to, as shown in this jsFiddle (notice the href of the <a> tag matches the id of the second view). Direct link to jsFiddle: here. That is all well and good, but the problem I encountered was: how to pass data between the views? Specifically, I need the detail view to display all the details of whichever task was selected. If I were doing this with my typical technique with KnockoutJS, I know exactly what I would do. First I would create a view model that had my collection of tasks and a property for the currently selected task, like this:

        function ViewModel() {
            var self = this;
            self.tasks = ko.observableArray(data);
            self.selectedTask = ko.observable(null);
        }

    Then I would bind my list of tasks to the unordered list, and attach a "click" handler to each item (each <li> in the unordered list) so that it would set the "selectedTask" on the view model. The problem I found is that this approach simply wouldn't work for Kendo UI Mobile. It completely ignored the click handlers that I was trying to attach to the <a> tags; it just wanted to look at the href (at least that's what I observed). But if I can't intercept this, then *how* can I pass data or any context to the next view? The only thing I was able to find in the Kendo documentation is that you can pass query string arguments on the view name you're specifying in the href. This enabled me to do the following: specify the task ID in each href, something like this: <a href="#taskDetail?id=3"></a>; attach an "init method" (via the "data-show" attribute on the details view) that runs whenever the view is activated; and, inside this "init method", grab the task ID passed from the query string to look up the item from my view model's list of tasks in order to set the selected task. I was able to get all that working with about 20 lines of JavaScript, as shown in this jsFiddle.
    If you click on the Results tab, you can navigate between views and see that the detail screen is correctly binding to the selected item. Direct link to jsFiddle: here. With all that done, I was very happy to get it working with the behavior I wanted. However, I have no idea if that is the "correct" way to do it or if there is a "better" way. I know that Kendo UI comes with its own data-binding framework, but my preference is to be able to use (the well-documented) KnockoutJS, since I'm already familiar with that framework, rather than having to learn yet another new one. While I think my solution above is probably "acceptable", there are still a couple of things that bug me about it. First, it seems odd that I have to loop through my items to *find* my selected item based on the ID that was passed on the query string; normally with Knockout I can just refer directly to my selected item from where it was used. Second, it didn't feel exactly right that I had to rely on the "data-show" method of the details view to set my context; normally with Knockout I could just attach a click handler to the <a> tag that was actually clicked by the user in order to set the "selected item". I'm not sure if I'm being too picky. I know there are many people that have *way* more expertise in Kendo UI compared to me; I'd be curious to know if there are better ways to achieve the same results.

    Read the article

  • IndyTechFest Recap

    - by Johnm
    The sun had yet to rise above the horizon on Saturday, May 22nd, and I was traveling toward the location of the 2010 IndyTechFest. In my freshly awakened, and pre-coffee, state I reflected on the months that preceded this day and how quickly they had slipped away. The big day had finally come, and the morning dew glistened with a unique brightness. What is this all about? For those who are unfamiliar with IndyTechFest, it is a regional conference held in Indianapolis and hosted by the Indianapolis .NET Developers Association (IndyNDA) and the Indianapolis Professional Association for SQL Server (IndyPASS). The event presents multiple tracks and sessions covering subjects such as Business Intelligence, Database Administration, .NET Development, SharePoint Development and Windows Mobile Development, as well as non-Microsoft topics such as Lean and MongoDB. This year's event was the third hosting of IndyTechFest. No man is an island. No event such as IndyTechFest is executed by a single person. My fellow co-founders, with their highly complementary skill sets and philanthropy, make the process very enjoyable. Our amazing volunteers and their aid were indispensable. The generous financial support of our sponsors made the event and fabulous prizes possible. The spectacular line-up of speakers came from near and far to donate their time and knowledge. Our beloved attendees sacrificed the first sunny Saturday in weeks to expand their skill sets and network with their peers. We are deeply appreciative. Challenges in preparation. With the preparation of any event comes challenges. It is these challenges that make the process of planning an event so interesting. This year's largest challenge was the location of the event. In the past two years IndyTechFest was held at the Gene B. Glick Junior Achievement Center in Indianapolis. This facility has been the hub of the Indy technical community for many years. As the big day drew near, the facility's availability came into question due to some recent changes that had occurred with those who operated the facility. We began our search for an alternative option. Thankfully, the Marriott Indianapolis East was available, was very spacious and was willing to work within the range of our budget. Within days of our event, the decision to move proved to be wise, since the prior location had begun renovations to the interior. Whew! Always trust your gut. Every day it's getting better. At the end of each year, we huddle together, review the evaluations and identify an area in which the event could improve. This year's big opportunity for improvement resided in the prize give-away portion at the end of the day. In the 2008 event, admittedly, this portion was rather chaotic, rushed and disorganized. This year, we broke the drawing into two sections, and each attendee received two tickets. The first ticket was a drawing for the mountain of books that were given away. The second ticket was a drawing for the big prizes: the 2 Xboxes, 3 laptops and iPad. We peppered the ticket drawings with gift card raffles and t-shirt tosses into the audience. If at first you don't succeed, try and try again. Each year of IndyTechFest, we have offered a means for ad-hoc sessions or discussion groups to pop up. To our disappointment, it was something that never quite took off. We have always believed that this unique type of session was valuable and wanted to figure out a way to make it work for this year.
    A special thanks to Alan Stevens, who took on and facilitated the "open space" track and made it an official success. Share with your tweety. When the attendee badges were designed, we decided to place an emphasis on the attendee's Twitter account as well as the event's hash-tag (#IndyTechFest) to encourage some real-time buzz during the day. At the host table we displayed a Twitter feed for all to enjoy. It was quite successful and encouraged use of social media. My badge was missing my Twitter account since it was recently changed. For those who care to follow my rather sparse tweets, my address is @johnnydata. Man, this is one long blog post! All in all it was a very successful event. It is always great to see new faces and meet old friends. The planning for the 2011 IndyTechFest will kick off very soon. We have more capacity for future growth and a truck full of great ideas. Stay tuned!

    Read the article

  • 2D Selective Gaussian Blur

    - by Joshua Thomas
    I am attempting to use Gaussian blur in a 2D platform game, selectively blurring specific types of platforms with different amounts. I am currently just messing around with simple test code, trying to get it to work correctly. What I need to eventually do is create three separate render targets, leave one normal, blur one slightly, and blur the last heavily, then recombine on the screen. Where I am now: I have successfully drawn into a new render target and performed the Gaussian blur on it, but when I draw it back to the screen everything is purple aside from the platforms I drew to the target.

    This is my .fx file:

        #define RADIUS 7
        #define KERNEL_SIZE (RADIUS * 2 + 1)

        //-----------------------------------------------------------------------------
        // Globals.
        //-----------------------------------------------------------------------------
        float weights[KERNEL_SIZE];
        float2 offsets[KERNEL_SIZE];

        //-----------------------------------------------------------------------------
        // Textures.
        //-----------------------------------------------------------------------------
        texture colorMapTexture;

        sampler2D colorMap = sampler_state
        {
            Texture = <colorMapTexture>;
            MipFilter = Linear;
            MinFilter = Linear;
            MagFilter = Linear;
        };

        //-----------------------------------------------------------------------------
        // Pixel Shaders.
        //-----------------------------------------------------------------------------
        float4 PS_GaussianBlur(float2 texCoord : TEXCOORD) : COLOR0
        {
            float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f);

            for (int i = 0; i < KERNEL_SIZE; ++i)
                color += tex2D(colorMap, texCoord + offsets[i]) * weights[i];

            return color;
        }

        //-----------------------------------------------------------------------------
        // Techniques.
        //-----------------------------------------------------------------------------
        technique GaussianBlur
        {
            pass
            {
                PixelShader = compile ps_2_0 PS_GaussianBlur();
            }
        }

    This is the code I'm using for the Gaussian blur:

        public Texture2D PerformGaussianBlur(Texture2D srcTexture,
            RenderTarget2D renderTarget1, RenderTarget2D renderTarget2,
            SpriteBatch spriteBatch)
        {
            if (effect == null)
                throw new InvalidOperationException("GaussianBlur.fx effect not loaded.");

            Texture2D outputTexture = null;
            Rectangle srcRect = new Rectangle(0, 0, srcTexture.Width, srcTexture.Height);
            Rectangle destRect1 = new Rectangle(0, 0, renderTarget1.Width, renderTarget1.Height);
            Rectangle destRect2 = new Rectangle(0, 0, renderTarget2.Width, renderTarget2.Height);

            // Perform horizontal Gaussian blur.
            game.GraphicsDevice.SetRenderTarget(renderTarget1);

            effect.CurrentTechnique = effect.Techniques["GaussianBlur"];
            effect.Parameters["weights"].SetValue(kernel);
            effect.Parameters["colorMapTexture"].SetValue(srcTexture);
            effect.Parameters["offsets"].SetValue(offsetsHoriz);

            spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
            spriteBatch.Draw(srcTexture, destRect1, Color.White);
            spriteBatch.End();

            // Perform vertical Gaussian blur.
            game.GraphicsDevice.SetRenderTarget(renderTarget2);
            outputTexture = (Texture2D)renderTarget1;

            effect.Parameters["colorMapTexture"].SetValue(outputTexture);
            effect.Parameters["offsets"].SetValue(offsetsVert);

            spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
            spriteBatch.Draw(outputTexture, destRect2, Color.White);
            spriteBatch.End();

            // Return the Gaussian blurred texture.
            game.GraphicsDevice.SetRenderTarget(null);
            outputTexture = (Texture2D)renderTarget2;

            return outputTexture;
        }

    And this is the draw method affected:

        public void Draw(SpriteBatch spriteBatch)
        {
            device.SetRenderTarget(maxBlur);

            spriteBatch.Begin();
            foreach (Brick brick in blueBricks)
                brick.Draw(spriteBatch);
            spriteBatch.End();

            blue = gBlur.PerformGaussianBlur((Texture2D)maxBlur, helperTarget, maxBlur, spriteBatch);

            spriteBatch.Begin();
            device.SetRenderTarget(null);
            foreach (Brick brick in redBricks)
                brick.Draw(spriteBatch);
            foreach (Brick brick in greenBricks)
                brick.Draw(spriteBatch);
            spriteBatch.Draw(blue, new Rectangle(0, 0, blue.Width, blue.Height), Color.White);
            foreach (Brick brick in purpleBricks)
                brick.Draw(spriteBatch);
            spriteBatch.End();
        }

    I'm sorry about the massive brick of text and images (or not... new user, I tried, it said no), but I wanted to get my problem across clearly, as I have been searching for an answer to this for quite a while now. As a side note, I have seen the bloom sample. Very well commented, but overly complicated since it deals in 3D; I was unable to take what I needed to learn from it. Thanks for any and all help.
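    One piece the code above references but never defines is where kernel, offsetsHoriz and offsetsVert come from. Here is a sketch of a typical way to build them for the RADIUS 7 shader above (the sigma value, class name and exact shape are my own illustration, not the poster's code):

        using System;
        using Microsoft.Xna.Framework;

        // Builds normalized Gaussian weights plus one texel offset per tap,
        // matching the shader's RADIUS 7 / KERNEL_SIZE 15 arrays.
        public class GaussianKernel
        {
            const int Radius = 7;                  // must match RADIUS in the .fx file
            const int KernelSize = Radius * 2 + 1; // 15 taps

            public float[] Kernel = new float[KernelSize];
            public Vector2[] OffsetsHoriz = new Vector2[KernelSize];
            public Vector2[] OffsetsVert = new Vector2[KernelSize];

            public void Compute(float sigma, int textureWidth, int textureHeight)
            {
                float total = 0f;
                for (int i = 0; i < KernelSize; i++)
                {
                    float x = i - Radius;
                    Kernel[i] = (float)Math.Exp(-(x * x) / (2f * sigma * sigma));
                    total += Kernel[i];

                    // One texel step per tap: horizontal pass offsets in x, vertical in y.
                    OffsetsHoriz[i] = new Vector2(x / textureWidth, 0f);
                    OffsetsVert[i] = new Vector2(0f, x / textureHeight);
                }
                for (int i = 0; i < KernelSize; i++)
                    Kernel[i] /= total; // normalize so the weights sum to 1
            }
        }

    Compute would be called once after the render targets are created, with the dimensions of the texture being blurred, and the resulting arrays handed to the effect exactly as in PerformGaussianBlur above.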

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    On mid-day Wednesday, the always colorful Oracle Evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, “Do You Like Coffee with Your Dessert?”. The Raspberry Pi is a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there is a single feature that makes the Raspberry Pi significant,” observed Ritter, “but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing.” Some 200 enthusiastic attendees were present at the session, which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, it's an excellent introduction to the ARM architecture; it runs well with Java and will get better once it has official hard-float support. The possibilities are vast. Ritter explained that the Raspberry Pi project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981, which produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production run of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day.

    Ritter described the specification as follows:
    * CPU: ARM 11 core running at 700MHz; Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
    * Memory: 256Mb
    * I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C

    He took attendees through a brief history of ARM architecture:
    * Acorn BBC Micro (6502 based) - not powerful enough for Acorn's plans for a business computer
    * Berkeley RISC Project - the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was… SPARC
    * Acorn RISC Machine (ARM) - 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
    * Spin-off from Acorn: Advanced RISC Machines

    Next he presented its features:
    * 32-bit RISC architecture - ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips sold last year (zero manufactured by ARM)
    * Abstract architecture and microprocessor core designs - the Raspberry Pi is ARM11 using the ARMv6 instruction set
    * Low power consumption - good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU and does not require a heatsink or fan

    He described the current ARM technology:
    * ARMv6 - ARM 11, ARM Cortex-M
    * ARMv7 - ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
    * ARMv8 (announced) - will support 64-bit data and addressing

    He next gave the Java specifics for ARM, starting with floating point operations:
    * Despite being an ARMv6 processor, it does include an FPU; the FPU only became standard as of ARMv7
    * The FPU (Hard Float, or HF) is much faster than a software library
    * Linux distros and the Oracle JVM for ARM assume no HF on ARMv6, so a special build of both is needed; a Raspbian distro build is now available, and the Oracle JVM is in the works, release date TBD

    Not So RISC - performance improvements:
    * DSP enhancements
    * Jazelle
    * Thumb / Thumb2 / ThumbEE
    * Floating point (VFP)
    * NEON
    * Security enhancements (TrustZone)

    He spent a few minutes going over the challenges of using Java on the Raspberry Pi, covering sound, vision, serial (TTL UART), USB and GPIO.

    To implement sound with Java he pointed out:
    * Sound drivers are now included in new distros
    * Java Sound API - remember to add audio to the user's groups; some bits work, others not so much; playing (the right format) WAV file works; using MIDI hangs trying to open a synthesizer
    * FreeTTS text-to-speech - should work once sound works properly

    He turned to JavaFX on the Raspberry Pi:
    * Currently internal builds only - will be released as a technology preview soon
    * Work involves an optimal implementation of the Prism graphics engine - X11?
    * Once the JavaFX implementation is completed, there will be little of concern to developers - it's just Java (WORA)

    He explained the basis of the serial port:
    * UART provides TTL level signals (3.3V)
    * RS-232 uses 12V signals
    * Use a MAX3232 chip to convert
    * Use this for access to the serial console

    He summarized his key points: the Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM, and something that works very well with Java and will work better in the future. The opportunities are limitless. For further info, check out Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm, support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a “test” scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it.
    Don't let the Project Manager be the Product Owner
    The first lesson learnt is to identify the correct product owner – in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner – in fact they have the most to gain should the project fail.
    Know the time commitments of team members to the Project
    The second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project, many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn't get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse. Putting up a time burn down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available from the members? By actual time I mean what remains after the exercise of getting them to go through all their appointments, lunch times and breaks and removing those from their commitment; this gets you to a realistic number of hours that they can dedicate.
    Make sure you meet your minimum team sizes
    In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people tend to be sick more, take more leave and generally not be around – if you have a team size of two it is very easy to lose momentum on the project. The more people you have in the team (up to about 9), the more momentum the project will have when people are not around.
    Swapping from one methodology to another can seem like waste to the customer
    It sounds bad, but most customers don't care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process. With this particular project there was a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a scrum implementation – this seemed like waste to the customer. We should have managed the customer's expectations properly.
    Don't play the role of the scrum master if you can't be the scrum master
    With this particular implementation I was the “scrum master”, but all I did was go through the formal meetings of scrum – I attended stand up, retrospectives and planning – without being hands-on on the ground. I was not performing the most important role of removing blockages, and by the end of the project a number of blockages were “cropping up”.
    A better approach would have been to take someone on the team, train them to be the scrum master and be present to coach them, or alternatively to actually be on the team on a full-time basis and be the scrum master myself. Just going through the meetings of scrum didn't mean we were doing scrum.
    So we failed with this one; if you fail, look at it from an agile perspective
    As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed, and then focussing on learning from why and on how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • Mac - Flash file not loaded in independent flash player

    - by Mugdha
    Hi, I am working on an independent application to play Flash files on Mac. I have already done the same for Linux, where it works flawlessly, but on Mac for some reason Flash is not drawing to my window, and it is not throwing any kind of error either. I am using Flash Player 10, which means I am using the Core Graphics drawing model. I am able to send mouse events to Flash, and I wrote a sample plugin to check if there was a problem in the context that I was sending, but my sample plugin draws properly to the window. I get a call to NPN_InvalidateRect twice, and in response I send an update event back to Flash. I drew a dummy rectangle to check that my context is correct, and I have flipped the context to make the origin the top-left corner. Right-clicking on the debug version of the Flash player shows the following message: "Movie not loaded..." Can anyone give me any idea why the content is not being drawn? I would really appreciate the help, as I have been struggling with this for more than a month now. Here is a small log of the interaction that I have with Flash:
        NPN_UserAgent Called
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPN_GetValue Called with variable NPNVSupportsWindowless; return true
        NPN_SetValue Called for Variable - NPPVpluginTransparentBool; return true
        NPN_GetValue Called with variable NPNVsupportsCoreGraphicsBool; return true
        NPN_SetValue Called for Variable - NPNVpluginDrawingModel
        NPP_SetWindow (CoreGraphics): 0, window=0xebaa90, context=0xe4c930, window.x:0 window.y:22 window.width:480 window.height:270
        NPP_HandleEvent(activateEvent) accepted:0 isActive: 1
        NPP_HandleEvent(updateEvt) accepted: 1
        NPN_UserAgent Called
        NPN_GetURLNotify Called with URL - javascript:top.location+"flashplugin_unique"
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPP_NewStream URL=/Users/mjain/Desktop/clock.swf MIME=application/x-shockwave-flash error=0
        NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455
        NPN_InvalidateRect Called
        NPP_Write responseURL=/Users/mjain/Desktop/clock.swf bytes=9925 total-delivered=9925/9925
        NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455
        NPP_DestroyStream responseURL=/Users/mjain/Desktop/clock.swf error=0
        NPP_HandleEvent(updateEvt) accepted: 1
        NPN_InvalidateRect Called
        NPP_HandleEvent(updateEvt) accepted: 1
        NPP_NewStream URL=javascript:top.location+"flashplugin_unique" MIME=text/plain error=0
        NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000
        NPN_UserAgent Called
        NPP_Write responseURL=javascript:top.location+"flashplugin_unique" bytes=52 total-delivered=52/52
        NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000
        NPP_DestroyStream responseURL=javascript:top.location+"flashplugin_unique" error=0
        NPP_URLNotify responseURL=javascript:top.location+"flashplugin_unique" reason=0
    Thanks, Mugdha.

    Read the article

  • Is Appcelerator Titanium now banned on the iPhone?

    - by altuzar
    This question has been answered quite clearly for MonoTouch here: http://stackoverflow.com/questions/2604033/is-monotouch-now-banned-on-the-iphone But what about Appcelerator Titanium? The new ToS for Apple's iPhone OS 4 says:
    3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).
    Titanium uses JavaScript, but that JavaScript is not executed by the iPhone OS WebKit engine directly. On their developer blog, Jeff Haynie says Titanium is in the clear, but I don't know if they are in denial: “It’s our belief that we are fully in compliance with iPhone OS 4.0 ToS as we interpret them.” I haven't found any official word from Apple, only opinions, and I'm quite confused. I'm not writing another line of code for my app until... you know.

    Read the article

  • How can I do something ~after~ an event has fired in C#?

    - by Siracuse
    I'm using the following project to handle global keyboard and mouse hooking in my C# application. The project is basically a wrapper around the Win API call SetWindowsHookEx using either the WH_MOUSE_LL or WH_KEYBOARD_LL constants. It also manages certain state and generally makes this kind of hooking pretty pain-free. I'm using this for mouse gesture recognition software I'm working on. Basically, I have it set up so it detects when a global hotkey is pressed down (say CTRL); the user then moves the mouse in the shape of a pre-defined gesture and releases the global hotkey. The KeyDown event is processed and tells my program to start recording the mouse locations until it receives the KeyUp event. This is working fine and gives users an easy way to enter a mouse-gesture mode. Once the KeyUp event fires and the appropriate gesture is detected, the program is supposed to send certain keystrokes, which the user has defined for that particular gesture, to the active window. I'm using the SendKeys.Send/SendWait methods to send output to the current window. My problem is this: when the user releases their global hotkey (say CTRL), it fires the KeyUp event. My program takes its recorded mouse points, detects the relevant gesture and attempts to send the correct input via SendKeys. However, because all of this happens inside the KeyUp event, the global hotkey hasn't finished being processed. So, for example, if I define a gesture to send the key "A" and my global hotkey is CTRL, then when the gesture is detected SendKeys will send "A" while CTRL is still "down". Instead of just sending A, I'm getting CTRL-A; so instead of physically sending the single character "A", it selects all via the CTRL-A shortcut. Even though the user has released CTRL (the global hotkey), it is still considered down by the system. Once my KeyUp event fires, how can I have my program wait some period of time, or for some event, so I can be sure that the global hotkey is truly no longer registered as down by the system, and only then send the correct input via SendKeys?
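    One possible approach, sketched below under stated assumptions rather than taken from the linked hook project: after KeyUp fires, poll the keyboard state from a WinForms timer and only call SendKeys once the modifier actually reads as up. GetAsyncKeyState is a real user32 function; the helper class name, the 10ms interval and the hard-coded VK_CONTROL are illustrative choices.
        // Sketch only: poll the async key state from a UI timer and send the
        // gesture's keys once CTRL (the assumed hotkey) reads as released.
        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class DeferredSendKeys
        {
            [DllImport("user32.dll")]
            private static extern short GetAsyncKeyState(int vKey);

            private const int VK_CONTROL = 0x11; // assumed global hotkey

            public static void SendAfterHotkeyUp(string keys)
            {
                var timer = new Timer { Interval = 10 }; // 10ms poll is an arbitrary choice
                timer.Tick += (sender, e) =>
                {
                    // High bit set means the key is still physically down.
                    if ((GetAsyncKeyState(VK_CONTROL) & 0x8000) == 0)
                    {
                        timer.Stop();
                        timer.Dispose();
                        SendKeys.Send(keys); // CTRL can no longer combine with it
                    }
                };
                timer.Start();
            }
        }
    Called from the KeyUp handler as DeferredSendKeys.SendAfterHotkeyUp("A"), this keeps the UI thread unblocked while waiting, rather than sleeping inside the hook callback.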

    Read the article

  • Silverlight async calls and anonymous methods....

    - by JLewis
    I know there are a couple of debates on this kind of thing... At any rate, I have several cases where I need to populate combobox items based on enumerations returned from the WCF service. In an effort to keep the code clean, I started with this approach. After looking into it more, I don't think this works as well as I initially thought. I am throwing this out to get recommendations/advice/code snippets on how you would do this or how you currently do this. I may be forced to have a separate, non-anonymous method; I hate doing that for something like this, but at the moment I don't see it working another way...
        EventHandler<GetEnumerationsForTypeCompletedEventArgs> ev = null;
        ev = delegate(object eventSender, GetEnumerationsForTypeCompletedEventArgs eventArgs)
        {
            if (eventArgs.Error == null)
            {
                //comboBox.ItemsSource = eventArgs.Result;
                // populate combobox for display purposes (for now)
                foreach (Enumeration e in eventArgs.Result)
                {
                    ComboBoxItem cbi = new ComboBoxItem();
                    cbi.Content = e.EnumerationValueDisplayed;
                    comboBox.Items.Add(cbi);
                }
                // remove event so we don't keep adding new events each time we need an enumeration
                proxy.GetEnumerationsForTypeCompleted -= ev;
            }
        };
        proxy.GetEnumerationsForTypeCompleted += ev;
        proxy.GetEnumerationsForTypeAsync(sEnumerationType);
    Basically, in this example we use ev to hold the anonymous method so we can remove it from the event from within the method itself. This prevents the handler from being called more than once. I suspect that the comboBox local variable declared before this call, but within the same method, is not always the combobox originally intended, but I can't verify that yet. I may add a tag to it and do some tests and populating to verify. Sorry if this is not clear; I can elaborate more if needed. Thanks, Jeff
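    For what it's worth, one way to keep the call sites clean is to wrap the attach/detach dance in a small extension method. This is only a sketch: EnumerationServiceClient is a stand-in name for the generated WCF proxy type, and it detaches before inspecting the error so the handler is removed even when the call fails (the snippet above only detaches on success).
        // Sketch: a reusable "subscribe once" wrapper around the async pattern above.
        // EnumerationServiceClient is a hypothetical name for the generated proxy.
        using System;

        public static class EnumerationProxyExtensions
        {
            public static void GetEnumerationsOnce(
                this EnumerationServiceClient proxy,
                string enumerationType,
                Action<GetEnumerationsForTypeCompletedEventArgs> onSuccess)
            {
                EventHandler<GetEnumerationsForTypeCompletedEventArgs> handler = null;
                handler = (sender, args) =>
                {
                    // Detach first so the handler never fires twice, even on error.
                    proxy.GetEnumerationsForTypeCompleted -= handler;
                    if (args.Error == null)
                    {
                        onSuccess(args);
                    }
                };
                proxy.GetEnumerationsForTypeCompleted += handler;
                proxy.GetEnumerationsForTypeAsync(enumerationType);
            }
        }
    A call then reads as proxy.GetEnumerationsOnce(sEnumerationType, args => { /* populate comboBox */ }), which also sidesteps the captured-combobox ambiguity by keeping the population code next to the call.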

    Read the article

  • Metaprogramming - self explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually, I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). I also want more than your average wiki page, and something language-agnostic or, preferably, with Java examples. Do you know of resources that would let me efficiently put all of this into practice? (I know experience has a lot to say in all of this, but I kind of want to build experience while avoiding the usual flow of bad decisions -> experience -> good decisions.)
    EDIT: Something along the lines of this example from The Pragmatic Programmer: "...implement a mini-language to control a simple drawing package... The language consists of single-letter commands. Some commands are followed by a single number." For example, the following input would draw a rectangle:
        P 2 # select pen 2
        D   # pen down
        W 2 # draw west 2cm
        N 1 # then north 1
        E 2 # then east 2
        S 1 # then back south
        U   # pen up
    Thank you!
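    As a taste of what that Pragmatic Programmer exercise involves, here is a minimal interpreter for the command set quoted above. It is a sketch in C# rather than the Java the poster prefers (the translation is mechanical), and the pen-movement console output is just a stand-in for a real drawing package.
        // Sketch: interpreting the single-letter drawing mini-language quoted above.
        using System;
        using System.Collections.Generic;

        class DrawingInterpreter
        {
            private double x, y;     // pen position in cm
            private bool penDown;
            private int pen = 1;

            public void Run(IEnumerable<string> program)
            {
                foreach (string raw in program)
                {
                    string line = raw.Split('#')[0].Trim();   // strip trailing comments
                    if (line.Length == 0) continue;
                    string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    double arg = parts.Length > 1 ? double.Parse(parts[1]) : 0;
                    switch (parts[0])
                    {
                        case "P": pen = (int)arg; break;    // select pen
                        case "D": penDown = true; break;    // pen down
                        case "U": penDown = false; break;   // pen up
                        case "N": Move(0, arg); break;      // north
                        case "S": Move(0, -arg); break;     // south
                        case "E": Move(arg, 0); break;      // east
                        case "W": Move(-arg, 0); break;     // west
                        default: throw new InvalidOperationException("Unknown command: " + parts[0]);
                    }
                }
            }

            private void Move(double dx, double dy)
            {
                if (penDown)
                    Console.WriteLine(string.Format(
                        "pen {0}: ({1},{2}) -> ({3},{4})", pen, x, y, x + dx, y + dy));
                x += dx;
                y += dy;
            }
        }
    The point of the exercise is that the command table, not the host language, carries the domain knowledge; adding a new command is one switch case.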

    Read the article

  • Time flies like an arrow demo in WinForms

    - by Benjol
    Looking at the Reactive Extensions for JavaScript demo on Jeff Van Gogh's blog, I thought I'd give it a try in C#/WinForms, but it doesn't seem to work so well. I just threw this into the constructor of a form (with the Rx framework installed and referenced):
        Observable.Context = SynchronizationContext.Current;
        var mousemove = Observable.FromEvent<MouseEventArgs>(this, "MouseMove");
        var message = "Time flies like an arrow".ToCharArray();
        for (int i = 0; i < message.Length; i++)
        {
            var l = new Label()
            {
                Text = message[i].ToString(),
                AutoSize = true,
                TextAlign = ContentAlignment.MiddleCenter
            };
            int closure = i;
            mousemove
                .Delay(closure * 150)
                .Subscribe(e =>
                {
                    l.Left = e.EventArgs.X + closure * 15 + 10;
                    l.Top = e.EventArgs.Y;
                    //Debug.WriteLine(l.Text);
                });
            Controls.Add(l);
        }
    When I move the mouse, the letters seem to get moved in a random order, and if I uncomment the Debug line, I see multiple events for the same letter... Any ideas? I've tried Throttle, but it doesn't seem to make any difference. Am I just asking too much of WinForms to move all those labels around? (Cross-posted on the Rx forum)

    Read the article

  • Web scraping etiquette

    - by Ash
    I'm considering writing a simple web scraping application to extract information from a website that does not seem to specifically prohibit this. I've checked for other alternatives (e.g. RSS, web services) to get this information, but none are available at this stage. That said, I've also developed and maintained a few websites myself, so I realize that if web scraping is done naively/greedily it can slow things down for other users and generally become a nuisance. So, what etiquette is involved in terms of:
    * Number of requests per second/minute/hour
    * HTTP User-Agent content
    * HTTP Referer content
    * HTTP cache settings
    * Buffer size for larger files/resources
    * Legalities and licensing issues
    * Good tools or design approaches to use
    * Robots.txt – is this relevant for web scraping, or just for crawlers/spiders?
    * Compression such as GZip in requests
    Update: Found this relevant question on Meta: Etiquette of Screen Scraping StackOverflow. Jeff Atwood's answer has some helpful recommendations. Other related StackOverflow questions: Options for HTML scraping; Legalities of screen scraping.
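    To make a few of those points concrete, here is a hedged sketch of a polite fetch loop in C# (the question itself is language-agnostic): it identifies the scraper in the User-Agent, accepts compressed responses, and sleeps between requests. The URLs, agent string and 5-second delay are made-up illustrations, not values endorsed by any particular site.
        // Sketch: a deliberately slow, self-identifying fetch loop.
        using System;
        using System.IO;
        using System.Net;
        using System.Threading;

        class PoliteScraper
        {
            static void Main()
            {
                // Hypothetical page list; a real scraper should also honor robots.txt.
                string[] pages = { "http://example.com/page1", "http://example.com/page2" };

                foreach (string url in pages)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.UserAgent = "ExampleScraper/1.0 (+http://example.com/about-this-bot)";
                    request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

                    using (var response = request.GetResponse())
                    using (var reader = new StreamReader(response.GetResponseStream()))
                    {
                        string html = reader.ReadToEnd();
                        // ... extract the information of interest from html ...
                    }

                    Thread.Sleep(TimeSpan.FromSeconds(5)); // throttle: well under one request per second
                }
            }
        }
    The contact URL in the User-Agent is the courtesy that matters most: it lets the site operator reach you instead of blocking you.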

    Read the article

  • Why are floating point values so prolific?

    - by Kibbee
    So, the title says it all: why are floating point values so prolific in computer programming? Given problems like rounding errors, and the inability to accurately represent even numbers such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to use. If you sit back and think about every time you used a floating point value, how many times did you say: well, some error would be OK, as long as the result is a few microseconds faster? It really makes me think, because Jeff was talking about NP-completeness and how heuristics give an answer that is only kind of right, and computers shouldn't do that; they should give you the answer that is correct. Yet we see floating point values used in many applications where they are simply not valid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point decimal alternative. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate?
    [EDIT] Just to clarify: I was talking about base-2 floating point, not base-10 floating point. .NET offers the Decimal data type, a base-10 floating point value that offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base-10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
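    To make the 0.1 example concrete, a short C# snippet (the poster mentions .NET's base-10 Decimal type) showing the drift that motivates the question:
        // Sketch: base-2 double vs. base-10 decimal when accumulating 0.1.
        using System;

        class FloatingPointDemo
        {
            static void Main()
            {
                double d = 0.0;
                for (int i = 0; i < 10; i++) d += 0.1;   // 0.1 has no exact base-2 form
                Console.WriteLine(d == 1.0);             // False
                Console.WriteLine(d.ToString("R"));      // 0.99999999999999989

                decimal m = 0.0m;
                for (int i = 0; i < 10; i++) m += 0.1m;  // 0.1m is exact in base 10
                Console.WriteLine(m == 1.0m);            // True
            }
        }
    Ten additions are enough for the binary representation error to surface, which is exactly why cents-in-an-integer or a decimal type is the usual answer for money.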

    Read the article

  • Gallio MbUnit and Team City problem

    - by Bernard Larouche
    I asked a question this morning about an integration problem between Gallio and TeamCity. I changed the MSBuild file to use the proper syntax for the latest Gallio build script API (thank you for that, Jeff Brown), but now when I try to build the application on TeamCity I get the following error:
        An unexpected error occurred during execution of the Gallio task.
        [16:19:49]: [Project "CoderForTraders.msbuild.teamcity.patch.tcprojx" (RebuildSolution;RunTests target(s)):] C:\TeamCity\buildAgent\work\fa1d38b0af329d65\CoderForTraders.msbuild(9, 9): FilterParseException: Colon expected
    Here's line 9:
        <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)">
    and here is the whole file:
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- This is needed by MSBuild to locate the Gallio task -->
          <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
          <!-- Specify the test files and assemblies -->
          <ItemGroup>
            <TestFile Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" />
          </ItemGroup>
          <Target Name="RunTests">
            <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)">
              <!-- This tells MSBuild to store the output value of the task's ExitCode
                   property into the project's ExitCode property -->
              <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
            </Gallio>
            <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
          </Target>
          <Target Name="RebuildSolution">
            <Message Text="Starting to Build"/>
            <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" />
          </Target>
        </Project>
    Do you have any idea what the problem might be?

    Read the article

  • How to explain to someone that a data structure should not draw itself, explaining separation of concerns

    - by leeand00
    I have another programmer to whom I'm trying to explain why a UI component should not also be a data structure. For instance, say you get a data structure containing a record-set from the "database", and you wish to display that record-set in a UI component within your application. According to this programmer (who will remain nameless; he's young and I'm teaching him...), we should subclass the data structure into a class that will draw the UI component within our application! And thus, by this logic, the record-set should manage the drawing of the UI. *Head desk*
    I know that asking a record-set to draw itself is wrong, because if you wish to render the same data structure on more than one type of component in your UI, you are going to have a real mess on your hands: you'll need to extend yet another class for each and every UI component that you render from the base class of your record-set. I am well aware of the "cleanliness" of the MVC pattern (and by that I really mean you don't confuse your data (the Model) with your UI (the View) or the actions that take place on the data (the Controller, more or less... okay, not really: the API should handle most of that, and the Controller should just make as few calls to it as it can, telling it which view to render)). But it's certainly a lot cleaner than using data structures to render UI components!
    Is there any other advice I could send his way besides the example above? I understand that when you first learn OOP you go through "a stage" where you just want to extend everything, followed by a stage when you think Design Patterns are the solution to every single problem... which isn't entirely correct either... thanks, Jeff. Is there a way I can gently nudge this kid in the right direction? Do you have any more examples that might help explain my point to him?
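    One concrete illustration that may help make the argument (a sketch, not tied to any particular framework): keep the record-set free of UI knowledge and let each view consume it, so adding a new kind of component never touches the data type.
        // Sketch: the record-set carries data only; rendering lives in the views.
        using System.Collections.Generic;

        class RecordSet
        {
            public IList<IDictionary<string, object>> Rows { get; private set; }

            public RecordSet(IList<IDictionary<string, object>> rows)
            {
                Rows = rows;
            }
        }

        // Any number of UI components can present the same data...
        interface IRecordSetView
        {
            void Render(RecordSet data);
        }

        class GridView : IRecordSetView
        {
            public void Render(RecordSet data)
            {
                // bind data.Rows to a grid control here
            }
        }

        class ChartView : IRecordSetView
        {
            public void Render(RecordSet data)
            {
                // plot data.Rows as a chart here
            }
        }
    The subclass-per-component approach inverts this: every new view forces a new RecordSet subclass, which is exactly the mess described above.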

    Read the article

  • JQuery Cycle fails on Page Refresh

    - by Darknight
    I'm seeing a similar issue to this one: http://stackoverflow.com/questions/1719475/jquery-cycle-firefox-squishing-images I managed to overcome the initial problem using Jeff's answer in the above link. However, I have now noticed a new bug: upon page refresh it simply does not work. I have tried a hard refresh (Ctrl+F5), which does not help, yet when you navigate back to the page it loads fine. Here is my modified version (taken from Jeff's):
        <script type="text/javascript">
            $(document).ready(function() {
                var imagesRemaining = $('#slideshow img').length;
                $('#slideshow img').bind('load', function(e) {
                    imagesRemaining = imagesRemaining - 1;
                    if (imagesRemaining == 0) {
                        $('#slideshow').show();
                        $('#slideshow').cycle({ fx: 'shuffle', speed: 1200 });
                    }
                });
            });
        </script>
    Any ideas? I've also tried jQuery .live() but could not implement it correctly, and I've tried meta tags to force images to load, but that only works the first time round.

    Read the article
