Search Results

Search found 7217 results on 289 pages for 'd nice'.

Page 197/289 | < Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >

  • Set iPhone Style Location Based Alerts On Your Android Device With Google Now

    - by Gopinath
    The iPhone's location-based alerts are very useful. You can set an alert to pop up as soon as you reach a specific location, like “Pick up milk and eggs” when you’re near a grocery store. This feature was missing from Android for a long time, but last week at the Google I/O conference Google released an update to Google Now which supports location-based alerts. To set up a location-based alert: 1. Launch Google Now. 2. Type or say add reminder. 3. By default it shows the time-based alert interface; switch it to location based by touching the Location icon. 4. Set the reminder text, choose a location and touch Set reminder. 5. Your alert is now set, and as soon as you are close to the specified location you’ll see an alert on your device. This is a nice feature and I’ve been using it quite often for the past couple of days. There are a couple of things missing from the current version of Google Now location-based alerts – recurring alerts and the ability to set alerts on leaving a specific location. It is not possible to make location-based alerts recur: you will be alerted only once, as soon as you reach the location, and the alert is not repeated the next time you visit it. Let's say you want to be reminded to say hi to a friend’s parents whenever you are travelling close to their home: it does not work. The second missing feature is something basic that somehow Google did not incorporate in the first iteration. Let's say you are at the office now and you want to set up an alert to pick up flowers when you leave the office. Sounds like a simple use case for location-based alerts, right? But there is no way to set this type of alert. Google Now alerts you as soon as you reach a location, but not when you leave one. Do you have an Android device that supports Google Now? If so, what are your thoughts on location-based alerts?

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    Was reading through some articles on the advantages of creating Generic Repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once: IRepository repo = new EfRepository(); // Would normally pass through IOC into constructor var c1 = new Country() { Name = "United States", CountryCode = "US" }; var c2 = new Country() { Name = "Canada", CountryCode = "CA" }; var c3 = new Country() { Name = "Mexico", CountryCode = "MX" }; var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" }; var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" }; var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" }; repo.Add<Country>(c1); repo.Add<Country>(c2); repo.Add<Country>(c3); repo.Add<Province>(p1); repo.Add<Province>(p2); repo.Add<Province>(p3); repo.Save(); However, the rest of the implementation of the Repository has a heavy reliance on Linq: IQueryable<T> Query(); IList<T> Find(Expression<Func<T,bool>> predicate); T Get(Expression<Func<T,bool>> predicate); T First(Expression<Func<T,bool>> predicate); //... and so on This repository pattern worked fantastic for Entity Framework, and pretty much offered a 1 to 1 mapping of the methods available on DbContext/DbSet. But given the slow uptake of Linq on other data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the Repository, but PetaPoco doesn't support Linq Expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate. Is the Generic Repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? Edit: Original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).
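
    For comparison, here is a rough C# sketch of the stripped-down, LINQ-free generic repository the question ends up describing: only the basic CRUD surface, with anything resembling a "where" clause pushed down into entity-specific repositories. The interface and class names here are mine, for illustration only.

        using System.Collections.Generic;

        // Minimal generic repository: no IQueryable and no Expression<Func<T, bool>>,
        // so it can sit on top of providers (like PetaPoco) that don't speak LINQ.
        public interface IRepository<T> where T : class
        {
            IEnumerable<T> GetAll();
            T GetById(object id);
            void Add(T entity);
            void Update(T entity);
            void Delete(T entity);
            void Save();
        }

        // An entity-specific repository adds the "where"-style queries that would
        // otherwise have been passed in as predicates against the generic interface.
        public interface ICountryRepository : IRepository<Country>
        {
            Country GetByCountryCode(string countryCode);
        }

        public class Country
        {
            public string Name { get; set; }
            public string CountryCode { get; set; }
        }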

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I’ve been looking at parsing TSQL scripts in a variety of ways for a variety of tasks. In my last entry, ‘How to prevent ‘Select *’ : The elegant way’, I looked at parsing SQL to report upon uses of SELECT *. The obvious question leading on from this is, “Great, what about other code smells?” Well, using the language service parser to do that was turning out to be a bit of a hard job: sure, I was getting tokens, but no real context, and I wasn't even being told when the end of a statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly ‘Microsoft.SqlServer.TransactSql.ScriptDom’; this is, I believe, installed with the SQL Server client development tools. It is much more feature-rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it? Well, for an opening gambit, quite a nice little list: use of NOLOCK; setting of READ UNCOMMITTED; use of SELECT *; INSERT without column references; explicit datatype conversion on SARGs; cross-server selects; non-use of the two-part naming convention; table and query hint usage; changes in SET options; use of single-line comments; use of ordinal column positions in the ORDER BY clause. Now, let's not argue the point that “it depends” whether some of these are really smells, but as an academic exercise it is quite interesting. The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip. All the usual disclaimers apply to this code: I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases. The assembly used requires .NET 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK). The code searches for all .sql files in the folder hierarchy under the working path (you can override this if you want by simply changing the $Folder variable) and processes each in turn for the smells. Feedback is not great at the moment: all it does is output to an XML file (Smells.xml) the offset position and a description of each smell found. Right now, I am interested in your feedback. What do you think? Is this (or should it be) more than an academic exercise? Can tooling such as this be used as some form of code quality measure? Does it work? Do you have a case listed above which is not being reported? Do you have a case that you would love to see reported? Let me know, please: mailto: [email protected]. Thanks
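
    To make the ScriptDom approach concrete, here is a minimal C# sketch of the kind of check described above: a visitor that flags every SELECT * in a script. The parser version, the input file name and the idea of printing offsets are my own assumptions for illustration; this is not the author's PowerShell implementation.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using Microsoft.SqlServer.TransactSql.ScriptDom;

        // Visitor that records the offset of every SELECT * it encounters.
        class SelectStarVisitor : TSqlFragmentVisitor
        {
            public List<int> Offsets = new List<int>();

            public override void Visit(SelectStarExpression node)
            {
                Offsets.Add(node.StartOffset);
            }
        }

        class Program
        {
            static void Main()
            {
                var parser = new TSql110Parser(true); // parser version is an assumption
                IList<ParseError> errors;
                TSqlFragment fragment;
                using (var reader = new StreamReader("example.sql")) // hypothetical input file
                {
                    fragment = parser.Parse(reader, out errors);
                }

                var visitor = new SelectStarVisitor();
                fragment.Accept(visitor);

                foreach (var offset in visitor.Offsets)
                    Console.WriteLine("SELECT * found at offset {0}", offset);
            }
        }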

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well, the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place, not least because our app could only run on Azure: we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure, then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon; but the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, and go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • TechEd 2012: Windows 8 And Metro

    - by Tim Murphy
    Windows 8 is here (or at least very close) and that was the main feature of this morning’s keynote. Antoine LeBlond started off by apologizing to the IT professionals since he planned on showing code. I’m not sure if IT Pros are that easily confused or why you would need such a disclaimer. Developers do real work, IT Pros just play with toys (just kidding). The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets. Specifically I liked the AppBar features that we have become used to with Windows Phone and some of the gesture features. Even though they have been available on other platforms before, I think Microsoft really got them right. Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key. My jaw dropped through the floor seeing a feature-rich OS boot off of a thumb drive. WOW! I also can’t wait to get rid of dual booting just to run Hyper-V images when developing. The morning continued with a session on Metro XAML development with Tim Heuer. While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012. Finding out that Blend is now integrated with VS2012, after working with them as separate applications, was an encouraging start. Moving on to Metro, he introduced the nugget that WinRT is async everywhere. How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform. Thankfully he followed that up with a couple of new keywords, await and async, that eliminate a lot of the plumbing that has been required in the past for asynchronous operations. Tim also related that since the Metro framework is relatively small, and most apps will use a significant amount of it, the entire surface is referenced by default. This is a contrast to adding namespaces and assemblies one after another as we normally do. This was such a power-packed session that I can’t detail it all here, so here is the teaser list: new icons in VS2012 for extension methods; emulator/simulator testing features for gestures; portable class libraries; XAML no longer managed code; and so much more …   del.icio.us Tags: Windows 8,Metro,Tim Heuer,XAML,Windows Phone,Hyper-V,Antoine LeBlond,TechEd,TechEd 2012,Visual Studio 2012,Visual Studio
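
    As a quick illustration of the await and async keywords mentioned above, here is a minimal, self-contained C# sketch; the method name and URL are made up for the example, and this is not code from the session.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AwaitDemo
        {
            // async/await lets an asynchronous call read like sequential code,
            // without manually wiring up callbacks or completion events.
            static async Task<int> GetPageLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    // Control returns to the caller while the download runs.
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }

            static void Main()
            {
                int length = GetPageLengthAsync("http://example.com").Result;
                Console.WriteLine("Downloaded {0} characters", length);
            }
        }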

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1 compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly because when I use immediate mode without shaders the texture displays fine but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4 (0.0, 1.0, 1.0, 1.0) then I get a nice blue quad so I know that my shader is able to set the colour. It appears to be either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not xna). Vertex Shader: void main() { gl_TexCoord[0] = gl_MultiTexCoord0; // Set the position of the current vertex gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: uniform sampler2D diffuseTexture; void main() { // Set the output color of our current pixel gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st); //gl_FragColor = vec4 (0.0,1.0,1.0,1.0); } Drawing Code: int vb, eb; GL.GenBuffers(1, out vb); GL.GenBuffers(1, out eb); // Position Texture float[] verts = { 0.1f, 0.1f, 0.0f, 0.0f, 0.0f, 1.9f, 0.1f, 0.0f, 1.0f, 0.0f, 1.9f, 1.9f, 0.0f, 1.0f, 1.0f, 0.1f, 1.9f, 0.0f, 0.0f, 1.0f }; uint[] indices = { 0, 1, 2, 0, 2, 3 }; //upload data to the VBO GL.BindBuffer(BufferTarget.ArrayBuffer, vb); GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw); //Upload texture int buffer = GL.GenTexture(); GL.BindTexture(TextureTarget.Texture2D, buffer); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear); GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate); GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data); //Draw GL.UseProgram(shaderProgram); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.TextureCoordArray); GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0); GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3); GL.ActiveTexture(TextureUnit.Texture0); GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0); GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);
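
    Two things worth double-checking in a setup like this are that the client-state pointer offsets are given in bytes, and that the texture is bound to the texture unit the sampler uniform points at while the shader program is active. The fragment below is only a hedged sketch of that wiring in OpenTK, not a confirmed diagnosis of the bug above; the stride and offsets assume the five-float interleaved layout from the question, and buffer/shaderProgram are the variables used there.

        // Hypothetical fragment, not a verified fix: layout is [x, y, z, u, v],
        // i.e. 5 floats per vertex, and pointer offsets are measured in BYTES.
        int stride = 5 * sizeof(float);

        GL.VertexPointer(3, VertexPointerType.Float, stride, 0);                     // position starts at byte 0
        GL.TexCoordPointer(2, TexCoordPointerType.Float, stride, 3 * sizeof(float)); // UVs start after 3 floats (12 bytes)

        // Bind the texture to unit 0 and point the sampler uniform at that unit,
        // with the shader program in use.
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.BindTexture(TextureTarget.Texture2D, buffer);
        GL.UseProgram(shaderProgram);
        GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0);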

    Read the article

  • Should I install ubuntu on USB instead of HDD dual-boot?

    - by user2147243
    I had Ubuntu 12.04 installed as a dual-boot OS on top of Vista on my laptop. I hacked the grub settings to default to Vista (instead of the default Ubuntu -- pain) on startup, and all was OK for occasional Ubuntu use for the past 6 months. Then last week I got a strange message about 'lack of disk space' (~50MB free) when installing pxyplot, even though there was still about 6GB of free disk space when I checked later. Then today Ubuntu wouldn't load at all, and checking the HDD partitions in Vista it looked like the 15GB Ubuntu partition was now three smaller partitions! So, I got rid of those partitions and expanded the Vista partition to use the reclaimed space. Now I can't restart ('grub rescue' appears and doesn't 'rescue' anything), so I'll have to do a boot recovery using a Vista installation CD. (Not a particularly user-friendly failure mode of the dual-boot installation!) I now have to decide to either a) try installing Ubuntu on the HDD again, though I don't want to stuff up my Vista ever again, as that is my most used OS, or b) install Ubuntu on a 16GB USB 3.0 stick. Apparently performance from USB won't be as good as from the HDD, and running an OS from a USB stick does lots of r/w, so the stick may fail after a few years! Perhaps installing Ubuntu on a live USB and setting it up to then run in RAM would alleviate the performance/USB lifespan problems? If I create a live USB for the Ubuntu OS, will the laptop boot off that when I restart it with the stick plugged in? Or will I have to change the laptop's boot-order setting whenever I want to boot Ubuntu instead of Vista (that would be even more painful than the grub default boot order putting Ubuntu ahead of the existing Vista OS!) -- update: I recovered my Vista setup using the Iolo SystemMechanic Disaster Recovery Tool, and created a bootable USB of Ubuntu 13.10 on an 8GB USB 3.0 pendrive, with 4GB of 'persistence' to allow saving of settings, installing some packages, etc. It worked OK for a couple of test boots, but once I changed the time and desktop wallpaper, the next Ubuntu reboot crashed and I then couldn't get it to boot successfully. So I decided to install Ubuntu 12.04 LTS as a dual-boot again, but this time, instead of partitioning the HDD and installing from an ISO DVD, I used the wubi.exe tool to install Ubuntu as a dual-boot. It worked very well, although one oddity was that, despite asking how big to make the partition (20GB), the installed Ubuntu appears to be happily installed somewhere within the Vista NTFS file system (no partition shows up in Windows disk manager, and in the Ubuntu disk management tool the entire 133GB of HDD is showing, with ~40GB of free space). A nice feature of installing the dual-boot using wubi is that the laptop now uses the Windows boot manager on startup, with Vista as the default OS and Ubuntu happily listed second on the list. So far so good.

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
    What do you do when you really want to access a webpage only to find that it is either offline or overloaded from too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox. The Problem If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (i.e. Slashdot, Digg, etc.) then this is the result. No satisfaction to be had here… Resurrect Pages in Action Once you have installed the extension you can add the Toolbar Button if desired…it will give you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website and have it appear as shown here. As you can see there is a very nice selection of cache services to choose from, therefore increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window then you should definitely use the Toolbar Button. Clicking on the Toolbar Button will give you access to the popup window shown here…otherwise the access attempt will happen in the current tab. Here is the result for the website that we wanted to view using the Google Listing. Followed by the Google (text only) Listing. The results with the different services will depend on how recently the webpage was published/set up. View Older Versions of Currently Accessible Websites Just for fun we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive brought up the following page…we decided to try the Nov. 28, 2006 listing. As you can see things have really changed between 2006 and now…Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years. Conclusion If you encounter a webpage that is offline or overloaded by sudden popularity then the Resurrect Pages extension can help you get access to the information that you need using a cached version. Links Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article

  • My co-worker has not been doing such a good job for the past decade. What do I do? [closed]

    - by stijn
    Possible Duplicate: How do I approach a coworker about his or her code quality? I started working with him almost a decade ago, and back then I had never really programmed before, being a young hardware engineer. Right now, however, I have made quite some progress in all areas of software design, and I am much, much more skilled than my co-worker, who is 15 years older and has been programming more than twice as long. He is super nice and definitely smart enough, but lately his lack of skill and performance is starting to drag me down, because we're more and more working on the same codebase. And soon we are going to make a quite ambitious start from scratch, creating a whole new hardware/software system. I feel it is time to address all the issues now, but I do not know how to start. Here are some of the things that I would like to see him improve on: no consistent usage of style, spaces or tabs (e.g. if(something ) a =b ); he adds newlines around pieces of code to make it easier to read, then commits those with messages like 'no changes made'; overall, commit messages are useless and so are most of the comments, if there are any (e.g. 'remove solves for bug Rik' if Rik reported a bug), and there is no function/class documentation; lots of spelling errors, in both English and his native language, which are sometimes mixed; nesting 6/7/8 levels deep is no exception, and a lot of functions start with one level already, like if(ptr!=Null){, even when ptr is the result of allocation via new in the constructor; numerous source files have over 10k lines, and of those lines a major part is simply the result of copy-pasting functionality instead of using a function (this includes copying comments, so we end up with 50 occurrences of var=NULL; //TODO TEST this!!!!!!!), while another part is hundreds of lines of dead code; he knows what versioning does, yet comments out old code and places new code underneath it when making changes; coding skills are below par, especially for the type of rather high-precision applications we do, yet somehow, after a lot of trying and testing, stuff starts to work, but then breaks again some time later because every change causes a waterfall effect; he violates every single item in the C++ FAQ Lite and practices every bad practice I can think of; he still doesn't know how to properly use the debugger, but spends hours inspecting messy logfiles in Notepad on a tiny laptop screen; he does not make any adjustments to the settings of the software he uses, never uses keyboard shortcuts, and does not seem to progress or learn new things at all; he works rather slowly, mostly due to the lack of planning and incorrect usage of tools. How does one deal with this? For starters, how do I make him aware of all these problems? Should I tell the staff about it? And the next step: how do I get him to learn new things and adopt another way of working?

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images, including the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendered automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendered? The limitation is mentioned with no explanation in the last note in this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts.In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regards to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality, that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My Questions is: What do you think is the best way to achieve this, firstly, in terms design architecture, and secondly solution structure. What patterns/frameworks do you know that exist & support this kind of thing? My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use use a code-first approach in EF4, and use the ObjectContext to create a database when the module is initialized. However this means that all of the entities that my module encapsulates would be disconnected from the rest of the application because they belonged to an abstracted ObjectContext. Further - Creating appropriate indexes and references between domain entities and the module's entities would be impossible to do practically. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all) though. I like the idea of building on proven patterns & practices to encapsulate established, reusable functionality. I thought of abandoning Entity Framework for the Module, and just creating a separate DB schema for the module with its own set of stored procedures & ADO.Net. Then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for application developing outside of the application, I would want to use Entity Framework and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
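
    As one possible shape for the 'black box' module idea described above, here is a small, hedged C# sketch: a module contract that owns its own context and checks/creates its schema when it is registered. The IModule interface, the class names and the reliance on EF code-first's CreateIfNotExists are assumptions for illustration, not an established framework.

        using System.Data.Entity;

        // Contract every pluggable module implements: it owns its entities,
        // its DbContext and the job of making sure its schema exists.
        public interface IModule
        {
            void EnsureSchema();
        }

        // Example entity and context belonging solely to this module.
        public class AuditEntry
        {
            public int Id { get; set; }
            public string Message { get; set; }
        }

        public class AuditContext : DbContext
        {
            public DbSet<AuditEntry> Entries { get; set; }
        }

        public class AuditModule : IModule
        {
            // Called once at application start-up, when the module is registered.
            public void EnsureSchema()
            {
                using (var context = new AuditContext())
                {
                    // Creates the database (and the module's tables) if it does not already exist.
                    context.Database.CreateIfNotExists();
                }
            }
        }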

    Read the article

  • What is 'Ubuntu Unity' (for the Desktop)?

    - by Martin
    Ok, so there's the buzz about Canonical (wanting to) switch, for the new Ubuntu version, from the default GNOME desktop to their own Unity shell. (I hope that's accurate.) It seems I cannot totally fathom what Unity actually is. Looking at its homepage, it is currently firmly targeted at netbooks and the somewhat different usage model on those. Is it a classical desktop? -- Taskbar? Shortcuts? Is the difference between Ubuntu(GNOME)+Unity more/less pronounced than the difference between Ubuntu and Kubuntu? Will "my parents" be able to get the interface if they've been using the classical GNOME desktop so far? Edit: I would not like to split this up into more specific questions, as What is Unity? is exactly what the people I set up Ubuntu boxes for will ask me if they hear that the newer Ubuntu version is using that instead of the Desktop -- and it might well happen that someone phrases it like that :-) I will certainly not give them the link to the homepage, as the explanation there does not lay out whether it is a desktop or something more or something less. (It does not for me - therefore I'm asking here.) Unity is designed for netbooks and related touch-based devices. It includes [...] that makes it fast and easy to access [...] while removing screen elements that are rarely used in mobile and netbook computing. (emphasis mine) -- the explanation there doesn't even mention the desktop PC! Unity has a vertical task management panel on the left-hand side and a menu panel at the top of the screen. [...] This sounds like a re-themed normal desktop. Clicking on an icon will give the target application focus if it is already running or launch it if it is not already running. If you click the ... Aha. Sounds like Windows 7. ... icon of an application that already has focus, Unity will activate an Expose-style view of all the open windows associated with that application. No clue what that's supposed to be. So it would really be nice if someone could explain, for those of us who aren't experts in desktop design terms, what Unity is.

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use combined mass and moment of inertia to calculate and move both ships. The trick is, being sure that the point at which they are coupled remains the same, and they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course-- but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship by the others. What I'd like some help with is what calculations to make in order to get a nice, seamless relative world to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
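
    A minimal sketch of the one-way case ('ship relative to planet') in C#, assuming row-vector (XNA-style) matrix conventions and System.Numerics: the child captures its transform relative to the master at the moment of attachment, and its world matrix is then re-derived from the master every frame. The class and member names are made up for illustration.

        using System.Numerics;

        // One-way attachment: a ship "enters" a planet's space and from then on
        // moves with it, while still being free to move relative to the planet.
        class Attachable
        {
            public Matrix4x4 World = Matrix4x4.Identity;          // absolute transform
            public Matrix4x4 LocalToParent = Matrix4x4.Identity;  // transform relative to the master

            // Capture the current relative transform so nothing visibly jumps on attach.
            public void AttachTo(Attachable parent)
            {
                Matrix4x4 inverseParent;
                Matrix4x4.Invert(parent.World, out inverseParent);
                LocalToParent = World * inverseParent; // child world expressed in parent space
            }

            // Re-derive the absolute transform from the (possibly moved) parent each frame.
            public void UpdateFrom(Attachable parent)
            {
                World = LocalToParent * parent.World;
            }
        }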

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database:

    Config Table:
    ======================================
    Start_Range | End Range | Config_id
    10          | 15        | 1
    ======================================

    Available_UserIDs:
    ==========================
    ID | UserID | Used_YN |
    1  | 10     | t       |
    1  | 11     | f       |
    1  | 12     | f       |
    1  | 13     | f       |
    1  | 14     | f       |
    1  | 15     | f       |
    ==========================

    Users:
    ==========================
    UserId | FName | LName |
    10     | John  | Doe   |
    ==========================

    This is used in a reservation system of sorts... which lets an administrator specify a range of numbers that will be assigned to users in the config table. Once the range has been defined, the system then populates the Available_UserIDs table with all the numbers in between the range, and sets the Used_YN flag to false. As users sign up, they grab the next user_id number that's not in use... and reserve it. Then the system adds a record to the Users table. Once the admin has specified a range, it is possible that they can change it. For example, they can start with 10-15... and then when the range is used up, they should be able to specify another range like 16 - 99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table - to ensure that UserIds can't be duplicated. My questions are as follows: What's the best way to prevent the admins from using a range that's already in use? I thought of the following options: -- check either the Users table to see if the start range or ending range numbers are being used. If they are, assume that all the numbers in between are in use too, and reject the range. -- let them specify whatever they want, try to populate the Available_UserIDs table. If there are duplicates, just ignore that specific error message from the database and continue on. How do I find gaps in the number ranges? For example, if they specify 10-15, and then 20-25, it'd be nice to be able to somehow suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql But it only seems to return the first available number... so in my example above, it would only return the number 16. I'm sure there's a simpler way to do things that I'm overlooking!

    Read the article

  • How can I set up Friendly URL to Nginx?

    - by MKK
    I'm trying to use dokuwiki with its Friendly URL on Nginx. The problem that I'm facing is, it doesn' show correct path to any link(even stylesheet, and images) on every page It looks that paths are missing wiki/ part. If I click on the image and show its destination, it shows this url http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png But it has to be this below. http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png and login URL is not working either. If I click on login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found I took a look at this https://www.dokuwiki.org/rewrite#nginx and tried as much as possible. However it still doesn't work. Here's my conf files. How can I fix this problem? dokuwiki is set in /usr/share/wiki /etc/nginx/conf.d/rails.conf upstream sample { ip_hash; server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0; } server { listen 80; server_name foo-sample.com; root /var/www/html/foo-sample/public; location /wiki { alias /usr/share/wiki; index doku.php; } location ~ ^/wiki.+\.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index doku.php; fastcgi_split_path_info ^/wiki(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } /usr/share/wiki/.htaccess ## Enable this to restrict editing to logged in users only ## You should disable Indexes and MultiViews either here or in the ## global config. Symlinks maybe needed for URL rewriting. #Options -Indexes -MultiViews +FollowSymLinks ## make sure nobody gets the htaccess files <Files ~ "^[\._]ht"> Order allow,deny Deny from all Satisfy All </Files> # Uncomment these rules if you want to have nice URLs using # $conf['userewrite'] = 1 - not needed for rewrite mode 2 # Not all installations will require the following line. If you do, # change "/dokuwiki" to the path to your dokuwiki directory relative # to your document root. # If you enable DokuWikis XML-RPC interface, you should consider to # restrict access to it over HTTPS only! Uncomment the following two # rules if your server setup allows HTTPS. RewriteCond %{HTTPS} !=on RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301] <IfModule mod_geoip.c> GeoIPEnable On Order deny,allow deny from all SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry Allow from .googlebot.com Allow from .yahoo.net Allow from .msn.com Allow from env=AllowCountry </IfModule>

    Read the article

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again). Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update - which seems necessary, because a leaf node (eg a behaviour, like 'return to base') needs to be constantly checked to see if is still running, failed or completed. Can anyone suggest how I might start implementing a tree that isnt run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)? My thoughts so far: action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list). the return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node - so the tree has so far determined that we ought to still be trying this action) - that the action has completed, or is still running etc. If my actual action code is running from an action list on an entity, then I possibly need to cancel previously running actions in the list - i am thinking that I can just delete the entire stack of queued up actions. I've seen the idea of ActionLists which block lower priority actions when a higher priority one is added, but this seems like very close logic to behaviour trees, and I dont want to be duplicating behaviour. This leaves me with some questions 1) How would I feed the action return state back into the tree? Its obvious I need to store some information relating to 'currently executing actions' on the entity, and check that in the tree tick, but I can't imagine how. 2) Does having a seperate behaviour tree (for deciding behaviour) and action list (for carrying out actual queued up actions) sound like a reasonable approach? 3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting ai search time when you have a lot of ai entities to process. (Edit) - I am also thinking about storing a single instance of a given behaviour tree in memory, and providing it by reference to any entity that uses it. So any information about what action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as i go!) I hope I have expressed my questions adequately! Thanks in advance for any help :)
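
    To make the irregular-update idea concrete, here is a small, hedged C# sketch of one possible arrangement: the tree only re-evaluates every N frames and enqueues action objects, while the entity's action list runs every frame and feeds the last action's status back for the next tree tick. All class names are made up for illustration; this is a sketch of the approach described above, not a complete behaviour tree.

        using System.Collections.Generic;

        enum Status { Running, Succeeded, Failed }

        // An action that actually drives the entity; executed from the entity's
        // action list every frame, independently of how often the tree is ticked.
        interface IAction
        {
            Status Update(float dt);
        }

        class Entity
        {
            public readonly Queue<IAction> Actions = new Queue<IAction>();
            public Status LastActionStatus = Status.Succeeded; // fed back into the tree

            public void UpdateActions(float dt)
            {
                if (Actions.Count == 0) return;
                LastActionStatus = Actions.Peek().Update(dt);
                if (LastActionStatus != Status.Running)
                    Actions.Dequeue(); // pop completed or failed actions
            }
        }

        // One runner instance per entity, so each entity keeps its own frame counter.
        class BehaviourTreeRunner
        {
            private readonly int framesBetweenTicks;
            private int frameCounter;

            public BehaviourTreeRunner(int framesBetweenTicks)
            {
                this.framesBetweenTicks = framesBetweenTicks;
            }

            // Called every frame; the expensive tree search only happens every N frames,
            // using the entity's last action status as context.
            public void Update(Entity entity, float dt)
            {
                if (++frameCounter >= framesBetweenTicks)
                {
                    frameCounter = 0;
                    TickTree(entity);
                }
                entity.UpdateActions(dt);
            }

            private void TickTree(Entity entity)
            {
                // Placeholder: walk the tree, consult entity.LastActionStatus for the
                // node selected last time, and enqueue new IAction objects as needed.
            }
        }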

    Read the article

  • How are you coping with Ubuntu's Unity app launcher? (It auto-hides, can't minimize apps)

    - by Bad Learner
    [Firstly, let me tell you that this cannot be subjective in anyway, as I think at least Ubuntu beginners will have these questions boggling in their mind; and yes, this is a question that has a definite answer - - so, I am completely within the rules.] Okay, coming to the point, I see that Ubuntu uses Unity since v10.xx (netbook edition?) and carried the same to v11.04 & v11.10. As someone who's stuck to Windows for all these years, it's somewhat difficult to cope with Ubuntu's Unity, for the following reasons: [1] The Unity app launcher (to the screen's left) auto-hides when a window is maximized. [2]- And once launched, apps cannot be minimized by clicking the app's icon in the launcher. I have to go to the top-left of the screen and click the "_" button. I do know I can fix these issues by installing some configuration tool. But the thing is, if that's how it's meant to work, Canonical/Ubuntu would have designed it that way. But they didn't. Why? w.r.t above points [1], [2]: [1] EDITED: So, does it mean, it's good to work without maximizing the windows? Because if I maximize the window, the app launcher hides. And I need to hover the mouse to the left of the screen, wait a bit (even if it's a sec or even less, I can still feel the lag), and then click on the next app icon in the launcher to switch to it. I do know, I can use Alt+TAB to switch, but I am not sure which window comes next. This, I feel, isn't productive. Also, this makes me feel, Ubuntu is designed for large screens (it's nice on my 1920x1080p screen), because I can have two windows side-by-side or something like that on a large screen. This is not possible on smaller screens. [2]- Being able to minimize an application's window by clicking on its icon in the launcher (just like it works on Windows & probably elsewhere) would have been great, rather than having to go to the top-left and clicking the _ (minimize) button which brings up the app launcher itself (from hiding) most of the time. It's too tiring to have these small issues in the UI. I really would like to know how you are coping with these issues the way they are?

    Read the article

  • "Hello World" in C++ AMP

    - by Daniel Moth
    Some say that the equivalent of "hello world" code in the data parallel world is matrix multiplication :) Below is the before C++ AMP and after C++ AMP code. For more on what it all means, watch the recording of my C++ AMP introduction (the example below is part of the session). void MatrixMultiply(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W ) { for (int y = 0; y < M; y++) { for (int x = 0; x < N; x++) { float sum = 0; for(int i = 0; i < W; i++) { sum += vA[y * W + i] * vB[i * N + x]; } vC[y * N + x] = sum; } } } Change the function to use C++ AMP and hence offload the computation to the GPU, and now the calling code (which I am not showing) needs no changes and the overall operation gives you really nice speed up for large datasets…  #include <amp.h> using namespace concurrency; void MatrixMultiply(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W ) { array_view<const float,2> a(M, W, vA); array_view<const float,2> b(W, N, vB); array_view<writeonly<float>,2> c(M, N, vC); parallel_for_each( c.grid, [=](index<2> idx) mutable restrict(direct3d) { float sum = 0; for(int i = 0; i < a.x; i++) { sum += a(idx.y, i) * b(i, idx.x); } c[idx] = sum; } ); } Again, you can understand the elements above, by using my C++ AMP presentation slides and recording… Stay tuned for more… Comments about this post welcome at the original blog.

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github and choosing public domain licensing. I would like to to be super simple for users to make use of and just FTP it up and go. That being said, do I need to make sure I remove things like the JQuery file, and other GPL / MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code and all of it open source, it would just be nice if users could download everything at once while of course not trying to represent that I am the license holder of the dependencies. Inside my files are also some snippets, do those have to be externalized with installation instructions or can it be posted as is? Here is an example, my nav.php file is 115 lines long and I have these at the top: <script type="text/javascript" src="./js/ddaccordion.js"> /*********************************************** * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com) * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts * This notice must stay intact for legal use ***********************************************/ </script> <link href="css/admin.css" rel="stylesheet"> <script type="text/javascript"> ddaccordion.init({ headerclass: "submenuheader", //Shared CSS class name of headers group contentclass: "submenu", //Shared CSS class name of contents group revealtype: "click", //Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover" mouseoverdelay: 200, //if revealtype="mouseover", set delay in milliseconds before header expands onMouseover collapseprev: false, //Collapse previous content (so only one open at any time)? true/false defaultexpanded: [], //index of content(s) open by default [index1, index2, etc] [] denotes no content onemustopen: false, //Specify whether at least one header should be open always (so never all headers closed) animatedefault: false, //Should contents open by default be animated into view? persiststate: true, //persist state of opened contents within browser session? toggleclass: ["", ""], //Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"] togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], //Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs) animatespeed: "fast", //speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow" oninit:function(headers, expandedindices){ //custom code to run when headers have initalized //do nothing }, onopenclose:function(header, index, state, isuseractivated){ //custom code to run whenever a header is opened or closed //do nothing } }) </script>

    Read the article

  • Techniques to re-factor garbage and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of c# spaghetti, and need to add something or remove something... but I have challenges everywhere from functions passing arguments that doesn't make sense, someone who doesn't understand data structures abusing strings, redundant variables, some comments are red-hearings, internationalization is on a per-every-output-level, SQL doesn't use any kind of DBAL, database connections are left open everywhere... Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold. Here's some samples from the same 500 line function: protected void DoSave(bool cIsPostBack) { //ALWAYS a cPostBack cIsPostBack = true; SetPostBack("1"); string inCreate ="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"; parseValues = new string []{"","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","",""}; if (!cIsPostBack) { //....... //.... //.... if (!cIsPostBack) { } else { } //.... //.... strHPhone = StringFormat(s1.Trim()); s1 = parseValues[18].Replace(encStr," "); strWPhone = StringFormat(s1.Trim()); s1 = parseValues[11].Replace(encStr," "); strWExt = StringFormat(s1.Trim()); s1 = parseValues[21].Replace(encStr," "); strMPhone = StringFormat(s1.Trim()); s1 = parseValues[19].Replace(encStr," "); //(hundreds of lines of this) //.... //.... SQL = "...... lots of SQL .... "; SqlCommand curCommand; curCommand = new SqlCommand(); curCommand.Connection = conn1; curCommand.CommandText = SQL; try { curCommand.ExecuteNonQuery(); } catch {} //.... } I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledgebase on how to do this sort of thing, finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,
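
    On the 'functional integrity' point, one common technique is characterization (golden master) tests: before refactoring, capture what the legacy code actually does today, however ugly, so the tests fail if an 'improvement' changes behaviour. Below is a minimal, hedged C# sketch using xUnit; the class under test and the expected value are placeholders, not the real DoSave code.

        using Xunit;

        // Characterization test: we don't assert what the legacy code *should* do,
        // only that it keeps doing whatever it does right now.
        public class LegacyPhoneFormatterTests
        {
            [Fact]
            public void FormatsHomePhoneTheWayItAlwaysHas()
            {
                var formatter = new LegacyPhoneFormatter();

                string result = formatter.Format(" 555~123~4567 ");

                // Expected value recorded from the current build, warts and all.
                Assert.Equal("5551234567", result);
            }
        }

        // Stand-in for the legacy code under test (hypothetical).
        public class LegacyPhoneFormatter
        {
            public string Format(string raw)
            {
                return raw.Trim().Replace("~", "");
            }
        }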

    Read the article

  • Why does my code dividing a 2D array into chunks fail?

    - by Borog
    I have a 2D-Array representing my world. I want to divide this huge thing into smaller chunks to make collision detection easier. I have a Chunk class that consists only of another 2D Array with a specific width and height and I want to iterate through the world, create new Chunks and add them to a list (or maybe a Map with Coordinates as the key; we'll see about that). world = new World(8192, 1024); Integer[][] chunkArray; for(int a = 0; a < map.getHeight() / Chunk.chunkHeight; a++) { for(int b = 0; b < map.getWidth() / Chunk.chunkWidth; b++) { Chunk chunk = new Chunk(); chunkArray = new Integer[Chunk.chunkWidth][Chunk.chunkHeight]; for(int x = Chunk.chunkHeight*a; x < Chunk.chunkHeight*(a+1); x++) { for(int y = Chunk.chunkWidth*b; y < Chunk.chunkWidth*(b+1); y++) { // Yes, the tileMap actually is [height][width] I'll have // to fix that somewhere down the line -.- chunkArray[y][x] = map.getTileMap()[x*a][y*b]; // TODO:Attach to chunk } } chunkList.add(chunk); } } System.out.println(chunkList.size()); The two outer loops get a new chunk in a specific row and column. I do that by dividing the overall size of the map by the chunkSize. The inner loops then fill a new chunkArray and attach it to the chunk. But somehow my maths is broken here. Let's assume the chunkHeight = chunkWidth = 64. For the first Array I want to start at [0][0] and go until [63][63]. For the next I want to start at [64][64] and go until [127][127] and so on. But I get an out of bounds exception and can't figure out why. Any help appreciated! Actually I think I know where the problem lies: chunkArray[y][x] can't work, because y goes from 0-63 just in the first iteration. Afterwards it goes from 64-127, so sure it is out of bounds. Still no nice solution though :/ EDIT: if(y < Chunk.chunkWidth && x < Chunk.chunkHeight) chunkArray[y][x] = map.getTileMap()[y][x]; This works for the first iteration... now I need to get the commonly accepted formula.
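
    For what it's worth, here is a hedged C# sketch of the usual fix for the indexing problem described above: keep the loop variables global to the map, and subtract the chunk's origin when writing into the chunk-local array, so every write stays inside [0, chunkSize). The names only loosely mirror the question; this is an illustration, not the original project's classes.

        using System.Collections.Generic;

        static class ChunkSplitter
        {
            // Split a tile map into chunkWidth x chunkHeight chunks.
            // Chunk-local index = global index minus the chunk's origin.
            public static List<int[,]> SplitIntoChunks(int[,] map, int chunkWidth, int chunkHeight)
            {
                int worldHeight = map.GetLength(0);
                int worldWidth = map.GetLength(1);
                var chunks = new List<int[,]>();

                for (int cy = 0; cy < worldHeight / chunkHeight; cy++)
                {
                    for (int cx = 0; cx < worldWidth / chunkWidth; cx++)
                    {
                        var chunk = new int[chunkHeight, chunkWidth];
                        for (int y = cy * chunkHeight; y < (cy + 1) * chunkHeight; y++)
                        {
                            for (int x = cx * chunkWidth; x < (cx + 1) * chunkWidth; x++)
                            {
                                // Subtracting the origin keeps the indices chunk-local.
                                chunk[y - cy * chunkHeight, x - cx * chunkWidth] = map[y, x];
                            }
                        }
                        chunks.Add(chunk);
                    }
                }
                return chunks;
            }
        }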

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages. In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:             var context = new AdventureWorksEntities();             var query = from od in context.SalesOrderDetail                         where od.ModifiedDate >= modified                          && od.SalesOrderDetailID.CompareTo(id) > 0                         orderby od.SalesOrderDetailID                         select od;   We can find the maximum SalesOrderDetailID like this:             var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);   which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:             Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;             var maxIdInefficiently = query.Max(idFunc);   This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to the Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:             Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;             var maxIdEfficientlyAgain = query.Max(idExpression);   - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.
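
    The same trap is easy to reproduce outside Entity Framework with any IQueryable source: an Expression<Func<...>> binds to Queryable.Max and stays translatable by the provider, while a plain Func<...> binds to Enumerable.Max and pulls the rows into memory first. Here is a minimal, self-contained C# sketch of the difference, using an in-memory IQueryable just to show which overload is chosen; the entity name is made up.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        class Detail
        {
            public int SalesOrderDetailId { get; set; }
        }

        class Program
        {
            static void Main()
            {
                IQueryable<Detail> query = Enumerable.Range(1, 5)
                    .Select(i => new Detail { SalesOrderDetailId = i })
                    .AsQueryable();

                // Binds to Queryable.Max: the lambda is an expression tree, so a real
                // provider (e.g. Entity Framework) can translate it into SELECT MAX(...).
                Expression<Func<Detail, int>> idExpression = d => d.SalesOrderDetailId;
                int maxViaExpression = query.Max(idExpression);

                // Binds to Enumerable.Max: a compiled delegate cannot be translated,
                // so the whole result set is enumerated and the max computed in memory.
                Func<Detail, int> idFunc = d => d.SalesOrderDetailId;
                int maxViaFunc = query.Max(idFunc);

                Console.WriteLine("{0} {1}", maxViaExpression, maxViaFunc);
            }
        }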

    Read the article

  • need some concrete examples on user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    Little heads-up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty, throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum. Here are my problems: I know what a functional spec is (sample): do all user stories and/or scenarios become part of the functional spec? I know what user stories and tasks are. Are these kinda user stories? I don't see any business value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager. Can add a financial plan into the system because, well, that's the point of it? What business value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature? How do I go into the finer details and system definitions: Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it into the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create backlog list/features out of this? Where are the user stories for what a blog article is and how one adds/removes it? Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories in it, with UI mockups? And will these user stories then branch out into tasks which will make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs? Some concrete examples/answers to my questions would be nice!!

    Read the article

  • Rendering My Fault Message

    - by onefloridacoder
    My coworkers setup  a nice way to get the fault messages from our service layer all the way back to the client’s service proxy layer.  This is what I needed to work on this afternoon.  Through a series of trials and errors I finally figured this out. The confusion was how I was looking at the exception in the quick watch viewer.  It appeared as though the EventArgs from the service call had somehow magically been cast back to FaultException(), not FaultException<T>.  I drilled into the EventArgs object with the quick watch and I copied this code to figure out where the Fault message was hiding.  Further, when I copied this quick watch code into the IDE I got squigglies.  Poop. 1: ((System 2: .ServiceModel 3: .FaultException<UnhandledExceptionFault>)(((System.Exception)(e.Result)).InnerException)).Detail.FaultMessage I wont bore you with the details but here’s how it turned out.   EventArgs which I’m calling “e” is not the result such as some collection of items you’re expecting.  It’s actually a FaultException, or in my case FaultException<T>.  Below is the calling code and the callback to handle the expected response or the fault from the completed event. 1: public void BeginRetrieveItems(Action<ObservableCollection<Model.Widget>> FindItemsCompleteCallback, Model.WidgetLocation location) 2: { 3: var proxy = new MyServiceContractClient(); 4:  5: proxy.RetrieveWidgetsCompleted += (s, e) => FindWidgetsCompleteCallback(FindWidgetsCompleted(e)); 6:  7: RetrieveWidgetsRequest request = new RetrieveWidgetsRequest { location.Id }; 8:  9: proxy.RetrieveWidgetsAsync(request); 10: } 11:  12: private ObservableCollection<Model.Widget> FindItemsCompleted(RetrieveWidgetsCompletedEventArgs e) 13: { 14: if (e.Error is FaultException<UnhandledExceptionFault>) 15: { 16: var fault = (FaultException<UnhandledExceptionFault>)e.Error; 17: var faultDetailMessage = fault.Detail.FaultMessage; 18:  19: UIMessageControlDuJour.Show(faultDetailMessage); 20: return new ObservableCollection<BinInventoryItemCountInfo>(); 21: } 22:  23: var widgets = new ObservableCollection<Model.Widget>(); 24:  25: if (e.Result.Widgets != null) 26: { 27: e.Result.Widgets.ToList().ForEach(w => widgets.Add(this.WidgetMapper.Map(w))); 28: } 29:  30: return widgets; 31: }

    Read the article

  • JavaOne 2012: Camel, Twitter, Coherence, Wicket and GlassFish

    - by Bruno.Borges
    Before joining Oracle as Product Manager for WebLogic and GlassFish for Latin America, at the beginning of this year I proposed two talks to JavaOne USA that I had been presenting in Brazil for quite a while. One of them I presented last year at ApacheCon in Vancouver, Canada, as well as at JavaOne Brazil. In June I got the news that they were accepted as Alternate Sessions. Surprisingly enough, a few weeks later, and at the same time I joined Oracle, I received the news that they were officially accepted and put on the schedule. Tomorrow I'll be flying to San Francisco, to my first JavaOne in the United States, and I wanted to share with you what I'm going to present there. My two sessions are these ones: Wed, 10/03, 4:30pm - CON2989 Leverage Enterprise Integration Patterns with Apache Camel and Twitter. In this one, you will be introduced to the Apache Camel framework that I had been talking about at conferences in Brazil before joining Oracle, and to a component I contributed to integrate with Twitter. Also, you will have a preview of a new component I've been working on to integrate Camel with the Oracle Coherence distributed cache. Thu, 10/04, 3:30pm - CON3395 How Scala, Wicket, and Java EE Can Improve Web Development. This one I've been working on for quite a while. It was based on an idea to have an architecture that could be as agile as frameworks and technologies such as Ruby on Rails, PHP or Python, for rapid web development. You will be introduced to the Apache Wicket framework, another Apache project I enjoy working with and gave lots of talks about at Brazilian conferences, including JavaOne Brazil, JustJava, QCon SP, and The Developers Conference. You will also be introduced to the Scala language and how to create nice DSLs to boost productivity. And last but not least, the Java EE 6 platform, which offers an awesome improvement over previous versions with its CDI, JPA, EJB3 and JAX-RS features for web development. Other events I will be participating in during my stay in SF: Geeks Bike Ride, GlassFish Community Event, GlassFish and Friends Party. If you have any other event to suggest, please do suggest it! It's my first JavaOne and I'm really looking forward to enjoying everything. See you guys in a few days!!

    Read the article
