Search Results

Search found 13430 results on 538 pages for 'easy'.


  • OAuth with RestSharp in Windows Phone

    - by midoBB
    Nearly every major API provider uses OAuth for user authentication, and while the concept is easy to understand, using it in a Windows Phone app isn't entirely straightforward. So for this quick tutorial we will be using RestSharp for WP7 and the API from getglue.com (an entertainment site) to authorize the user. The first step is to get the OAuth request token and then redirect our browserControl to the authorization URL:

        private void StartLogin()
        {
            var client = new RestClient("https://api.getglue.com/");
            client.Authenticator = OAuth1Authenticator.ForRequestToken("ConsumerKey", "ConsumerSecret");
            var request = new RestRequest("oauth/request_token");
            client.ExecuteAsync(request, response =>
            {
                _oAuthToken = GetQueryParameter(response.Content, "oauth_token");
                _oAuthTokenSecret = GetQueryParameter(response.Content, "oauth_token_secret");
                string authorizeUrl = "http://getglue.com/oauth/authorize" + "?oauth_token=" + _oAuthToken + "&style=mobile";
                Dispatcher.BeginInvoke(() => { browserControl.Navigate(new Uri(authorizeUrl)); });
            });
        }

        private static string GetQueryParameter(string input, string parameterName)
        {
            foreach (string item in input.Split('&'))
            {
                var parts = item.Split('=');
                if (parts[0] == parameterName)
                {
                    return parts[1];
                }
            }
            return String.Empty;
        }

    Then we listen to the browser's Navigating event:

        private void Navigating(Microsoft.Phone.Controls.NavigatingEventArgs e)
        {
            if (e.Uri.AbsoluteUri.Contains("oauth_callback"))
            {
                var arguments = e.Uri.AbsoluteUri.Split('?');
                if (arguments.Length < 1) return;
                GetAccessToken(arguments[1]);
            }
        }

        private void GetAccessToken(string uri)
        {
            var requestToken = GetQueryParameter(uri, "oauth_token");
            var client = new RestClient("https://api.getglue.com/");
            client.Authenticator = OAuth1Authenticator.ForAccessToken(ConsumerKey, ConsumerSecret, _oAuthToken, _oAuthTokenSecret);
            var request = new RestRequest("oauth/access_token");
            client.ExecuteAsync(request, response =>
            {
                AccessToken = GetQueryParameter(response.Content, "oauth_token");
                AccessTokenSecret = GetQueryParameter(response.Content, "oauth_token_secret");
                UserId = GetQueryParameter(response.Content, "glue_userId");
            });
        }

    Now to test it we can access the user's friends list:

        var client = new RestClient("http://api.getglue.com/v2");
        client.Authenticator = OAuth1Authenticator.ForProtectedResource(ConsumerKey, ConsumerSecret, AccessToken, AccessTokenSecret);
        var request = new RestRequest("/user/friends");
        request.AddParameter("userId", UserId, ParameterType.GetOrPost);
        // request.AddParameter("category", "all", ParameterType.GetOrPost);
        client.ExecuteAsync(request, response => { TreatFreindsList(); });

    And that's it: now we can access all OAuth methods using RestSharp.

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file rather than have to type this by hand. To show how this works, download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the below picture you can see that you specify a prefix and suffix for the filter name, the core namespace from the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be and the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint to forward the message to the required destination. Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF Routing configuration if you had a large number of routing rules. If you had additional methods in other services you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning there will be only one method that clients will call to do a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes" to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time, simplify the log to only include the last applicable change for each object, attach the serialized values for the affected objects, and return this all to the client. My question then is, how do I unit-test this entry point method? Obviously, each class would have thorough unit tests to ensure that their functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not send in each of these internal classes IoC-style, because they're small and are only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:

    1. Create a "private" interface header for the tests with methods that call the internal classes and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out and just test that the methods are called with the right arguments.
    2. Write a series of fatter tests for the entry point without mocking out anything, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.
    3. Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making the construction of the service annoying, and leaks internal details to the client.
    4. Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer back-end to this one. This would ensure that the client code still sees the easy/pretty initializer and no internals are leaked.

    I'm sure there's more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit testing best practices? Especially considering I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.
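    Not part of the original question, but the fourth option is essentially constructor injection with a non-public overload. Purely to show the shape (the question is about Objective-C; this sketch uses C# and invented names):

        // Illustrative only: all names are invented for this sketch.
        using System;

        public interface IChangeLogStore { string[] ReadChangesSince(DateTime since); }

        public class SyncService
        {
            private readonly IChangeLogStore _store;

            // Public initializer that client code sees.
            public SyncService() : this(new DefaultChangeLogStore()) { }

            // Non-public initializer that tests use to supply a mock store.
            internal SyncService(IChangeLogStore store) { _store = store; }

            public string[] ChangesSince(DateTime since) => _store.ReadChangesSince(since);
        }

        internal class DefaultChangeLogStore : IChangeLogStore
        {
            // Real implementation would live here; stubbed for the sketch.
            public string[] ReadChangesSince(DateTime since) => new string[0];
        }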

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: Attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            public class Character extends Sprite {
                public var _strength:int; // etc.
                public function Character(_class:Object) {
                    _strength = _class._strength; // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength:50 // etc.
                }
                // etc.
            }
        }

        // Some other class in which characters/monsters are created.
        // Create a new instance of Character
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure if it would be efficient or even make sense considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method? As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. Seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?

    Read the article

  • Simulate 'Shock absorption' with tire rubber in PhysX (2.8.x)

    - by Mungoid
    This is a kinda tricky question and I fear there is no easy enough solution, but I figured I'd hit SE up before giving up on it and just doing what I can. A machine I am working on has no suspension or shocks or springs of any sort in the real machine, so you would think that when it drives over bumps it would shake like crazy, but because its tires (6 of them) are quite large they seem to absorb a lot of shock from the bumps. Part of this is because the machine is around 30k lbs and it just smashes/compresses any bumps in the ground down (this is another issue I'm still working on) and the other part is that the tires seem to have a lot of flex to them with a lot of air as well. So my current task is to simulate shock absorption in PhysX without visibly separating the tires from the spindle/axle. I have been messing with all kinds of NxMaterial, NxSpring, Joints, etc. and have had no luck getting this to work. The main problem is that the spindle attached to the tire is directly in the center and the axle is basically solidly attached to the chassis, so if I give it any spring or suspension travel, that spindle on the tires will move upwards or downwards, looking very odd because now it's no longer in the center of the tire. I tried giving it a higher restitution but that just makes it bouncy without any shock absorption. Another avenue I am messing with is to actively smooth the terrain in front of the tires so that before it hits a bumpy patch, that patch is smoothed and it doesn't bounce. The only issue with this is that it is pretty expensive to do with 6 tires, high tesselation of the terrain and other complex things going on at the same time in this simulation. I am still working on this but I am hoping to mix and match a few different aspects to get the best possible outcome. This is a bit of a complex issue so I'm not expecting anyone to have a definitive answer, just hoping someone may think of something I haven't =-) Side note: Yes, I know PhysX 2.8.x is quite outdated but we have to stick with it for this implementation. We are in the process of going to another physics engine but it is out of scope to apply that engine to this project.

    Read the article

  • How to avoid oscillation in async event-based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model - view data binding in MVC. Now I intend to describe these kinds of systems with data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event a data source will change its state as described in the event. By publishing an event the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected from the hub or from the other data sources, and that can put the system into an infinite oscillation (async) or an infinite loop (sync). For example:

        A -- data source
        B -- data source
        H -- hub

        A -> H -> A            -- reflection from the hub
        A -> H -> B -> H -> A  -- reflection from another data source

    With sync it is relatively easy to solve this issue. You can compare the current state with the event, and if they are equal, you don't change the state and don't raise the same event again. With async I could not find a solution yet. The state comparison does not work with async event handling because there is eventual consistency, and new events can be published in an inconsistent state, causing the same oscillation. For example:

        A(*->x) -> H -> B(y->x)   -- can go parallel with
        B(*->y) -> H -> A(x->y)
        -- so first A changes to x state while B changes to y state
        -- then B changes to x state while A changes to y state
        -- and so on for eternity...

    What do you think: is there an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.? update: I don't think I can make this work without a lot of effort. I think this problem is just the same as we have when syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
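    Not part of the original question, but one common mitigation can be sketched: tag every event with its origin and a Lamport-style version, apply an event only if it advances the local version (with a deterministic tie-break), and never publish while handling. A rough C# sketch with illustrative names:

        // Sketch only: names are illustrative, not from the question.
        using System;

        public record StateEvent(string OriginId, long Version, string State);

        public class DataSource
        {
            private readonly string _id;
            private long _version;
            private string _state = "";

            public DataSource(string id) { _id = id; }

            public event Action<StateEvent> Published;   // the hub subscribes to this and relays

            // Called by the hub for every relayed event.
            public void Handle(StateEvent e)
            {
                if (e.OriginId == _id) return;                                 // our own event echoed back
                if (e.Version < _version) return;                              // stale update
                if (e.Version == _version &&
                    string.CompareOrdinal(e.OriginId, _id) <= 0) return;       // deterministic tie-break
                _version = e.Version;                                          // Lamport-style clock catch-up
                _state = e.State;
                // Handling never publishes, so a relayed event cannot start a new round trip.
            }

            // Called by local code when this source really changes.
            public void Change(string newState)
            {
                _version++;
                _state = newState;
                Published?.Invoke(new StateEvent(_id, _version, newState));
            }
        }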

    Read the article

  • Checking timeouts made more readable

    - by Markus
    I have several situations where I need to control timeouts in a technical application, either in a loop or as a simple check. Of course, handling this is really easy, but none of these is looking cute. To clarify, here is some C# (pseudo) code:

        private DateTime girlWentIntoBathroom;
        girlWentIntoBathroom = DateTime.Now;

        do
        {
            // do something
        } while (girlWentIntoBathroom.AddSeconds(10) > DateTime.Now);

    or

        if (girlWentIntoBathroom.AddSeconds(10) > DateTime.Now)
            MessageBox.Show("Wait a little longer");
        else
            MessageBox.Show("Knock louder");

    Now I was inspired by something I saw in Ruby on StackOverflow, and I'm wondering if this construct can be made more readable using extension methods. My goal is something that can be read like "If girlWentIntoBathroom is more than 10 seconds ago".

    1st attempt:

        if (girlWentIntoBathroom > (10).Seconds().Ago())
            MessageBox.Show("Wait a little longer");
        else
            MessageBox.Show("Knock louder");

    So I wrote an extension for integer that converts the integer into a TimeSpan:

        public static TimeSpan Seconds(this int amount)
        {
            return new TimeSpan(0, 0, amount);
        }

    After that, I wrote an extension for TimeSpan like this:

        public static DateTime Ago(this TimeSpan diff)
        {
            return DateTime.Now.Add(-diff);
        }

    This works fine so far, but has a great disadvantage: the logic is inverted! Since girlWentIntoBathroom is a timestamp in the past, the right side of the equation needs to count backwards: impossible. Just inverting the equation is no solution, because it will invert the read sentence as well.

    2nd attempt:

        if (girlWentIntoBathroom.IsMoreThan(10).SecondsAgo())
            MessageBox.Show("Knock louder");
        else
            MessageBox.Show("Wait a little longer");

    IsMoreThan() needs to transport the past timestamp as well as the span for the extension SecondsAgo(). It could be:

        public static DateWithIntegerSpan IsMoreThan(this DateTime baseTime, int span)
        {
            return new DateWithIntegerSpan() { Date = baseTime, Span = span };
        }

    Where DateWithIntegerSpan is simply:

        public class DateWithIntegerSpan
        {
            public DateTime Date { get; set; }
            public int Span { get; set; }
        }

    And SecondsAgo() is:

        public static bool SecondsAgo(this DateWithIntegerSpan dateAndSpan)
        {
            return dateAndSpan.Date.Add(new TimeSpan(0, 0, dateAndSpan.Span)) < DateTime.Now;
        }

    Using this approach, the English sentence matches the expected behavior. But the disadvantage is that I need a helping class (DateWithIntegerSpan). Has anyone an idea to make checking timeouts look more cute and closer to a readable sentence? Am I a little too insane thinking about something minor like this?
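    For what it's worth, and not from the original post: since subtracting two DateTime values yields a TimeSpan, the check can also be written without any helper class and still reads close to a sentence:

        // Alternative sketch (not from the original post): elapsed time counts upward,
        // so the comparison avoids the inverted logic of the "Ago" attempt.
        if (DateTime.Now - girlWentIntoBathroom > TimeSpan.FromSeconds(10))
            MessageBox.Show("Knock louder");
        else
            MessageBox.Show("Wait a little longer");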

    Read the article

  • Should I continue my self-taught coding practice or learn how to do coding professionally?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is I'm 100% self-taught. It's caused my style to extremely deviate from the style of those that are properly trained. It's the techniques and organization of my code that's different. It's a mixture of several things I do. I tend to blend several programming paradigms together. Like Functional and OO. I lean to the Functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity. Like a game object. Next I also go the simple route when doing something. When in contrast, it seems like sometimes the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than reading the comment. And most cases I just end up reading the code even if there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read it. I hear professionally trained programmers go on and on about things like unit tests. Something I've never used before so I haven't even the faintest idea of what they are or how they work. Lots and lots of underscores "_", which aren't really my taste. Most of the techniques I use are straight from me, or a few books I've read. Don't know anything about MVC, I've heard a lot about it though with things like backbone.js. I think it's a way to organize an application. It just confuses me though because by now I've made my own organizational structures. It's a bit of a pain. I can't use template applications at all when learning something new like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, or wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal any programmer goes through or should I really look to change up my techniques?

    Read the article

  • How can you tell whether to use Composite Pattern or a Tree Structure, or a third implementation?

    - by Aske B.
    I have two client types, an "Observer"-type and a "Subject"-type. They're both associated with a hierarchy of groups. The Observer will receive (calendar) data from the groups it is associated with throughout the different hierarchies. This data is calculated by combining data from 'parent' groups of the group trying to collect data (each group can have only one parent). The Subject will be able to create the data (that the Observers will receive) in the groups they're associated with. When data is created in a group, all 'children' of the group will have the data as well, and they will be able to make their own version of a specific area of the data, but still linked to the original data created (in my specific implementation, the original data will contain time-period(s) and headline, while the subgroups specify the rest of the data for the receivers directly linked to their respective groups). However, when the Subject creates data, it has to check if all affected Observers have any data that conflicts with this, which means a huge recursive function, as far as I can understand. So I think this can be summed up to the fact that I need to be able to have a hierarchy that you can go up and down in, and some places be able to treat them as a whole (recursion, basically). Also, I'm not just aiming at a solution that works. I'm hoping to find a solution that is relatively easy to understand (architecture-wise at least) and also flexible enough to be able to easily receive additional functionality in the future. Is there a design pattern, or a good practice to go by, to solve this problem or similar hierarchy problems? EDIT: Here's the design I have: The "Phoenix"-class is named that way because I didn't think of an appropriate name yet. But besides this I need to be able to hide specific activities for specific observers, even though they are attached to them through the groups. A little Off-topic: Personally, I feel that I should be able to chop this problem down to smaller problems, but it escapes me how. I think it's because it involves multiple recursive functionalities that aren't associated with each other and different client types that needs to get information in different ways. I can't really wrap my head around it. If anyone can guide me in a direction of how to become better at encapsulating hierarchy problems, I'd be very glad to receive that as well.
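    Not part of the original question, but the group structure described above is essentially a Composite with a single-parent constraint; a minimal C# sketch (all names are illustrative) might look like this:

        // Illustrative sketch only; types and names are assumptions, not from the post.
        using System;
        using System.Collections.Generic;

        class Group
        {
            public string Name { get; }
            public Group Parent { get; private set; }                    // each group has at most one parent
            public List<Group> Children { get; } = new List<Group>();
            public List<string> OwnData { get; } = new List<string>();   // data created directly on this group

            public Group(string name) { Name = name; }

            public Group AddChild(Group child)
            {
                child.Parent = this;
                Children.Add(child);
                return child;
            }

            // Observers collect data by walking *up* the chain of parents.
            public IEnumerable<string> CollectData()
            {
                for (var g = this; g != null; g = g.Parent)
                    foreach (var d in g.OwnData)
                        yield return d;
            }

            // Subjects creating data can check *down* the subtree for conflicts.
            public bool AnyConflict(Func<string, bool> conflictsWith)
            {
                foreach (var d in OwnData)
                    if (conflictsWith(d)) return true;
                foreach (var child in Children)
                    if (child.AnyConflict(conflictsWith)) return true;
                return false;
            }
        }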

    Read the article

  • Making CopySourceAsHtml add-on work with VS2010

    - by DigiMortal
    As there are still bloggers who use the CopySourceAsHtml add-on for Visual Studio to get syntax-highlighted code into their blog posts, and there is no guidance on the CSAH site on how to make it work with Visual Studio 2010, I will give my guidance here. Almost all code in this blog is syntax highlighted by this add-on (read more in my post Visual Studio add-in: CopySourceAsHTML). The last version of CSAH is available for VS2008, but it is easy to make it work with VS2010. Just follow these steps:

    1. Close VS2010 if it is opened.
    2. Go to folder MyDocuments\Visual Studio 2010.
    3. Move to the AddIns subfolder (create it if there is no such subfolder).
    4. Create a file called CopySourceAsHtml.AddIn and open it in a text editor.
    5. Paste the following XML into the editor:

        <?xml version="1.0" encoding="utf-8" standalone="no"?>
        <Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
          <HostApplication>
            <Name>Microsoft Visual Studio Macros</Name>
            <Version>10.0</Version>
          </HostApplication>
          <HostApplication>
            <Name>Microsoft Visual Studio</Name>
            <Version>10.0</Version>
          </HostApplication>
          <Addin>
            <FriendlyName>CopySourceAsHtml</FriendlyName>
            <Description>Adds support to Microsoft Visual Studio 2010 for copying source code, syntax highlighting, and line numbers as HTML.</Description>
            <Assembly>JTLeigh.Tools.Development.CopySourceAsHtml, Version=3.0.3215.1, Culture=neutral, PublicKeyToken=bb2a58bdc03d2e14, processorArchitecture=MSIL</Assembly>
            <FullClassName>JTLeigh.Tools.Development.CopySourceAsHtml.Connect</FullClassName>
            <LoadBehavior>1</LoadBehavior>
            <CommandPreload>0</CommandPreload>
            <CommandLineSafe>0</CommandLineSafe>
          </Addin>
        </Extensibility>

    6. Save the file and close it.
    7. Run VS2010 and activate the add-on if it is not activated yet.

    That's it. If you are a heavy user of CSAH then I recommend you bookmark this post. :)

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API

    The past couple of projects I've been working on have included the use of the Google Maps API and geocoding service in websites for various reasons. I decided to tie together some of the lessons learned, build an ASP.NET store locator demo, and write about it on 4Guys. Last week I published the first article in what I think will be a three-part series: Building a Store Locator ASP.NET Application Using Google Maps (Part 1). Part 1 walks through creating a demo where a user can type in an address and any stores within a (roughly) 15 mile area will be displayed in a grid. The article begins with a look at the database used to power the store locator (namely, a single table that contains one row for every location, with each location storing its store number, address, and, most important, latitude and longitude coordinates) and then turns to using Google's geocoding service to translate a user-entered address into latitude and longitude coordinates. The latitude and longitude coordinates are used to find nearby stores, which are then displayed in a grid. Part 2 looks at enhancing the search results to include a map with markers indicating the position of each nearby store location. The Google Maps API, along with a bit of client-side script and server-side logic, make this actually pretty straightforward and easy to implement. Here's a screen shot of the improved store locator results. Part 3, which I plan on publishing next week, looks at how to enhance the map by using information windows to display address information when clicking a marker. Additionally, I'll show how to use custom icons for the markers so that instead of having the same marker for each nearby location the markers will be images numbered 1, 2, 3, and so on, which will correspond to a number assigned to each search result in the grid. The idea here is that by numbering the search results in the grid and the markers on the map visitors will quickly be able to see what marker corresponds to what search result. This article and demo has been a lot of fun to write and create, and I hope you enjoy reading it, too. Building a Store Locator ASP.NET Application Using Google Maps API (Part 1) Building a Store Locator ASP.NET Application Using Google Maps API (Part 2) Happy Programming!
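    The articles walk through the actual implementation; purely to illustrate the "stores within roughly 15 miles" check mentioned above, a great-circle (haversine) distance helper in C# looks roughly like this (a sketch, not code from the article):

        // Haversine distance sketch (not from the article); returns miles between two lat/long points.
        static double DistanceInMiles(double lat1, double lon1, double lat2, double lon2)
        {
            const double EarthRadiusMiles = 3959.0;
            double ToRad(double deg) => deg * Math.PI / 180.0;

            double dLat = ToRad(lat2 - lat1);
            double dLon = ToRad(lon2 - lon1);
            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                       Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2)) *
                       Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            return EarthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
        }

        // Usage idea: keep stores where DistanceInMiles(userLat, userLon, store.Lat, store.Lon) <= 15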

    Read the article

  • Multi-threaded JOGL Problem

    - by moeabdol
    I'm writing a simple OpenGL application in Java that implements the Monte Carlo method for estimating the value of PI. The method is pretty easy. Simply, you draw a circle inside a unit square and then plot random points over the scene. Now, for each point that is inside the circle you increment the counter for "in" points. After determining for all the random points whether they are inside the circle or not, you divide the number of in points over the total number of points you have plotted, all multiplied by 4, to get an estimation of PI. It goes something like this: PI = (inPoints / totalPoints) * 4. This is because mathematically the ratio of a circle's area to a square's area is PI/4, so when we multiply it by 4 we get PI. My problem doesn't lie in the algorithm itself; however, I'm having problems trying to plot the points as they are being generated instead of just plotting everything at once when the program finishes executing. I want to give the application a sense of real-time display where the user would see the points as they are being plotted. I'm a beginner at OpenGL and I'm pretty sure there is a multi-threading feature built into it. Nonetheless, I tried to manually create my own thread. Each worker thread plots one point at a time. Following is the pseudo-code:

        /* this part of the code exists in the display() method in MyCanvas.java,
           which extends GLCanvas and implements GLEventListener */
        // main loop
        for (int i = 0; i < number_of_points; i++) {
            RandomGenerator random = new RandomGenerator();
            float x = random.nextFloat();
            float y = random.nextFloat();
            Thread pointThread = new Thread(new PointThread(x, y));
        }
        gl.glFlush();

        /* this part of the code exists in the run() method in PointThread.java,
           which implements Runnable */
        void run() {
            try {
                gl.glPushMatrix();
                gl.glBegin(GL2.GL_POINTS);
                if (pointIsIn)
                    gl.glColor3f(1.0f, 0.0f, 0.0f); // red point
                else
                    gl.glColor3f(0.0f, 0.0f, 1.0f); // blue point
                gl.glVertex3f(x, y, 0.0f); // coordinates
                gl.glEnd();
                gl.glPopMatrix();
            } catch (Exception e) {
            }
        }

    I'm not sure if my approach to solving this issue is correct. I hope you guys can help me out. Thanks.

    Read the article

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS for that matter) is supposed to be to have each commit do the smallest change possible. But, that doesn't match how I work at all. For example, recently I needed to add some code that checked if the version of a plugin to my system matched the versions the system supports. If not, print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages so I edited that code. That code was in the startup module of one entry to the system. The plugin checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test my plugin checking code works I need to go edit UI/UX code to make sure it tells the user "You need to upgrade". When all is said and done I've edited 10 files, changed dependencies, the 2 entry points are now both dependent on the colorization code, etc etc. Being lazy I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: Are there workflows that work for you or that make this process easier? I don't think I can somehow magically always modify stuff in the perfect order since I don't know that order until after I start modifying and seeing what comes up. I know I can git add --interactive etc, but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes so that each commit is actually going to work. Also, since the changes are sitting in the current directory it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work, short of stashing all the changes. And then, if I were to stash and then run the tests, if I missed a few lines or accidentally added a few too many lines I have no idea how I'd easily recover from that (as in either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit). Thoughts? Suggestions? PS: I hope this is an appropriate question. The help says "development methodologies and processes".

    Read the article

  • Insert or Update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it's the second time in a few days that I have had to explain it to different customers, I think it's worth making a post out of it. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT .... ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let's say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do that:

        MERGE INTO
            dbo.MyTargetTable T
        USING
            (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001))
             Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S
        ON
            T.alternate_key = S.alternate_key
        WHEN
            NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN
            MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can super-charge the idea using Table-Valued Parameters, which you can send directly from your .NET application. Easy, powerful and effective.
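    As a side note (not from the original post): the Table-Valued Parameter idea mentioned at the end would look roughly like this on the .NET side, assuming a user-defined table type and a stored procedure wrapping the MERGE; both names below are made up for the sketch:

        // Sketch only: table type, procedure name and connection string are illustrative assumptions.
        using System.Data;
        using Microsoft.Data.SqlClient;   // or System.Data.SqlClient on older stacks

        var rows = new DataTable();
        rows.Columns.Add("alternate_key", typeof(string));
        for (int i = 1; i <= 5; i++) rows.Columns.Add("col_" + i, typeof(int));
        rows.Rows.Add("ZITKS", 61, 601, 6001, 60001, 600001);

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.UpsertMyTargetTable", conn) { CommandType = CommandType.StoredProcedure })
        {
            var p = cmd.Parameters.AddWithValue("@Rows", rows);
            p.SqlDbType = SqlDbType.Structured;   // marks the parameter as a TVP
            p.TypeName = "dbo.MyTableType";       // the user-defined table type on the server
            conn.Open();
            cmd.ExecuteNonQuery();
        }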

    Read the article

  • Building Enterprise Smartphone App – Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Key Concerns Of Smartphones In The Enterprise

    These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications. Most of them have nothing to do with the application itself, as you will see here.

    Managing Devices

    Managing devices is a factor that is going to affect how much your company will have to spend outside of developing the applications. How will you track the devices within the corporation? How often will you have to replace phones and as a consequence have to upgrade your applications to support new phones? The devices can represent a significant investment of capital. If these questions are not addressed you will find a number of hidden costs throughout the life of your solution.

    Purchase or BYOD

    We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise. How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone or Windows Phone? The issue is whether you can afford to support everyone's choice of device. That is a lot to take on even if you only support the current release of each platform. Do you go with the most popular device or do you pick a platform that best matches your current ecosystem and distribute company-owned devices? There is no easy answer here, but you should be able to give some dollar value to both hardware and development costs related to platform coverage.

    Asset Tracking/Insurance

    Smartphones are devices that are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset management concerns but also assignment of financial responsibility. You also will need to insure them against damage and theft and add legal documents that spell out the responsibilities of the employees that use these devices.

    Personal vs. Corporate Data

    What happens when you terminate an employee? How do you recover the device? What happens when they have put personal data on the device? These are all situations that can cause possible loss of corporate intellectual property or legal repercussions of reclaiming a device with personal data on it. Policies need to be put in place that protect the company from being exposed to this type of loss. This can mean significant legal and procedural cost that you need to consider.

    Coming Up

    In the last installment of this series I will cover application development considerations.

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At Pass Summit 2011 a new project was announced. It’s a Microsoft SQL Azure Lab and its codename is Microsoft “Data Explorer”. According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interest you. In a nutshell, Data Explorer allows you to combine data from multiple sources, to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc.. For those who are not familiar with this resource, I strongly suggest you to keep an eye on the data services available to the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of the Data Explorer as a "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made to his post, I had a positive response to my first impression: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.” The typical operations of the ETL phase ( processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup. This is an image of the tool: The flexibility in the manipulation of information is given by Data Explorer Formula Language. This is a formula-based Excel-style specific language: Anyone wishing to know more can check the project page in addition to aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the Power User, providing him flexible and very easy to use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having in a company more Power User will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Keeping the meshes' "thickness" the same when scaling an object

    - by user1806687
    I've been bashing my head for the past couple of weeks trying to find a way to accomplish what at first looks like a very easy task. So, I got this one object currently made out of 5 cuboids (2 sides, 1 top, 1 bottom, 1 back); this is just an example, later on there will be a whole range of different set-ups. Now, the thing is, when the user chooses to scale the whole object this is what should happen:

    X scale: top and bottom cuboids should get scaled by a scale factor, sides should get moved so they are positioned just like they were before (in this case at both ends of the top and bottom cuboids), back should get scaled so it fits like before (if I simply scale it by a scale factor it will leave gaps on each side).

    Y scale: sides should get scaled by a scale factor, top and bottom cuboids should get moved, and back should also get scaled.

    Z scale: sides, top and bottom cuboids should get scaled, back should get moved.

    Hope you can help. EDIT: So, I've decided to explain the situation once more, this time in more detail (hopefully). I've also made some pictures of how the scaling should look, where the problem is, and the wrong way of scaling. In this example I will be using a thick-walled box with one face missing, where each wall is made from a cuboid (but later on there will be different shapes of objects, where one of the faces might be roundish, or a triangle, or even under some angle); scaling will be 2x on the X axis.

    1. This is how the default object without any scaling applied looks: http://img856.imageshack.us/img856/4293/defaulttz.png
    2. If I scale the whole object (all of the meshes) by some scale factor, the problem becomes that the "thickness" of the object walls also changes (which I do not want): http://img822.imageshack.us/img822/9073/wrongwaytoscale.png
    3. This is how the correct scaling should look. Appropriate faces get scaled, in this case where the scale is on the X axis (top, bottom, back): http://imageshack.us/photo/my-images/163/rightwayxscale1.png/
    4. But the scale factor might not be the same for all objects all of the time. In this case the back has to get scaled a bit more or it leaves gaps: http://imageshack.us/photo/my-images/9/problemwhenscaling.png/
    5. If everything goes well this is how the final object should look: http://imageshack.us/photo/my-images/856/rightwayxscale2.png/

    So, as you may have noticed, there are quite a few things to look out for when scaling. I am asking you if any of you have any idea on how to accomplish this scaling. I have tried a whole bunch of things, from scaling all of the meshes by the same scale factor to subtracting and adding sizes to get the right size. But nothing I tried worked; if one mesh got scaled correctly then the others didn't. Download the example object. English is not my first language, so I am really sorry if it's hard to understand what I am saying.
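    Not an answer from the original thread, but the usual trick is: stretch only the walls that span the scaled axis, keep the other walls' thickness, and slide them so they stay flush with the new outer faces. A rough C# sketch of the X case, assuming XNA/MonoGame-style BoundingBox and Vector3 types (the names and the spans-full-width heuristic are assumptions):

        // Sketch only: each wall is an axis-aligned box with Min/Max corners (XNA BoundingBox assumed).
        using System;
        using System.Linq;
        using Microsoft.Xna.Framework;

        static void ScaleAssemblyX(BoundingBox[] walls, float factor)
        {
            // Outer X extents of the whole assembly before scaling (scaling is about the origin).
            float oldMin = walls.Min(w => w.Min.X);
            float oldMax = walls.Max(w => w.Max.X);
            float newMin = oldMin * factor;
            float newMax = oldMax * factor;

            for (int i = 0; i < walls.Length; i++)
            {
                var w = walls[i];
                float size = w.Max.X - w.Min.X;
                bool spansFullWidth = size >= (oldMax - oldMin) * 0.99f;   // top/bottom/back style walls

                if (spansFullWidth)
                {
                    // Stretch along X; thickness lives on another axis, so it is untouched.
                    w.Min.X *= factor;
                    w.Max.X *= factor;
                }
                else
                {
                    // Side wall: keep its thickness, slide it so it stays flush with the new outer face.
                    bool touchesMax = Math.Abs(w.Max.X - oldMax) < Math.Abs(w.Min.X - oldMin);
                    if (touchesMax) { w.Max.X = newMax; w.Min.X = newMax - size; }
                    else            { w.Min.X = newMin; w.Max.X = newMin + size; }
                }
                walls[i] = w;
            }
        }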

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 2 - Get Hands On Experience)

    - by John Webb
    Our guest blogger, Rebekah Jackson, continues with a series of helpful hints on planning your upgrade to PeopleSoft 9.2.

    Get Hands-On Experience With an Easy-to-Install Demo Image

    Once you have identified the features you believe can add value (see part 1 of this series here), we recommend that you use our Demo Images to quickly create a PeopleSoft environment where you can try out the new features. The Demo Images are virtual machines that run in VirtualBox. They contain a fully functioning PeopleSoft environment, including the database, PeopleTools, and the application. These images can be loaded onto nearly any sized machine that meets the specs outlined in the documentation, including a powerful desktop machine. The image allows you to very quickly deploy a fully functioning PeopleSoft instance that you can use to explore new features and functions or run a conference room pilot.

    To take advantage of these Demo Images:

    1. Go to the Demo Images Home Page (Doc id 1552548.1) on My Oracle Support.
    2. Use the "Demo Image Quick Start Guide" and additional documentation links on that page to download and install your desired Demo Image. New Demo Images are posted for each product area approximately every 10 weeks, available shortly after the corresponding patching image.
    3. When installed, use the Demo Image to explore the desired features and capabilities.

    Important Notes:

    - It is not required to use the Demo Images to evaluate features and functions – any 9.2 instance will support this. We recommend use of the Demo Images because the setup and configuration is dramatically faster than doing a traditional PeopleSoft install.
    - For those looking to explore new features and capabilities on PeopleSoft 9.1 releases, we have provided virtual machine images using the Oracle Virtual Machine technology. Details and links are available in Oracle's PeopleSoft Virtualization Products page (Doc id 1538142.1).

    Read the article

  • Ubuntu 12.04 - PPTP VPN is the only Internet Access

    - by user212553
    I know this has been covered. I've read dozens of posts but still have questions. I have a work server whose traffic should never leave my house without encryption. The VPN is PPTP. Currently I have a cron job that checks the status of the ppp0 adapter each minute. If the connection drops, which it does fairly often, it shuts key components down. It's fairly easy to restart PPTP with "nmcli con up id 'myVPNServer'" but there's no assurance it will reconnect, and I need a better way to stop traffic (other than killing apps) when ppp0 is down. The two options I've seen discussed are the firewall (UFW, Firestarter, IPTables) or the route tables. I could be easily swayed to consider the firewall option, but I focused on the route tables since no new function needs to be started. My questions involve the way the route tables change and then specifics on rules. When I start the PPTP VPN the route tables change. That suggests that if the VPN drops, the table will change back, defeating my stated intent of preventing external traffic. How can I make "sticky" changes to the route table that will persist even if the VPN connection drops? Perhaps the check boxes "Ignore automatically obtained routes" or "Use this connection only for resources on its network" (which are part of the VPN configuration options)? It would seem that, if I can force the active VPN route table to stay in effect even when the VPN drops, this will effectively kill any external traffic should the VPN drop. This will give me the latitude to run a routine to restart the VPN from the command line (assuming the route table rules don't prevent me re-establishing the connection). My route table with the VPN active is below (any comments on what 10.10.1.1 is?):

        $ ip route list
        default dev ppp0  proto static
        10.10.1.1 dev ppp0  proto kernel  scope link  src 10.10.1.11
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  proto static
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  src 192.168.1.60
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.60  metric 1

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question that I was asking myself at the end of 2010. How can I get my Silverlight code organized? My requirements for a snippet manager were:

    - Needs to be FREE.
    - An easy way to view XAML/C# code-behind together in one "view".
    - The ability to store the code snippets in the cloud in case my HDD dies.
    - Searchable keywords to quickly find code snippets.

    I started looking for a snippet manager that would allow me to do just that and finally found Snippet Manager. Before going any further, I think that one of the most important things to note here is that this software supports 37 languages. It's not just for Silverlight developers or C#-only guys. The software supports Java, SQL and even COBOL. Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections. The top part is my XAML and the bottom is my C# code-behind. I've included a sample below of my code snippets so that you can get an idea of how I organized it. Another thing that's great about this software is that it supports plain text. I added some connection strings in the TEXT section below. Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called "snippets" on my FTP server and hit the upload button once I am finished adding my new code snippets. This will allow me to use the code snippets on another computer with this application on my USB key. See the screenshots below: enter your FTP credentials, hit the Uploads button on the toolbar, then log in to your FTP server and verify the following files are now on the FTP server. Another great feature of the Snippet Manager is that you can also integrate this into VS2010 by clicking Tools –> External Tools and setting up your external tool to point to the executable. You can now launch it by going to Tools –> Snippet Manager. If you want, you could also add a shortcut to launch the program with hotkeys. As you can see, this is a nice little program that includes everything needed to organize your code snippets very cleanly. I didn't go over every feature, but this is something that you might want to download and give a shot.

    Read the article

  • Optionally Running SPSecurity.RunWithElevatedPrivileges with Delegates

    - by Damon Armstrong
    I was writing some SharePoint code today where I needed to give people the option of running some code with elevated permissions. When you run code in an elevated fashion it normally looks like this:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            //Code to run
        });

    It wasn't a lot of code so I was initially inclined to do something horrible like this:

        public void SomeMethod(bool runElevated)
        {
            if(runElevated)
            {
                SPSecurity.RunWithElevatedPrivileges(() =>
                {
                    //Code to run
                });
            }
            else
            {
                //Copy of code to run
            }
        }

    Easy enough, but I did not want to draw the ire of my coworkers for employing the CTRL+C CTRL+V design pattern. Extracting the code into a whole new method would have been overkill because it was a pretty brief piece of code. But then I thought, hey, wait, I'm basically just running a delegate, so why not define the delegate once and run it either in an elevated context or stand-alone. That resulted in this version, which I think is much cleaner because the code is only defined once and it didn't require a bunch of extra lines of code to define a method:

        public void SomeMethod(bool runElevated)
        {
            var code = new SPSecurity.CodeToRunElevated(() =>
            {
                //Code to run
            });

            if(runElevated)
            {
                SPSecurity.RunWithElevatedPrivileges(code);
            }
            else
            {
                code();
            }
        }

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints. And whilst the application turned out well, we're now turning our eyes on some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe 1 or more stories and generally are a body of work which a few devs could knock over in a week. For development, that works really well - every two weeks, the devs get handed a spec and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is we haven't placed much emphasis on the big picture. Specs come from all different angles - it could be describing a standard function, it could describing parts of a workflow, it could be describing a particular screen... And now, we have business rules about our application scattered across 120 documents. Looking for any document for a particular business rule or function in particular is quite hard because you don't know which document has this information, and making a change request is equally hard because once again, we are unsure about which spec to make the change. So we have maybe a couple of weeks of lull before it's back to specing out functionality for the next phase but in this time, I'd like to re-visit our processes. I think the way we have worked so far in terms of delivering fortnightly specs works well. But we also need a way to manage our documentation so that our business rules for a given function / workflow are easy to locate / change. I have two ideas. One is we compile all of our specs into a series of master specs broken by a few broad functional areas. The specs describe the sprint, the master spec describe the system. The only problem I can see is 1) Our existing 120 specs are not all neatly defined into broad functional areas. Some will require breaking up, merging etc. which will take a lot of time. 2) We'll be writing specs and updating master specs in each new sprint. Seems like double the work, and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign like keywords to it, and then when we want to search for a function, we search for that keyword. Problems I can see 1) Still the problem of business rules scattered everywhere, keywords just make it easier to find it. anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, would really appreciate it.

    Read the article

  • Which one to select for my future career: Java, C#, Azure or Apex?

    - by user636195
    Hi folks, this time I am going to study for a Masters in Computer Science in the U.S.A, starting in a week. I have been doing my B.Sc for the past three years, and after my freshman year I started working on projects (in C# and very rarely in Java) for the past two years (i.e., while I was a second- and third-year student). Now I am in a college where all of the programming courses are going to be taught in Java only (using Eclipse), and I am going to stay in this college for 8 months on campus and then be fully employed for two years in other companies as a CPT. I really love to work on Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and be a cloud-based business owner in the near future. Since the college is going to teach us and let us do every project in Java, I was confused about which programming language to use that will help me and enhance me in my career, and of course I wanted to select the one I like working with every time. I also heard a lot about Azure (Microsoft's) and Apex (Salesforce.com's cloud computing programming language). Would you please give me your advice and recommendation based on my situation? Should I study only Java, or should I study C# or Azure on my own besides Java? The reason I ask this is because, since I have no clue how Azure works and how long it will take me to learn the language, I am really confused about which one to select (Java vs C# and Azure vs Apex, or whether there is any popular and mostly used cloud computing language). Do you think I can get a job in cloud computing if I study Azure or Apex on my own without experience? There is also one short-term issue I want to consider: salary. Since I have to pay my student loan, I also need to get a good job which will let me pay my loan within two years. But, as I said, my long-term plan is to get experience in cloud computing (from programming to the administrative part, i.e., every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is "how do I find X?" Well, of course no one is really looking for "X"! But typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM that is related to a couple of key words they have in their mind. Here are three quick tips I give people:

    1. Open up one of the OUM Views, then click "Expand All", and then use your browser's search function to locate a key word. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a key word, i.e. Architecture. This is a fast and easy option to use, but it only searches the current OUM page.

    2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your browser's settings, the PDF file will either open up in a new window or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for key words across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, i.e. Word, Excel.

    3. Use your operating system's file index to search for key words. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is to make sure your local OUM folder structure is included in the Windows Index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the Index, i.e. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your key words, i.e. Unit Testing. The reason I use this option the most is because the search will take place across the entire content of the indexed folders, including linked files. Happy searching!

    Read the article

  • JGridView

    - by Geertjan
    JGrid was announced last week so I wanted to integrate it into a NetBeans Platform app. I.e., I'd like to use Nodes instead of the DefaultListModel that is supported natively, so that I can integrate with the Properties Window, for example. Here's how:

        import de.jgrid.JGrid;
        import java.beans.PropertyVetoException;
        import javax.swing.DefaultListModel;
        import javax.swing.JScrollPane;
        import javax.swing.event.ListSelectionEvent;
        import javax.swing.event.ListSelectionListener;
        import org.book.domain.Book;
        import org.openide.explorer.ExplorerManager;
        import org.openide.nodes.Node;
        import org.openide.util.Exceptions;

        public class JGridView extends JScrollPane {

            @Override
            public void addNotify() {
                super.addNotify();
                final ExplorerManager em = ExplorerManager.find(this);
                if (em != null) {
                    final JGrid grid = new JGrid();
                    Node root = em.getRootContext();
                    final Node[] nodes = root.getChildren().getNodes();
                    final Book[] books = new Book[nodes.length];
                    for (int i = 0; i < nodes.length; i++) {
                        Node node = nodes[i];
                        books[i] = node.getLookup().lookup(Book.class);
                    }
                    grid.getCellRendererManager().setDefaultRenderer(new OpenLibraryGridRenderer());
                    grid.setModel(new DefaultListModel() {
                        @Override
                        public int getSize() {
                            return books.length;
                        }
                        @Override
                        public Object getElementAt(int i) {
                            return books[i];
                        }
                    });
                    grid.setUI(new BookshelfUI());
                    grid.getSelectionModel().addListSelectionListener(new ListSelectionListener() {
                        @Override
                        public void valueChanged(ListSelectionEvent e) {
                            //Somehow compare the selected item
                            //with the list of books and find a matching book:
                            int selectedIndex = grid.getSelectedIndex();
                            for (int i = 0; i < nodes.length; i++) {
                                String nodeName = books[i].getTitel();
                                if (String.valueOf(selectedIndex).equals(nodeName)) {
                                    try {
                                        em.setSelectedNodes(new Node[]{nodes[i]});
                                    } catch (PropertyVetoException ex) {
                                        Exceptions.printStackTrace(ex);
                                    }
                                }
                            }
                        }
                    });
                    setViewportView(grid);
                }
            }
        }

    Above, you see references to OpenLibraryGridRenderer and BookshelfUI, both of which are part of the "JGrid-Bookshelf" sample in the JGrid download. The above is specific to Book objects, i.e., that's one of the samples that comes with the JGrid download. I need to make the above more general, so that any kind of object can be handled without requiring changes to the JGridView. Once you have the above, it's easy to integrate it into a TopComponent, just like any other NetBeans explorer view.

    Read the article
