Search Results

Search found 22711 results on 909 pages for 'amazon product api'.


  • I've registered my OAuth about 5 times now, but... (twitteR package R)

    - by user2985989
    I'm attempting to mine twitter data in R, and am having trouble getting started. I created a twitter account, an app in twitter developers, changed the settings to read, write, and access, created my access token, and followed instructions to the letter in registering it: My code: > library(twitteR) > download.file(url="http://curl.haxx.se/ca/cacert.pem", + destfile="cacert.pem") > requestURL <- "https://api.twitter.com/oauth/request_token" > accessURL <- "https://api.twitter.com/oauth/access_token" > authURL <- "https://api.twitter.com/oauth/authorize" > consumerKey <-"my key" #took this part out for privacy's sake > consumerSecret <- "my secret" #this too > twitCred <- OAuthFactory$new(consumerKey=consumerKey, consumerSecret = consumerSecret, requestURL = requestURL, accessURL = accessURL, authURL = authURL) > twitCred$handshake(cainfo="cacert.pem") To enable the connection, please direct your web browser to: https://api.twitter.com/oauth/authorize?oauth_token=zxgHXJkYAB3wQ2IVAeyJjeyid7WK6EGPfouGmlx1c When complete, record the PIN given to you and provide it here: 0010819 > registerTwitterOAuth(twitCred) [1] TRUE > save(list="twitCred", file="twitteR_credentials") And yet, this: > s <- searchTwitter('#United', cainfo="cacert.pem") [1] "Unauthorized" Error in twInterfaceObj$doAPICall(cmd, params, "GET", ...) : Error: Unauthorized I'm about to have a temper tantrum. I'd be extremely grateful if someone could explain to me what is going wrong, or, better yet, how to fix it. Thank you.

    Read the article

  • checking mongo database for data

    - by user1647484
    I'm playing around with this tutorial that uses Sinatra, backbone.js, and mongodb for the database. It's my first time using mongo. As far as I understand it the app uses both local storage and a database. it has these routes for the database. For example, it has these routes get '/api/:thing' do DB.collection(params[:thing]).find.to_a.map{|t| from_bson_id(t)}.to_json end get '/api/:thing/:id' do from_bson_id(DB.collection(params[:thing]).find_one(to_bson_id(params[:id]))).to_json end post '/api/:thing' do oid = DB.collection(params[:thing]).insert(JSON.parse(request.body.read.to_s)) "{\"_id\": \"#{oid.to_s}\"}" end After turning the server off and then on, I could see in the server getting data from the database routes 127.0.0.1 - - [17/Sep/2012 08:21:58] "GET /api/todos HTTP/1.1" 200 430 0.0033 My question is, how can I check from within the mongo shell whether the data's in the database? I started the mongo shell ./bin/mongo I selected the database 'use mydb' and then looking at the docs (http://www.mongodb.org/display/DOCS/Tutorial) I tried commands such as > var cursor = db.things.find(); > while (cursor.hasNext()) printjson(cursor.next()); but they didn't return anything.

    Read the article

  • Magento: How do I get observers to work in an external script?

    - by Laizer
    As far as I can tell, when a script is run outside of Magento, observers are not invoked when an event is fired. Why? How do I fix it? Below is the original issue that lead me to this question. The issue is that the observer that would apply the catalog rule is never called. The event fires, but the observer doesn't pick it up. I'm running an external script that loads up a Magento session. Within that script, I'm loading products and grabbing a bunch of properties. The one issue is that getFinalPrice() does not apply the catalog rules that apply to the product. I'm doing everything I know to set the session, even a bunch of stuff that I think is superfluous. Nothing seems to get these rules applied. Here's a test script: require_once "app/Mage.php"; umask(0); $app = Mage::app("default"); $app->getTranslator()->init('frontend'); //Probably not needed Mage::getSingleton('core/session', array('name'=>'frontend')); $session = Mage::getSingleton("customer/session"); $session->start(); //Probably not needed $session->loginById(122); $product = Mage::getModel('catalog/product')->load(1429); echo $product->getFinalPrice(); Any insight is appreciated.

    Read the article

  • Best practice - How to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain. I'm starting work on a brand new project that will have two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.) and "THE SITE" (help, information, marketing stuff, promotional landing pages - lots of marketing-induced cruft). So basically the URL /acme/* will load stuff in the uber-cool Ajax application, and every other URI will load stuff in the other site. Problem: the "THE SITE" component is out of my hands and will be handled by a consulting team that will work closely with marketing, while I and my team will work solely on the ACME PRODUCT. Question: how do we set up the Django project in such a way that we can have: Separate releases (they can push new marketing pages and functionality without having to worry about the state of our code - maybe even separate Subversion "projects"). Minimal impact (on our product) of whatever flying-unicorns-hocus-pocus the other team codes into the site. Still allow some code reuse. My main concern is that the ACME product needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make on their marketing side of the site. How have you handled this? Any ideas? Thanks!

    Read the article

  • Struts logic:iterate input field

    - by camilos
    I currently have the following code and the data is displayed fine. <logic:iterate name="myList" id="product" indexId="iteration" type="com.mycompany.MyBean"> <tr> <td> <bean:write name="product" property="weight"/> </td> <td> <bean:write name="product" property="sku"/> </td> <td> <bean:write name="product" property="quantity"/> </td> </tr> </logic:iterate> But now I need to make the "quantity" part modifiable. The user should be able to update that field, press submit and when its sent to the server, "myList" should automatically update with the new quantities. I've tried searching for help on this but all I keep finding is examples on how to display data only, not modify it. Any help would be appreciated.

    Read the article

  • How to query a .NET assembly's required framework (not CLR) version?

    - by Bonfire Burns
    Hi, we are using some kind of plug-in architecture in one of our products (based on .NET). We have to consider our customers or even 3rd-party devs writing plug-ins for the product. The plug-ins will be .NET assemblies that are loaded by our product at run-time. We have no control over the quality or capabilities of the external plug-ins (apart from checking whether they implement the correct interfaces). So we need to implement some kind of safety check while loading the plug-ins to make sure that our product (and the hosting environment) can actually host the plug-in, or deliver a meaningful error message ("The plug-in you are loading needs .NET version 42.42 - the hosting system is only on version 33.33."). Ideally the plug-ins would do this check internally, but our experience regarding their competence is so-so, and in any case our product will get the blame, so we want to make sure that this "just works". Requiring the plug-in developers to provide the info in the metadata or to explicitly provide the information in the interface is considered "too complicated". I know about the Assembly.ImageRuntimeVersion property, but to my knowledge this tells me only the needed CLR version, not the framework version. And I don't want to check all of the assembly's dependencies and match them against a table of "framework version vs. available assemblies". Do you have any ideas on how to solve this in a simple and maintainable fashion? Thanks & regards, Bon
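
    One way to approach this (a rough sketch, not the poster's solution): for assemblies built against .NET 4.0 or later, the compiler emits a TargetFrameworkAttribute that names the targeted framework, while Assembly.ImageRuntimeVersion still only reports the CLR. Something along these lines could serve as a pre-load check; the PluginInspector class name and the wording of the result are hypothetical.

    using System;
    using System.Reflection;

    static class PluginInspector
    {
        // Describes what a plug-in assembly was built against, without executing its code.
        public static string DescribeTarget(string path)
        {
            Assembly asm = Assembly.ReflectionOnlyLoadFrom(path);

            // CLR version only, e.g. "v4.0.30319" - this is what ImageRuntimeVersion gives you.
            string clrVersion = asm.ImageRuntimeVersion;

            // TargetFrameworkAttribute is emitted for assemblies targeting .NET 4.0+;
            // its absence suggests a pre-4.0 assembly, where this check cannot help.
            string framework = "unknown (pre-.NET 4.0?)";
            foreach (CustomAttributeData cad in asm.GetCustomAttributesData())
            {
                if (cad.Constructor.DeclaringType.FullName == "System.Runtime.Versioning.TargetFrameworkAttribute"
                    && cad.ConstructorArguments.Count == 1)
                {
                    framework = (string)cad.ConstructorArguments[0].Value; // e.g. ".NETFramework,Version=v4.5"
                    break;
                }
            }
            return string.Format("CLR {0}, target framework: {1}", clrVersion, framework);
        }
    }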

    Read the article

  • C#: Passing data to forms UI using BeginInvoke

    - by Bi
    Hi I am a C# newbie and have a class that needs to pass row information to a grid in the windows form. What is the best way to do it? I have put in some example code for better understanding. Thanks. public class GUIController { private My_Main myWindow; public GUIController( My_Main window ) { myWindow = window; } public void UpdateProducts( List<myProduct> newList ) { object[] row = new object[3]; foreach (myProduct product in newList) { row[0] = product.Name; row[1] = product.Status; row[2] = product.Day; //HOW DO I USE BeginInvoke HERE? } } } And the form class below: public class My_Main : Form { //HOW DO I GO ABOUT USING THIS DELEGATE? public delegate void ProductDelegate( string[] row ); public static My_Main theWindow = null; static void Main( ) { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); theWindow = new My_Main(); Application.Run(theWindow); } private void My_Main_Load( object sender, EventArgs e ) { /// Create GUIController and pass the window object gui = new GUIController( this ); } public void PopulateGrid( string[] row ) { ProductsGrid.Rows.Add(row); ProductsGrid.Update(); } } Thanks!
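
    A minimal sketch of how the BeginInvoke call in UpdateProducts could look, reusing the ProductDelegate already declared on the form (it assumes Name, Status and Day are strings or have been converted to strings):

    public void UpdateProducts(List<myProduct> newList)
    {
        foreach (myProduct product in newList)
        {
            string[] row = new string[] { product.Name, product.Status, product.Day };

            // Marshals the call onto the UI thread; PopulateGrid runs on the form's thread.
            myWindow.BeginInvoke(
                new My_Main.ProductDelegate(myWindow.PopulateGrid),
                new object[] { row });
        }
    }

    Control.BeginInvoke queues the delegate asynchronously on the UI thread; use Invoke instead if UpdateProducts should block until the grid row has actually been added.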

    Read the article

  • redirect web app results to own application

    - by vbNewbie
    Is it possible to redirect a web app's results to a second application? I cannot parse the HTML source. It contains the JavaScript functions that execute the queries, but all the content is probably server-side. I hope this makes sense. The owner has made the script available, but I am not sure how this helps. Can I, using .NET, call the site and redirect the results, perhaps to a file or database? The app accesses one of Google's APIs, performs searches/queries, and returns results which are displayed on the site. Now, all the JavaScript functions that perform these queries are listed in the source, but I do not know JavaScript, so it does not make much sense to me. I have used the documentation, which uses the OAuth protocol to access the API, and have implemented that in my web app, but it took me nearly a week to get the request token right, and now when I send requests to the API, sometimes I get one result back and sometimes none. It is frustrating me. The owner of the web app has given me use of his script, but he says all that happens is that my browser interacts with the Google API and not his server. So I thought: why not have my web app call his, since his interacts with the API flawlessly, and have the results sent to my app to save in a database? I have very little experience here, so pardon my ignorance.
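
    If the results really are produced server-side, a plain HTTP fetch from .NET can capture them. The sketch below (C#, with a placeholder URL) just downloads the page and writes it to a file; note that it will not capture anything generated by client-side JavaScript, which may be exactly the limitation the script's owner is describing.

    using System.IO;
    using System.Net;

    class PageFetcher
    {
        static void Main()
        {
            string url = "http://example.com/app/results?q=term"; // placeholder address

            using (var client = new WebClient())
            {
                string html = client.DownloadString(url);  // raw response from the server
                File.WriteAllText("results.html", html);   // save for later parsing or DB import
            }
        }
    }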

    Read the article

  • Listening to PHP function calls to intercept the returned value

    - by Lansen Q
    I am working on making use of a Web Services API offered by the hosts of our internal system. I am accessing it via PHP with the built-in SOAP offering. The API session is initiated by a remote call to a function that returns some session tokens; every call to any function thereafter will return a new session token, which must accompany the next request. I have an API Client class that is doing the bulk of the work; what I would like to do is to set something up whereby any SOAP call that is made will make sure to update the API Client class' $session variable with the new session details, and then pass the data along. So far the only way I can think of doing this is creating a new class extending the SoapClient class, with a __call function wrapper to execute the function, update the new session token, and return the results nonetheless. I'm not sure that this will a) work b) be the best way to go about this. The wrapper class would be identical to making a SOAP call, and it would return an identical result, just it would update the session token before you get your result back. Thanks! Hope I explained myself properly.

    Read the article

  • highlight selected page in pagination?

    - by Mahmoud
    Hey there. I have a PHP page that shows all products stored in the database, and everything is fine, but my problem is that I am trying to highlight or give a color to the selected page number when it is clicked. For example, let's say all my products are showing and there are 2 pages; when the user clicks "next", the "next" link fades and the number 2 is colored yellow. This will help the user see which page he is on. Below is my PHP code:

    <?php
    echo"<a style:'color:#FFF;font-style:normal;'> ";
    include "im/config.php";
    include('im/ps_pagination.php');
    $result = ("Select * from product ");
    $pager = new PS_Pagination($conn, $result, 5, 6, "product=product");
    $rs = $pager->paginate();
    echo"<div id='image_container'>";
    echo $pager->renderFullNav();
    echo "<br /><br />\n";
    while($row = mysql_fetch_array($rs)){
        echo" <div class='virtualpage hidepiece'><a href='gal/".$row['pro_image']."' rel='lightbox[roadtrip]' title='".$row['pro_name']." : ".$row['pro_mini_des']."' style='color:#000'><img src='thumb/".$row['pro_thumb']."' /> </a></div>";
    }
    echo"</div>";
    echo "<br /><br />\n";
    echo $pager->renderFullNav();
    echo"</a>";
    ?>

    Thanks everyone.

    Read the article

  • problem with include in PHP

    - by EquinoX
    I have a php file called Route.php which is located on: /var/www/api/src/fra/custom/Action and the file I want to include is in: /var/www/api/src/fra/custom/ So what I have inside route php is an absolute path to the two php file I want to include: <?php include '/var/www/api/src/frapi/custom/myneighborlists.php'; include '/var/www/api/src/frapi/custom/mynodes.php'; ............ ........... ?> These two file has an array of large amount of size that I want to use in Route.php. When I do vardump($global) it just returns NULL. What am I missing here? UPDATE: I did an echo on the included file and it prints something, so therefore it is included.. however I can't get access that array... when I do a vardump on the array, it just returns NULL! I did add a global $myarray inside the function in which I want to access the array from the other php file Sample myneighborlists.php: <?php $myarray = array( 1=> array(3351=>179), 2=> array(3264=>172, 3471=>139), 3=> array(3467=>226), 4=> array(3309=>211, 3469=>227), 5=> array(3315=>364, 3316=>144, 3469=>153), 6=> array(3305=>273, 3309=>171), 7=> array(3267=>624, 3354=>465, 3424=>411, 3437=>632), 8=> array(3302=>655, 3467=>212), 9=> array(3305=>216, 3306=>148, 3465=>505), 10=> array(3271=>273, 3472=>254), 11=> array(3347=>273, 3468=>262), 12=> array(3310=>237, 3315=>237)); ?>

    Read the article

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore / vote-to-close this question, I consider this a valid question to ask because code clarity is an important topic of discussion, it's essential to writing maintainable code and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem, LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query) No Formatting var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory); Elevated Formatting var allInventory = system.InventorySources .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }) .GroupBy( i => i.Region, i => i.Inventory); Block Formatting var allInventory = system.InventorySources .Select( src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }) .GroupBy( i => i.Region, i => i.Inventory ); List Formatting var allInventory = system.InventorySources .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }) .GroupBy(i => i.Region, i => i.Inventory); I want to come up with a standard for linq formatting so that it maximizes readability & understanding and looks clean and professional. So far I can't decide so I turn the question to the professionals here.

    Read the article

  • Need for J2me source code

    - by tikamchandrakar
    For J2ME: it strikes me as odd that you need an extra "API key" and so on. Actually, what I really want is NOT to create an extra Facebook application that needs to be registered on Facebook. I don't want to create any extra configuration effort that the user of my application has to undergo. All my user should need is his well-known login data for Facebook. Everything else should be completely transparent to him. So, I thought maybe you could do the login process by creating a request to the REST server via HTTP. I know this would provide me with XML. I hope that this API will somehow automatically transform that XML into an intuitive object model that represents the Facebook user data of the respective user. So, I would expect something like userData = new FacebookData(new FacebookConnection("user_name", "password")). Done - if you get what I mean. No API key. No secret key. Just the well-known login data. Practically, the equivalent of Thunderbird Webmail, which allows you to access your MSN Hotmail account via Thunderbird: it automatically converts the HTML obtained from a Hotmail browser login into the data structure usually passed on to a mail client. Hope you get what I mean. I was expecting the equivalent for your API.

    Read the article

  • Javascript object properties access functions in parent constructor?

    - by Bob Spryn
    So I'm using this pretty standard jquery plugin pattern whereby you can grab an api after applying the jquery function to a specific instance. This API is essentially a javascript object with a bunch of methods and data. So I wanted to essentially create some private internal methods for the object only to manipulate data etc, which just doesn't need to be available as part of the API. So I tried this: // API returned with new $.TranslationUI(options, container) $.TranslationUI = function (options, container) { // private function? function monkey(){ console.log("blah blah blah"); } // extend the default settings with the options object passed this.settings = $.extend({},$.TranslationUI.defaultSettings,options); // set a reference for the container dom element this.container = container; // call the init function this.init(); }; The problem I'm running into is that init can't call that function "monkey". I'm not understanding the explanation behind why it can't. Is it because init is a prototype method?($.TranslationUI's prototype is extended with a bunch of methods including init elsewhere in the code) $.extend($.TranslationUI, { prototype: { init : function(){ // doesn't work monkey(); // editing flag this.editing = false; // init event delegates here for // languagepicker $(this.settings.languageSelector, this.container).bind("click", {self: this}, this.selectLanguage); } } }); Any explanations would be helpful. Would love other thoughts on creating private methods with this model too. These particular functions don't HAVE to be in prototype, and I don't NEED private methods protected from being used externally, but I want to know how should I have that requirement in the future.

    Read the article

  • Get parameter values from method at run time

    - by Landin Martens
    I have the current method example: public void MethodName(string param1,int param2) { object[] obj = new object[] { (object) param1, (object) param2 }; //Code to that uses this array to invoke dynamic methods } Is there a dynamic way (I am guessing using reflection) that will get the current executing method parameter values and place them in a object array? I have read that you can get parameter information using MethodBase and MethodInfo but those only have information about the parameter and not the value it self which is what I need. So for example if I pass "test" and 1 as method parameters without coding for the specific parameters can I get a object array with two indexes { "test", 1 }? I would really like to not have to use a third party API, but if it has source code for that API then I will accept that as an answer as long as its not a huge API and there is no simple way to do it without this API. I am sure there must be a way, maybe using the stack, who knows. You guys are the experts and that is why I come here. Thank you in advance, I can't wait to see how this is done. EDIT It may not be clear so here some extra information. This code example is just that, an example to show what I want. It would be to bloated and big to show the actual code where it is needed but the question is how to get the array without manually creating one. I need to some how get the values and place them in a array without coding the specific parameters.
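
    For what it's worth, plain reflection cannot read the argument values of the currently executing method - MethodBase.GetCurrentMethod().GetParameters() only describes names and types - so a common workaround is a small params helper that captures the values once at the call site. A sketch (the ArgHelper name is made up):

    using System;
    using System.Reflection;

    static class ArgHelper
    {
        // There is no supported reflection API for reading the *values* of the current
        // method's arguments; the usual workaround is to capture them explicitly.
        public static object[] Capture(params object[] values)
        {
            return values;
        }
    }

    class Example
    {
        public void MethodName(string param1, int param2)
        {
            // Names and types are available via reflection...
            ParameterInfo[] info = MethodBase.GetCurrentMethod().GetParameters();

            // ...but the values still have to be listed once, e.g. { "test", 1 }.
            object[] args = ArgHelper.Capture(param1, param2);

            Console.WriteLine("{0} parameters, first value: {1}", info.Length, args[0]);
        }
    }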

    Read the article

  • What constitutes explicit creation of entities in LINQ to SQL? What elegant "solutions" are there to

    - by Marcelo Zabani
    Hi SO, I've been having problems with the rather famous "Explicit construction of entity type '##' in query is not allowed." error. Now, for what I understand, this exists because if explicit construction of these objects were allowed, tracking changes to the database would be very complicated. So I ask: What constitutes the explicit creation of these objects? In other terms: Why can I do this: Product foo = new Product(); foo.productName = "Something"; But can't do this: var bar = (from item in myDataContext.Products select new Product { productName = item.productName }).ToList(); I think that when running the LINQ query, some kind of association is made between the objects selected and the table rows retrieved (and this is why newing a Product in the first snippet of code is no problem at all, because no associations were made). I, however, would like to understand this a little more in depth (and this is my first question to you, that is: what is the difference from one snippet of code to another). Now, I've heard of a few ways to attack this problem: 1) The creation of a class that inherits the linq class (or one that has the same properties) 2) Selecting anonymous objects And this leads me to my second question: If you chose one of the the two approaches above, which one did you choose and why? What other problems did your approach introduce? Are there any other approaches?
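
    To illustrate the two workarounds mentioned at the end of the question, here is a rough sketch against the poster's own context (ProductDto is a made-up class that is not mapped by LINQ to SQL, which is why constructing it inside the query is allowed):

    // A plain DTO, not a mapped entity, so it may be constructed inside a query.
    public class ProductDto
    {
        public string ProductName { get; set; }
    }

    // 1) Project into the DTO:
    var dtos = (from item in myDataContext.Products
                select new ProductDto { ProductName = item.productName }).ToList();

    // 2) Or project into an anonymous type when the shape is only needed locally:
    var names = (from item in myDataContext.Products
                 select new { item.productName }).ToList();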

    Read the article

  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    Attempting to figure out sp's, and I'm getting this error: "Msg 156, Level 15, State 1, Line 5 Incorrect syntax near the keyword 'Procedure'." the error seems to be on the if, but I can drop other existing tables with stored procedures the exact same way so I'm not clear on why this isn't working. can anyone shed some light? Begin Set nocount on Begin Try Create Procedure uspRecycle as if OBJECT_ID('Recycle') is not null Drop Table Recycle create table Recycle (RecycleID integer constraint PK_integer primary key, RecycleType nchar(10) not null, RecycleDescription nvarchar(100) null) insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('1','Compost','Product is compostable, instructions included in packaging') insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('2','Return','Product is returnable to company for 100% reuse') insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed') insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('4','None','Product is not recycleable') End Try Begin Catch DECLARE @ErrMsg nvarchar(4000); SELECT @ErrMsg = ERROR_MESSAGE(); Throw 50001, @ErrMsg, 1; End Catch -- checking to see if table exists and is loaded: If (Select count(*) from Recycle) >1 begin Print 'Recycle table created and loaded '; Print getdate() End set nocount off End

    Read the article

  • Is there a unique computer identifier that can be used reliably even in a virtual machine?

    - by SaUce
    I'm writing a small client program to be run on a terminal server. I'm looking for a way to make sure that it will only run on this server and in case it is removed from the server it will not function. I understand that there is no perfect way of securing it to make it impossible to ran on other platforms, but I want to make it hard enough to prevent 95% of people to try anything. The other 5% who can hack it is not my concern. I was looking at different Unique Identifiers like Processor ID, Windows Product ID, Computer GUID and other UIs. Because the terminal server is a virtual machine, I cannot locate anything that is completely unique to this machine. Any ideas on what I should look into to make this 95% secure. I do not have time or the need to make it as secure as possible because it will defeat the purpose of the application itself. I do not want to user MAC address. Even though it is unique to each machine it can be easily spoofed. As far as Microsoft Product ID, because our system team clones VM servers and we use corporate volume key, I found already two servers that I have access to that have same Product ID Number. I have no Idea how many others out there that have same Product ID By 95% and 5% I just simply wanted to illustrate how far i want to go with securing this software. I do not have precise statistics on how many people can do what. I believe I might need to change my approach and instead of trying to identify the machine, I will be better off by identifying the user and create group based permission for access to this software.
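
    One value worth evaluating (a sketch, not an endorsement): the Windows MachineGuid stored under HKLM\SOFTWARE\Microsoft\Cryptography. Be aware that it shares the weakness the poster found with volume-licensed Product IDs - virtual machines cloned without sysprep can carry identical values - and a 32-bit process on 64-bit Windows may need to read the 64-bit registry view to see it.

    using Microsoft.Win32;

    static class MachineIdentity
    {
        // Returns the Windows MachineGuid, or null if it cannot be read.
        public static string GetMachineGuid()
        {
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\Cryptography"))
            {
                return key != null ? key.GetValue("MachineGuid") as string : null;
            }
        }
    }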

    Read the article

  • Sending Messages to SignalR Hubs from the Outside

    - by Ricardo Peres
    Introduction You are by now probably familiarized with SignalR, Microsoft’s API for real-time web functionality. This is, in my opinion, one of the greatest products Microsoft has released in recent time. Usually, people login to a site and enter some page which is connected to a SignalR hub. Then they can send and receive messages – not just text messages, mind you – to other users in the same hub. Also, the server can also take the initiative to send messages to all or a specified subset of users on its own, this is known as server push. The normal flow is pretty straightforward, Microsoft has done a great job with the API, it’s clean and quite simple to use. And for the latter – the server taking the initiative – it’s also quite simple, just involves a little more work. The Problem The API for sending messages can be achieved from inside a hub – an instance of the Hub class – which is something that we don’t have if we are the server and we want to send a message to some user or group of users: the Hub instance is only instantiated in response to a client message. The Solution It is possible to acquire a hub’s context from outside of an actual Hub instance, by calling GlobalHost.ConnectionManager.GetHubContext<T>(). This API allows us to: Broadcast messages to all connected clients (possibly excluding some); Send messages to a specific client; Send messages to a group of clients. So, we have groups and clients, each is identified by a string. Client strings are called connection ids and group names are free-form, given by us. The problem with client strings is, we do not know how these map to actual users. One way to achieve this mapping is by overriding the Hub’s OnConnected and OnDisconnected methods and managing the association there. Here’s an example: 1: public class MyHub : Hub 2: { 3: private static readonly IDictionary<String, ISet<String>> users = new ConcurrentDictionary<String, ISet<String>>(); 4:  5: public static IEnumerable<String> GetUserConnections(String username) 6: { 7: ISet<String> connections; 8:  9: users.TryGetValue(username, out connections); 10:  11: return (connections ?? Enumerable.Empty<String>()); 12: } 13:  14: private static void AddUser(String username, String connectionId) 15: { 16: ISet<String> connections; 17:  18: if (users.TryGetValue(username, out connections) == false) 19: { 20: connections = users[username] = new HashSet<String>(); 21: } 22:  23: connections.Add(connectionId); 24: } 25:  26: private static void RemoveUser(String username, String connectionId) 27: { 28: users[username].Remove(connectionId); 29: } 30:  31: public override Task OnConnected() 32: { 33: AddUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId); 34: return (base.OnConnected()); 35: } 36:  37: public override Task OnDisconnected() 38: { 39: RemoveUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId); 40: return (base.OnDisconnected()); 41: } 42: } As you can see, I am using a static field to store the mapping between a user and its possibly many connections – for example, multiple open browser tabs or even multiple browsers accessing the same page with the same login credentials. The user identity, as is normal in .NET, is obtained from the IPrincipal which in SignalR hubs case is stored in Context.Request.User. Of course, this property will only have a meaningful value if we enforce authentication. 
Another way to go is by creating a group for each user that connects: 1: public class MyHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: this.Groups.Add(this.Context.ConnectionId, this.Context.Request.User.Identity.Name); 6: return (base.OnConnected()); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: this.Groups.Remove(this.Context.ConnectionId, this.Context.Request.User.Identity.Name); 12: return (base.OnDisconnected()); 13: } 14: } In this case, we will have a one-to-one equivalence between users and groups. All connections belonging to the same user will fall in the same group. So, if we want to send messages to a user from outside an instance of the Hub class, we can do something like this, for the first option – user mappings stored in a static field: 1: public void SendUserMessage(String username, String message) 2: { 3: var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); 4: 5: foreach (String connectionId in HelloHub.GetUserConnections(username)) 6: { 7: context.Clients.Client(connectionId).sendUserMessage(message); 8: } 9: } And for using groups, its even simpler: 1: public void SendUserMessage(String username, String message) 2: { 3: var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); 4:  5: context.Clients.Group(username).sendUserMessage(message); 6: } Using groups has the advantage that the IHubContext interface returned from GetHubContext has direct support for groups, no need to send messages to individual connections. Of course, you can wrap both mapping options in a common API, perhaps exposed through IoC. One example of its interface might be: 1: public interface IUserToConnectionMappingService 2: { 3: //associate and dissociate connections to users 4:  5: void AddUserConnection(String username, String connectionId); 6:  7: void RemoveUserConnection(String username, String connectionId); 8: } SignalR has built-in dependency resolution, by means of the static GlobalHost.DependencyResolver property: 1: //for using groups (in the Global class) 2: GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new GroupsMappingService()); 3:  4: //for using a static field (in the Global class) 5: GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new StaticMappingService()); 6:  7: //retrieving the current service (in the Hub class) 8: var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>(); Now all you have to do is implement GroupsMappingService and StaticMappingService with the code I shown here and change SendUserMessage method to rely in the dependency resolver for the actual implementation. Stay tuned for more SignalR posts!
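
    As a small illustration of the "from the outside" part, the SendUserMessage wrapper above could be invoked from any server-side entry point - here a hypothetical ASP.NET MVC action (the MessageSender class name is made up; it is simply whatever class hosts SendUserMessage):

    using System.Web.Mvc;

    public class NotificationsController : Controller
    {
        [HttpPost]
        public ActionResult Notify(string username, string message)
        {
            var sender = new MessageSender();          // hypothetical host of SendUserMessage
            sender.SendUserMessage(username, message); // resolves the hub context internally
            return new HttpStatusCodeResult(204);      // no content; the push happens over SignalR
        }
    }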

    Read the article

  • Building services with the .NET framework Cont’d

    - by Allan Rwakatungu
    In my previous blog I wrote an introductory post on services and how you can build services using the .NET framework's Windows Communication Foundation (WCF). In this post I will show how to develop a real-world application using WCF.

    The problem: During the last meeting we realized developers in Uganda are not so cool – they don't use Twitter, so they may not get the latest news and updates from the technology world. We also noticed they mostly use kabiriti phones (jokes). With their kabiriti phones they are unable to access the Twitter web client or alternative Twitter mobile clients like TweetDeck, Twhirl or Tweetie. However, the kabiriti phones support SMS (Yeeeeeeei). So what we are going to do to make these developers cool and keep them updated is enable them to receive tweets via SMS. We shall also enable them to develop their own applications that can extend this functionality.

    Analysis: Thanks to services and open APIs, solving our problem is going to be easy. 1. To get tweets we can use the Twitter service for FREE. 2. To send SMS we shall use www.clickatell.com/ as they can send SMS to any country in the world. Besides, we could not find any local service that offers APIs for sending SMS :(. 3. To enable developers to integrate with our application so that they can extend it and build even cooler applications, we use WCF. In addition, because connectivity might be an issue, we decided to use WCF because it has inbuilt queuing features. We also chose WCF because this is a post about .NET and WCF :).

    The code - accessing the tweets: To consume Twitter's REST API we shall use the WCF REST starter kit. Like its name indicates, the REST starter kit is a set of .NET framework classes that enable developers to create and access REST-style services (like the Twitter service).
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Http;
    using System.Net;
    using System.Xml.Linq;

    namespace UG.Demo
    {
        public class TwitterService
        {
            public IList<TwitterStatus> SomeMethodName()
            {
                // Connect to the twitter service (HttpClient is part of the REST starter kit classes)
                HttpClient cl = new HttpClient("http://api.twitter.com/1/statuses/friends_timeline.xml");
                // Supply your basic authentication credentials
                cl.TransportSettings.Credentials = new NetworkCredential("ourusername", "ourpassword");
                // Issue an HTTP GET
                HttpResponseMessage resp = cl.Get();
                // Ensure we got response 200
                resp.EnsureStatusIsSuccessful();
                // Use XLinq to parse the REST XML
                var statuses = from r in resp.Content.ReadAsXElement().Descendants("status")
                               select new TwitterStatus
                               {
                                   User = r.Element("user").Element("screen_name").Value,
                                   Status = r.Element("text").Value
                               };
                return statuses.ToList();
            }
        }

        public class TwitterStatus
        {
            public string User { get; set; }
            public string Status { get; set; }
        }
    }

    Sending SMS

    public class SMSService
    {
        public void Send(string phone, string message)
        {
            HttpClient cl1 = new HttpClient();
            // The clickatell XML format for sending SMS
            string xml = String.Format("<clickAPI><sendMsg><api_id>3239621</api_id><user>ourusername</user><password>ourpassword</password><to>{0}</to><text>{1}</text></sendMsg></clickAPI>", phone, message);
            // Post form data
            HttpUrlEncodedForm form = new HttpUrlEncodedForm();
            form.Add("data", xml);
            System.Net.ServicePointManager.Expect100Continue = false;
            string uri = @"http://api.clickatell.com/xml/xml";
            HttpResponseMessage resp = cl1.Post(uri, form.CreateHttpContent());
            resp.EnsureStatusIsSuccessful();
        }
    }

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms.  However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms. INTRODUCTIONA stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particular helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more unique, but could also refer to the Mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But it we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique is often quite valuable. As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management" as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining. CREATING A WHITE LIST We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map. 
All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column. TERM                                TOKEN DATA MINING                         DATAMINING ORACLE DATA MINING                  ORACLEDATAMINING 11G                                 ORACLE11G JAVA                                JAVA CRM                                 CRM CUSTOMER RELATIONSHIP MANAGEMENT    CRM ... Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules: CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE); CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),                                  query_string  VARCHAR2(400)); We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over  the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus. DECLARE   cursor c2 is     select token, term     from my_term_token_map; BEGIN   for r_c2 in c2 loop     CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);     EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values                        (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'     using r_c2.token;   end loop; END; We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:     'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)At this point, we create a CTXRULE index on the my_thesaurus_rules table: create index my_thesaurus_rules_idx on        my_thesaurus_rules(query_string)        indextype is ctxsys.ctxrule; In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product. In first release of Oracle Utilities Customer Care And Billing (V1.x I am talking about), we used to use Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server it was not at the same granularity as txrpt or as useful. I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework but it is now accessible via JMX and also we have added more detail into the performance tracking. Most of this new design was working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer grained performance analysis. As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in RMI Port number for JMX Business and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this is information is supplied a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine: spl.runtime.management.rmi.port=6750 spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector jmx.remote.x.password.file=scripts/ouaf.jmx.password.file jmx.remote.x.access.file=scripts/ouaf.jmx.access.file ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility the JMX facility can be used. For illustrative purposes I will use jconsole but any JSR160 complaint browser or client can be used (with the appropriate configuration). 
Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or for remote connection, ensure java is in your path and jconsole.jar in your classpath) you specify the URL in the spl.runtime.management.connnector.url.default entry. For example: You are then able to track performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following: The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc) so you can see the performance of updates against read transactions. The Minimum and Maximum are also collected to give you an idea of the spread of performance. The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data. The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site. There are a number of interesting operations that can also be performed: reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset. completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can be then loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format: ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User ... CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN ... There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full et of operations and attributes. This is one of the many features I am proud that we implemented as it allows flexible monitoring of the performance of the product.

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip.

    Great IT Plans Get Executed When You Respect the Users

    Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after I talked about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst- and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, which provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization, were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools, which demonstrate a respect for the user, would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of dragging the business along, to unlock the value of their investment in PeopleSoft.

    This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.”

    We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort!

    Co-existence of PeopleSoft and Fusion Applications Just Makes Sense

    As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works at three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2 and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers.

    Next Generation Business Intelligence Is the Key to the Future

    The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making.

    To them, the path to future growth lies in the ability to analyze unstructured data rapidly and intuitively and to leverage technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles set up, people ranked against profiles, margin targets set up, margins measured, distances set up, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions.

    In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results.

    ---

    Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What’s the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the principles of the Agile Manifesto is “Individuals and interactions over processes and tools”. Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I’ve been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine and not like a professional.

    Why You Really Do Need Time-Tracking

    There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the original time estimated for the story. The Product Owner is concerned with Return On Investment. If the team has gone massively overtime on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story’s business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of their time doing testing, then you might need to bring in more testers. Or, if 10% of your team’s time is expended performing a software build at the end of each iteration, then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile

    For these reasons, we added time-tracking as a feature to SonicAgile, which is our free Agile Project Management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature, then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories, then it makes more sense to choose days. Otherwise, if you are estimating at the level of tasks, then it makes more sense to use hours. After you enable time-tracking, you can assign three estimates to a story:

    Original Estimate – This is the estimate that you enter when you first create a story. You don’t change this estimate.
    Time Spent – This is the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
    Time Left – This is the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.

    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete, then on day 3 the story should have 3 days spent and 2 days left.
    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete:

    Day     Original Estimate    Time Spent    Time Left
    Day 1   3 days               0 days        3 days
    Day 2   3 days               1 day         2 days
    Day 3   3 days               2 days        2 days
    Day 4   3 days               3 days        2 days
    Day 5   3 days               4 days        0 days

    In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of a sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern. There are long periods when nothing gets done, punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate

    Sometimes, stories take much longer than originally estimated. There’s a surprise. For example, you discover that a new software component is incompatible with existing software components. Or, you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the business value of the story. If, for example, the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate. An icon of a clock is displayed.

    Time-Tracking and Tasks

    Another optional feature of SonicAgile is tasks. If you enable Tasks in Project Settings, then you can break stories into one or more tasks. You can perform time-tracking at the level of a story or at the level of a task. If you don’t break a story into tasks, then you can enter the time left and time spent for the story. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story. Instead, the time left and time spent for a story are rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task get rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up – even when you break a story into tasks. A story’s original estimate is entered separately from the original estimates of each of the story’s tasks.

    Summary

    Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature. If you enable time-tracking then you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.
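    To make the roll-up and warning behavior described above concrete, here is a minimal, hypothetical sketch in Python. It is not SonicAgile’s actual implementation; the class and field names are invented for illustration. It only shows how time spent and time left could be summed from tasks while the story’s original estimate stays independent, and how an over-estimate warning could be detected. A story without tasks would store its own values directly, which is omitted here for brevity.

    class Task:
        def __init__(self, original_estimate, time_spent=0, time_left=None):
            self.original_estimate = original_estimate
            self.time_spent = time_spent
            # Until updated at a standup, the time left defaults to the original estimate.
            self.time_left = original_estimate if time_left is None else time_left

    class Story:
        def __init__(self, original_estimate, tasks=None):
            # The story's original estimate is entered separately and is never rolled up.
            self.original_estimate = original_estimate
            self.tasks = tasks or []

        @property
        def time_spent(self):
            # Rolled up from the story's tasks.
            return sum(t.time_spent for t in self.tasks)

        @property
        def time_left(self):
            return sum(t.time_left for t in self.tasks)

        @property
        def over_estimate(self):
            # Mirrors the clock-icon warning: time spent has exceeded the original estimate.
            return self.time_spent > self.original_estimate

    # Example: a 3-day story broken into two tasks, updated during a standup.
    story = Story(original_estimate=3, tasks=[Task(2), Task(1)])
    story.tasks[0].time_spent, story.tasks[0].time_left = 4, 1  # the first task ran over
    print(story.time_spent, story.time_left, story.over_estimate)  # 4 2 True

    In this sketch the warning fires because the rolled-up time spent (4 days) now exceeds the story’s original estimate (3 days), even though the tasks still report time left.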

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This post highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post). To find the timeslots and locations of each session, click its respective link and check the "Session Schedule" box on the top right.

    GEN8431 - General Session: What’s New in Oracle Database Application Development
    This general session takes a look at what’s been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle)

    BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python)
    This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services, and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology.

    CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications
    This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across relational database management systems, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to and consolidation with Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle)

    CON9167 - Current State of PHP and MySQL
    Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year, and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - Software Developer, Oracle)

    CON8983 - Sharding with PHP and MySQL
    In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers. This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle)

    CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python
    This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle)

    BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite!
    Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let’s talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle)

    CON3290 - Integrating Oracle Database with a Social Network
    Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction)

    CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database
    Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction)

    CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database
    These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. (Marcelle Kratochvil - CTO, Piction)

    UGF5181 - Building Real-World Oracle DBA Tools in Perl
    Perl is not normally associated with building mission-critical applications or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON3153 - Perl: A DBA’s and Developer’s Best (Forgotten) Friend
    This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON10332 - Oracle RightNow CX Cloud Service’s Connect PHP API: Intro, What’s New, and Roadmap
    Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what’s new and what’s coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle)

    CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview
    Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service’s Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)
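    Since several of these sessions (CON8983 and CON9268 in particular) revolve around sharding MySQL from application code, here is a minimal, hypothetical sketch using MySQL Connector/Python of the kind of static shard lookup those talks describe. The shard hosts, credentials, table, and modulo scheme below are invented for illustration and are not taken from the session materials.

    import mysql.connector

    # Hypothetical static shard map: each shard lives on its own MySQL server.
    SHARDS = [
        {"host": "db-shard-0.example.com", "database": "app_shard_0"},
        {"host": "db-shard-1.example.com", "database": "app_shard_1"},
    ]

    def shard_for(user_id):
        # Static sharding: the shard is derived directly from the key (modulo scheme).
        return SHARDS[user_id % len(SHARDS)]

    def fetch_user(user_id):
        shard = shard_for(user_id)
        # Connect to the server that owns this user's shard.
        cnx = mysql.connector.connect(user="app", password="secret",
                                      host=shard["host"], database=shard["database"])
        try:
            cur = cnx.cursor(dictionary=True)
            cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
        finally:
            cnx.close()

    # Example: reads and writes for user 42 are routed to db-shard-0.
    print(fetch_user(42))

    A dynamic sharding scheme, as contrasted in the CON8983 abstract, would replace the modulo lookup with a catalog table or lookup service so shards can be moved, split, or resharded without changing the key-to-shard function baked into the application.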

    Read the article
