
  • 5 Android Keyboard Replacements to Help You Type Faster

    - by Chris Hoffman
    Android allows developers to replace its keyboard with their own keyboard apps. This has led to experimentation and great new features, like the gesture-typing feature that’s made its way into Android’s official keyboard after proving itself in third-party keyboards. This sort of customization isn’t possible on Apple’s iOS or even Microsoft’s modern Windows environments. Installing a third-party keyboard is easy — install it from Google Play, launch it like any other app, and it will explain how to enable it.

    Google Keyboard. Google Keyboard is Android’s official keyboard, as seen on Google’s Nexus devices. However, there’s a good chance your Android smartphone or tablet comes with a keyboard designed by its manufacturer instead. You can install the Google Keyboard from Google Play, even if your device doesn’t come with it. This keyboard offers a wide variety of features, including a built-in gesture-typing feature, as popularized by Swype. It also offers prediction, including full next-word prediction based on your previous word, and includes voice recognition that works offline on modern versions of Android. Google’s keyboard may not offer the most accurate swiping feature or the best autocorrection, but it’s a great keyboard that feels like it belongs in Android.

    SwiftKey. SwiftKey costs $4, although you can try it free for one month. In spite of its price, many people who rarely buy apps have been sold on SwiftKey. It offers amazing auto-correction and word-prediction features. Just mash away on your touch-screen keyboard, typing as fast as possible, and SwiftKey will notice your mistakes and type what you actually meant to type. SwiftKey also now has built-in support for gesture-typing via SwiftKey Flow, so you get a lot of flexibility. At $4, SwiftKey may seem a bit pricey, but give the month-long trial a try. A great keyboard makes all the typing you do everywhere on your phone better. SwiftKey is an amazing keyboard if you tap-to-type rather than swipe-to-type.

    Swype. While other keyboards have copied Swype’s swipe-to-type feature, none have completely matched its accuracy. Swype has been designing a gesture-typing keyboard for longer than anyone else, and its gesture feature still seems more accurate than its competitors’ gesture support. If you use gesture-typing all the time, you’ll probably want to use Swype. Swype can now be installed directly from Google Play without the old, tedious process of registering a beta account and sideloading the app. Swype offers a month-long free trial, and the full version is available for $1 afterwards.

    Minuum. Minuum is a crowdfunded keyboard that is currently still in beta and only supports English. We include it here because it’s so interesting — it’s a great example of the kind of creativity and experimentation that happens when you allow developers to experiment with their own forms of keyboard. Minuum uses a tiny, minimal keyboard that frees up your screen space, so your touch-screen keyboard doesn’t hog your device’s screen. Rather than displaying a full keyboard on your screen, Minuum displays a single row of letters. Each letter is small and may be difficult to hit, but that doesn’t matter — Minuum’s smart autocorrection algorithms interpret what you intended to type rather than typing the exact letters you press. Just swipe to the right to type a space and accept Minuum’s suggestion. At $4 for a beta version with no trial, Minuum may seem a bit pricey, but it’s a great example of the flexibility Android allows. If there’s a problem with this keyboard, it’s that it’s a bit late — in an age of 5″ smartphones with 1080p screens, full-size keyboards no longer feel as cramped.

    MessagEase. MessagEase is another example of a new take on text input. Thankfully, this keyboard is available for free. MessagEase presents all letters in a nine-button grid. To type a common letter, you tap the button; to type an uncommon letter, you tap the button, hold down, and swipe in the appropriate direction. This gives you large buttons that can work well as touch targets, especially when typing with one hand. Like any other unique twist on a traditional keyboard, you have to give it a few minutes to get used to where the letters are and the new way it works. After giving it some practice, you may find this is a faster way to type on a touch screen — especially with one hand, as the targets are so large.

    Google Play is full of replacement keyboards for Android phones and tablets. Keyboards are just another type of app that you can swap in. Leave a comment if you’ve found another great keyboard that you prefer using.

    Image Credit: Cheon Fong Liew on Flickr


  • Scripting Part 1

    - by rbishop
    Dynamic Scripting is a large topic, so let me get a couple of things out of the way first. If you aren't familiar with JavaScript, I can suggest Codecademy's JavaScript series. There are also many other websites and books that cover JavaScript from every possible angle.

    The second thing we need to deal with is JavaScript as a programming language versus a JavaScript environment running in a web browser. Many books, tutorials, and websites completely blur these two together, but they are in fact completely separate. What does this really mean in relation to DRM? Since DRM isn't a web browser, there are no document, window, history, screen, or location objects. There are no events like mousedown or click. Trying to call alert('hello!') in DRM will just cause an error. Those concepts are all related to an HTML document (web page) and are part of the Browser Object Model or Document Object Model. DRM has its own object model that exposes DRM-related objects. In practice, feel free to use those sorts of tutorials or practice within your browser; many of the concepts are directly translatable to writing scripts in DRM. Just don't try to call document.getElementById in your property definition!

    I think learning by example tends to work best, so let's try getting a list of all the unique property values for a given node and its children.

        var uniqueValues = {};
        var childEnumerator = node.GetChildEnumerator();
        while (childEnumerator.MoveNext()) {
            var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
            print(propValue);
            if (propValue != null && propValue != '' && !uniqueValues[propValue])
                uniqueValues[propValue] = true;
        }
        var result = '';
        for (var value in uniqueValues) {
            result += "Found value " + value + ",";
        }
        return result;

    Now let's break this down piece by piece.

        var uniqueValues = {};

    This declares a variable and initializes it as a new empty Object. You could also have written var uniqueValues = new Object(); Why use an object here? JavaScript objects can also function as a list of keys, and we'll use that later to store each property value as a key on the object.

        var childEnumerator = node.GetChildEnumerator();
        while (childEnumerator.MoveNext()) {

    This gets an enumerator for the node's children. The enumerator allows us to loop through the children one by one. If we wanted to get a filtered list of children, we would instead use ChildrenWith() (see the sketch at the end of this excerpt). When we reach the end of the child list, the enumerator will return false for MoveNext() and that will stop the loop.

        var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
        print(propValue);
        if (propValue != null && propValue != '' && !uniqueValues[propValue])
            uniqueValues[propValue] = true;
        }

    This gets the node the enumerator is currently pointing at, then calls PropValue() on it to get the value of a property. We then make sure the prop value isn't null or the empty string, and then we make sure the value doesn't already exist as a key. Assuming it doesn't, we add it as a key with a value (true in this case, because it makes checking for an existing value faster when the value exists).

    A quick word on the print() function. When viewing the prop grid, running an export, or performing normal DRM operations, it does nothing. If you have a lot of print() calls with complicated arguments, it can slow your script down slightly, but otherwise it has no effect. But when using the script editor, all the output of print() will be shown in the Warnings area. This gives you an extremely useful debugging tool to see what exactly a script is doing.

        var result = '';
        for (var value in uniqueValues) {
            result += "Found value " + value + ",";
        }
        return result;

    Now we build a string by looping through all the keys in uniqueValues and adding each value to our string. The last step is to simply return the result. Hopefully this small example demonstrates some of the core Dynamic Scripting concepts. Next time, we can try checking for node references in other hierarchies to see if they are using duplicate property values.
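    An aside on ChildrenWith(): its exact signature isn't shown in this post, so treat the following as a hedged sketch rather than documented API. It assumes ChildrenWith() accepts a predicate function and returns something enumerable, which may differ in your DRM version.

        // Hedged sketch only — the ChildrenWith() signature is assumed, not documented here.
        // The idea: let DRM filter the children instead of testing each one manually.
        var filtered = node.ChildrenWith(function (child) {
            var v = child.PropValue("Custom.testpropstr1");
            return v != null && v != ''; // keep only children with a non-empty value
        });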


  • Which of these algorithms is best for my goal?

    - by JonathonG
    I have created a program that restricts the mouse to a certain region based on a black/white bitmap. The program is 100% functional as-is, but uses an inaccurate, albeit fast, algorithm for repositioning the mouse when it strays outside the area. Currently, when the mouse moves outside the area, basically what happens is this:

    1. A line is drawn between a pre-defined static point inside the region and the mouse's new position.
    2. The point where that line intersects the edge of the allowed area is found.
    3. The mouse is moved to that point.

    This works, but only works perfectly for a perfect circle with the pre-defined point set in the exact center. Unfortunately, this will never be the case. The application will be used with a variety of rectangles and irregular, amorphous shapes. On such shapes, the point where the drawn line intersects the edge will usually not be the closest point on the shape to the mouse.

    I need to create a new algorithm that finds the closest point to the mouse's new position on the edge of the allowed area. I have several ideas about this, but I am not sure of their validity, in that they may have far too much overhead. While I am not asking for code, it might help to know that I am using Objective-C / Cocoa, developing for OS X, as I feel the language being used might affect the efficiency of potential methods. My ideas are:

    1. Using a bit of trigonometry to project lines would work, but that would require some kind of intense algorithm to test every point on every line until it found the edge of the region. That seems too resource-intensive, since there could be something like 200 lines that would each have as many as 200 pixels checked for black/white.

    2. Using something like an A* pathing algorithm to find the shortest path to a black pixel. However, A* seems resource-intensive, even though I could probably restrict it to only checking roughly in one direction. It also seems like it will take more time and effort than I have available to spend on this small portion of the much larger project I am working on; correct me if I am wrong and it would not be a significant amount of code (100 lines or around there).

    3. Mapping the border of the region before the application begins running the event tap loop. I think I could accomplish this by using my current line-based algorithm to find an edge point, and then initiating an algorithm that checks all 8 pixels around that pixel, finds the next border pixel in one direction, and continues to do this until it comes back to the starting pixel. I could then store that data in an array to be used for the entire duration of the program, and have the mouse-repositioning method check the array for the closest pixel on the border to the mouse target position. (A rough sketch of this nearest-border lookup appears after the code below.)

    That last method would presumably execute its initial border mapping fairly quickly. (It would only have to map between 2,000 and 8,000 pixels, which means 8,000 to 64,000 checked, and I could even permanently store the data to make launching faster.) However, I am uncertain as to how much overhead it would take to scan through that array for the shortest distance for every single mouse move event. I suppose there could be a shortcut: restrict the number of elements in the array that will be checked to a variable number starting with the intersecting point on the line (from my original algorithm), and raise/lower that number to experiment with the overhead/accuracy tradeoff.

    Please let me know if I am overthinking this and there is an easier way that will work just fine, or which of these methods would be able to execute something like 30 times per second to keep mouse movement smooth, or if you have a better/faster method. I've posted relevant parts of my code below for reference, and included an example of what the area might look like. (I check for color value against a loaded bitmap that is black/white.)

        //
        // This part of my code runs every single time the mouse moves.
        //
        CGPoint point = CGEventGetLocation(event);
        float tX = point.x;
        float tY = point.y;
        if (is_in_area(tX, tY, mouse_mask)) {
            // target is inside O.K. area, do nothing
        } else {
            CGPoint target;
            // point inside restricted region:
            float iX = 600; // inside x
            float iY = 500; // inside y
            // delta to midpoint between iX,iY and tX,tY
            float dX;
            float dY;
            float accuracy = .5; // accuracy to loop until reached
            do {
                dX = (tX - iX) / 2;
                dY = (tY - iY) / 2;
                if (is_in_area((tX - dX), (tY - dY), mouse_mask)) {
                    iX += dX;
                    iY += dY;
                } else {
                    tX -= dX;
                    tY -= dY;
                }
            } while (abs(dX) > accuracy || abs(dY) > accuracy);
            target = CGPointMake(roundf(tX), roundf(tY));
            CGDisplayMoveCursorToPoint(CGMainDisplayID(), target);
        }

    Here is is_in_area():

        bool is_in_area(NSInteger x, NSInteger y, NSBitmapImageRep *mouse_mask) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSUInteger pixel[4];
            [mouse_mask getPixel:pixel atX:x y:y];
            if (pixel[0] != 0) {
                [pool release];
                return false;
            }
            [pool release];
            return true;
        }


  • grid layout default on wordpress theme

    - by nathan philpott
    I'm having trouble with a multi-layout option on a WordPress theme, Sight (http://sight.wpshower.com/). Visitors have the option of a grid or a list layout at the click of a button. At present the list layout is the default, and I am interested in making the grid layout the default. Below is some of the PHP. I tried simply swapping the word 'grid' for 'list', but although this does work to an extent, if done on the loop.php page it removes the a:hover functions on the post boxes in the grid format, and if done on index.php it switches the buttons on the main index page. Any ideas? (A hedged sketch of one possibility appears after the code below.)

    loop.php:

        <div id="loop" class="<?php if ($_COOKIE['mode'] == 'grid') echo 'grid'; else echo 'list'; ?> clear">
        <?php while ( have_posts() ) : the_post(); ?>
            <div <?php post_class('post clear'); ?> id="post_<?php the_ID(); ?>">
                <?php if ( has_post_thumbnail() ) :?>
                    <a href="<?php the_permalink() ?>" class="thumb"><?php the_post_thumbnail('thumbnail', array( 'alt' => trim(strip_tags( $post->post_title )), 'title' => trim(strip_tags( $post->post_title )), )); ?></a>
                <?php endif; ?>
                <div class="post-category"><?php the_category(' / '); ?></div>
                <h2><a href="<?php the_permalink() ?>"><?php the_title(); ?></a></h2>
                <!-- <div class="post-meta">by <span class="post-author"><a href="<?php echo get_author_posts_url(get_the_author_meta('ID')); ?>" title="Posts by <?php the_author(); ?>"><?php the_author(); ?></a></span> on <span class="post-date"><?php the_time(__('M j, Y')) ?></span> <em>&bull; </em><?php comments_popup_link(__('No Comments'), __('1 Comment'), __('% Comments'), '', __('Comments Closed')); ?> <?php edit_post_link( __( 'Edit entry'), '<em>&bull; </em>'); ?> </div> -->
                <?php edit_post_link( __( 'Edit entry'), '<em>&bull; </em>'); ?>
                <div class="post-content"><?php if (function_exists('smart_excerpt')) smart_excerpt(get_the_excerpt(), 55); ?></div>
            </div>
        <?php endwhile; ?>
        </div>
        <?php endif; ?>

    index.php:

        <?php get_header(); ?>
        <div class="content-title">
            Projects
            <a href="javascript: void(0);" id="mode"<?php if ($_COOKIE['mode'] == 'grid') echo ' class="flip"'; ?>></a>
        </div>
        <?php query_posts(array( 'post__not_in' => $exl_posts, 'paged' => $paged, ) ); ?>
        <?php get_template_part('loop'); ?>
        <?php wp_reset_query(); ?>
        <?php get_template_part('pagination'); ?>
        <?php get_footer(); ?>
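    For illustration only (not from the original post): since both files key off $_COOKIE['mode'], one hedged sketch of making grid the default is to treat a missing cookie as 'grid' in a single place, rather than swapping the class-name literals. Swapping the literals flips the buttons too, because the same cookie test drives both the layout class and the toggle.

        <?php
        // Hedged sketch: default to 'grid' when no mode cookie has been set yet.
        $mode = isset($_COOKIE['mode']) ? $_COOKIE['mode'] : 'grid';
        ?>
        <div id="loop" class="<?php echo ($mode == 'grid') ? 'grid' : 'list'; ?> clear">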


  • The Sound of Two Toilets Flushing: Constructive Criticism for Virgin Atlantic Complaints Department

    - by Geertjan
    I recently had the experience of flying from London to Johannesburg and back with Virgin Atlantic. The good news was that it was the cheapest flight available and that the takeoff and landing were absolutely perfect. Hence I really have no reason to complain. Instead, I'd like to offer some constructive criticism, which hopefully Richard Branson will find sometime while googling his name. Or maybe someone from the Virgin Atlantic Complaints Department will find it; whatever, I just want to put this information out there.

    Arrangement of restroom facilities. Maybe next time you design an airplane, consider not putting your toilets at a right angle right next to your rows of seats. Being able to reach, without even needing to stretch your arm, from your seat to close, yet again, a toilet door that someone, someone obviously sitting very far from the toilets, carelessly forgot to close is not an indicator of quality interior design. Have you noticed how all other airplanes have their toilets in a cubicle separated from the rows of seats? On those airplanes, people sitting in the seats near the toilets are not constantly being woken up throughout the night whenever someone enters/exits the toilet, whenever the light in the toilet is suddenly switched on, and whenever one of the toilets flushes. Bonus points for Virgin Atlantic passengers in the seats adjoining the toilets come when multiple toilets are flushed simultaneously and multiple passengers enter/exit them at the same time, a bit like an unasked-for, low-budget musical of suddenly illuminated grumpy people in crumpled clothes. What joy that brings at 3 AM is hard to describe.

    Seats with extra leg room. You know how other airplanes have seats with extra leg room? You know what those seats tend to have? Extra leg room. It's really interesting how Virgin Atlantic's seats with extra leg room actually have no extra leg room at all. It should have been a giveaway, the fact that these special seats are found in the same rows as the standard seats, rather than on the cusp of real glory, which is where most airlines put their extra-leg-room seats, with the only actual difference being that they have a slightly different color. Had you called them "seats with a different color" (i.e., almost not quite green, rather than something vaguely hinting at blue), at least I'd have known what I was getting. Picture the joy at 3 AM, rudely awakened from nightmarish slumber, partly grateful to have been released from a grayish dream of faceless zombies resembling one or two of those in a recent toilet line, by multiple adjoining toilets flushing simultaneously, while you're sitting in a seat with extra leg room that has exactly as much leg room as the seats in neighboring rows. You then have a choice of things to be sincerely annoyed about.

    Food from the '80s. In the '80s, airplane food came in soggy containers, and even breakfast, the most important meal of the day, was a sad heap of vaguely gray colors. The culinary highlight tended to be a squashed tomato, which must have been mashed to a pulp with a brick prior to being regurgitated by a small furry animal, and there was also always a piece of immensely horrid pumpkin, as well as a slice of spongy something you'd never seen before. Sausages and mash at 6 AM on an airplane was always a heavy lump of horribleness. Thankfully, all airlines throughout the world changed from this puke-inducing strategy around 1987 sometime. Not Virgin Atlantic, of course. The fatty sausages and mash are still there, bringing you flashbacks to Duran Duran, which is what you were listening to (on your Walkman) the last time you saw them on an airplane. Even the golden oldie "squashed tomato attached by slime to three wet peas" is on the menu. How wonderful to have all this in a cramped seat with a long row of early-morning bleariness lined up for the toilets, right at your side, bumping into your elbow, groggily, one by one, one after another, more and more, fumble-open-door-silence-flush-fumble-open-door, and on and on, while you tentatively push your fork through a soggy pile of colorless mush, fighting the urge to throw up on the stinky socks of whatever nightmarish zombie is bumping into your elbow at the time.

    But, then again, the plane landed without a hitch, in fact, extremely smoothly, so I'm certainly not blaming the pilots.


  • .NET development on a Retina MacBook Pro with Windows 8

    - by Jeff
    I remember sitting in Building 5 at Microsoft with some of my coworkers when one of them came in with a shiny new 11” MacBook Air. It was nearly two years ago, and we found it pretty odd that the OEMs building Windows machines sucked at industrial design in a way that defied logic. While Dell and HP were in a race to the bottom building commodity crap, Apple was staying out of the low-end market completely and focusing on better design. In the process, they managed to build machines people actually wanted, and maintain an insanely high margin in the process.

    I stopped buying the commodity crap and custom builds in 2006, when Apple went Intel. As a .NET guy, I was still in it for Microsoft’s stack of development tools, which I found awesome, but had back-to-back crappy laptops from HP and Dell. After that original 15” MacBook Pro, I also had a Mac Pro tower (that I sold after three years for $1,500!), a 27” iMac, and my favorite, a 17” MacBook Pro (the unibody style) with an SSD added from OWC. The 17” was a little much to carry around because it was heavy, but it sure was nice getting as much as eight hours of battery life, and the screen was amazing.

    When the rumors started about a 15” model with a “retina” screen inspired by the Air, I made up my mind I wanted one, and ordered it the day it came out. I sold my 17”, after three years, for $750 to a friend who is really enjoying it. I got the base model with the upgrade to 16 gigs of RAM. It feels solid for being so thin, and if you’ve used the third-generation iPad or the newer iPhone, you’ll be just as thrilled with the screen resolution. I’m typically getting just over six hours of battery life while running a VM, but Parallels 8 allegedly makes some power improvements, so we’ll see what happens. (It was just released today.)

    The nice thing about VMs is that you can run more than one at a time. Primarily I run the Windows 8 VM with four cores (the laptop is quad-core, but has 8 logical cores due to hyperthreading, or whatever Intel calls it) and 8 gigs of RAM. I also have a Windows Server 2008 R2 VM I spin up when I need to test stuff in a “real” server environment, and I give it two cores and 4 gigs of RAM. The Windows 8 VM spins up in about 8 seconds. Visual Studio 2012 takes a few more seconds, but count part of that as the “ReSharper tax” as it does its startup magic.

    The real beauty, the thing I looked forward to most, is that beautifully crisp C# text. Consolas has never looked as good as it does at 10 pt. on this display. You know how it looks great at 80 pt. when conference speakers demo stuff on a projector? Think that sharpness, only tiny. It’s just gorgeous. Beyond that, everything is just so responsive and fast. Builds of large projects happen in seconds, hundreds of unit tests run in seconds… you just don’t spend a lot of time waiting for stuff. It’s kind of painful to go back to my 27” iMac (which would be better if I put an SSD in it before its third birthday).

    Are there negatives? A few minor issues, yes. As is the case with OS X, not everything scales right. You’ll see some weirdness at times with splash screens and icons and such. Chrome’s text rendering (in Windows) is apparently not aware of how to deal with higher DPIs, so text is fuzzy (the OS X version is super sharp, however). You’ll also have to do some fiddling with keyboard settings to use the Windows 8 keyboard shortcuts.

    Overall, it’s as close to a no-compromise development experience as I’ve ever had. I’m not even going to bother with Boot Camp because the VM route already exceeds my expectations. You definitely get what you pay for. If this one also lasts three years and I can turn around and sell it, it’s worth it for something I use every day.


  • Career guidance/advice for Junior-level Software Engineer [closed]

    - by John Do
    I have quite a few questions on my mind, so please bear with me. Please don't feel obligated to answer all of them; any you choose will do. I'd appreciate it if you could share some insight on any of these.

    Before I begin, some context: I currently have almost two years of professional experience as a Software Engineer, mainly developing software in Java. At this point, I feel that I have reached the peak of my career growth at the company I am at, and therefore I am looking for a new job, ideally again as a Software Engineer. I have been interviewing casually for the past few months but have not had luck with companies I have a passion for. So, in no particular order:

    1) In general, what are your thoughts on graduate degrees in CS / Software Engineering? How much do they influence a salary increase, and do you think they're beneficial when working on real-world problems? I get the sense that a graduate degree in the field is trivial unless you really have a passion for research.

    2) In general, in professional practice, how often have you had to write your own data structures and "complex" algorithms from scratch? In my own work, I have found myself relying mainly on third-party frameworks and the Java standard library to implement solutions as per business requirements. What are your thoughts on this?

    3) In terms of the resume, I feel the most ambivalent here. I want to be able to "embellish" my resume to a certain extent so that it stands out from others', but at the same time I do not want to exaggerate my abilities. How do you strike a balance here? For example: I say that I am proficient in Java with data structures and algorithms. This is obviously a subjective and relative statement. I've taken the classes in my undergrad, and I've applied it in my work experience. What I consider "proficiency" can be seen as junior-level to others. How do you know what to say? Most of the time, recruiters (with no technical background) will be looking for keywords that stand out. This leads me to my next question (4).

    4) Just from interviewing for the past few months (and getting plenty of rejections), I've come to realize that I may not be as proficient in data structures and algorithms as I thought I was. Do you think it's a good idea to remove "proficient in Java / data structures and algorithms" from my resume? I feel that being too honest on the resume will impede me from scoring opportunities to even have an interview with top-notch companies. What are your thoughts?

    5) What is the absolute "must-have" knowledge going into a technical interview? I've been practicing several algorithmic and data structure problems now, and I feel that my ability to solve arbitrary problems efficiently has not gotten significantly better. Do you think these abilities are something innate: either you have it in you, or you don't? How can you teach yourself to learn, if you will?

    6) How easy is it to move from one industry or function to the next? I work mainly with backend technologies and I'm now interested in working with the frontend, i.e., JavaScript, jQuery, PHP, or even mobile development. In your own experience, how did you avoid getting pigeonholed in your career? I feel that the choices you make now ultimately decide your future. As cliche as it sounds, I think it may be true. Here's what I mean: if you've worked mainly as a backend engineer, people are interested in you doing the same thing, since you've already accumulated experience in that function. How do you get experience in a new function if people won't accept you because you don't already have it? It's a catch-22, you see... Are side projects the only real way to help you move from one function to another that you're truly interested in? For example: I could start writing my own mobile applications, even though I've worked mainly on the backend.

    Thanks so much for the long read. As a relatively new engineer in the real world, I am very humble and would like those who are experienced to shed some light. Thank you so much.


  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about, because I don't think companies realize the effects of their service policies on loyal customers.

    Bad Customer Service Example #1. Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they're offering, and many times advertisers are offering alternatives to what is already an excellent product. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, though, we were looking to upgrade our TiVo, and this deal was attractive for several reasons:

    1. I don't want to pay a huge amount up-front for the device, so paying a monthly amount is attractive to me.
    2. My entertainment is almost all on a single invoice; I'm no longer going to be billed separately by suddenLink and TiVo.
    3. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade.

    I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can't remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions, and they are very responsive to outages.

    Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I'm calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I've noticed with companies: if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because I should never be cancelling because I am unhappy. And if you ever want my business again, or more importantly a reference, then you'd better make the exit door open just as easily as the entry door.

    After quite some time on hold, I talked to "Victor," who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo. Cool, I said, but what about the cost? He said there was no extra cost. This was also attractive to me, because I paid for my TiVo and it would be good to use it for something at least. That was four months ago.

    This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. Surely they would do whatever it takes to keep my business, through TiVo or through suddenLink.

    After quite some time on hold, I was able to talk to a customer service representative, "Les." I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I'm still with TiVo; I just wanted a single bill and to take advantage of the pay-over-time option. "Les" told me that he was very sorry to hear that I'm leaving TiVo, to which I responded, again, that I wasn't leaving TiVo; I just want one invoice and to take advantage of the pay-over-time. So, after explaining that I had requested a termination of the non-suddenLink account (TiVo can see both, of course), I was put on hold again for quite some time while my refund was "approved." "Les" said that he could see my cancellation request back in July. Note that it is now November, so they have billed me inappropriately four times.

    After quite some time, he came back on the line and told me that he was able to "get me most of my money back." He got approval to refund 90 days. So even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file, and they admit overbilling me, I am only going to get "most" of my money back.

    To top this experience off, when we were ready to hang up, "Les" told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to "Les" that I have not left TiVo; I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need.

    Here is what TiVo Customer Service accomplished on those two calls: I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn't be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!


  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4-based premier certifications (MCPD). The problem was, this gave a window of about 6 months for partners to update their employees' certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed.

    Microsoft is always open that certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so; they've been retired for a long time now. But now we're seeing a new tactic by Microsoft: shifting gears away from certifications that speak to what industry needs, and towards the Windows 8 agenda.

    Consider that currently the premier development certification is the Microsoft Certified Professional Developer, which comes in three flavours: Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren't just valid but necessary in building Microsoft applications.

    But the MCPD is being replaced with our old friend, the Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD: Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry, VB folks, it's time to embrace curly braces, whether they be JavaScript or C#.

    Consider too the skills being assessed for the Windows Store Apps certifications: "MCSD: Windows Store Apps Using HTML5" and "MCSD: Windows Store Apps Using C#". (Image source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx, Nov 21, 2012.) If you look at the skills being tested in each exam, you'll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, and programming for the microphone and camera – all very Windows 8-focused items.

    Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid-2013. Assuming they somehow meet that (it's a pretty lofty goal), there are years of traditional desktop-based development that will still be required at some level. For those thinking they'll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print.) And while details haven't been finalized, it's a safe bet that MCPD certifications eventually won't count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and developers is that certification for desktop development is going to be limited to Windows Store Apps, unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert.

    Web Application Development – It's Not All Bad. There are big changes on the web side of certification, but I actually see these changes as being for the good! Check out the new exam requirements for MCSD – Web Applications. (Image source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx, Nov 21, 2012.) We now *start* with HTML5, JavaScript, and CSS3! Now I'm sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states "MVC Web Applications" shows that Web Forms is truly legacy and deprecated. That's not to say there aren't those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they'd better get on the MVC bandwagon if they want to stay current. Fantastic!

    And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It's no secret that there's been a huge push to get developers onto Azure. I don't see this as being a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn't be as prominent, though, if not for the Windows 8 Store App play, where HTML5 is a first-class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developer's loss is the web developer's gain.

    Get Ready for Changes. In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that the licenses you would have received from having both as Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we're seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!


  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working with a software company for about 6 months now. I like the projects I work on there, and I really like all the people there except for one guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't care about much), but things get tricky when he is your manager. In general I am all okay, but there are times when I feel I am not being treated fairly:

    He doesn't give much thought to his own mistakes, but when I do something similar, he is super critical. Recently he went as far as to say, "I am not sure if I can trust you with this feature." The details of this specific case are: I was working on this feature, I was already a couple of hours over my normal working hours, and I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all changes to the central repo (in case my hard drive crashes). So I pushed the change, and the ticket was marked as "to be tested". The next day I came in, he sat next to me, started complaining, and said what I quoted above. I really didn't know what to say; I tried to explain to him that the ticket was still being worked on, but he didn't seem to listen.

    He interrupts me in between when I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work were super important and I were just wasting his time. He asks me to accumulate all my questions and then ask him all together, which is not always possible, as you sometimes need a clarification before you can continue with a feature implementation. And when I am coding, he talks on the phone with his customers right next to me (when he could go to the meeting room with his laptop) and doesn't care.

    He made me switch to a whole new IDE (from NetBeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in NetBeans as well!). I didn't make a big deal out of it, as I am equally comfortable working with the new IDE, but I couldn't see the science behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name red in the places where it is used. I told him that I do not have a problem, since I always search for method usage in the project and make sure it's updated. IDEs even have refactoring features for exactly that, but...

    I recently implemented a feature for a project, and I was happy about it, and considering him a senior, I asked for his comments about the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said, "ummm, your program will crash if JS is disabled." He was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that, and then he said, "oh, okay." BUT, the funny thing is, a few days back he implemented something, I objected with "But that would not run if JS is disabled," and his response was, "We don't have to care about people who disable JS." :-/

    Once he asked me to investigate if there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript / jQuery, and his reaction was, "ah, that's ugly and hacky, not acceptable." Two days later, I saw that feature implemented in exactly the way I had suggested. The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should be respectful; I know what's hacky, and if I am suggesting something hacky, there must be a reason for it.

    There are plenty of other reasons / examples where I feel I am not being treated fairly. I want your advice as to what it is that I am doing wrong and how to deal with such a situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy face to face, but it worries me that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the kind of guy who thinks he can be wrong too). I am also considering talking to the other co-founder, but I am not sure how he will take it (as both founders have been friends forever). Thanks for reading the long message; I really appreciate your help.


  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
              <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has a setItems and getItems method defined that is accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable - the #{row} reference - is filled with the data object displayed in the currently rendered table row. The managed bean getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        //this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
          FacesContext fctx = FacesContext.getCurrentInstance();
          ELContext elctx = fctx.getELContext();
          ExpressionFactory efactory = fctx.getApplication().getExpressionFactory();
          ValueExpression ve = efactory.createValueExpression(elctx, "#{row}", Object.class);
          Row rw = (Row) ve.getValue(elctx);

          //use one of the row attributes to determine which list to query and
          //show in the current af:selectOneChoice list
          // ...
          ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
          for( ... ){
            SelectItem item = new SelectItem();
            item.setLabel(...);
            item.setValue(...);
            alsi.add(item);
          }
          return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component - af:selectOneChoice in this example - is instantiated once for the column and then repeatedly used to display the cell data for individual table rows. This, however, means that you cannot refresh a single select-one-choice component in a table to change its list values; instead, the whole table needs to be refreshed, rerunning the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, with business services with low-latency data fetching (e.g. ADF Business Components and EJB), and with server-side caching strategies for the queried data, e.g. storing queried list data in a managed bean in session scope (a sketch of this idea follows below).
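    The session-scope caching mentioned above is not spelled out in the original post; as a hedged illustration only, a minimal sketch inside the same managed bean might cache each queried list under a key derived from the row attribute, so repeated stamping does not re-query the business service. The cache field, the discriminator key, and the queryListFromService helper below are assumptions, not ADF API.

        //Hedged sketch: cache lists per discriminator value in a session-scoped bean.
        private Map<String, ArrayList<SelectItem>> listCache =
            new HashMap<String, ArrayList<SelectItem>>();

        private ArrayList<SelectItem> getItemsFor(String discriminator) {
          ArrayList<SelectItem> cached = listCache.get(discriminator);
          if (cached == null) {
            //assumed helper that actually hits the business service
            cached = queryListFromService(discriminator);
            listCache.put(discriminator, cached);
          }
          return cached;
        }

    With this shape, getItems() would derive the discriminator from the #{row} object and delegate to getItemsFor(), paying the query cost once per distinct value rather than once per row render.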


  • A*, Tile costs and heuristic; How to approach

    - by Kevin Toet
    I'm doing exercises in tile games and AI to improve my programming. I've written a highly unoptimised pathfinder that does the trick, and a simple tile class. The first problem I ran into was that the heuristic was rounded to ints, which resulted in very straight paths. Resorting to a Euclidean heuristic seemed to fix it, as opposed to the Manhattan approach. The second problem I ran into was when I tried adding tile costs. I was hoping to use the values of the flags that I set on the tiles, but the values were too small for the pathfinder to consider them a huge obstacle, so I increased their values; but that breaks the flags a certain way, and no paths were found anymore.

    So my questions, before posting the code, are:

    1. What am I doing wrong that the Manhattan heuristic isn't working?
    2. What ways can I store the tile costs? I was hoping to (ab)use the enum flags for this.
    3. The pathfinder isn't considering the chance that no path is available; how do I check this? (A sketch of one possible guard appears after the code.)

    Any code optimisations are welcome, as I'd love to improve my coding.

        public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map )
        {
            return FindPath( startTile, endTile, map, TileFlags.WALKABLE );
        }

        public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map, TileFlags acceptedFlags )
        {
            List<Tile> open = new List<Tile>();
            List<Tile> closed = new List<Tile>();
            open.Add( startTile );
            Tile tileToCheck;
            do
            {
                tileToCheck = open[0];
                closed.Add( tileToCheck );
                open.Remove( tileToCheck );
                for( int i = 0; i < tileToCheck.neighbors.Count; i++ )
                {
                    Tile tile = tileToCheck.neighbors[ i ];
                    //has the node been processed
                    if( !closed.Contains( tile ) && ( tile.flags & acceptedFlags ) != 0 )
                    {
                        //Not in the open list?
                        if( !open.Contains( tile ) )
                        {
                            //Set G
                            int G = 10;
                            G += tileToCheck.G;
                            //Set Parent
                            tile.parentX = tileToCheck.x;
                            tile.parentY = tileToCheck.y;
                            tile.G = G;
                            //tile.H = Math.Abs(endTile.x - tile.x ) + Math.Abs( endTile.y - tile.y ) * 10; //TODO omg wtf and other incredible stories
                            tile.H = Vector2.Distance( new Vector2( tile.x, tile.y ), new Vector2( endTile.x, endTile.y ) );
                            tile.Cost = tile.G + tile.H + (int)tile.flags;
                            //Calculate H; Manhattan style
                            open.Add( tile );
                        }
                        //Update the cost if it is
                        else
                        {
                            int G = 10; //cost of going to non-diagonal tiles
                            G += map[ tile.parentX, tile.parentY ].G;
                            //If this path is shorter (G cost is lower) then change
                            //the parent cell, G cost and F cost.
                            if ( G < tile.G ) //if G cost is less,
                            {
                                tile.parentX = tileToCheck.x; //change the square's parent
                                tile.parentY = tileToCheck.y;
                                tile.G = G; //change the G cost
                                tile.Cost = tile.G + tile.H + (int)tile.flags; // add terrain cost
                            }
                        }
                    }
                }
                //Sort costs
                open = open.OrderBy( o => o.Cost ).ToList();
            } while( tileToCheck != endTile );

            closed.Reverse();
            List<Tile> validRoute = new List<Tile>();
            Tile currentTile = closed[ 0 ];
            validRoute.Add( currentTile );
            do
            {
                //Look up the parent of the current cell.
                currentTile = map[ currentTile.parentX, currentTile.parentY ];
                currentTile.renderer.material.color = Color.green;
                //Add tile to list
                validRoute.Add( currentTile );
            } while ( currentTile != startTile );
            validRoute.Reverse();
            return validRoute;
        }

    And my Tile class:

        [Flags]
        public enum TileFlags : int
        {
            NONE = 0,
            DIRT = 1,
            STONE = 2,
            WATER = 4,
            BUILDING = 8,
            //handy
            WALKABLE = DIRT | STONE | NONE,
            endofenum
        }

        public class Tile : MonoBehaviour
        {
            //Tile Properties
            public int x, y;
            public TileFlags flags = TileFlags.DIRT;
            public Transform cachedTransform;

            //A* properties
            public int parentX, parentY;
            public int G;
            public float Cost;
            public float H;
            public List<Tile> neighbors = new List<Tile>();

            void Awake()
            {
                cachedTransform = transform;
            }
        }
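    For reference on question 3 (this is not from the original post): a hedged sketch of the usual guard. The do/while above assumes a path always exists; draining the open list instead lets the search fail cleanly. BuildRoute and ExpandNeighbors are hypothetical names standing in for the walk-back and neighbor-expansion code already shown above.

        // Hedged sketch: terminate when the frontier is exhausted instead of looping forever.
        public static List<Tile> FindPathOrNull( Tile startTile, Tile endTile, Tile[,] map, TileFlags acceptedFlags )
        {
            List<Tile> open = new List<Tile> { startTile };
            List<Tile> closed = new List<Tile>();
            while ( open.Count > 0 ) // empty frontier means endTile is unreachable
            {
                Tile tileToCheck = open[0];
                open.RemoveAt( 0 );
                closed.Add( tileToCheck );
                if ( tileToCheck == endTile )
                    return BuildRoute( startTile, endTile, map ); // same parent walk-back as above
                // same body as the for loop over tileToCheck.neighbors above
                ExpandNeighbors( tileToCheck, endTile, map, acceptedFlags, open, closed );
                open = open.OrderBy( o => o.Cost ).ToList();
            }
            return null; // no path; the caller must handle null
        }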


  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration, e.g., set-up, re-training, awareness activities, stoppage to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, etc. At what point does continuous integration pay for itself?

    EDIT: These were my findings. The set-up was CruiseControl.NET with NAnt, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup:

    Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, data quality, or another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, a third-party server being down, etc.).

    Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way, and there was no server replacement.

    Cost of fixing unit tests: a red light due to a data quality issue could be an indicator of a badly written unit test. So, data-dependent unit tests were re-written to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment to be able to accurately run the unit tests. It makes sense to say that by making the data more robust, a test dependent on that data becomes more robust too. Of course, this worked well!

    Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit test coverage. There were thousands of methods that had no unit tests, so a sizeable number of man-days would have been needed to create them. As this would be too difficult to justify in a business case, it was decided that unit tests would be required for any new public method going forward. Methods that did not have a unit test were termed 'potentially infra-red'. An interesting point here is that static methods were a sticking point: it was unclear how one could uniquely determine how a specific static method had failed.

    Cost of bespoke releases: NAnt scripts only go so far. They are not that useful for, say, CMS-dependent builds for EPiServer, or for any UI-oriented database deployment. These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I consider these to be unnecessary, as a build master can perform these tasks manually at release time, especially with a one-man band and a small build.

    So, single-step builds have not justified the use of CI in my experience. What about the more complex, multi-step builds? These can be a pain to build, especially without a NAnt script. Even having created one, they were no more successful. The cost of fixing the red-light issues outweighed the benefits. Eventually, developers lost interest and questioned the validity of the red light.

    Having given it a fair try, I believe that CI is expensive and that there is a lot of working around the edges instead of just getting the job done. It's more cost-effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves, because the processes that he follows would ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable. All this is reviewed. If this is not happening, then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizable systems.

    The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid for fixing as part of a client service agreement once the warranty period expires anyway.


  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
    The sample demonstrates using orchestration for throttling and using context routing. Usually throttling is implemented on the host level (in BizTalk 2010 we can also using the host instance level throttling). Here is demonstrated the throttling with orchestration convoy that slows down message flow from some customers. Sample implements sort of quality service agreement layer for different kind of customers. The sample demonstrates the context routing between orchestrations. It has several advantages over the content routing. For example, we don’t have to create the property schema and promote properties on the schemas; we don’t have to change the message content to change routing. Use case:  The BizTalk application has a main processing orchestration that process all input messages. The application usually works as an OLTP application. Input messages came in random order without peaks, typical scenario for the on-line users. But sometimes the big data batch payloads come. These batches overload processing orchestrations. All processes, activated by on-line users after the payload, come to the same queue and are processed only after the payload. Result is on-line users can see significant delay in processing. It can be minutes or hours, depending of the batch size. Requirements: On-line user’s processing should work without delays. Big batches cannot disturb on-line users. There should be higher priority for the on-line users and the lower priority for the batches. Design: Decision is to divide the message flow in two branches, one for on-line users and second for batches. Branch with batches provides messages to the processing line with low priority, and the on-line user’s branch – with high priority. All messages are provided by hi-speed receive port. BTS.ReceivePortName context property is used for routing. The Router orchestration separates messages sent from on-line users and from the batch messages. But the Router does not use the BizTalk provided value of this property, the Router set up this value by itself. Router uses the content of the messages to decide if it is from on-line users or from batches. The message context property the BTS.ReceivePortName is changed respectively, its value works as a recipient address, as the “To” address for the next recipient orchestrations. Those next orchestrations are the BatchBottleneck and the MainProcess orchestrations. Messages with context equal “ToBatch” are filtered up by the BatchBottleneck orchestration. It is a unified convoy orchestration and it throttles the message flow, delaying the message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to the “ToProcess” and sends messages one after another with small delay in between. Delay can be configured in the BizTalk config file as:                 <appSettings>                                 <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/>                 </appSettings>   Of course, messages with context equal “ToProcess” are filtered up by the MainProcess orchestration.   NOTES: Filters with string values: In Orchestrations (the first Receive shape in orchestration) use string values WITH quotes; in Send Ports use string values WITHOUT quotes. Filters on the Send Ports are dynamic; we can change them in run-time. Filters on the Orchestrations are static; we can change them only in design-time. 
To check for the existence of a promoted property inside an orchestration, use an Expression shape with a construction like this: if (BTS.ReceivePortName exists myMessage) { …; } This is not possible in a Message Assignment shape, because using an "if" statement inside a Message Assignment is prohibited. Several predefined context properties behave in specific ways. MessageTracking.OriginatingMessage and XMLNORM.DocumentSpecName, for example, require that internal rules be applied to their format or usage, and the MessageTracking.* properties require tracking to be enabled; you can get unexpected run-time errors in some cases. My recommendation is to use a very limited set of the predefined context properties. To "attach" a new promoted property to a message, we have to use correlation; the correlation type should include this property. [Here is a good explanation by Saravana] The sample code is here [sorry, temporary troubles with CodePlex].
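As a rough sketch of the pattern described above, the Router's Message Assignment shape and the BatchBottleneck's delay logic might look something like this (my illustration, not code from the sample; msgIn, msgOut and delayMillisec are hypothetical names):

// Message Assignment shape in the Router (inside a Construct Message shape):
// copy the inbound message, then overwrite the routing context property.
msgOut = msgIn;
msgOut(BTS.ReceivePortName) = "ToBatch";   // or "ToProcess" for on-line traffic

// Expression shape in the BatchBottleneck: read the configured delay from the
// host's BTSNTSvc.exe.config (delayMillisec is an Int32 orchestration variable).
delayMillisec = System.Convert.ToInt32(
    System.Configuration.ConfigurationManager.AppSettings[
        "GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec"]);

A Delay shape can then pause the convoy with something like new System.TimeSpan(0, 0, 0, 0, delayMillisec) before the next send.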

    Read the article

  • Vendors: Partners or Salespeople?

    - by BuckWoody
    I got a great e-mail from a friend that asked about how he could foster a better relationship with his vendors. So many times when you work with a vendor it’s more of a used-car sales experience than a partnership – but you can actually make your vendor more of a partner, as long as you both set some ground-rules at the start. Sit down with your vendor, and have a heart-to-heart talk with them, explain that they won’t win every time, but that you’re willing to work with them in an honest way on both sides. Here’s the advice I sent him verbatim. I hope this post generates lots of comments from both customers and vendors. I don’t expect that you’ve had a great experience with your Microsoft reps, but I happen to work with some of the best sales teams in the business, and our clients tell us that all the time. “The key to this relationship is to keep the audience really small. Ideally there should be one person from your side that is responsible for the relationship, and one from the vendor’s side. Each responsible person should have the authority to make decisions, and to bring in other folks as needed for a given topic, project or decision.   For Microsoft, this is called an “Account Manager” – they aren’t technical, they aren’t sales. They “own” a relationship with a company. They learn what the company does, who does it, and how. They are responsible to understand what the challenges in your company are. While they don’t know the bits and bytes of everything we sell, they know what each thing does, and who to talk to about it. I get a call from an Account Manager every week that has pre-digested an issue at an organization and says to me: “I need you to set up an architectural meeting with their technical staff to get a better read on how we can help with problem X.” I do that and then report back to the Account Manager what we learned.  All through this process there’s the atmosphere of a “team”, not a “sales opportunity” per se. I’ve even recommended that the firm use a rival product, and I’ve never gotten push-back on that decision from my Account Managers.   But that brings up an interesting point. Someone pays an Account Manager and pays me. They expect something in return. At some point, you have to buy something. Not every time, not every situation – sometimes it’s just helping you with what you already bought from us. But the point is that you can’t expect lots of love and never spend any money. That’s the way business works.   Finally, don’t view the vendor as someone with their hand in your pocket – somebody that’s just trying to sell you something and doesn’t care if they ever see you again – unless they deserve it. There are plenty of “love them and leave them” companies out there, and you may have even had this experience with us, but that isn’t the case in the firms I work with. In fact, my customers get a questionnaire that asks them that exact question. “How many times have you seen your account team? Did you like your interaction with them? Can they do better?” My raises, performance reviews and general standing in my group are based on the answers the company gives.  Ask your vendor if they measure their sales and support teams this way – if not, seek another vendor to partner with.   Partnering with someone is a big deal. It involves time and effort on your part, and on the vendor’s part. If either of you isn’t pulling your weight, it just won’t work. 
You have every right to expect them to treat you as a partner, and they have the same right to expect that from your side."

    Read the article

  • Oracle Romania Summer School

    - by Maria Sandu
What would you say about a Summer School within a corporation where you can learn, play and practice? You might think that this is something uncommon for a company, and you would be right. However, with innovation being one of Oracle's main values, we came up with a new project for Romanian students and graduates: we organised the Oracle Summer School, offering them the opportunity to develop their soft skills and gain valuable business knowledge and exposure. How was the Oracle Summer School programme organised? We focused on students' and graduates' needs and combined business experience with training and practice. The twenty-four participants had different backgrounds, being interested in Software, Hardware, Finance, Marketing or other areas, and the programme fulfilled each of these needs, bringing them into contact with Specialists and Managers. The first two weeks were dedicated to company visits, business presentations and networking, and the participants got an insight into employees' activities and projects. Storytelling was also part of the programme: people from different departments spent a couple of hours with the participants, sharing their experiences, knowledge and interesting stories. The Recruitment team delivered a training on job interview skills, to make the participants feel better prepared for a recruitment process. The second module consisted of two weeks of soft-skills trainings delivered by professional trainers from different departments, where the participants gained useful insight into the competencies required within a business environment. The evenings were dedicated to social activities, and it was not long before they started to feel part of a team. The third module will take place at the end of September and will put the participants in contact with senior people from the business who will become their Mentors. What do the participants say about Oracle Summer School? "As a fresh computer science graduate, Oracle Summer School gave me the opportunity of finding what are the technical and nontechnical skills required in a large multinational company. It was a great way of seeing how the theoretical knowledge I received during college is applied in real-life scenarios and what skills I still need to develop." (Cosmin Radu) "When arriving at Oracle I had high expectations, but did not know exactly what was going to unfold because of the program's lack of precedence. Right after the first day, my feedback outgrew the initial forecast and the following weeks continued to build upon it. I had the pleasure to acquaint with brilliant people. The program was outlined on various profiles, delivering a comprehensive experience. It was very engaging, informative and nevertheless fun." (Vlad Manciu) "Oracle Summer School is by far the best summer school that I have ever attended. For me it has been a great experience so far, because I've learned not only how to use soft skills in a corporate environment, but I've learned a great deal about myself as well. However, the most valuable asset of this 3-week period were the people that I've met: great individuals and great professionals, whom I really grew fond of." (Alexandru Purcarea) "Applying to Oracle Summer School has been the best decision I took in regard to how to spend my summer holiday.
I had the chance to do job shadowing at some of the departments I was interested in and I attended great trainings on various subjects such as time management and emotional intelligence. Moreover, I made friends with the other participants and we enjoyed going out together after "classes"." (Andreea Tudor) If you are interested in joining our team and attending our events please follow us on https://campus.oracle.com/campus/HR/emea_main.html

    Read the article

  • What Can We Learn About Software Security by Going to the Gym

    - by Nick Harrison
There was a recent rash of car break-ins at the gym. Not an epidemic by any stretch, probably 4 or 5, but still... My gym used to allow you to hang your keys from a peg board at the front desk. This way you could come to the gym dressed to work out, lock your valuables in your car, and not have anything to worry about. Ignorance is bliss. The problem was that anyone who wanted to could go pick up your car keys, click the unlock button and find your car. Once there, they could rummage through your stuff and then walk back in and finish their workout as if nothing had happened. The people doing this were a little smarter than the average thief and would swipe some but not all of your cash, leaving everything else in place. Most thieves would steal the whole car and be busted more quickly. The victims were unaware that anything had happened for several days. Fortunately, once the victims realized what had happened, the gym was still able to pull security tapes and find out who was misbehaving. All of the bad guys were busted, and everyone can now breathe a sigh of relief. It is once again safe to go to the gym. Except there was still a fundamental problem. Putting your keys on a peg board by the front door is just asking for bad things to happen. One person got busted exploiting this security flaw. Others can still be exploiting it. In fact, others may well have been exploiting it and simply never got caught. How long would it take you to realize that $10 was missing from your wallet, if everything else was there? How would you even know when it went missing? Would you go to the front desk and even bother to ask them to review security tapes if you were only missing a small amount? Once highlighted, it is easy to see how commonly such a vulnerability may have been exploited. So the gym took the very reasonable precaution of removing the peg board. To me the most shocking part of this story is the resulting uproar from gym members losing the convenient key peg. How dare they remove the trusted peg board? How can I work out now that I have to carry my keys from machine to machine? How can I enjoy my workout with this added inconvenience? This all happened a couple of weeks ago, and some people are still complaining. In light of the recent high-profile hacking, there are a couple of parallels that can be drawn. Many web sites are riddled with vulnerabilities as crazy and easily exploitable as leaving your car keys by the front door while you work out. No one ever considered thanking the people who were swiping these keys for pointing out the vulnerability. Without hesitation, they had their gym memberships revoked and are awaiting prosecution. The gym recognized the vulnerability for what it is and closed up that attack vector. What can we learn from this? Monitoring and logging will not prevent a crime, but they will allow us to identify that a crime took place and may help track down who did it. Once we find a security weakness, we need to eliminate it. We may never identify and eliminate all security weaknesses, but we cannot allow well-known vulnerabilities to persist in our systems. In our case, we are not likely to meet resistance from end users. We are more likely to meet resistance from stakeholders, product owners, keepers of schedules and budgets. We may meet resistance from integration partners, coworkers, and third-party vendors. Regardless of the source, we will see resistance, but the weakness needs to be dealt with.
There is no need to glorify a cracker for bringing to light a security weakness. Regardless of their claimed motives, they are not heroes. There is also no point in wasting time defending weaknesses once they are identified. Deal with the weakness and move on. It may be embarrassing to find security weaknesses in our systems, but it is even more embarrassing to continue ignoring them. Even if it is unpopular, we need to seek out security weaknesses and eliminate them when we find them. SANS (http://www.sans.org) has worked with MITRE to put together the Common Weakness Enumeration (http://cwe.mitre.org/), which lists common weaknesses. The site navigation takes a little getting used to, but there is a treasure trove here. The detail page for SQL Injection, for example, clearly states how the weakness can be exploited, in case anyone doubts that it should be taken seriously, and, more importantly, how to mitigate the risk.
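To make the SQL injection example concrete, here is a small sketch (my illustration, not from the article or the CWE entry; the Users table and the connectionString and userInput parameters are assumed for demonstration) of the weakness and the standard parameterized-query mitigation in ADO.NET:

using System.Data.SqlClient;

static void FindUser(string connectionString, string userInput)
{
    // Vulnerable version (do NOT do this): attacker-controlled input is
    // concatenated straight into the SQL text.
    //   string sql = "SELECT * FROM Users WHERE Name = '" + userInput + "'";

    // Mitigated version: the input travels as a typed parameter, never as SQL text.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn))
    {
        cmd.Parameters.AddWithValue("@name", userInput);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // process the row...
            }
        }
    }
}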

    Read the article

  • How customers view and interact with a company

The Harvard Business Review article written by Rayport and Jaworski is aptly titled "Best Face Forward" because it sheds light on how customers view and interact with a company. In the past, most business interaction between customers and companies was performed in a face-to-face meeting where one party would present an item for sale and the other would decide whether to purchase it. In addition, if there was a problem with a purchased item, the buyer would bring the item back to the person who sold it for resolution. One of my earliest examples of witnessing this was when I was around 6 or 7 years old and was allowed to spend the summer in Tennessee with my grandparents. My grandfather had just written a book about the local history of his town and was selling copies to his friends and local bookstores. I still remember he offered to pay me a small commission for every book I helped him sell because I was carrying the books around for him. Every sale he made was face to face with his customers, which allowed him to share his excitement for the book with everyone. In today's modern world there is less and less human interaction, as the use of computers and other technologies allows us to communicate within seconds even though both parties may be across the globe or just next door. That being said, customers view a company through multiple access points called faces, which represent the ability to interact without actually seeing a human face. As a software engineer, I see this as both a good and a bad thing, because direct human interaction and technology-based interaction each have good and bad attributes depending on the customer. How organizations coordinate business and IT functions to provide quality service varies based on each individual business and the goals and directives put in place by its management. According to Rayport and Jaworski, the type of interaction used through a particular access point may lend itself to be people-dominant, machine-dominant, or a combination of both. The method by which a company communicates information through an access point is a strategic choice that relates costs and customer outcomes. To simplify this, the choice is based on what gives the customer the best experience interacting with the company when the cost of the interaction is also a factor. I personally see examples of this every day at work. The company website is machine-dominant, with people updating and maintaining information; our group's department is people-dominant, because most of the customer interaction is done at the customer's location and is backed up by machine-based data sources; and our sales/member service department is a hybrid, because employees work in tandem with machines in order to assist customers with signing up or any other issue they may have. The positive and negative aspects of human and machine interfaces are key in deciding which interface, or which combination of the two, to use when giving customers access to a company. Rayport and Jaworski also used MIT professor Erik Brynjolfsson's preliminary catalog of human and machine strengths. He stated that humans outperform machines in judgment, pattern recognition, exception processing, insight, and creativity. I have found this to be true based on the example of how sales and member service reps at my company handle a multitude of questions and various situations with a lot of unknown variables.
A machine interface could never effectively handle these scenarios, because there are too many variables to consider and it would not have the built-in logic to process each customer's claims and needs. In addition, he also stated that machines outperform humans in collecting, storing, transmitting and routine processing. An example of this would be my employer's website. Customers can simply go online and purchase a product without even talking to a sales or member services representative. The information is then stored in a database so that the customer can always go back and review their order and access their selected services. A human, no matter how smart, would never be able to keep track of hundreds of thousands of customers, let alone know what they purchased or how much they paid. In today's technology-driven economy, every company must offer its customers multiple methods of accessibility in order to survive. The more opportunities a company has to create a positive experience for its customers, in my opinion, the more likely the customer will return to that company again. I have noticed this with my personal shopping habits and experiences. References Rayport, J., & Jaworski, B. (2004). Best Face Forward. Harvard Business Review, 82(12), 47-58. Retrieved from Business Source Complete database.

    Read the article

  • Managing Your First SharePoint Project or Team

    - by Mark Rackley
(*editor's note* If you have proper SharePoint training, know the difference between a site and a site collection, and have the utmost respect for the knowledge of your SharePoint team, skip this blog and go directly to meetdux.com; do not pass go, do not collect $200… otherwise, please proceed) Dear Mr. or Mrs. I-know-nothing-about-SharePoint-but-hey,-I-have-manager-in-my-title-so-I'll-tell-you-how-to-do-your-job, Thank you so much for joining the Acme corporation. We appreciate your eagerness and willingness to jump in and help us accomplish all of our goals here at Acme (these roadrunner rockets don't make themselves). You may have noticed that we have this thing called SharePoint lying around, and we have invested some time and money to make it not a complete piece of garbage. So, I thought I'd give you some pointers to help make your stay here enjoyable and productive. Yeah… you don't really know SharePoint. Just because you had a mysite at your last organization or had a SharePoint 2003 team site does NOT mean you comprehend the vastness that is SharePoint. You don't know what's going on behind the scenes. You don't know what should and should not be done. No, we CAN'T just query the SQL database directly. Yes, it really does take that long. No, we can't do that out-of-the-box. Your experience doesn't mean as much as you think it means… Yes, I'm aware that you co-created the internet with Al Gore and have been managing projects since I was blowing up GI Joe figures with firecrackers; however, SharePoint is not like anything you have worked with before from a management perspective. Please don't tell us the proper way to do our job or tell us how "you" would do it, and PLEASE don't utter the words "I used to do some .NET development, so let me know if you get stuck and need some guidance." It MAY be possible for an incredible project manager to manage a SharePoint project and not understand the technology, but if you force your ideas on us or treat us like we don't really know what we're doing, then you will prove yourself NOT to be one of those types. Oh no you didn't… Please don't tell us how you can bring in a group of guys from Kazakhstan to do the project for $20/hr. There are many companies out there who can do some really crappy SharePoint work, and we don't want to be stuck maintaining their junk. Do you know what it means to deploy a solution? Neither do some of those companies out there. However, there are a few AWESOME consulting firms out there, but $150/hr is cheap for these guys. Believe me, it's worth it though. You get what you pay for! Show us some respect. We truly do appreciate and value your opinion and experience, but when we tell you something is different in SharePoint, don't be condescending and dismiss OUR experience and opinions. We have spent a lot of time and energy learning a very complicated technology that can open up a world of possibilities when used properly. We just want to make sure it is used properly. It's not the same as .NET development. It's not like a regular web application. There's more going on behind the scenes than you can possibly fathom. Have a little faith in us, please, and listen when we talk. You may actually learn a thing or two. Take some time to learn the technology. There is hope… you don't have to be totally worthless. Take some time to learn SharePoint. Learn what it is and what it can do. Invest some time in learning our SharePoint environment. What's our logical architecture and taxonomy? What governance do we have in place?
If you just thought "huh?" then yes, I'm talking to you. Sincerely, Your SharePoint Team (This rant is not pointed at any particular organization or person. If you think it's about you, you are wrong. This is just a general rant based upon things people have told me and things I've seen. If you don't think it applies to you, please move on. If you think you might be guilty of handling your SharePoint team the wrong way, then please just listen, learn, and have a little faith in your team. You all have the same goal in mind. Also, take the time to learn something about SharePoint; you will all be less frustrated with each other.)

    Read the article

  • Creating SparseImages for Pivot

    - by John Conwell
Learning how to programmatically make collections for Microsoft Live Labs Pivot has been a pretty interesting ride. There are very few examples out there, and the folks at MS Live Labs are often slow to give feedback. But that is what Reflector is for, right? Well, I was creating these InfoCard images (similar to the Car images in the "New Cars" sample collection that MS created for Pivot), and wanted to put a tag cloud into the info card. The problem was that the size of the tag cloud might vary in order for all the tags to fit into it (oftentimes being bigger than the info card itself), because of the varying word lengths and calculated font sizes. So, to fix this, I made the tag cloud its own separate image from the info card. Then, I would create a sparse image out of the two images, where the tag cloud fits into a small section of the info card. This allows the user to see the info card, but then zoom into the tag cloud and see all the tags at a normal resolution. Kinda cool. But... I couldn't find one code example (not one!) of how to create a sparse image. There is one page on the SeaDragon site (http://www.seadragon.com/developer/creating-content/deep-zoom-tools/) that goes over the API for creating images and collections, and it sparsely covers how to create a sparse image, but unless you are familiar with the API already, the documentation doesn't help very much. The key is the Image.ViewportWidth and Image.ViewportOrigin properties of the image that is getting superimposed on the main image. I'll walk through the code below. I've set up a couple of Point structs to represent the parent and sub image sizes, as well as where on the parent I want to position the sub image. Next, I create the parent image. This is pretty straightforward. Then I create the sub image. Then I calculate several ratios: the width-to-height ratio of the sub image, the width ratio of the sub image to the parent image, the height ratio of the sub image to the parent image, and the X and Y coordinates on the parent image where I want the sub image to be placed, expressed as ratios of the position to the parent image size. After all these ratios have been calculated, I use them to calculate the Image.ViewportWidth and Image.ViewportOrigin values, then pass the image objects into the SparseImageCreator and call Create. The key thing that was really missing from the API documentation page is that when setting up your sub images, everything is expressed as a ratio in relation to the main parent image. If I had known this, it would have saved me a lot of trial-and-error time. And how did I figure this out? Reflector, of course! There is a tool called Deep Zoom Composer that came from MS Live Labs which can create a sparse image. I just dug around the tool's code until I found the method that creates sparse images. But seriously... look at the API documentation on the SeaDragon site, then look at the code below, and tell me if the documentation would have helped you at all. I don't think so!
public static void WriteDeepZoomSparseImage(string mainImagePath, string subImagePath, string destination)
{
    Point parentImageSize = new Point(720, 420);
    Point subImageSize = new Point(490, 310);
    Point subImageLocation = new Point(196, 17);
    List<Image> images = new List<Image>();

    // create the main image
    Image mainImage = new Image(mainImagePath);
    mainImage.Size = parentImageSize;
    images.Add(mainImage);

    // create the sub image
    Image subImage = new Image(subImagePath);
    double hwRatio = subImageSize.X / subImageSize.Y;        // width-to-height ratio of the sub image (the tag cloud)
    double nodeWidth = subImageSize.X / parentImageSize.X;   // sub image width to parent image width ratio
    double nodeHeight = subImageSize.Y / parentImageSize.Y;  // sub image height to parent image height ratio
    double nodeX = subImageLocation.X / parentImageSize.X;   // x coordinate position on parent / width of parent
    double nodeY = subImageLocation.Y / parentImageSize.Y;   // y coordinate position on parent / height of parent

    subImage.ViewportWidth = (nodeWidth < double.Epsilon) ? 1.0 : (1.0 / nodeWidth);
    subImage.ViewportOrigin = new Point(
        (nodeWidth < double.Epsilon) ? -1.0 : (-nodeX / nodeWidth),
        (nodeHeight < double.Epsilon) ? -1.0 : ((-nodeY / nodeHeight) / hwRatio));
    images.Add(subImage);

    // create the sparse image
    SparseImageCreator creator = new SparseImageCreator();
    creator.Create(images, destination);
}
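Calling the method is then a one-liner. A hypothetical usage (the paths are made up for illustration; the destination is passed straight through to SparseImageCreator.Create as the output location for the generated sparse image):

WriteDeepZoomSparseImage(
    @"C:\pivot\infocard.png",      // main (parent) image
    @"C:\pivot\tagcloud.png",      // sub image superimposed on the parent
    @"C:\pivot\output\infocard");  // destination for the sparse image output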

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
Yesterday, I went skydiving with three of my children. It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving has to do with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight, and it does not happen with the purchase of the needed technology. Not one of us could go out, buy a parachute, the harnesses, helmet and all the gear, and convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process, and so it is with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind, especially when it is as mission-critical to the success of your business as Enterprise Content Management. Becoming a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then complete additional rigorous training under observation. When you consider the amount of time and effort involved, it's not unlike getting a college degree, and anyone who has trusted their life to one of these instructors will no doubt appreciate their dedication to the curriculum. Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what? Humans are involved, and that means that mistakes can happen and that rules change. This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible". His overarching point was that with information management and security, environments change and people are involved, meaning the work is never done. He stated that you can never claim your compliance efforts are complete, for the following reasons. People are involved. And let's face it, some are more trustworthy than others. Change is constant. There is always some new technology coming along that is disruptive. Consumer-grade cloud file sharing and sync tools come to mind here. Compliance is interpreted, not defined. Laws and the judges that read them are always on the move. Technology is a tool, not a complete solution. There is no magic pill. The skydiving analogy holds true here as well. Ultimately, a single person packs your parachute. For obvious reasons, you prefer that this person be trustworthy, but there are no absolute guarantees of a 100% error-free scenario. Weather and wind conditions are never constant, and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control. Rules and regulations vary by location and may be updated at any time, and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.
Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment and maybe even your company at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic.  In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps.  That’s 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012.  Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately.  Consider all the factors involved in your organization as you make your content management plans.  For additional areas of consideration, be sure to download our free whitepaper on the topic entitled "The Top 10 Criteria for Choosing an ECM System" which is available for download here.

    Read the article

  • Nodemanager Init.d Script

    - by john.graves(at)oracle.com
I've seen many of these floating around. This is my favourite on an Ubuntu-based machine. Just throw it into the /etc/init.d directory and update the following lines:

export MW_HOME=/opt/app/wls10.3.4
user='weblogic'

Then run:

update-rc.d nodemanager defaults

Everything else should be OK for 10.3.4.

#!/bin/sh
#
# nodemanager    Oracle WebLogic NodeManager service
#
# chkconfig: 345 85 15
# description: Oracle WebLogic NodeManager service
#
### BEGIN INIT INFO
# Provides: nodemanager
# Required-Start: $network $local_fs
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: WebLogic NodeManager
# Description: Starts and stops the Oracle WebLogic NodeManager.
### END INIT INFO

# Source function library.
. /lib/lsb/init-functions

# Set the WebLogic environment, defining the CLASSPATH and LD_LIBRARY_PATH
# needed to start/stop the various components.
export MW_HOME=/opt/app/wls10.3.4

# Note:
# setWLSEnv.sh not only does a good job of setting the environment,
# but also advertises the fact explicitly in the console! Silence it.
. $MW_HOME/wlserver_10.3/server/bin/setWLSEnv.sh > /dev/null

# Set the NodeManager environment.
export NodeManagerHome=$WL_HOME/common/nodemanager
NodeManagerLockFile=$NodeManagerHome/nodemanager.log.lck

# Check JAVA_HOME.
if [ -z ${JAVA_HOME:-} ]; then
    export JAVA_HOME=/opt/sun/products/java/jdk1.6.0_18
fi

exec=$MW_HOME/wlserver_10.3/server/bin/startNodeManager.sh
prog='nodemanager'
user='weblogic'

# Count the running NodeManager JVMs.
is_nodemgr_running() {
    local nodemgr_cnt=`ps -ef | \
        grep -i 'java ' | \
        grep -i ' weblogic.NodeManager ' | \
        grep -v grep | \
        wc -l`
    echo $nodemgr_cnt
}

get_nodemgr_pid() {
    nodemgr_pid=0
    if [ `is_nodemgr_running` -eq 1 ]; then
        nodemgr_pid=`ps -ef | \
            grep -i 'java ' | \
            grep -i ' weblogic.NodeManager ' | \
            grep -v grep | \
            tr -s ' ' | \
            cut -d' ' -f2`
    fi
    echo $nodemgr_pid
}

check_nodemgr_status() {
    local retval=0
    local nodemgr_cnt=`is_nodemgr_running`
    if [ $nodemgr_cnt -eq 0 ]; then
        if [ -f $NodeManagerLockFile ]; then
            retval=2
        else
            retval=3
        fi
    elif [ $nodemgr_cnt -gt 1 ]; then
        retval=4
    else
        retval=0
    fi
    echo $retval
}

start() {
    ulimit -n 65535
    [ -x $exec ] || exit 5
    echo -n "Starting $prog: "
    su $user -c "$exec &"
    retval=$?
    echo
    return $retval
}

stop() {
    echo -n "Stopping $prog: "
    kill -s 9 `get_nodemgr_pid` > /dev/null 2>&1
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $NodeManagerLockFile
    return $retval
}

restart() {
    stop
    start
}

reload() {
    restart
}

force_reload() {
    restart
}

rh_status() {
    local retval=`check_nodemgr_status`
    if [ $retval -eq 0 ]; then
        echo "$prog (pid:`get_nodemgr_pid`) is running..."
    elif [ $retval -eq 4 ]; then
        echo "Multiple instances of $prog are running..."
    else
        echo "$prog is stopped"
    fi
    return $retval
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        restart
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
        exit 2
esac
exit $?
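One assumption worth making explicit: the script must be executable before update-rc.d will register it, so (assuming the file is saved as /etc/init.d/nodemanager) the install steps look like this:

chmod +x /etc/init.d/nodemanager
update-rc.d nodemanager defaults
/etc/init.d/nodemanager start    # or: service nodemanager start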

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination, and it has to determine how to prioritize the packet. If it's a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up its internal database of "rules" for how to process the packet and handles it accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low-priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

· Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability etc.
· Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
· Look up the security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session; the context established by the "header" packet needs to be stored in the router in order to make this work.
· If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward it to the next hop.
· Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction, so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
Berkeley DB is a robust, reliable, proven solution that is currently being used in exactly these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast: it can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice: spend valuable resources implementing similar functionality, or simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
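To make the per-packet workflow above concrete, here is a rough sketch of the steps the article lists (my own illustration, not Oracle sample code; the in-memory dictionaries merely stand in for the tables an embedded store like BDB would manage transactionally):

using System.Collections.Generic;

enum Priority { Low, High }

class Packet
{
    public string Destination;
    public string Owner;
    public Priority Priority;
}

class Router
{
    // Stand-ins for the router's embedded database tables.
    readonly Dictionary<string, string> nextHopTable = new Dictionary<string, string>();
    readonly Dictionary<string, Priority> slaTable = new Dictionary<string, Priority>();
    readonly Queue<Packet> heldBack = new Queue<Packet>();  // low-priority packets, held back

    public void Process(Packet p)
    {
        // In the real system all of this runs inside one transaction, so the
        // next hop cannot change while the packet is being forwarded.
        string nextHop = nextHopTable[p.Destination];  // route lookup
        p.Priority = slaTable[p.Owner];                // SLA lookup

        if (p.Priority == Priority.Low)
            heldBack.Enqueue(p);                       // store temporarily, forward later
        else
            Forward(p, nextHop);                       // high priority goes out immediately

        // ...update per-packet statistics here...
    }

    void Forward(Packet p, string nextHop)
    {
        // Send the packet on the wire (omitted).
    }
}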

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Salespeople welcome the ability to run their critical business applications where they can be most effective, which is typically on the road and while they are still with the customer. Oracle has invested many years of research in understanding customers' mobile requirements. "The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful," said Arin Bhowmick, Director, CRM, for the Applications UX team. "We did that by talking to and analyzing the needs of a lot of people in different roles." The team studied real-life sales teams. "We wanted to study salespeople in context with their work," Bhowmick said. "We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go." Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps, which was featured on the Oracle Applications Blog. Mobile devices are forcing a paradigm shift in the workplace – they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic, because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes, and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age.
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today's Marketplace The notion of Facebook isn't new. We lived it in the pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, you typically have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first, and I made a couple of small games, but I'm doubtful about the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance as well; I couldn't find anything related to that question in their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like ActionScript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use were available for multiple platforms (so more people can play my game once it's finished). I'd like to have it available on many mobile platforms as well (because I love touch input for some reason). I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn-off. Even bigger is that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the APIs for image manipulation etc. are far worse in Flash than they are in XNA/WPF.) Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: is there maybe a cross-platform (free or free-to-develop?) engine available that I could use for 2D development? I prefer C# and ActionScript. I don't mind using C++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :) So, summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter; I want some real APIs that I can work with inside a proper IDE. Just for information: I looked at several alternatives, and I've actually been looking for a long time already. You'd help me a lot to finally make a decision. Feature-wise the FlatRedBall engine would be ideal. But I tried their tools, and quite frankly, they are horrible. Absolutely unusable; I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further. Unity3D. This one is quite nice, but I really don't need 3D, and it is quite a lot of work to learn. I also don't like that it is so expensive to use for different platforms and that I can only code for it through scripting. You have to buy each platform separately. The editor usability is average, the product overall is good enough for most purposes, but learning it myself would be overkill. Shiva 3D. It looks good enough, but again: I don't really need 3D. The editor usability is a little worse than Unity3D's in my opinion, and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and C# is fun ;) SDL. Quite frankly, I'd still need to port to all those different SDL implementations.
And I don't like OpenGL-style programming; it's just plain ugly. And it needs C++. I know that there might be some wrappers available, but I don't like to use wrappers, because... Irrlicht. A lot of features, but support seems to be low and it is aimed at enthusiasts. C# bindings get dropped repeatedly. I'm not an engine enthusiast, I just want to make a game. I don't see this happening with Irrlicht. Ogre3D. Way too much work; it's just a graphics engine. Also no multiple-platform support, and C++. Torque2D. Costs something to use, and I didn't hear a lot of good things about support and documentation. Also costs extra for each platform.

    Read the article

< Previous Page | 742 743 744 745 746 747 748 749 750 751 752 753  | Next Page >