Search Results

Search found 59 results on 3 pages for 'lifespan'.

Page 2/3 | < Previous Page | 1 2 3  | Next Page >

  • Azure Blob storage defrag

    - by kaleidoscope
    Blob Storage is really handy for storing temporary data structures during scaled-out distributed processing. Yet the lifespan of those data structures should not exceed that of the underlying operation, otherwise clutter and dead data can start filling up your Blob Storage. Temporary data in cloud computing is much like memory management in object-oriented languages: when it's not handled automatically by the framework, temp data tends to leak. In particular, in cloud computing it's pretty easy to end up with storage leaks due to collection omission, app crashes, or service interruptions. All of these events cause garbage to accumulate in your Blob Storage. It must also be noted that for most cloud apps, I/O costs usually dominate pure storage costs, so enumerating your whole Blob Storage to clean out the garbage is likely to be an expensive solution. Lokesh, M
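
    One way to keep that garbage in check is a scheduled sweep that deletes temporary blobs past a cutoff age. A minimal C# sketch follows, assuming the current Azure.Storage.Blobs SDK, a container named "processing" and a "temp/" naming convention for temporary blobs (all illustrative choices, not from the original post); scanning only the temporary prefix keeps the I/O cost of the sweep itself down.

      using System;
      using Azure.Storage.Blobs;

      class TempBlobSweeper
      {
          static void Main()
          {
              // Container holding the temporary data structures of the distributed job.
              var container = new BlobContainerClient(
                  Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING"),
                  "processing");

              var cutoff = DateTimeOffset.UtcNow.AddHours(-24);

              // Enumerate only the temporary prefix, not the whole container,
              // since listing is itself an I/O cost.
              foreach (var blob in container.GetBlobs(prefix: "temp/"))
              {
                  if (blob.Properties.LastModified < cutoff)
                      container.DeleteBlob(blob.Name);
              }
          }
      }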

    Read the article

  • virtualbox host | Ubuntu vs XP

    - by iambriansreed
    In order to lengthen the lifespan of my machine I am replacing the weakest link, the hard drive, and installing a new OS. I had planned on using XP Pro as my VirtualBox host and Ubuntu as the guest. After messing with Ubuntu desktop and server I am really impressed and am thinking of reversing the VirtualBox setup: Ubuntu host, XP guest. I would use XP for Adobe Fireworks, Netflix, and iTunes (maybe), and that's pretty much it. Any reason not to do an Ubuntu host with an XP guest? I know XP will run slower as a guest, but really, how much slower? It's a desktop: 4 GB RAM, 500 GB disk, Pentium D 3.2 GHz.

    Read the article

  • Making a Live Thumb drive boot with Persistent files, settings AND *drivers* that load on boot?

    - by Luke Stanley
    I have seen https://wiki.ubuntu.com/LiveUsbPendrivePersistent but it's a mess. What methods support persistent drivers as well as files and settings, and don't screw up the lifespan of the flash drive? I'd like to see your personal recommendations on, say, Portable Linux, USB Creator, Remastersys + Unetbootin, etc. Backstory: I have an Inspiron 1525 whose hard drive has been slowly dying. I want to switch to a live USB/CD/DVD system until I can get it repaired, but my laptop's internal wifi device requires a network connection by another means before Xubuntu will let it work, and then I have to enter my wifi key again, and THEN I have to reinstall Skype, etc. I'd be damned every time I have to shut the laptop down. I'm OK with making a shell script for installing apps and copying settings as required, but a good persistent install should make that unnecessary; the script approach is old hat, slow, and doesn't take care of drivers. The last time I tried making an ISO with Remastersys it didn't seem to copy all the required settings.

    Read the article

  • Macbook Pro 15-inch replacement battery.

    - by ricbax
    So I've finally reached 300+ cycles on my 2008 MBP's original battery. Apple is pretty much "on the money" too! I am at 79% Health and getting the Condition: Replace Soon warning. So I went out to the closest Apple Store and bought a replacement. I would like to get the same lifespan out of my replacement if possible. My question is: the battery comes with a 2-dot (green) charge on the indicator, so should I put the battery in, let it run down and do a full recharge, OR begin charging it immediately and then let it run all the way to empty and recharge?

    Read the article

  • Should I use UPS with my laptop to extend the battery life?

    - by Mehper C. Palavuzlar
    I recently bought a new Vaio laptop which has a 4400 mAh battery. I will use it at home most of the time, and I have an unused uninterruptible power supply (UPS). Should I remove the battery and connect my laptop to the UPS? If I do that, how much does it affect the battery's life? I know that batteries last longer when kept half-charged in a cool place, but their lifespan also decreases over time simply due to age. So is it worth keeping the battery out of my laptop, using a UPS instead and mounting the battery only when necessary? Or should I continue to work with the battery mounted?

    Read the article

  • The Social Business Thought Leaders - Steve Denning

    - by kellsey.ruppel
    How is the average organization doing? Not very well, according to a number of recent books and reports. A few indicators paint quite a gloomy picture: return on assets and invested capital across the entire US market has dropped to 25% of its 1965 value (see The Shift Index by John Hagel); firms are dying faster and faster, with the average lifespan of companies listed in the S&P 500 index having gone from 67 years in the 1920s to 15 years today (see Creative Disruption by Richard Foster); and the employee engagement ratio, a high-level indicator of an organization's health proven to affect performance outcomes, does not exceed 20%-30% on average (see Employee Engagement by Gallup or The Engagement Gap by Towers Perrin). In one of the most enjoyable keynotes of the Social Business Forum 2012, Steve Denning (author of Radical Management and independent management consultant) explained why this is happening and, especially, what leaders should do to reverse these worrying trends. In this Social Business Thought Leaders series, we asked Steve to condense some key suggestions into a 2-minute video that we strongly recommend. Steve discusses traditional management - that set of principles and practices born in the early 20th century and largely inspired by thinkers such as Frederick Taylor and Henry Ford - as the main culprit behind the declining performance of modern organizations. While so many things have changed in the last 100 or so years, most companies are in fact still primarily focused on maximizing profits and efficiency, cutting costs, and coordinating individuals top-down through command and control. The issue is that in a knowledge-intensive, customer-centred, turbulent market like the one we are experiencing, such concepts are not just draining employees' passion but also destroying the last source of competitive differentiation left: creativity and innovative potential. According to Steve Denning, in a phase change from the old industrial economy to a creative, collaborative, knowledge economy, the answer lies in a whole new business ecosystem that puts the individual (both the employee and the customer) at the center of the organization. He calls this new paradigm Radical Management, and in the video interview he articulates the huge challenges and amazing rewards our enterprises face during this inevitable transition.

    Read the article

  • iOS and Server: OAuth strategy

    - by drekka
    I'm trying to work out how to handle authentication when I have iOS clients accessing a Node.js server and want to use services such as Google, Facebook, etc. to provide basic authentication for my application. My current idea of a typical flow is this: the user taps a Facebook/Google button, which triggers the OAuth(2) dialogs and authenticates the user on the device. At this point the device has the user's access token. This token is saved so that the next time the user uses the app it can be retrieved. The access token is transmitted to my Node.js server, which stores it and tags it as un-verified. The server verifies the token by making a call to Facebook/Google for the user's email address. If this works, the token is flagged as verified and the server knows it has a verified user. If Facebook/Google fail to authenticate the token, the server tells the iOS client to re-authenticate and present a new token. The iOS client can now access API calls on my Node.js server, passing the token each time. As long as the token matches the stored and verified token, the server accepts the call. Obviously the tokens have time limits. I suspect it's possible, but highly unlikely, that someone could sniff an access token and attempt to use it within its lifespan, but other than that I'm hoping this is a reasonably secure method for verifying users on iOS clients without having to roll my own security. Any opinions and advice welcome.
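
    For illustration only, here is a sketch of the server-side verification step, written in C# with HttpClient to match the other snippets on this page rather than the Node.js the question assumes. Google's tokeninfo endpoint is real; the surrounding class and token value are placeholders.

      using System;
      using System.Net.Http;
      using System.Threading.Tasks;

      class TokenVerifier
      {
          static readonly HttpClient Http = new HttpClient();

          // Ask Google whether the access token presented by the client is still valid.
          // A 200 response includes the account's email and scopes; anything else means
          // the client must re-authenticate and present a new token.
          static async Task<bool> VerifyGoogleTokenAsync(string accessToken)
          {
              var url = "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token="
                        + Uri.EscapeDataString(accessToken);
              var response = await Http.GetAsync(url);
              return response.IsSuccessStatusCode;
          }

          static async Task Main()
          {
              bool verified = await VerifyGoogleTokenAsync("token-from-ios-client");
              Console.WriteLine(verified ? "token verified" : "re-authenticate");
          }
      }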

    Read the article

  • Advice on designing web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently, I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms built by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are: workflow management of forms, schedule management of forms, security/role-based management, a reporting engine, and mobile/tablet support. Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated and there are jobs just to synchronize data because the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task of re-architecting this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on rebuilding this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions: What design considerations will make the system more "future proof"? What experiences have you had in designing such systems - both failures and successes? What questions should be asked of the client/PM to make the system more "future proof"?

    Read the article

  • Ruby Rack: startup and teardown operations (Tokyo Cabinet connection)

    - by clint.tseng
    I have built a pretty simple REST service in Sinatra, on Rack. It's backed by 3 Tokyo Cabinet/Table datastores, which have connections that need to be opened and closed. I have two model classes written in straight Ruby that currently simply connect, get or put what they need, and then disconnect. Obviously, this isn't going to work long-term. I also have some Rack middleware like Warden that rely on these model classes. What's the best way to manage opening and closing the connections? Rack doesn't provide startup/shutdown hooks as far as I'm aware. I thought about inserting a piece of middleware that provides a reference to the TC/TT object in env, but then I'd have to pipe that through Sinatra to the models, which doesn't seem efficient either; and that would only get me a per-request connection to TC. I'd imagine that per-server-instance-lifecycle would be a more appropriate lifespan. Thanks!

    Read the article

  • ASP.NET MVC Session usage

    - by Ben
    Currently I am using ViewData or TempData for object persistence in my ASP.NET MVC application. However, in a few cases where I am storing objects into ViewData through my base controller class, I am hitting the database on every request (when ViewData["whatever"] == null). It would be good to persist these in something with a longer lifespan, namely session. Similarly, in an order processing pipeline, I don't want things like Order to be saved to the database on creation. I would rather populate the object in memory and then, when the order gets to a certain state, save it. So it would seem that session is the best place for this? Or would you recommend, in the case of the order, retrieving it from the database on each request rather than using session? Thoughts and suggestions appreciated. Thanks, Ben
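
    As a rough sketch of the session-caching idea (ASP.NET MVC's Session object is real; the Order type and OrderRepository helper below are made-up placeholders, not from the question):

      using System.Web.Mvc;

      public class Order
      {
          public int Id { get; set; }
          public string State { get; set; }
      }

      public static class OrderRepository
      {
          // Hypothetical data-access helpers standing in for the real persistence layer.
          public static Order Load(int id) => new Order { Id = id, State = "Draft" };
          public static void Save(Order order) { /* write to the database here */ }
      }

      public class OrdersController : Controller
      {
          private const string OrderKey = "CurrentOrder";

          public ActionResult Edit(int id)
          {
              // Only hit the database when the session copy is missing.
              var order = Session[OrderKey] as Order;
              if (order == null)
              {
                  order = OrderRepository.Load(id);
                  Session[OrderKey] = order;
              }
              return View(order);
          }

          public ActionResult Confirm()
          {
              // Persist only once the order reaches the right state, then drop the session copy.
              var order = (Order)Session[OrderKey];
              OrderRepository.Save(order);
              Session.Remove(OrderKey);
              return RedirectToAction("Complete");
          }
      }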

    Read the article

  • Agile Approach for WCM

    - by cameron.f.logan
    Can anyone provide me with advice, opinions, or experience with using an agile methodology to deliver an enterprise-scale Web Content Management system (e.g., Interwoven TeamSite, Tridion)? My current opinion is that to implement a CM system there is a certain, relatively high, amount of upfront work that needs to happen to make sure the system is going to be scalable and efficient for future projects over the multi-year lifespan a WCM is expected to have. This suggests a hybrid approach at best, if not a more waterfall-like approach. I'm really interested to learn what approaches others have taken. Thanks.

    Read the article

  • How do I assign a non-persistent (in-memory) cookie in ASP.NET?

    - by Jørn Schou-Rode
    The following code will send a cookie to the user as part of the response:

      var cookie = new HttpCookie("theAnswer", "42");
      cookie.Expires = DateTime.Now.AddDays(7);
      Response.Cookies.Add(cookie);

    The cookie is of the persistent type, which most browsers will write to disk and use across sessions. That is, the cookie is still on the client's PC tomorrow, even if the browser and the PC have been closed in between. After a week, the cookie will be deleted (due to line #2). Non-persistent/in-memory cookies are another breed of cookie, with a lifespan determined by the duration of the client's browsing session. Usually, such cookies are held in memory, and they are discarded when the browser is closed. How do I assign an in-memory cookie from ASP.NET?
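
    For comparison, a minimal sketch of the session-only variant: if Expires is simply never set, ASP.NET sends the cookie without an expiry date and the browser keeps it in memory only for the current browsing session.

      // Same cookie, but with no Expires value: the browser treats it as a
      // session cookie and discards it when the browsing session ends.
      var cookie = new HttpCookie("theAnswer", "42");
      Response.Cookies.Add(cookie);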

    Read the article

  • How do I branch an individual file in SVN?

    - by Michael Carman
    The Subversion concept of branching appears to be focused on creating an [un]stable fork of the entire repository on which to do development. Is there a mechanism for creating branches of individual files? For a use case, think of a common header (*.h) file that has multiple platform-specific source (*.c) implementations. This type of branch is a permanent one. All of these branches would see ongoing development with occasional cross-branch merging. This is in sharp contrast to unstable development/stable release branches, which generally have a finite lifespan. I do not want to branch the entire repository (cheap or not) as it would create an unreasonable amount of maintenance to continuously merge between the trunk and all the branches. At present I'm using ClearCase, which has a different concept of branching that makes this easy. I've been asked to consider transitioning to SVN but this paradigm difference is important. I'm much more concerned about being able to easily create alternate versions of individual files than about things like cutting a stable release branch.

    Read the article

  • How to automate login to Google API to get OAuth 2.0 token to access known user account

    - by keyser_sozay
    OK, so this question has been asked before here. In the response/answer to that question, the user is told to store the token in the application (in the session rather than the DB, although it doesn't matter where you store it). After going through Google's documentation, it seems that the token has an expiration date after which it is no longer valid. Now, we could obviously refresh the token automatically at a fixed interval, thereby prolonging the lifespan of the token, but for some reason this manual process feels like a hack. My question is: is manually logging in and persisting the token in the application the most effective (/generally accepted) way to access Google calendar/app data for a known user account? Or is there another mechanism that allows us to programmatically log in to this user account and go through the OAuth steps?
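
    For reference, the "manual" refresh being described is a single POST to Google's OAuth 2.0 token endpoint. A hedged C# sketch follows; the endpoint and form fields are the standard refresh_token grant, while the class name and credential values are placeholders.

      using System;
      using System.Collections.Generic;
      using System.Net.Http;
      using System.Threading.Tasks;

      class TokenRefresher
      {
          static readonly HttpClient Http = new HttpClient();

          // Exchange a stored refresh_token for a fresh access token.
          static async Task<string> RefreshAsync(string clientId, string clientSecret,
                                                 string refreshToken)
          {
              var form = new FormUrlEncodedContent(new Dictionary<string, string>
              {
                  ["client_id"] = clientId,
                  ["client_secret"] = clientSecret,
                  ["refresh_token"] = refreshToken,
                  ["grant_type"] = "refresh_token",
              });

              var response = await Http.PostAsync("https://oauth2.googleapis.com/token", form);
              response.EnsureSuccessStatusCode();

              // The JSON body contains the new access_token and its expires_in value;
              // parse it with your JSON library of choice.
              return await response.Content.ReadAsStringAsync();
          }

          static async Task Main()
          {
              // Placeholder credentials: supply real values from your own configuration.
              var json = await RefreshAsync("client-id", "client-secret", "stored-refresh-token");
              Console.WriteLine(json);
          }
      }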

    Read the article

  • What's the fastest way to compare two objects in PHP?

    - by johnnietheblack
    Let's say I have an object - a User object in this case - and I'd like to be able to track changes to it with a separate class. The User object should not have to change its behavior in any way for this to happen. Therefore, my separate class creates a "clean" copy of it, stores it somewhere locally, and can later compare the User object to the original version to see if anything changed during its lifespan. Is there a function, a pattern, or anything that can quickly compare the two versions of the User object? Option 1: Maybe I could serialize each version and compare directly, or hash them and compare? Option 2: Maybe I should simply create a ReflectionClass, run through each of the properties of the class and see if the two versions have the same property values? Option 3: Maybe there is a simple native function like objects_are_equal($object1,$object2);? What's the fastest way to do this?

    Read the article

  • Rights Expiry Options in IRM 11g

    - by martin.abrahams
    Among the many enhancements in IRM 11g, we have introduced a couple of new rights expiry options that may be applied to any role. These options were supported in previous versions, but fell into the "advanced configuration" category. In 11g, the options can be applied simply by selecting a check-box in the properties of a role, as shown by the rather extreme example below, where the role allows access for just two minutes after documents are sealed. The new options are: to define a role that expires automatically some period after it is assigned, and to define a role that evaluates expiry relative to the time that each document is sealed. These options supplement the familiar options to allow open-ended access (limited by offline access and the ever-present option to revoke rights at any time) and the option to define time windows with specific start dates and end dates. The value of these options is easiest to illustrate with some publishing examples: You might define a role with a one year expiry to be assigned to users who purchase a one year subscription. For each individual user, the year would be calculated from the time that the role was assigned to them. You might define a role that allows documents to be accessed only for 24 hours from the time that they are published - perhaps as a preview mechanism designed to tempt users to sign up for a full subscription. Upon payment of a full fee, users can simply be reassigned a role that gives them greater access to exactly the same documents. In a corporate environment, you might use such roles for fixed term contractors or for workflows that involve information with a short lifespan, or perhaps as part of a compliance process that requires rights to be formally re-approved at intervals. Being role-based, the time constraints apply to any number of documents - including documents that have not yet been created. For example, a user with a one year subscription would have access to all documents published in the relevant classification during the year without any further configuration. Crucially, unlike other solutions, it is not the documents that expire, but the rights of particular users. Whereas some solutions make documents completely inaccessible for all users after expiry, Oracle IRM can allow some users to continue using documents while other users lose access. Equally crucially, a user whose rights have expired can always be granted fresh rights at any time - for example, because they renew their subscription or because a manager confirms that they still need the rights as part of a corporate compliance process. By applying expiry to rights rather than to documents, Oracle IRM avoids the risk of locking an organization out of its own information.

    Read the article

  • Should I install ubuntu on USB instead of HDD dual-boot?

    - by user2147243
    I had Ubuntu 12.04 installed as a dual-boot OS on top of Vista on my laptop. I hacked the grub settings to default to Vista on startup (instead of the default Ubuntu, a pain), and all was OK for occasional Ubuntu use for the past 6 months. Then last week I got a strange message about 'lack of disk space' (~50MB free) when installing pxyplot, even though there was still about 6GB of free disk space when I checked later. Then today Ubuntu wouldn't load at all, and checking the HDD partitions in Vista it looked like the 15GB Ubuntu partition was now three smaller partitions! So I got rid of those partitions and expanded the Vista partition to use the reclaimed space. Now I can't restart ('grub rescue' appears and doesn't 'rescue' anything), so I'll have to do a boot recovery using a Vista installation CD. (Not a particularly user-friendly failure mode of the dual-boot installation!) I now have to decide to either a) try installing Ubuntu on the HDD again, though I don't want to stuff up my Vista ever again, as that is my most used OS, or b) install Ubuntu on a 16GB USB 3.0 stick. Apparently performance from USB won't be as good as from the HDD, and running an OS from a USB stick does lots of r/w, so the stick may fail after a few years! Perhaps installing Ubuntu on a live USB and setting it up to then run in RAM would alleviate the performance/USB lifespan problems? If I create a live USB for Ubuntu, will the laptop boot off that when I restart it with the stick plugged in? Or will I have to change the laptop's boot-order setting whenever I want to boot Ubuntu instead of Vista (that would be even more painful than the grub default boot order putting Ubuntu ahead of the existing Vista OS!) Update: I recovered my Vista setup using the Iolo SystemMechanic Disaster Recovery Tool, and created a bootable USB of Ubuntu 13.10 on an 8GB USB 3.0 pendrive, with 4GB of 'persistence' to allow saving settings, installing some packages, etc. It worked OK for a couple of test boots, but once I changed the time and desktop wallpaper, the next Ubuntu reboot crashed and I then couldn't get it to boot successfully. So I decided to install Ubuntu 12.04 LTS as a dual-boot again, but this time, instead of partitioning the HDD and installing from an ISO DVD, I used the wubi.exe tool to install Ubuntu as a dual-boot. It worked very well, although one oddity was that, despite asking how big to make the partition (20GB), the installed Ubuntu appears to be happily installed somewhere within the Vista NTFS file system (no partition shows up in Windows disk manager, and in the Ubuntu disk management tool the entire 133 GB of the HDD is showing, with ~40GB free space). A nice feature of installing the dual-boot using wubi is that the laptop now uses the Windows boot manager on startup, with Vista as the default OS and Ubuntu happily listed second on the list. So far so good.

    Read the article

  • MacBook Pro battery capacity 65K mAh

    - by Alexander Gladysh
    I have a 15" MacBook Pro 3.1 (that is, the Late 2007 model AFAIR). I bought it new a couple of years ago. Recently its on-battery lifespan became very short (30 down to 10 minutes). When my notebook turns itself off due to "low battery" and I press the small button on the battery itself, all the LED lights are lit, indicating a full charge. When I plug in the power adapter, my Mac displays "battery is fully charged, finishing charging process" (I have a Russian OS X 10.5.7, so that is a rough translation), but the LEDs on the battery itself show the (seemingly accurate) status that one or two "LEDs are still not charged". My battery has as few as 37 recharge cycles (yes, I've neglected calibration over the time I've used it). Battery info programs like iBatt2 report a battery capacity of 65,337 mAh (with a by-design capacity of 5600 mAh). I gather that something went wrong with the battery electronics. I've tried resetting my Mac's PRAM and SMC, but it did not change anything. Now I'm trying to recalibrate the battery, but it looks like that does not help either. I will try to recalibrate it several times in a row. I'd buy a new battery if I knew it was the battery's fault, not the notebook's. Any suggestions? Update: After recalibration, my battery status now shows a capacity of 1500 mAh. But with every recalibration (or simply when I use the notebook without the power adapter plugged in) this number changes, ranging from 200 mAh to 1700 mAh. The LEDs on the battery are now in sync with what the notebook thinks the charge level is. I've also noticed that the cycle count changes rather slowly. It is now 39; it was 37 when I started recalibration, and I went through the process at least ten times... So, the main question is: does it look like replacing the battery would help me (or does it look like this is the notebook's problem)? I guess I should try replacing the battery.

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has a 6-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approx 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan & performance. The fallback option is 2x480GB SSDs in RAID 1 and another 2x1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2xSSDs and 6xSSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and then we wouldn't have to worry about having separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging to one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble? Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this? Here is the /etc/hdparm.conf configuration:

      /dev/sda {
          apm = 127
          spindown_time = 120
      }

    and the most relevant parts of smartctl --attributes /dev/sda:

      smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
      === START OF READ SMART DATA SECTION ===
      SMART Attributes Data Structure revision number: 16
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate   0x002f 200   200   051    Pre-fail Always  -           0
        4 Start_Stop_Count      0x0032 001   001   000    Old_age  Always  -           185875
        9 Power_On_Hours        0x0032 090   090   000    Old_age  Always  -           7820
       12 Power_Cycle_Count     0x0032 100   100   000    Old_age  Always  -           109
      193 Load_Cycle_Count      0x0032 118   118   000    Old_age  Always  -           246833
      194 Temperature_Celsius   0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted for best write performance, for use only with embedded Linux, for better reliability (random power failures may occur), using a 64kB cluster size. I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card, even with FAT32, the performance still suffers. After browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64kB cluster size. Risks of reformatting: Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient.

    Read the article

  • Tweeting about Oracle Applications Usability: Points to Consider

    - by ultan o'broin
    Here are a few pointers to anyone interested in tweeting about Oracle Applications usability or user experience (UX). These are based on my own experiences and practice, and may not necessarily reflect the views of Oracle, of course (touché, see the footer). If you are an Oracle employee and tweet about our offerings, then read up and follow the corporate social media policy. For the record, I tweet under the following account names: @ultan, @localization, @gamifyOracle, and @usableapps. The last two are supposedly Oracle subject-dedicated, but I mix it up on occasion. Fill out your Twitter account profile, and add a profile picture too. Disclose your interest. Don't leave either the profile or image blank if you want to be taken seriously (or followed by me). Don't tweet from a locked down Twitter account, as the message cannot be circulated to anyone who doesn't follow you. Open up the account if you really want to get that UX message out. Stay on message. The usable apps website, Misha Vaughan's VoX blog, and the Oracle Applications blog are good sources of UX messages and information, but you can find many other product team, individual, and corporate-wide sources with a little bit of searching. Set up a Google Alert with pertinent related keywords to get a daily digest of new information right in your inbox. Be original about it. Add your own insight and wit to the message, where relevant. Just circulating and RTing stock headlines adds no value to your effort or to the reader, and is somewhat lazy, in my opinion. Leave room for RTing of your tweet. So, don't max out those 140 characters. Keep it under 130 if you want to be RTed without modification (or at all; I am not a fan of modifying tweets [MT], way too much effort for the medium). Remove articles and punctuation marks and use fragments, abbreviations, and so on at will to keep the tweet short enough, but leave keywords intact, as people search on those. Follow any Fusion UX Advocates who are on Twitter too (you can search for these names), and not just Oracle employees. Don't just follow people you like, or those who you think are like-minded. Take a look at who is following or being followed by other tweeters and, er, follow up. Create an easily remembered (and typed) hashtag and socialize others into using it, or use what's already popularized (for an event or conference, for example). We used #gamifyOracle for the applications UX gamification design jam, and other popular applications UX ones are #fusionapps and #usableapps (or at least I'm trying to popularize it). But, before you start the messaging, if you want to keep a record of the hashtag traffic, then set it up with an archiving service; Twitter's own tweet lifespan is short. Don't mix up hashtags (#) with Twitter handles (@) that have the same name. Sending a tweet to @gamifyOracle will just be seen by @gamifyOracle (me) and any followers we have in common. Sending it to #gamifyOracle is seen by anyone following or searching for that hashtag. No dissing the competition. But there is no rule about not following them on Twitter to see the market reactions to Oracle announcements, and this can even let you tailor your own message accordingly. Don't be boring. Mix it up a bit. Every 10th or so tweet, divert into other areas of interest, personal ones, even. No constant "I just received K+ in this and that" or "I just checked into wherever" on foursquare pouring into the Twitterstream, please. I just don't care and will probably unfollow such people pretty quickly. And now, your Twitter tips and experiences with this subject? They go in the comments...

    Read the article

  • Obtaining MFC Feature Pack GUI elements in .NET WinForms

    - by Cody Gray
    The MFC Feature Pack (and VS 2010) adds out-of-the-box support for several "modern" GUI elements (such as MDI with tabbed documents, the ribbon, and a Visual Studio-style interface with docking panels). These are a boon to those of us that have to support legacy MFC-based applications and want to update their look-and-feel, and a sign that Microsoft has not completely abandoned unmanaged C++ development. However, with the push so strongly in favor of .NET, WinForms, and managed code (and for plenty of good reasons), there seems little reason to develop new applications in unmanaged C++/MFC. The question then becomes how one obtains these GUI elements in a WinForms application. Almost all of the add-ons and libraries I have found so far cost money, and introduce additional dependencies. I don't have a budget to buy third-party libraries, and the controls provided by Microsoft in MFC for free seem sufficient for our needs. But I still have reservations about learning MFC to develop a new application. Not only does the investment in time seem significant (by all accounts, MFC seems particularly difficult to learn, even for experienced .NET developers, although I am willing to try), but the question of MFC's lifespan is raised as well. Certainly, given the millions of lines of code and existing apps written in native C++, it will be around for some time, but the handwriting seems to be on the wall, so to speak, that it's no longer Microsoft's touted development platform. It seems like these features should be available by now in WinForms without the need for third-party add-ons, or devoting a lot of time and resources to custom-drawing EVERYTHING. Am I just missing something? I find very little online that compares these new features of MFC to what is available in WinForms, mainly because most everything written on MFC pre-dated its most recent update, before which it looked admittedly "dated," and with its other flaws, was hardly an appealing platform for new development. With the very recent release of VS 2010, we have a while to wait before WinForms gets updated again. What routes are you guys taking for applications whose customers demand a modern-looking UI on a budget?

    Read the article
