
  • TI LaunchPad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference for anyone who is interested... it is now Feb 2011 and I have not been posting here enough to remember this blog. Back in Nov 2010 I ordered the TI LaunchPad MSP430, a little target board kit replete with a mini USB cable, two very inexpensive programmable MCUs, a couple of pin headers, a couple of on-board LEDs, a SPI connector, some on-board jumpers and two programmable micro switches... all for less than $5.00... INCLUDING SHIPPING!! Not bad when the Arduinos are running around $20.00 for the target board, ATmega328 and cable off of eBay... I won't even mention the Microchip PIC right now. Naw, for $5.00 the TI LaunchPad kit is about the cheapest fun around... if-uns you're a geek, that is...

    Well, the LaunchPad was backordered for almost two months; it came on Xmas eve in fact... I had almost forgotten it!! And really, it was way late and not my idea of an Xmas present for myself. That would have been the Expression Web 4 I bought a few weeks back. With all the holidays, I did not even look at it till last week; in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege... some oohs and ahhs but mostly duhs... I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the CD. No CD. I had already downloaded IAR, another programming IDE for these little micro bugs. In my spare time I toyed with IAR and the LaunchPad board, but after about two days of playing delete-the-driver with Windows I decided to just download CCS 4, the code-limited version, and give that a shot... CCS 4 is a good rewrite from the earlier versions; it is based on Eclipse as an IDE and includes the drivers for the MSP430 target board I received in the kit. Once installed, I quickly configured the debugger for the target chip, which was already plugged into the DIP socket at the factory, picking MSP430G2131 from the drop-down list, and clicked OK... I was in!!

    CCS 4 is full of bells and whistles compared to IAR, which I would have preferred for its simplicity. But Code Composer Studio really does have it all!! The code-limited version is free, and of all things will give you a JavaScript editor box. The whole layout in debugger mode reminds me of any modern programmer's IDE... I mean, sure, give me TeX anytime, but you simply must admire all the boxes and options included in the GUI. It was a simple matter to check the assembly code in the flash and RAM memory that came preloaded for the LaunchPad kit. Assembly. I am right now looking for my old assembly textbooks... sure, I remember how to use mov and add etc., but a couple of the commands are a little more than vague anymore. Still, these little MCUs are about 50 cents each and might just work in a couple of projects I have lined up for the near future. I may document the code here. Luckily, I plan to write the code in C++ for the main project, but if it has to be assembly, no prob.

    For reference, the program that came already on the 2131 in the kit was a temperature indicator that alternately flashed red and green LEDs and changed the intensity of either depending on whether the temp was rising or falling... neat. Neat enough that it might be worthwhile banging out a little GUI in Windows 7 to test the new user device system calls, maybe put a temp gauge widget up on the desktop... just to keep from getting bored.
    If you see some assembly code on this blog, you know I was doing something with one of the many MCUs out there... that's all for now, more to follow... a bit later, of course.


  • Three Buckets of Knowledge

    - by BuckWoody
    As I learn more and more about SQL Server every day, I divide up my information into three "buckets":

    Concepts

    In the first bucket are the general concepts about the topic. What is it? What does it do (or sometimes, what is it supposed to do)? How does one operation flow to another? For this information I use books, magazine articles and, believe it or not, Wikipedia. I don't always trust that last source, but I do use it to see how others lay out their thoughts around a concept. I really like graphical charts that show me the process flow if I can get them, and this is an ideal place for a good presentation. In fact, this may be the only real use for a presentation; I'll explain what I mean in a moment.

    Reference

    The references for a topic include things like Transact-SQL (T-SQL) syntax, or the screen layout of a panel, things like that. Think dictionary. The only reference I trust for this information is Books Online. Presentations are fine, but we're talking about a dictionary. Ever go to a movie that just reads through a dictionary? Me neither. But I have gone to presentations where people try to include tons of reference materials in their slides. Even if you give me the presentation material later, it's not really a searchable, readable medium.

    How To

    A how-to for me is an example, or even better, a tutorial about an example. Whatever it is shows me a practical use for the concepts and of course involves the syntax. The important thing here is that you need to be able to separate the example the person is showing you from the stuff you need to know. I can't tell you how many times folks have told me, "well, sure, if yours is red then that works. But mine is blue." And I have to explain, "then use 'blue' for the search word here." You get the idea. No one will do your work for you; the examples are meant as a teaching tool only. I accept that, learn what I can, and then run off to create my own thing. You might think a how-to works well in a presentation, and it does, for the most part. For a complex example or tutorial, I still prefer the printed word (electronic if possible) so that I can go over the example multiple times, skip around and so on.

    The order here isn't actually that important. Most of the time I start with a concept, look at an example, and then read the reference material. But sometimes I look up an example, read a little of the concepts and then check the reference. The only primary thing I try to enforce is to read something from each of them. It's dangerous to base your work on any single example, reference or concept.


  • your AdSense account poses a risk of generating invalid activity

    - by Karington
    I received a mail from the AdSense team (quoted below). I am not an AdSense expert; I'm actually quite new to it. I spent a lot of time on my site http://www.media1.rs, a news aggregator with tons of options. In the meantime I discovered the DoubleClick service, which had a good option to turn on Google ads when you don't have any others running, so I joined up for Google AdSense with my company account. Everything went smoothly until one day (21 Jul 2011) I got an email...

    Hello, After reviewing our records, we've determined that your AdSense account poses a risk of generating invalid activity. Because we have a responsibility to protect our AdWords advertisers from inflated costs due to invalid activity, we've found it necessary to disable your AdSense account. Your outstanding balance and Google's share of the revenue will both be fully refunded back to the affected advertisers. Please understand that we need to take such steps to maintain the effectiveness of Google's advertising system, particularly the advertiser-publisher relationship. We understand the inconvenience that this may cause you, and we thank you in advance for your understanding and cooperation. If you have any questions or concerns about the actions we've taken, how you can appeal this decision, or invalid activity in general, you can find more information by visiting http://www.google.com/adsense/support/bin/answer.py?answer=57153. Sincerely, The Google AdSense Team

    At first I didn't have any idea why... but then it came to me that it was maybe the auto-refresh script we had, because we publish news very often and it would be useful for visitors... but I removed it immediately after I got the mail... Then I thought it might be my friends clicking, thinking that would help me (I didn't tell them to do it and don't know if they did), or something like that. But then it couldn't be that, because otherwise anyone could organize 10 people and get any start-up banned, right? Anyway, I filled out the form that was on the answers page, mentioning the previously removed script, and got this from them:

    Hello, Thank you for your appeal. We appreciate the additional information you've provided, as well as your continued interest in the AdSense program. However, after thoroughly re-reviewing your account data and taking your feedback into consideration, our specialists have confirmed that we're unable to reinstate your AdSense account. As a reminder, if you have any questions or concerns about your account, the actions we've taken, or invalid activity in general, you can find more information by visiting http://www.google.com/adsense/support/bin/answer.py?answer=57153.

    I do understand that they have to keep things secret in a way, but I don't know what I'm supposed to do now. Is there a checklist that I can go through and re-apply? Where do I re-apply, on the same form? Please help, as we are a small company and can't really budget for hiring a specialist, and don't know any either...

    P.S. The current ads on the site are my own, served through DoubleClick... Thanks in advance! Best, Karington


  • Simple MVVM Walkthrough – Refactored

    - by Sean Feldman
    JR has put together a good introduction post on the MVVM pattern. I love kick-start examples that serve their purpose well. And even more than that, I love examples that can also pass the real-world project check. So I took the sample code and refactored it slightly for a few aspects that might make a lot of developers raise a brow. Michael has mentioned model (entity) visibility from the view; I agree with that. A few other items that don't sit well are using property names as strings (magic strings) and the Saver class's internal casting of a parameter (custom code for each Saver command).

    Fixing the property name usage is a straightforward exercise: leverage expressions. Something simple like this would do the initial job (I've added the unwrapping of the Convert node that value-type properties produce, which the original version would trip over):

        class PropertyOf<T>
        {
            public static string Resolve(Expression<Func<T, object>> expression)
            {
                // Value-type properties (such as int) arrive wrapped in a
                // Convert node, so unwrap before casting to MemberExpression.
                var body = expression.Body;
                var unary = body as UnaryExpression;
                if (unary != null) body = unary.Operand;
                var member = body as MemberExpression;
                return member.Member.Name;
            }
        }

    With this, refactoring of property names becomes an easy task, with confidence that an old property name string will not get left behind. An updated Invoice would look like this:

        public class Invoice : INotifyPropertyChanged
        {
            private int id;
            private string receiver;

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public int Id
            {
                get { return id; }
                set
                {
                    if (id != value)
                    {
                        id = value;
                        OnPropertyChanged(PropertyOf<Invoice>.Resolve(x => x.Id));
                    }
                }
            }

            public string Receiver
            {
                get { return receiver; }
                set
                {
                    receiver = value;
                    OnPropertyChanged(PropertyOf<Invoice>.Resolve(x => x.Receiver));
                }
            }
        }

    For the saver, I decided to change it a little, so now it becomes a "view-model agnostic" command, one that can be used for multiple commands/view-models. The updated Saver code now accepts an action at construction time and executes that action. No more black magic:

        internal class Command : ICommand
        {
            private readonly Action executeAction;

            public Command(Action executeAction)
            {
                this.executeAction = executeAction;
            }

            public bool CanExecute(object parameter)
            {
                return true;
            }

            public event EventHandler CanExecuteChanged;

            public void Execute(object parameter)
            {
                // No more black magic: just run the action supplied at construction.
                executeAction();
            }
        }

    The change in InvoiceViewModel is the instantiation of the Saver command and the execution action for the specific command:

        public ICommand SaveCommand
        {
            get
            {
                if (saveCommand == null)
                    saveCommand = new Command(ExecuteAction);
                return saveCommand;
            }
            set { saveCommand = value; }
        }

        private void ExecuteAction()
        {
            DisplayMessage = string.Format("Thanks for creating invoice: {0} {1}",
                Invoice.Id, Invoice.Receiver);
        }

    This way the internal knowledge of InvoiceViewModel remains in InvoiceViewModel, and Command (ex-Saver) is view-model agnostic. Now the sample is not only a good introduction, but also has some practicality in it. My 5 cents on the subject. Sample code: MvvmSimple2.zip
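    As a quick illustration (this usage snippet is mine, not part of the sample), resolving a property name and wiring a command looks like this:

        // Hypothetical usage; Invoice and Command are the classes defined above.
        string name = PropertyOf<Invoice>.Resolve(x => x.Receiver); // yields "Receiver"

        ICommand save = new Command(() => Console.WriteLine("Invoice saved"));
        if (save.CanExecute(null))
        {
            save.Execute(null); // runs the action passed to the constructor
        }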


  • Open a File Browser From Your Current Command Prompt/Terminal Directory

    - by The Geek
    Ever been doing some work at the command line when you realized... it would be a lot easier if I could just use the mouse for this task? One command later, you'll have a window open to the same place that you're at. This same tip works in more than one operating system, so we'll detail how to do it in every way we know how.

    Open a File Browser in Windows

    We've actually covered this before when we told you how to open an Explorer window from the command prompt's current directory, but we'll briefly review. Just type the following command into your command prompt:

        explorer .

    Note: you could actually just type "start ." instead. You'll then see a file browsing window set to the same directory you were previously at. And yes, this screenshot is from Vista, but it works the same in every version of Windows. If that wasn't good enough, you should really read how you can navigate in the File Open/Save dialogs with just the keyboard—now that's a Stupid Geek Trick!

    Open a File Browser in Linux

    For this exercise, we're going to assume that you're using GNOME under a Linux flavor like Ubuntu, because that's the most common. From your terminal window, just type in the following command:

        nautilus .

    And the next thing you know, you'll have a file browser window open at the current location. You'll see some type of error message at the prompt, but you can pretty much ignore that. You can also use "gnome-open ." if you want.

    Open Finder in Mac OS X

    All the Mac computers in this office are running Linux, so we haven't had a chance to verify, but you should be able to use the following command on OS X to open Finder in the current terminal location:

        open .

    Open Dolphin on Linux KDE 4

        dolphin .

    Got any extra tips to help out your fellow readers? How do you do the same thing in KDE 3? What about OS X? Leave your savvy advice in the comments, and maybe we'll update the article. Or not. Either way, it'll help somebody!
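    If you hop between systems a lot, a tiny shell function can pick the right command for you. This is a sketch of ours, not from the article; it assumes xdg-open, nautilus, open, or explorer is available on the respective platform:

        # openhere: open a file browser at the given directory (default: current).
        openhere() {
          case "$(uname -s)" in
            Darwin)          open "${1:-.}" ;;        # Mac OS X Finder
            CYGWIN*|MINGW*)  explorer "${1:-.}" ;;    # Windows Explorer
            *)               xdg-open "${1:-.}" 2>/dev/null || nautilus "${1:-.}" ;;
          esac
        }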


  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system.

    In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. It is a mechanism that lets you manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline for, or prioritize, a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Be aware, though, that a wrong work manager configuration can bring a performance penalty where you expected an improvement.

    We can define/configure work managers at:

    •    Domain level: config.xml
    •    Application level: weblogic-application.xml
    •    Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:

    •    Fair Share Request Class: specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: specifies a response time goal in milliseconds.
    •    Context Request Class: assigns request classes to requests based on context information.
    •    Min Threads Constraint: guarantees the number of threads the server will allocate to requests.
    •    Max Threads Constraint: limits the number of concurrent threads executing requests.
    •    Capacity Constraint: causes the server to reject requests only when it has reached its capacity.

    Let's create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the one where your long-running application is deployed) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and lets the process finish.
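    Work managers can also be defined in the deployment descriptors listed above instead of the console. As a rough sketch (the names here are ours, not from the article), a weblogic-application.xml fragment defining a work manager with a minimum threads constraint might look like this:

        <!-- Sketch only; element names follow the standard WebLogic descriptor schema. -->
        <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
          <work-manager>
            <name>MyWorkManager</name>
            <min-threads-constraint>
              <name>MinThreadsForBatch</name>
              <count>5</count>
            </min-threads-constraint>
          </work-manager>
        </weblogic-application>

    A web application would then refer to it from weblogic.xml with a <wl-dispatch-policy>MyWorkManager</wl-dispatch-policy> entry.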


  • Problems with 3D Array for Voxel Data

    - by Sean M.
    I'm trying to implement a voxel engine in C++ using OpenGL, and I've been working on the rendering of the world. In order to render, I have a 3D array of uint16s that hold the id of the block at each point. I also have a 3D array of uint8s that I am using to store the visibility data for that point, where each bit represents whether a face is visible. I have it so the blocks render and all of the proper faces are hidden if needed, but all of the blocks are offset by a power of 2 from where they are stored in the array. So the block at [0][0][0] is rendered at (0, 0, 0), and the block at [1][1][1] is rendered at (1, 1, 1), but the block at [2][2][2] is rendered at (4, 4, 4) and the block at [3][3][3] is rendered at (8, 8, 8), and so on and so forth. I'm still a little new to the more advanced concepts of C++, like triple pointers, which I'm using for the 3D array, so I think the error is somewhere in there.

    This is the code for creating the arrays:

        uint16*** _blockData;      // 3D array of uint16s: ids of the blocks in the region
        uint8***  _visibilityData; // 3D array of bytes: visibility data for the faces

        // Allocate memory for the world data
        _blockData = new uint16**[REGION_DIM];
        for (int i = 0; i < REGION_DIM; i++)
        {
            _blockData[i] = new uint16*[REGION_DIM];
            for (int j = 0; j < REGION_DIM; j++)
                _blockData[i][j] = new uint16[REGION_DIM];
        }

        // Allocate memory for the visibility
        _visibilityData = new uint8**[REGION_DIM];
        for (int i = 0; i < REGION_DIM; i++)
        {
            _visibilityData[i] = new uint8*[REGION_DIM];
            for (int j = 0; j < REGION_DIM; j++)
                _visibilityData[i][j] = new uint8[REGION_DIM];
        }

    Here is the code used to create the block mesh for the region:

        // Check if the positive x face is visible; this happens for every face.
        // Block::VERT_X_POS is just an array of non-transformed cube verts for one face.
        // These checks are in a triple loop, which goes over every place in the array.
        if (_visibilityData[x][y][z] & 0x01 > 0)
        {
            _vertexData->AddData(&(translateVertices(Block::VERT_X_POS, x, y, z)[0]),
                                 sizeof(Block::VERT_X_POS));
        }

        // This is a separate method, not in the loop
        glm::vec3* translateVertices(const glm::vec3 data[], uint16 x, uint16 y, uint16 z)
        {
            glm::vec3* copy = new glm::vec3[6];
            memcpy(&copy, &data, sizeof(data));
            for (int i = 0; i < 6; i++)
                copy[i] += glm::vec3(x, -y, z); // Make +y go down instead
            return copy;
        }

    I cannot see where the blocks may be getting offset by more than they should be, and certainly not why the offsets are a power of 2. Any help is greatly appreciated. Thanks.
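    Since the question mentions being new to triple pointers, here is a common alternative layout for this kind of grid (our sketch, not part of the question): a single flat buffer with index arithmetic, which sidesteps the per-row allocations and keeps the data contiguous:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        const int REGION_DIM = 32; // assumed value, for illustration only

        struct Region {
            std::vector<std::uint16_t> blockData;      // block ids
            std::vector<std::uint8_t>  visibilityData; // per-face visibility bits

            Region()
                : blockData(REGION_DIM * REGION_DIM * REGION_DIM, 0),
                  visibilityData(REGION_DIM * REGION_DIM * REGION_DIM, 0) {}

            // Any consistent (x, y, z) -> index mapping works, as long as
            // reads and writes agree on it.
            static std::size_t index(int x, int y, int z) {
                return (static_cast<std::size_t>(x) * REGION_DIM + y) * REGION_DIM + z;
            }

            std::uint16_t& block(int x, int y, int z)      { return blockData[index(x, y, z)]; }
            std::uint8_t&  visibility(int x, int y, int z) { return visibilityData[index(x, y, z)]; }
        };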


  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012.

    One-Stop Shop for Oracle Webcasts. Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.

    OAM/OVD JVM Tuning. Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads. This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    Architected Systems: "If you don't develop an architecture, you will get one anyway..." "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    Backup and Recovery of an Exalogic vServer via rsync. "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    This Week on the OTN Architect Community Home Page. Make time to check out this week's features on the OTN Solution Architect Homepage, including the SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the podcast Are You Future Proof?

    Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley. "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    OIM 11g: Multi-threaded approach for writing custom scheduled jobs | Saravanan V S. Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    How to Create Virtual Directory in WebLogic Server | Zeeshan Baig. Oracle ACE Zeeshan Baig shows you how in six easy steps.

    SOA Galore: New Books for Technical Eyes Only. Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
Thought for the Day "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" — Anonymous Source: SoftwareQuotes.com


  • What is best practice? The point of view of well-experienced developers

    - by Damien MIRAS
    My manager pushes me to use his self-defined best practices. All of these practices are based on his own assumptions. I disagree with them, and I would like to have some feedback from well-experienced people about these practices. I would prefer answers from people involved in the maintenance of huge products, and people who have maintained a product for years. Developers with 15+ years of experience are preferred, because my manager has that much experience himself. I have 7 years of experience. Here are the practices he wants me to use:

    Never extend classes; use composition and interfaces instead, because extended classes are unmaintainable and difficult to debug.

    What I think about that: extend when needed, respect the Liskov Substitution Principle and you'll never be stuck with a problem, but prefer composition and decoration. I don't know any serious project which has banned inheritance; sometimes it's impossible not to use it, e.g. in a UI framework.

    Design patterns are just unusable.

    In PHP, for simple use cases (for example, a user needs a web interface to view a database table), his "best practice" is: copy some random PHP code we own, paste and modify it, put HTML and PHP code in the same file, never use classes in PHP (it doesn't work well for small jobs, and in fact it doesn't work well at all; there is no good tool to work with). Copy-and-paste PHP code is good practice for maintenance, because scripts are independent: if you have a bug somewhere you can fix it without side effects.

    What I think about that: NEVER EVER COPY code, and if you do it because you have five minutes to deliver something, do some refactoring after that. Copy-and-paste code is a beginner's error; if you have a bug, you'll have it everywhere you pasted the code, and that's a nightmare to maintain. If you respect the Open/Closed Principle you'll rarely get side effects; use unit tests if you are afraid of that. For small jobs in PHP, at least write the HTML separately from the PHP code and reuse it any time you need it (see the sketch after this post for the kind of reuse I mean). Classes in PHP are mature: not as mature as in other languages like Python or Java, but they are usable. There are tools to work with classes in PHP, like Zend Studio, that work very well. The use of classes depends not on the language you use but on the programming paradigm you have chosen. I'm an OOP developer, I use PHP 5; why do I have to shoot myself in the foot?

    When you find a simple bug in the code, and you can fix it simply, if you are not working on the part of the code where you found it, never fix it, even if it takes 5 seconds.

    He says his "best practices" are driven by the fact that he has a lot of experience maintaining software in production (14 years), and he now knows what works and what doesn't, what the community says is a fad, and that the people advocating such principles as "never copy and paste code" are not involved in maintaining applications.

    What I think about that: if you find a bug, fix it if you can do it quickly, inform the people who've touched that code before, check that you have not introduced a new bug, and ideally add a unit test for it.

    I currently work on a web commerce project which serves 15k unique users per day. The code base has to be maintained, and has been maintained this way since 2005. Ideally, include a short description of your position and experience in terms of years effectively maintaining an application which has been in production for real.
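    A minimal sketch of the reuse argued for above (invented here to illustrate the point; the class and table names are ours): one shared table view instead of a copy-pasted script per page, so a bug fixed here is fixed for every page that uses it.

        <?php
        // Hypothetical reusable table view (PHP 5 compatible).
        class TableView
        {
            private $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            // $table is assumed to come from a whitelist, never from user input.
            public function render($table)
            {
                $rows = $this->db->query('SELECT * FROM ' . $table)
                                 ->fetchAll(PDO::FETCH_ASSOC);
                $html = '<table>';
                foreach ($rows as $row) {
                    $html .= '<tr><td>'
                           . implode('</td><td>', array_map('htmlspecialchars', $row))
                           . '</td></tr>';
                }
                return $html . '</table>';
            }
        }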


  • Five Key Trends in Enterprise 2.0 for 2011

    - by kellsey.ruppel(at)oracle.com
    We recently sat down with Andy MacMillan, an industry veteran and vice president of product management for Enterprise 2.0 at Oracle, to get his take on the year ahead in Enterprise 2.0 (E2.0). He offered us his five predictions about the ways he believes E2.0 technologies will transform business in 2011.

    1. Forward-thinking organizations will achieve an unprecedented level of organizational awareness. Enterprise 2.0 and Web 2.0 technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. But this year we are anticipating that organizations will go to the next step and integrate social activities with business applications to deliver rich contextual "activity streams." Activity streams are a new way for enterprise users to get relevant information as quickly as it happens, by navigating to that information in context directly from their portal. We don't mean syndicating social activities limited to a single application. Instead, we believe back-office systems will be combined with social media tools to drive how users make informed business decisions in brand new ways. For example, an account manager might log into the company portal and automatically receive notification that colleagues are closing business around a certain product in his market segment. With a single click, he can reach out instantly to these colleagues via social media and learn from their successes to drive new business opportunities in his own area.

    2. Online customer engagement will become a high priority for CMOs. A growing number of chief marketing officers (CMOs) have created a new direct report called "head of online": a senior marketing executive responsible for all engagements with customers and prospects via the Web, mobile, and social media. This new field has been dubbed "Web experience management" or "online customer engagement" by firms and analyst organizations. It is likely to rapidly increase demand for a host of new business objectives and metrics from Web content management solutions. As companies interface with customers more and more over the Web, Web experience management solutions will help deliver more targeted interactions to ensure increased customer loyalty while meeting sales and business objectives.

    3. Real composite applications will be widely adopted. We expect organizations to move from the concept of a single "uber-portal" that encompasses all the necessary features to a more modular, component-based concept for composite applications. This approach is now possible as IT and power users are empowered to assemble new, purpose-built composite applications quickly from existing components.

    4. Records management will drive ECM consolidation. We continue to see a significant shift in the approach to records management. Several years ago initiatives were focused on overlaying records management across a set of electronic repositories and physical storage locations. We believe federated records management will continue, but we also expect to see records management driving conversations around single-platform content management consolidation.

    5. Organizations will demand ECM at extreme scale. We have already seen a trend within IT organizations to provide a common, highly scalable infrastructure to consolidate and support content and information needs. But as data sizes grow exponentially, ECM at an extreme scale is likely to spread at unprecedented speeds this year.
This makes sense as regulations and transparency requirements rise. The model in which ECM and lightweight CMS systems provide basic content services such as check-in, update, delete, and search has converged around a set of industry best practices and has even been coded into new industry standards such as content management interoperability services. As these services converge and the demand for them accelerates, organizations are beginning to rationalize investments into a single, highly scalable infrastructure. Is your organization ready for Enterprise 2.0 in 2011? Learn more.


  • Why Is Hibernation Still Used?

    - by Jason Fitzpatrick
    With the increased prevalence of fast solid-state hard drives, why do we still have system hibernation? Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Moses wants to know why he should use hibernate on a desktop machine: I've never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I've never truly understood why it's used. With today's technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me [on desktop computers] when modern technology is considered, but perhaps there are applications that I'm not considering. What was the original purpose behind hibernation, and why do people still use it?

    Quite a few people use hibernate, so what is Moses missing in the big picture?

    The Answer

    SuperUser contributor Vignesh4304 writes: hibernate mode saves the contents of your computer's memory, including, for example, open documents and running applications, to your hard disk and shuts down the computer; in that state it uses zero power. Once the computer is powered back on, it will resume everything where you left off. You can use this mode if you won't be using the laptop/desktop for an extended period of time and you don't want to close your documents. Its simple usage and purpose: save electric power and resume your documents. In simple terms: you sleep, but your memories are still present. Why it's used: imagine your laptop battery is low on power and you are working on important projects on your machine. You can switch to hibernate mode; your documents are saved, and when you power on, the actual state of the applications gets restored. Its main usage is like an emergency shutdown with an auto-resume of your documents.

    MagicAndre1981 highlights the reason we use hibernate every day: because it saves the status of all running programs. I leave all my programs open and can resume working the next day very easily. Doing a real boot would require starting all programs again, loading the same files into those programs, getting to the same place that I was at before, and putting all my windows in exactly the same place. Hibernating saves a lot of work pulling these things back up again.

    It's not unusual to find computers around the office here that have been hibernated day in and day out for months without an actual full system shutdown and restart. It's enormously convenient to freeze your work space at the exact moment you stopped working and to turn right around and resume there the next morning. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.


  • Roles / Profiles / Perspectives in NetBeans IDE 7.1

    - by Geertjan
    With a checkout of main-silver from yesterday, I'm able to use the brand new "role" attribute in @TopComponent.Registration, as you can see below:

        @ConvertAsProperties(dtd = "-//org.role.demo.ui//Admin//EN", autostore = false)
        @TopComponent.Description(preferredID = "AdminTopComponent",
                //iconBase="SET/PATH/TO/ICON/HERE",
                persistenceType = TopComponent.PERSISTENCE_ALWAYS)
        @TopComponent.Registration(mode = "editor", openAtStartup = true, role = "admin")
        public final class AdminTopComponent extends TopComponent {

    And here's a window for general users of the application, with the "role" attribute set to "user":

        @ConvertAsProperties(dtd = "-//org.role.demo.ui//User//EN", autostore = false)
        @TopComponent.Description(preferredID = "UserTopComponent",
                //iconBase="SET/PATH/TO/ICON/HERE",
                persistenceType = TopComponent.PERSISTENCE_ALWAYS)
        @TopComponent.Registration(mode = "explorer", openAtStartup = true, role = "user")
        public final class UserTopComponent extends TopComponent {

    So, I have two windows. One is assigned to the "admin" role, the other to the "user" role. In the ModuleInstall class, I add a WindowSystemListener and set "user" as the application's role:

        public class Installer extends ModuleInstall implements WindowSystemListener {

            @Override
            public void restored() {
                WindowManager.getDefault().addWindowSystemListener(this);
            }

            @Override
            public void beforeLoad(WindowSystemEvent event) {
                WindowManager.getDefault().setRole("user");
                WindowManager.getDefault().removeWindowSystemListener(this);
            }

            @Override
            public void afterLoad(WindowSystemEvent event) {
            }

            @Override
            public void beforeSave(WindowSystemEvent event) {
            }

            @Override
            public void afterSave(WindowSystemEvent event) {
            }
        }

    So, when the application starts, the UserTopComponent is shown, not the AdminTopComponent. Next, I have two Actions, for switching between the two roles, as shown below:

        @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToAdminAction")
        @ActionRegistration(displayName = "#CTL_SwitchToAdminAction")
        @ActionReferences({
            @ActionReference(path = "Menu/Window", position = 250)
        })
        @Messages("CTL_SwitchToAdminAction=Switch To Admin")
        public final class SwitchToAdminAction extends AbstractAction {

            @Override
            public void actionPerformed(ActionEvent e) {
                WindowManager.getDefault().setRole("admin");
            }

            @Override
            public boolean isEnabled() {
                return !WindowManager.getDefault().getRole().equals("admin");
            }
        }

        @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToUserAction")
        @ActionRegistration(displayName = "#CTL_SwitchToUserAction")
        @ActionReferences({
            @ActionReference(path = "Menu/Window", position = 250)
        })
        @Messages("CTL_SwitchToUserAction=Switch To User")
        public final class SwitchToUserAction extends AbstractAction {

            @Override
            public void actionPerformed(ActionEvent e) {
                WindowManager.getDefault().setRole("user");
            }

            @Override
            public boolean isEnabled() {
                return !WindowManager.getDefault().getRole().equals("user");
            }
        }

    When I select one of the above actions, the role changes, and the other window is shown. I could, of course, add a Login dialog to the SwitchToAdminAction, so that authentication is required in order to switch to the "admin" role (a sketch of that idea follows at the end of this post). Now, let's say I am in the "user" role, so the UserTopComponent shown above is opened. I decide to also open another window, the Properties window... and, when I am in the "admin" role, when the AdminTopComponent is open, I decide to also open the Output window...
    Now, when I switch from one role to the other, the additional window(s) I opened will also be opened, together with the explicit members of the currently selected role. And the main window position and size are also persisted across roles. When I look in the "build" folder of my project in development, I see two different Windows2Local folders, one per role, automatically created by the fact that there is something to be persisted for a particular role, e.g., when a switch to a different role is done. And with that, we now clearly have roles/profiles/perspectives in NetBeans Platform applications from NetBeans Platform 7.1 onwards.
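    Here is a rough sketch of that Login dialog idea, which is mine and not part of the original post. DialogDescriptor and DialogDisplayer are standard NetBeans APIs, while checkCredentials is a hypothetical helper you would supply (it assumes imports from javax.swing, org.openide.DialogDescriptor and org.openide.DialogDisplayer):

        @Override
        public void actionPerformed(ActionEvent e) {
            JPanel panel = new JPanel(new GridLayout(2, 2, 5, 5));
            JTextField user = new JTextField(15);
            JPasswordField pass = new JPasswordField(15);
            panel.add(new JLabel("User:"));
            panel.add(user);
            panel.add(new JLabel("Password:"));
            panel.add(pass);
            DialogDescriptor dd = new DialogDescriptor(panel, "Switch To Admin");
            Object result = DialogDisplayer.getDefault().notify(dd);
            if (result == DialogDescriptor.OK_OPTION
                    && checkCredentials(user.getText(), pass.getPassword())) { // hypothetical helper
                WindowManager.getDefault().setRole("admin");
            }
        }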


  • No external microphone Acer AO722

    - by Leeghwater
    The Acer AO722 comes with an external mic input, and this input is not recognised by the ALSA mixer or by Sound (in System Settings). There are various comments on this problem, but no real solutions; for example "External Mic not working but Internal Mic works on an Acer Aspire AO722". Using the internal mic is not an option, as I need to use Skype professionally. I have tried everything in alsamixer (accessible through the terminal, Ctrl+Alt+T, command: alsamixer) and in Sound (under System Settings). I have also installed PulseAudio. But to no avail. The headset is working normally under Skype in Windows. My AO722 came with Windows 7 on it, so I have installed Skype there too. My headset has separate connectors for ears and mic, and these go into the respective output and input on the right side of the laptop.

    This location, http://bernaerts.dyndns.org/linux/202-ubuntu-acer-ao722, sounds like an effective solution, but it is for Ubuntu Natty 11.04. The solution suggested sounds drastic to me: replace kernel 2.6.38-13 with version 2.6.38-12. I use Ubuntu 12.04, and my kernel is 3.2.0-30-generic-pae. Question: could I try this solution with Ubuntu 12.04? Is this a risky thing to do?

    I have found a hardware workaround for this problem. The audio output seems to be a combi output that also carries a microphone connection. I have made an adapter for this output. I used a 4-contact 3.5 mm audio jack plug. To this plug I have soldered two female (common stereo) connectors, one for the ears and one for the mic of my headset. The 4-contact jack, which goes into the laptop (into the audio OUTput), is wired as follows: tip = hot audio right; first ring after the tip = hot audio left; second ring = common earth (for both ears and microphone); the fourth contact = microphone signal input. In the connector I could buy, that fourth contact is not so much a ring as part of the metal base of the connector; normally you would expect it to be connected to earth. But connecting the mic signal to it works. Maybe ready-made adapters of this kind, and even headsets with a combi jack, can simply be purchased; I didn't check.

    When I plug in the 4-contact jack, Sound and alsamixer immediately recognise an external microphone (even if no mic is connected to the adapter). In Sound, under the Input tab, 'Settings for internal microphone' changes into 'Settings for microphone'. The microphone comes through loud and clear; however, there is a constant noise in the background. Others have reported this too. If I disconnect the external mic from the adapter, or short-circuit the external microphone, the noise gets less but does not disappear. Therefore it is not background noise from the room; it comes from the computer itself. However, if you talk directly into the microphone of the headset, the noise level is acceptable for VoIP.

    The headset of my Nokia C1 mobile phone comes with a 4-contact combi 3.5 mm jack plug. However, this one works (ear and mic) with the AO722 only if not inserted fully. Possibly the wiring of this headset jack is different. I cannot find detailed specs of the AO722 and don't know whether the audio 'output' was actually designed as a combi input/output. I have seen that at least one other AO model has a combi connector only. In any case, I do not believe that connecting your headset in this way will harm your computer. I would still appreciate a software solution. This must be possible, because the proper microphone input connector works under MS Windows.


  • DATEFROMPARTS

    - by jamiet
    I recently overheard a remark by Greg Low in which he said something akin to "the most interesting parts of a new SQL Server release are the myriad of small things that are in there that make a developer's life easier" (I'm paraphrasing because I can't remember the actual quote, but it was something like that). The new DATEFROMPARTS function is a classic example of that. It simply takes three integer parameters and builds a date out of them (if you have used DateSerial in Reporting Services then you'll understand). Take the following code, which generates the first and last day of some given years:

        SELECT 2008 AS Yr INTO #Years
        UNION ALL SELECT 2009
        UNION ALL SELECT 2010
        UNION ALL SELECT 2011
        UNION ALL SELECT 2012

        SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
               [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
        FROM   #Years y

    That code is pretty gnarly though, with those CONVERTs in there, and, worse, if the character string is constructed in a certain way then it could fail due to localisation. Check this out:

        SET LANGUAGE french;
        SELECT dt, Month_Name = DATENAME(mm, dt)
        FROM   (
               SELECT dt = CONVERT(DATETIME, CONVERT(CHAR(4), y.[Yr]) + N'-01-02')
               FROM   #Years y
               ) d;

        SET LANGUAGE us_english;
        SELECT dt, Month_Name = DATENAME(mm, dt)
        FROM   (
               SELECT dt = CONVERT(DATETIME, CONVERT(CHAR(4), y.[Yr]) + N'-01-02')
               FROM   #Years y
               ) d;

    Notice how the datetime has been converted differently based on the language setting. When French, the string "2012-01-02" gets interpreted as 1st February, whereas when us_english the same string is interpreted as 2nd January. Instead of all this CONVERTing nastiness we have DATEFROMPARTS:

        SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr], 1, 1),
               [LastDayOfYear]  = DATEFROMPARTS(y.[Yr], 12, 31)
        FROM   #Years y

    How much nicer is that? The bad news of course is that you have to upgrade to SQL Server 2012 or migrate to SQL Azure if you want to use it, as is the way of the world!

    Don't forget that if you want to try this code out on SQL Azure right this second, for free, you can do so by connecting up to AdventureWorks On Azure. You don't even need to have SSMS handy; a browser that runs Silverlight will do just fine. Simply head to https://mhknbn2kdz.database.windows.net/ and use the following credentials:

        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    One caveat: SELECT INTO doesn't work on SQL Azure, so you'll have to use this instead:

        DECLARE @y TABLE ([Yr] INT);
        INSERT @y([Yr])
        SELECT 2008 AS Yr UNION ALL SELECT 2009 UNION ALL SELECT 2010
        UNION ALL SELECT 2011 UNION ALL SELECT 2012;

        SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr], 1, 1),
               [LastDayOfYear]  = DATEFROMPARTS(y.[Yr], 12, 31)
        FROM @y y;

        SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
               [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
        FROM @y y;

    @Jamiet


  • Social Search: Looking for Love

    - by Mike Stiles
    For marketers and enterprise executives who have placed a higher priority on, and allocated bigger budgets to, search over social, it might be time to notice yet another shift that's well underway. Social is search.

    Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it's a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself "AAAA Plumbing." There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities.

    But what if the consumer isn't using a search engine to find what they're looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise's numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search.

    You'll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are including Facebook and Google+ in their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms.

    So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, "Anybody know any good catering companies?" will usually yield a link (and an endorsement) from a friend, such as "Yeah, check out Just-Cheese-Balls Catering." There's no such human-driven force/influence behind the big search engines. Facebook's Mark Zuckerberg and others call it "Friend Mining." It is, in essence, searching for answers from friends' experiences as opposed to faceless code. And Facebook has all of those friends' experiences already stored as data.

    eMarketer says search is an $18 billion business, and investors are really into it. So it's no shock Facebook is ready to leverage its social graph into relevant search. What do you do about all this as a brand? For one thing, it's going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services... and you'll wind up with the visibility you deserve in social search results.


  • Single page not appearing in Google Search

    - by Dan
    Description

    I have a static franchise website which has various sub-pages, each dedicated to an individual franchisee. The only thing slightly similar between all of them is the page titles, which follow this structure:

        <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title>

    THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees; however, THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags:

        <meta name="DC.creator" content="user"/>
        <meta name="DC.format" content="text/html"/>
        <meta name="DC.language" content="en"/>
        <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/>
        <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/>
        <meta name="DC.type" content="Page"/>
        <meta name="DC.distribution" content="Global"/>
        <meta name="robots" content="ALL"/>
        <meta name="distribution" content="Global"/>

    The main content on each franchisee page is completely different.

    The Problem

    There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all. However, every single other franchisee (if you Google search for "THE_COMPANY, THE_LOCATION") is number 1. And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?

    What I Have Tried

    - Ensuring the page is referenced in my sitemap.xml file
    - 'Fetching as Googlebot' the link www.the_company.co.uk/areaa; when that came back as OK, I submitted it to the index
    - Resubmitting the sitemap.xml file in Webmaster Tools
    - Linking to the Area A page from another page's content (for this I also waited about 3 weeks before checking again, to give Google time to re-index)
    - Making a change to the page content and waiting another 2/3 weeks
    - Removing the page completely and recreating it with an alternative URL

    The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear in Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site, and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.

    Update

    In line with Daniel Fukuda's answer below, I have followed some of his steps, but everything seems to check out alright.

    HTTP headers:

        HTTP/1.1 200 OK
        Date: Tue, 25 Feb 2014 16:31:29 GMT
        Server: Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1
        Content-Length: 40078
        Expires: Sat, 01 Jan 2000 00:00:00 GMT
        Content-Type: text/html;charset=utf-8
        Content-Language: en
        Vary: Accept-Encoding
        Connection: close

    Robots <meta /> tag:

        <meta name="robots" content="ALL"/>

    I have updated this <meta /> tag to read content="INDEX" instead now.

    robots.txt:

        User-agent: *
        Disallow:

        User-Agent: Googlebot
        Disallow: /*sendto_form$
        Disallow: /*folder_factories$

    Using site:THE_COMPANY.co.uk: searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...

    Update

    It appears Google likes to drop pages from the index every now and then. Despite my steps above, I left the site alone and the page reappeared in the SERPs by itself.


  • My frustum culling is culling from the wrong point

    - by Xbetas
    I'm having problems with my frustum having the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view matrix:

        void Camera::Update()
        {
            UpdateViewMatrix();
            glMatrixMode(GL_MODELVIEW);
            //glLoadIdentity();
            glLoadMatrixf(GetViewMatrix().m);
        }

    Then I extract the planes using the projection matrix and modelview matrix:

        void UpdateFrustum()
        {
            Matrix4x4 projection, model, clip;

            glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
            glGetFloatv(GL_MODELVIEW_MATRIX, model.m);

            clip = model * projection;

            m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
            m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
            m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
            m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
            NormalizePlane(RIGHT);

            m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
            m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
            m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
            m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
            NormalizePlane(LEFT);

            m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
            m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
            m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
            m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
            NormalizePlane(BOTTOM);

            m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
            m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
            m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
            m_Planes[TOP][3] = clip.m[15] - clip.m[13];
            NormalizePlane(TOP);

            m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
            m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
            m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
            m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
            NormalizePlane(NEAR);

            m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
            m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
            m_Planes[FAR][2] = clip.m[11] - clip.m[10];
            m_Planes[FAR][3] = clip.m[15] - clip.m[14];
            NormalizePlane(FAR);
        }

        void NormalizePlane(int side)
        {
            float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                           m_Planes[side][1] * m_Planes[side][1] +
                                           m_Planes[side][2] * m_Planes[side][2]);
            m_Planes[side][0] /= length;
            m_Planes[side][1] /= length;
            m_Planes[side][2] /= length;
            m_Planes[side][3] /= length;
        }

    And check against it with:

        bool PointInFrustum(float x, float y, float z)
        {
            for (int i = 0; i < 6; i++)
            {
                if (m_Planes[i][0] * x + m_Planes[i][1] * y +
                    m_Planes[i][2] * z + m_Planes[i][3] <= 0)
                    return false;
            }
            return true;
        }

    Then I render using:

        camera->Update();
        UpdateFrustum();

        int numCulled = 0;
        for (int i = 0; i < (int)meshes.size(); i++)
        {
            if (!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z))
            {
                meshes[i]->SetDraw(false);
                numCulled++;
            }
            else
                meshes[i]->SetDraw(true);
        }

    What am I doing wrong?

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (This is no new information – just some bits and pieces compiled in one single place, plus a bit of a reality check.)

    When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint-derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here.

    The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role), the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange, which in turn is a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore, the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure out whether your application can handle these configuration changes at runtime, or whether you would rather have a clean restart.

    Prior to the SDK 1.3 VS templates, the following code was generated to reboot if any configuration settings had changed:

        private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
        {
            // If a configuration setting is changing
            if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
            {
                // Set e.Cancel to true to restart this role instance
                e.Cancel = true;
            }
        }

    This is a little drastic as a default, since most applications will work just fine with changed configuration – maybe that's the reason this code has gone away in the 1.3 SDK templates (more).

    The Changed event gets fired after the configuration changes have been applied. Again the changes get passed in, just like in the Changing event. But from this point on, RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart; use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: when you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved), you typically don't need to recycle.

    In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I do need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice!

    Gotcha

    A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint-derived class. But with the move to the full IIS model in 1.3, the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here. You might not be able to call into your application code to e.g. clear a cache. Keep that in mind! In this case you need to handle these events from e.g. global.asax.
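    As an illustration of the cache-clearing approach described above – this is a minimal sketch, not the actual StarterSTS code, and ConfigurationCache is a hypothetical class standing in for whatever holds your parsed settings:

        // Requires: using System.Linq; using Microsoft.WindowsAzure.ServiceRuntime;
        RoleEnvironment.Changed += (sender, e) =>
        {
            // Only react to setting changes; topology changes are ignored here.
            if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
            {
                // Drop the cached values; the next access re-reads them via
                // RoleEnvironment.GetConfigurationSettingValue().
                ConfigurationCache.Clear();
            }
        };

    Wired up from global.asax (per the gotcha above), this keeps the role running through configuration pushes with no recycle and no downtime.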

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy

    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

    Creating the Queries

    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined, high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT
            CustomerID,
            CompanyName,
            (SELECT COUNT(OrderID)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views

    To turn this or any query into a view, put CREATE VIEW ViewName AS before it. To change it later, use ALTER VIEW ViewName AS.

    Creating Computed Columns

    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.

    Advanced Scoring and Ranking

    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account: if they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that (shown after the list below). It uses a few SQL tricks to make this happen: we extract the count of unique months and then divide by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus:

    Number of orders
    First order date
    Last order date
    Percentage of months in which an order was placed since the first order
        SELECT
            CustomerID,
            (SELECT COUNT(OrderID)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            (SELECT MAX(OrderDate)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT MIN(OrderDate)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT MIN(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT MIN(OrderDate)
                     FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    GETDATE()) AS OrderPercent
        FROM Customers
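    To make the "Creating Views" step concrete, here is the first query wrapped as a view; the view name is illustrative:

        CREATE VIEW CustomerOrderStats AS
        SELECT
            CustomerID,
            CompanyName,
            (SELECT COUNT(OrderID)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Reports and ad hoc queries can then simply SELECT * FROM CustomerOrderStats and sort on Orders or MonthsSinceLastOrder.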

    Read the article

  • Take Snapshots of Your Favorite Movie Scenes in VLC

    - by DigitalGeekery
    Have you ever wanted to grab a screenshot of your favorite TV or movie scene? Today we’ll show you how to do so with VLC Media Player.

    If you don’t have it already, download and install the latest version of VLC (link below). Start playing your movie, and to grab a snapshot, select Video from the menu and click Snapshot. When you take a snapshot, by default a preview is displayed at the top left and the folder where the file is saved is briefly displayed on the screen.

    If you enable the Advanced Controls, you can take a snapshot with a click of a button, and advance the video frame by frame to get a more accurate shot. To enable the Advanced Controls, select View and then Advanced Controls. You’ll see the Advanced Controls buttons appear below the slider. Now just click on the Snapshot button to grab an image. You can more easily control the frame you wish to grab by pressing the Frame by Frame button: pause the movie when it is near the perfect spot for your snapshot, and then press the Frame by Frame button to advance a single frame at a time.

    By default, the snapshots are saved as PNG files in your My Pictures folder in Windows. You can change those settings in the Preferences. First, you’ll need to select All under Show settings at the bottom. Then click on Video on the left. Scroll down a bit and you’ll see the Snapshot section. Here you can change the format from PNG to JPG, change the directory where the snapshots are stored, turn the preview on and off, and change the filename prefix. Click Save when finished.

    Now you have nice screenshots of your favorite movie to display as you wish…such as a desktop background!

    VLC is an excellent media player that runs on Windows, Mac, and Linux. In addition to playing almost any media file, it also makes grabbing screenshots of your videos a breeze. Want to know more about VLC? Check out some of our previous articles, like how to rip DVDs and how to set a video as your desktop wallpaper. Download the latest version of VLC.

    Read the article

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background

    Tony Hoare's billion-dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wilkerson wrote:

    Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability.

    Problem

    A lot of software, especially in Java, but likely in C# and C++ as well, often uses the following pattern:

        public class SomeClass {
            private String someAttribute;

            public SomeClass() {
                this.someAttribute = "Some Value";
            }

            public void someMethod() {
                if( this.someAttribute.equals( "Some Value" ) ) {
                    // do something...
                }
            }

            public void setAttribute( String s ) {
                this.someAttribute = s;
            }

            public String getAttribute() {
                return this.someAttribute;
            }
        }

    Sometimes a band-aid solution is used by checking for null throughout the code base:

        public void someMethod() {
            assert this.someAttribute != null;
            if( this.someAttribute.equals( "Some Value" ) ) {
                // do something...
            }
        }

        public void anotherMethod() {
            assert this.someAttribute != null;
            if( this.someAttribute.equals( "Some Default Value" ) ) {
                // do something...
            }
        }

    The band-aid does not always avoid the null pointer problem: a race condition exists. The race condition is mitigated using:

        public void anotherMethod() {
            String someAttribute = this.someAttribute;
            assert someAttribute != null;
            if( someAttribute.equals( "Some Default Value" ) ) {
                // do something...
            }
        }

    Yet that requires two statements (assignment to a local copy and a check for null) every time a class-scoped variable is used, to ensure it is valid.

    Self-Encapsulation

    Ken Auer's Reusability Through Self-Encapsulation (Pattern Languages of Program Design, Addison Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble:

        public class SomeClass {
            private String someAttribute;

            public SomeClass() {
                setAttribute( "Some Value" );
            }

            public void someMethod() {
                if( getAttribute().equals( "Some Value" ) ) {
                    // do something...
                }
            }

            public void setAttribute( String s ) {
                this.someAttribute = s;
            }

            public String getAttribute() {
                String someAttribute = this.someAttribute;
                if( someAttribute == null ) {
                    // Lazily create the default and return the freshly
                    // created value, not the stale null local.
                    someAttribute = createDefaultValue();
                    setAttribute( someAttribute );
                }
                return someAttribute;
            }

            protected String createDefaultValue() {
                return "Some Default Value";
            }
        }

    All duplicate checks for null are superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot – modern compilers and virtual machines can inline the code when possible. As long as variables are never referenced directly, this also allows for proper application of the Open-Closed Principle.

    Question

    What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that use and don't use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story of a friend who thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL.

    Building Sample Data

        USE [TestDB]
        GO
        -- Creating Table Products
        CREATE TABLE [dbo].[Products](
            [ProductID] [int] NOT NULL,
            [ProductDesc] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )
        ) ON [PRIMARY]
        GO
        -- Creating Table ProductDetails
        CREATE TABLE [dbo].[ProductDetails](
            [ProductDetailID] [int] NOT NULL,
            [ProductID] [int] NOT NULL,
            [Total] [int] NOT NULL,
            CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ( [ProductDetailID] ASC )
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[ProductDetails] WITH CHECK
            ADD CONSTRAINT [FK_ProductDetails_Products]
            FOREIGN KEY([ProductID]) REFERENCES [dbo].[Products] ([ProductID])
            ON UPDATE CASCADE
            ON DELETE CASCADE
        GO
        -- Insert Data into Table
        USE TestDB
        GO
        INSERT INTO Products (ProductID, ProductDesc)
        SELECT 1, 'Bike' UNION ALL
        SELECT 2, 'Car' UNION ALL
        SELECT 3, 'Books'
        GO
        INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
        SELECT 1, 1, 200 UNION ALL
        SELECT 2, 1, 100 UNION ALL
        SELECT 3, 1, 111 UNION ALL
        SELECT 4, 2, 200 UNION ALL
        SELECT 5, 3, 100 UNION ALL
        SELECT 6, 3, 100 UNION ALL
        SELECT 7, 3, 200
        GO

    Select Data from Tables

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Delete Data from Products Table

        -- Deleting Data
        DELETE FROM Products
        WHERE ProductID = 1
        GO

    Select Data from Tables Again

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Clean up Data

        -- Clean up
        DROP TABLE ProductDetails
        DROP TABLE Products
        GO

    My friend was confused, because no delete was being fired against the ProductDetails table, yet rows were disappearing from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified, deleting data from Table A also deletes any rows that reference it in another table through the foreign key.

    Workaround 1: Design Changes – 3 Tables

    Change the design to have more than two tables. Create one Product Master table with all the products; it should historically store the entire product list, and no product should ever be removed from it. Add another table called CurrentProduct, containing only the products which should be visible in the product catalogue. A third table, ProductHistory, holds the history. There should be no use of the CASCADE keyword among them.

    Workaround 2: Design Changes – IsVisible Column

    You can keep the same two tables: 1) Products and 2) ProductDetails. Add a BIT column named IsVisible to Products (see the sketch below) and change your application code to display the catalogue based on this column. There should be no need to delete anything.

    Workaround 3: Bad Advice

    (Bad advice begins here.) The reason I call this bad advice is that these suggestions will surely cause trouble. You should make the necessary design changes, not use poor workarounds which can further damage the system and database integrity. Here are the examples:

    1) Do not delete the data – well, this is not a real solution, but it can buy time to implement design changes.
    2) Remove ON DELETE CASCADE – in this case, you will end up with entries in ProductDetails that have no corresponding ProductID, and later on there will be lots of confusion.
    3) Duplicate the data – you can move all the data of the Products table into the ProductDetails table, repeating it in each row, and then remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here.)

    Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
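    A minimal sketch of Workaround 2, using the sample tables above (the IsVisible column name matches the suggestion; the default of 1 marks all existing products as visible):

        -- Add a visibility flag instead of ever deleting products
        ALTER TABLE [dbo].[Products] ADD [IsVisible] BIT NOT NULL DEFAULT(1)
        GO
        -- 'Remove' a product from the catalogue; no cascade fires,
        -- so ProductDetails rows stay intact
        UPDATE Products SET IsVisible = 0 WHERE ProductID = 1
        GO
        -- The application's catalogue query filters on the flag
        SELECT ProductID, ProductDesc FROM Products WHERE IsVisible = 1
        GO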

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server. Wish it was easier to manage, administer, extend? Well, here are a few utilities that I highly recommend looking at.

    I’ve recently had to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1. This gave me cause to enumerate some of the utilities I like to have on hand.

    One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit. Everything from task management, version control, build management, test management, metrics and reporting is there ‘in the box’. However, no matter how complete a product set is, there are always ways to make it better. Here is a list of utilities and libraries that are pretty generally useful. This is not intended to be an exhaustive list of TFS extensions, but rather a set that I recommend you look at; there are many more out there that may be applicable in one scenario or another. This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.

    General

    TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the product group. Highly recommended from here are the Best Practice Analyzer, for ensuring your TFS implementation is healthy, and Team Foundation Server Backups, to ensure your TFS databases are backed up correctly.
    TFS Administrators Toolkit – Helps make updates to work item types and reports across many team projects. Also provides visibility of disk usage by finding large files in version control or test attachments, to assist in managing storage utilization.

    Version Control

    Git-TF – A set of cross-platform, command-line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository and configure it to share changes with a TFS server. Great for all Git lovers who must integrate with a TFS repository.

    Testing

    TFS 2012 Tester Power Tool – A utility for bulk copying test cases, which assists in an approach for managing test cases across multiple releases. A little plug that this utility was written and is maintained by Anna Russo of Imaginet, where I also work.
    Test Scribe – A documentation power tool designed to construct documents directly from TFS test plan and test run artifacts for the purpose of discussion, reporting, etc.

    Reporting

    Community TFS Report Extensions – A single repository of SQL Server Reporting Services reports for Team Foundation Server 2010 (and above). Check out the Test Plan Status report by Imaginet’s Steve St. Jean. Very valuable for your test managers.

    Builds

    TFS Build Manager – A great utility if you are the build manager over a complex build environment with many TFS build definitions.
    Community TFS Build Extensions – Contains many custom build activities. Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012.

    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use or need in TFS 2012, because of the great work by the TFS team addressing many gaps since the 2010 release.

    Are there any utilities you depend on that I’ve missed? I’d love to hear about them in the comments!

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days: do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all four of its CPU cores running close to 100% and don’t panic anymore. Why? It’s 5 PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation: if you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started, and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
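    For anyone who wants to try this, here is a minimal sketch of the kind of capture an hourly agent job could run, plus a simple comparison over the last week. The schema, table, and the choice of sys.dm_exec_sessions as the metric are all illustrative; substitute whichever system views you already query:

        -- One-time setup
        CREATE SCHEMA BaseLine
        GO
        CREATE TABLE BaseLine.SessionCounts
        (
            sessionCount INT NOT NULL,
            captureTime DATETIME2 NOT NULL DEFAULT SYSDATETIME()
        )
        GO
        -- Hourly agent job step: capture the current user session count
        INSERT INTO BaseLine.SessionCounts (sessionCount)
        SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1
        GO
        -- Later: average sessions by hour of day over the last week
        SELECT DATEPART(hour, captureTime) AS hourOfDay,
               AVG(sessionCount) AS avgSessions
        FROM BaseLine.SessionCounts
        WHERE captureTime >= DATEADD(day, -7, SYSDATETIME())
        GROUP BY DATEPART(hour, captureTime)
        ORDER BY hourOfDay
        GO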

    Read the article

  • Is the Internet Making us Smarter or Not?

    - by BuckWoody
    I’ve been reading recently about an exchange among some very bright folks, some who posit that the Internet, with its instant-on, sometimes-right, big-statement-wins mentality, is making people think in a more shallow way, teaching us to rely on others as experts and diluting our logical thought process. Others state that it broadens our perspective and extends our mental reach. Whenever I see this kind of exchange on two ends of a spectrum, I begin to wonder if both sides might be correct.

    I can certainly say that I have changed my way of learning, reading, and social interactions because of the Internet. And my tolerance for reading long missives has indeed gone down. I tend to (mentally and literally) “bookmark” things I never seem to have time to get back to. But I also agree that I’ve been exposed to thoughts, ideas and people I never would have encountered any other way. So how to deal with this dichotomy?

    Well, I’m going to go off and think about it. No, I’m really going to go off for a full week to a cabin I’ve rented in a National Forest in the Midwest. It has no indoor plumbing, phones, Internet connections or anything else – only a bed to sleep in and a place to cook a little. I’m taking one book, some paper, and a guitar with me and that’s it. I plan to spend my days walking, reading a little, playing a little on the guitar, but mostly just thinking. Those of you who know me might find this unusual. I’m an always-on, hyper-caffeinated, overly-busy, connected person. I haven’t taken a vacation in five years, at least not for more than two or three days at a time. Even then, I keep us on the move constantly – our vacations aren’t cruises or anything like that. I check e-mail, post and all that. When I’m not on vacation, I live with and leverage lots of technology, and work with those that do the same. This, however, is a really “unplugged” event, and I’m hoping that it will let me unpack the things I’ve been stuffing in my head. I plan to spend a lot of time on a single subject, writing notes, thinking, and writing more notes.

    So after I post tomorrow's “quote of the day” I’ll be “going dark” for a week. No Twitter, Facebook, LinkedIn, e-mail, or chat; none of my five blogs will get updated, and I’ll have to turn in my two articles for InformIT.com early. I won’t have access to my college class portal, so my students will be without me for a week. I will really be offline. I’ll see you in a week – hopefully a little more educated. See you then.

    Read the article

< Previous Page | 750 751 752 753 754 755 756 757 758 759 760 761  | Next Page >