Search Results

Search found 1125 results on 45 pages for 'bhoomi nature'.

  • JavaScript prototype(ing) question

    - by OneNerd
    I'm trying to grasp prototyping in JavaScript, and to create my own namespace that extends the String object. Here is what I have so far (a snippet): var ns = { alert: function() { alert(this); } }; String.prototype.$ns = ns; As you can see, I am trying to place what will become a series of functions into the ns namespace, so I can execute a command like this: "hello world".$ns.alert(); But the problem is, this doesn't seem to reference the text it is called on (in this case, "hello world"). What I get is an alert box with the following: [object Object] Not having a full grasp of the object-oriented nature of JavaScript, I am at a loss, but I am guessing I am missing something simple. Does anyone know how to achieve this? Thanks -
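
    Because $ns is a single shared object, this inside its methods refers to that namespace object rather than the string it was reached through. A minimal sketch of one common workaround (not from the question, and assuming an ES5-capable browser) defines $ns as a getter that captures the string:

    ```javascript
    // A minimal sketch (not from the question; assumes an ES5-capable browser).
    // $ns is defined as a getter so that each access captures the string it was
    // reached through, letting the helpers close over that value.
    Object.defineProperty(String.prototype, "$ns", {
      get: function () {
        var value = String(this); // the string on which $ns was accessed
        return {
          alert: function () { alert(value); },
          log: function () { console.log(value); } // hypothetical extra helper
        };
      }
    });

    "hello world".$ns.alert(); // alerts "hello world" instead of [object Object]
    ```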

    Read the article

  • Authenticating model - best practices

    - by zerkms
    I come to ASP.NET from PHP, so I'm asking because the way an application works and handles requests is of a totally different nature. I have an existing table with user credentials: id, login, password (SHA hashed), email, phone, room. I have built a custom membership provider so it can handle my own database authentication schema. Now I'm confused, because User.Identity.Name contains only the user's login, not the complete object (I'm using LINQ to SQL to communicate with the database, and I need its User object to work with). In PHP applications I would just store the user object in some static member of an Auth class (or something similar), but in ASP.NET MVC I cannot do this, because a static member is shared across all requests and is permanent; it does not live only within the current request (as it did in PHP). So my question is: how and where should I retrieve and store the LINQ to SQL user object so I can work with it within the current request, and only the current request? (After the request is processed successfully, I expect it to be disposed from memory and created again on the next request.) Or am I going about this the totally wrong way?
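
    The usual per-request home for this kind of object in ASP.NET is HttpContext.Items, which is created for each request and thrown away when the request ends. A minimal sketch follows; AppDataContext, User and Login are hypothetical names standing in for the real LINQ to SQL types:

    ```csharp
    using System.Linq;
    using System.Web;

    // Minimal sketch: look the user up once per request, keyed by
    // User.Identity.Name, and cache it in HttpContext.Items so it lives
    // only for the current request. AppDataContext, User and Login are
    // hypothetical names standing in for the real LINQ to SQL types.
    public static class CurrentUser
    {
        private const string ItemsKey = "__currentUserObject";

        public static User Get(HttpContext context)
        {
            var cached = context.Items[ItemsKey] as User;
            if (cached != null)
                return cached;

            if (context.User == null || !context.User.Identity.IsAuthenticated)
                return null;

            var db = new AppDataContext();
            var user = db.Users.SingleOrDefault(
                u => u.Login == context.User.Identity.Name);
            context.Items[ItemsKey] = user;
            return user;
        }
    }
    ```

    Controllers (or a base controller) can then call CurrentUser.Get(System.Web.HttpContext.Current) wherever the full user record is needed; the lookup runs at most once per request.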

    Read the article

  • Best way to save info in hash

    - by qwertymk
    I have a webpage where the user inputs data into a textarea, and I then process and display it with some JavaScript. For example, if the user types: _Hello_ *World* it would produce something like: <underline>Hello</underline> <b>World</b> Or something like that; the details aren't important. Now the user can "save" the page, turning it into something like site.com/page#_Hello_%20*World* and share that link with others. My question is: Is this the best way to do this? Is there a limit on URL length that I should be worried about? Should I do something like what jsfiddle does? I would prefer not to, because the site works offline if the full text is in the hash, and since the nature of the site is to be used offline, the user would have to cache the jsfiddle-like content first before they could use it. What's the best way to do this?
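
    A minimal sketch of the hash round trip (not from the question; encodeURIComponent keeps the markup characters URL-safe). As a rough guide, older IE versions cap whole URLs at around 2,083 characters, so very long documents may need localStorage or a server-side store instead:

    ```javascript
    // Minimal sketch: keep the document text in the URL fragment so the page
    // can be bookmarked, shared, and reloaded offline.
    function saveToHash(text) {
      location.hash = encodeURIComponent(text);
    }

    function loadFromHash() {
      return decodeURIComponent(location.hash.replace(/^#/, ""));
    }

    // usage (assumes a <textarea id="editor"> on the page)
    saveToHash("_Hello_ *World*");
    document.getElementById("editor").value = loadFromHash();
    ```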

    Read the article

  • What do I need to know before working on an IM application?

    - by John
    I'm looking into building an IM-type application using a Java stack (for the server at least). I'd be interested in any information/advice on how applications like Skype/AIM/MSN work, as well as in any technologies/APIs that might be relevant. Without giving away the idea itself, it's perhaps more akin to Google Wave than Skype, but information useful for either is very welcome. Specific points I have already thought of include: Server vs. P2P... for reasons of logging, my system will require all communication to go through a central server. Is this how other IM tools work, especially when audio/video comes into the equation? Cross-communication with other systems... are there APIs for this, or do all IM providers work hard to keep their protocols secret? The nature of what I'm designing means integration could probably only be limited, but it definitely seems worthwhile from a business perspective.
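
    A minimal sketch (not from the question) of the central-server model described above: a relay that logs every line it sees before fanning it out to the other connected clients. A real IM server would add authentication, message framing, presence, and most likely a standard protocol such as XMPP:

    ```java
    import java.io.*;
    import java.net.*;
    import java.util.*;
    import java.util.concurrent.*;

    // Minimal sketch: every client connects to this server, every message is
    // logged centrally, then relayed to all other connected clients.
    public class RelayServer {
        private static final Set<PrintWriter> clients = ConcurrentHashMap.newKeySet();

        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newCachedThreadPool();
            try (ServerSocket server = new ServerSocket(9090)) {
                while (true) {
                    Socket socket = server.accept();
                    pool.submit(() -> handle(socket));
                }
            }
        }

        private static void handle(Socket socket) {
            PrintWriter out = null;
            try {
                BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
                out = new PrintWriter(socket.getOutputStream(), true);
                clients.add(out);
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("LOG " + new Date() + " " + line); // central audit log
                    for (PrintWriter client : clients) {
                        if (client != out) client.println(line);          // fan out to others
                    }
                }
            } catch (IOException disconnected) {
                // client dropped the connection
            } finally {
                if (out != null) clients.remove(out);
                try { socket.close(); } catch (IOException ignored) {}
            }
        }
    }
    ```

    Clients would connect with a plain socket and exchange newline-delimited text; it is the logging requirement that rules out a pure peer-to-peer design here.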

    Read the article

  • VS2010 (older) installer project - two or more objects have the same target location.

    - by Hamish Grubijan
    This installer project was created back in 2004 and has been upgraded ever since. There are two offending DLL files, which produce a total of 4 errors. I have searched online for this warning message and did not find a permanent fix (I did manage to make it go away once, until I did something like a clean, or built in Release and then in Debug). I also tried cleaning and then refreshing the dependencies; the duplicated entries are still in there. Nor did I find a good explanation of what this error means. Additional warnings are of this nature: Warning 36: The version of the .NET Framework launch condition '.NET Framework 4' does not match the selected .NET Framework bootstrapper package. Update the .NET Framework launch condition to match the version of the .NET Framework selected in the Prerequisites Dialog Box. So, where is this Prerequisites dialog box? I want to make both things agree on .NET 4.0; I'm just having a hard time locating both of them.

    Read the article

  • Find and list all functions/methods in a set of JavaScript files

    - by Dan Milstein
    Is there a way to read a set of JavaScript files and output a description of where every function/method is defined? I realize that this is likely impossible in full generality, due to the extremely dynamic nature of the language. What I'm imagining is something that handles the (relatively) straightforward cases. Ideally, I'd want it to figure out where, e.g., some method got attached to String or Hash or some other fundamental class (and also just let you find all the classes/functions that get defined once in one place). Does such a tool exist?
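
    As a very rough illustration of the 'straightforward cases' (and not a substitute for a real AST-based tool), a sketch of a Node.js script that regex-scans a directory of .js files and prints likely definition sites with file and line numbers:

    ```javascript
    // Minimal sketch: a crude regex pass over a directory of .js files that
    // lists likely function definitions with file and line numbers. A real
    // tool would parse the AST instead, since regexes miss (and misreport)
    // many of JavaScript's dynamic definition styles.
    var fs = require("fs");
    var path = require("path");

    var patterns = [
      /function\s+([A-Za-z_$][\w$]*)\s*\(/,          // function foo(...)
      /([A-Za-z_$][\w$.]*)\s*[:=]\s*function\s*\(/   // foo = function(...), foo: function(...)
    ];

    function scanFile(file) {
      fs.readFileSync(file, "utf8").split("\n").forEach(function (line, i) {
        patterns.forEach(function (re) {
          var m = re.exec(line);
          if (m) console.log(file + ":" + (i + 1) + "  " + m[1]);
        });
      });
    }

    var dir = process.argv[2] || ".";
    fs.readdirSync(dir)
      .filter(function (name) { return /\.js$/.test(name); })
      .forEach(function (name) { scanFile(path.join(dir, name)); });
    ```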

    Read the article

  • C#: reading in a text file more 'intelligently'

    - by DarthSheldon
    I have a text file containing a list of alphabetically organized variables with their variable numbers next to them, formatted something like the following: aabcdef 208 abcdefghijk 1191 bcdefga 7 cdefgab 12 defgab 100 efgabcd 999 fgabc 86 gabcdef 9 h 11 ijk 80 ... ... I would like to read each piece of text as a string and keep its designated ID number; something like: read "aabcdef" and store it into an array at spot 208. The two issues I'm running into are: (1) I've never read from a file in C#: is there a way to read from the start of a line to the first whitespace as a string, and then the next token as an int until the end of the line? (2) Given the nature and size of these files, I do not know the highest ID value in each file (not all numbers are used, so a file could contain a number like 3000 but only actually list 200 variables). So how could I store these variables flexibly when I don't know how big the array/list/stack/etc. would need to be?
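
    A minimal sketch (the file name is hypothetical) that splits each line on whitespace and stores the result in a Dictionary<int, string>, which sidesteps not knowing the highest ID up front:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;

    // Minimal sketch: read "name number" lines and key them by the number.
    // A Dictionary avoids having to know the highest id in advance, unlike an array.
    class VariableLoader
    {
        static Dictionary<int, string> Load(string path)
        {
            var byId = new Dictionary<int, string>();
            foreach (var line in File.ReadLines(path))
            {
                var parts = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
                if (parts.Length < 2) continue;                  // skip blank/malformed lines
                int id;
                if (int.TryParse(parts[parts.Length - 1], out id))
                    byId[id] = parts[0];                         // e.g. byId[208] == "aabcdef"
            }
            return byId;
        }

        static void Main()
        {
            var vars = Load("variables.txt");                    // hypothetical file name
            Console.WriteLine(vars[208]);                        // prints "aabcdef"
        }
    }
    ```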

    Read the article

  • I'm storing click coordinates in my db and then reloading them later and showing them on the site wh

    - by trainbolt
    That's it, basically. Storing the click coordinates is obviously the simple step, but once I have them, if the user comes back and their window is smaller or larger, the coordinates are wrong. Am I going about this the wrong way? Should I also store an element ID/DOM reference or something of that nature? Also, this script will be run over many different websites with more than one layout. Is there a way to do this where how the coordinates are stored is independent of the layout? Thanks.
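
    A minimal sketch (not from the question) of the element-relative idea: record which element was clicked plus the click position as fractions of that element's box, so the stored point survives window resizes and, to a degree, layout differences:

    ```javascript
    // Minimal sketch: store the click relative to the clicked element rather
    // than the page, together with something that identifies the element.
    document.addEventListener("click", function (e) {
      var el = e.target;
      var rect = el.getBoundingClientRect();
      var record = {
        selector: el.id ? "#" + el.id : el.tagName.toLowerCase(), // crude element reference
        x: (e.clientX - rect.left) / rect.width,                  // fraction of element width
        y: (e.clientY - rect.top) / rect.height                   // fraction of element height
      };
      // send `record` to the server here, e.g. with XMLHttpRequest
      console.log(JSON.stringify(record));
    });
    ```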

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great... When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to administer properly. Over the past 9 months, monitoring and management have been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:
        * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.
        * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
        * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.
    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!

    Read the article

  • .NET Reflector Pro Coming…

    The very best software is almost always originally the creation of a single person. Readers of our 'Geek of the Week' will know of a few of them. Even behemoths such as MS Word or Excel started out with one programmer. There comes a time with any software that it starts to grow up, and has to move from this form of close parenting to being developed by a team. This has happened several times within Red-Gate: SQL Refactor, SQL Compare, and SQL Dependency Tracker, not to mention SQL Backup, were all originally the work of a lone coder, who subsequently handed over the development to a structured team of programmers, test engineers and usability designers. Because we loved .NET Reflector when Lutz Roeder wrote and nurtured it, and, like many other .NET developers, used it as a development tool ourselves, .NET Reflector's progress from being the apple of Lutz's eye to being a Red-Gate team-based development seemed natural. Lutz, after all, eventually felt he couldn't afford the time to develop it to the extent it deserved. Why, then, did we want to take on .NET Reflector? Different people may give you different answers, but for us in the .NET team, it just seemed a natural progression. We're always very surprised when anyone suggests that we want to change the nature of the tool since it seems right just as it is. .NET Reflector will stay very much the tool we all use and appreciate, although the new version will support .NET 4, and will have many improvements in the accuracy of its decompiling. Whilst we've made a lot of improvements to Reflector, the radical addition, which we hope you'll want to try out as well, is '.NET Reflector Pro'. This is an extension to .NET Reflector that allows the debugging of decompiled code using the Visual Studio debugger. It is an add-in, but we'll be charging for it, mainly because we prefer to live indoors with a warm meal, rather than outside in tents, particularly when the winter's been as cold as this one has. We're hoping (we're even pretty confident!) that you'll share our excitement about .NET Reflector Pro. .NET Reflector Pro integrates .NET Reflector into Visual Studio, allowing you to seamlessly debug into third-party code and assemblies, even if you don't have the source code for them. You can now treat decompiled assemblies much like your own code: you can step through them and use all the debugging techniques that you would use on your own code. Try the beta now.

    Read the article

  • SQLAuthority News – Milestone of 1300th Post and A Few Updates

    - by pinaldave
    Today marks my 1300th blog post, and I realize what a long journey this blog has been. I have been writing on this tech blog for a long time. Today I would like to go back and briefly recall the posts that were part of my blog's history. Read the list of all my blog posts here. This blog started only as a list of personal bookmarks. I used to just write down scripts on the blog for my personal use. I wrote many scripts here for the servers I was maintaining, to keep them polished. I included many links in my first blog posts, which I viewed as just a collection of bookmarks on my very own blog; I had no intention of publishing other content besides the scripts at all. Gradually, I realized that people read my blog and followed the advice that was supposedly meant only for me. So I tried to write code and scripts that are generic in nature, so anyone can use them right away. Nothing is perfect. While writing the last 1299 posts (and gathering 14 million+ views), I made a few mistakes, and I thoughtfully accepted the tweaks that followed. These are corrections that were pointed out by many kind souls and readers like you, which have helped me develop wonderful blogging experiences. I am very glad that I have this blog wherein I can express myself. After all, I would not have reached where I am today if I had kept myself from expressing my knowledge and understanding of SQL Server. I am happy that many of you appreciated my efforts and supported me all the way, which also helped me get where I am now. I promise to learn more about this fascinating subject and, of course, continue to share whatever I learn with my dear readers. Again, I really thank YOU for reading this blog and supporting the SQL community. Reference: Pinal Dave (http://blog.SQLAuthority.com), Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SQL Milestone

    Read the article

  • An increase to 3 Gig of RAM slows down Ubuntu 10.04 LTS

    - by williepabon
    I have Ubuntu 10.04 running from an external hard drive (installed in an enclosure) connected via a USB port. About a month or so ago, I increased the RAM on my PC from 2 Gigs to 3 Gigs. This resulted in extremely long boot times and slow application loads. While trying to understand the nature of my problem, I posted various threads on this forum (Questions # 188417, 188801), where I was advised to gather speed tests and other info on my machine. It was also suggested that I might have problems with the RAM installed. Initially, I did not consider that possibility because: 1) I did a memory test with a diagnostic program from DELL (my PC is from Dell) 2) My PC works fine with Windows XP (the default OS), no problems with memory 3) My PC works fine when booting with an Ubuntu 10.10 memory stick, no speed problems 4) My PC works fine when booting with an Ubuntu 11.10 memory stick, no speed problems. Anyway, I performed the memory tests suggested. But before doing so, and to rule out any hardware issues with the hard drive, I did the following: (1) purchased a new hard drive enclosure and moved my hard drive to it, (2) purchased a new USB cable and used it to connect my hard drive/enclosure setup to a different USB port on my PC. Then I performed speed tests with 1 Gig, 2 Gigs and 3 Gigs of RAM under my Ubuntu 10.04 OS. Ubuntu 10.04 worked well when booted with 1 Gig or 2 Gigs of RAM. When I increased to 3 Gigs, it slowed down to a crawl. I can't understand the relationship between an increase of 1 Gig and the effect it has on Ubuntu 10.04. This doesn't happen with Ubuntu 10.10 or 11.10. Unfortunately for me, Ubuntu 10.04 is my principal work operating system, so I need a solution for this. Hardware and system information: DELL Precision 670, 2 internal SATA hard drives, Audigy 2 ZS audio system, factory OS: Windows XP Professional SP3, NVidia 8400 GTS video card. More info: williepabon@WP-WrkStation:~$ uname -a Linux WP-WrkStation 2.6.32-38-generic #83-Ubuntu SMP Wed Jan 4 11:13:04 UTC 2012 i686 GNU/Linux williepabon@WP-WrkStation:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 10.04.4 LTS Release: 10.04 Codename: lucid Speed test with the 3 Gigs of RAM installed: williepabon@WP-WrkStation:~$ sudo hdparm -tT /dev/sdc [sudo] password for williepabon: /dev/sdc: Timing cached reads: 84 MB in 2.00 seconds = 41.96 MB/sec Timing buffered disk reads: 4 MB in 3.81 seconds = 1.05 MB/sec This is a very slow transfer rate from a hard drive. I would really appreciate a solution or a workaround for this problem. I know that there are users who have Ubuntu 10.04 with 3 Gigs or more of RAM and don't have this problem. The same question was asked on Launchpad for reference.

    Read the article

  • SQLAuthority News – Social Media Series – LinkedIn and Professional Profile

    - by pinaldave
    Pinal Dave on LinkedIn! It seems like only a few years ago there was a big "boom" in social media websites. All of a sudden there were so many sites to choose from. MySpace or Orkut? A blogging website for your business, or a LinkedIn account? The nature of the internet is to always be changing, but I believe that out of this huge growth of websites, a few have come to stay. Facebook is obviously the leader in social media networking, especially for your personal life. Blogging is great, but it can be more of a way to get your ideas out there than a place for people to connect with you professionally. If you want to have a professional "face" on the internet, LinkedIn is the way to go. LinkedIn is best explained as "professional Facebook." This is simplifying things a little too much, but it is certainly a website where you link up with professional contacts, so that others can see where you have worked, who you have worked with, and what projects you have done. This is a much better place for professional contacts to find you than somewhere like Facebook, where all they will see is your face and maybe a picture of you at a birthday party or something like that! Because so much of my SQL Server life is conducted on the internet, especially on my blog, I felt that it would be a good idea to have a well-maintained LinkedIn page as well, so that if anyone is curious about me and my credentials they can quickly and easily find me and see that I am for real, and not someone pretending to know a lot about SQL Server. My LinkedIn profile is www.linkedin.com/in/pinaldave. I keep all my professional information here, and I update it as often as possible. Feel free to come find me, especially if you would like to "link up" and share professional information. The technology world is becoming more and more interconnected, and more and more international. I feel that it is very important to stay linked up virtually, because so many of us are so far apart physically. I try to stay very connected through my LinkedIn profile. I let anyone connect with me, and I read updates from the professional world very often. I keep this profile updated, but do not post things about my personal life or anything that I might put on Twitter, for example. I also include my e-mail address here, if you would like to contact me professionally. This is the best place for me to conduct business. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • A frequently updated mixed bag blog OR several seldom updated niche sites?

    - by Melanie
    Background: I am a member of the website HubPages, where I have about a hundred articles (and I'm always writing more). HubPages' revenue model is a 40% ad share for them and a 60% ad share for users. While the userbase there is really friendly, the site is REALLY slow and buggy, and there is a ton of content on HubPages that is copied from other sources. Upon flagging these articles it takes a ton of time for the mods to remove them, and it's just generally dragging down my stuff. Furthermore, HubPages was hit really hard by Google's Panda update: http://www.google.com/search?hl=en&rlz=1B3GGLL_enUS426US426&tbm=nws&q=google+panda& Aside from the temporary problems I would deal with when removing content from HubPages and putting it on my own domain (duplicate content, etc.), I have another problem. Which option would be best for my articles? I have tons of articles in a wide variety of niches and would like to do whatever would help them perform best. I'm not a huge niche writer, and have received wide criticism from the HubPages community for my articles not performing as well as they could because I don't use enough keywords within the text of my articles. I prefer to write more naturally, in a way that would appeal to an audience, instead of keyword stuffing. Anyway, this is beside the point. My question: after removing my articles from HubPages, should I put them on one domain, or spread them across multiple domains grouped roughly by topic? For example: a-bunch-of-articles.com OR travel-articles.com and financial-articles.com and knitting-articles.com (I know those domains aren't available, but it's just an example.) Here are the pros and cons of each:
        * a mixed-bag site like a-bunch-of-articles.com may not perform as well because of its mixed-bag nature
        * a mixed-bag site would be updated far more frequently than several niche sites... some niche sites may be updated so infrequently that a year could pass before one sees a new article
        * a mixed-bag site would be like putting all my eggs in one basket, whereas having several niche sites would spread out my portfolio, so to speak
        * a mixed-bag site would be cheaper: $14 (two-year registration) to start out with, plus hosting, and I'm good to go
        * a mixed-bag site wouldn't allow me to easily target keywords, but then again, isn't HubPages pretty much a mixed-bag site?

    Read the article

  • Asp.net session on browser close

    - by budugu
    Note: Cross posted from Vijay Kodali's Blog. Permalink How do I capture the logoff time when the user closes the browser? Or, how do I end the user session when the browser is closed? These are some of the most frequently asked questions in ASP.NET forums. In this post I'll show you how to do this when you're building an ASP.NET web application. Before we start, one fact: there is no foolproof technique to catch the browser close event 100% of the time. The trouble lies in the stateless nature of HTTP. The web server is out of the picture as soon as it finishes sending the page content to the client. After that, all you can rely on is a client-side script. Unfortunately, there is no reliable client-side event for browser close. Solution: The first thing you need to do is create the web service. I've added a web service and named it AsynchronousSave.asmx. Make this web service accessible from script by qualifying the class with the ScriptServiceAttribute attribute. Add a method (SaveLogOffTime) marked with the [WebMethod] attribute. This method simply accepts UserId as a string variable and writes that value and the logoff time to a text file, but you can pass as many variables as required and use this information for many purposes. To end the user session, you can just call Session.Abandon() in the above web method. To enable the web service to be called from the page's client-side code, add a ScriptManager to the page; here I am adding it to the SessionTest.aspx page. When the user closes the browser, the onbeforeunload event fires on the client side. Our final step is attaching a JavaScript function to that event which makes the web service call. The code is simple but effective. My code: HTML (SessionTest.aspx), C# (SessionTest.aspx.cs). That's it. Run the application and, after closing the browser, open the text file to see the logoff time. The above code works well in IE 7/8. If you have any questions, leave a comment.
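
    The post's markup and code-behind were shown as screenshots, so here is only a rough, hypothetical sketch of the client-side piece it describes; the proxy name assumes the service class is called AsynchronousSave and is registered as a ScriptManager ServiceReference:

    ```javascript
    // Rough sketch (not the article's code): call the generated script-service
    // proxy when the browser window is about to close. Element and proxy names
    // are assumptions.
    window.onbeforeunload = function () {
      var userId = document.getElementById("hidUserId").value; // hypothetical hidden field
      AsynchronousSave.SaveLogOffTime(userId);                  // fire-and-forget web method call
    };
    ```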

    Read the article

  • The softer side of BPM

    - by [email protected]
    BPM and RTD are great complementary technologies that together provide a much higher benefit than each of them separately. BPM covers the need for automating processes, making sure that there is uniformity, that rules and regulations are complied with, and that the process runs smoothly and quickly processes the units flowing through it. By nature, this automation and unification can lead to a stricter, less flexible process. To avoid this problem it is common to encounter process definitions that include multiple conditional branches and human input to help direct processing in the direction that best applies to the current situation. This is where RTD comes into play. The selection of branches and conditions and the optimization of decisions is better left in the hands of a system that can measure the results of its decisions in a closed-loop fashion and make decisions based on the empirical knowledge accumulated by observing the running of the process. When designing a business process there are key places in which it may be beneficial to introduce RTD decisions. These are:
        * Thresholds - whenever a threshold is used to determine the processing of a unit, there may be an opportunity to make the threshold "softer" by introducing an RTD decision based on predicted results. For example, an insurance company process may have a total-claim threshold to initiate an investigation. Instead of having that threshold, RTD could be used to help determine which claims to investigate based on the likelihood that they are fraudulent, the cost of investigation and the effect on processing time.
        * Human decisions - sometimes a process will let the human participants make flow decisions. For example, a call center process may leave the escalation decision to the agent. While this has flexibility, it may produce undesired results and asymmetry in customer treatment that is based not on objective functions but on the agent's subjective reasoning. Instead, an RTD decision may be introduced to recommend escalation or other kinds of treatment.
        * Content Selection - a process may include the use of messaging with customers. The selection of the most appropriate message to the customer given the content can be optimized with RTD.
        * A/B Testing - a process may have optional paths for which it is not clear which populations they work better for. Rather than making an arbitrary selection, or a selection by committee, of the option deemed the best, RTD can be introduced to dynamically determine the best path for each unit.
    In summary, RTD can be used to make BPM-based process automation more dynamic and adaptable to the different situations encountered in processing, effectively making the automation softer and less rigid in its processing.

    Read the article

  • The Minimalist Approach to Content Governance - Retire Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Good news - the Retire Phase is actually more fun than the Manage Phase. During the Retire Phase our content management team should not have to track down content creators if the Request Phase of this process was completed successfully. The ownership metadata, success criteria and time stamp that were applied to the original content submission will help to manage content at the end of the content life cycle. The Retire Phase will provide the opportunity for us to prune irrelevant content items through archiving or deletion, keeping the content system clear of irrelevant information and streamlining users' ability to browse and search for content.
    1. Act on Metrics Established during the Request Phase
    Why - Some information is only relevant for a given amount of time. In Content Platform Migration Strategy - Artifacts vs Perishable Content we examined two content types - Artifacts and Perishable content. Understanding the differences between Artifacts and Perishable content will allow us to explicitly respect their various lifespans. Additionally, some content may have been part of a project that failed to meet the success criteria outlined in the Request Phase. Any content that did not meet the metrics outlined in the Request Phase should be considered for deletion.
    How - Thankfully, by adhering to The Minimalist Approach to Content Governance our content should have some level of metadata associated with it that will allow us to quickly sort and understand how to deal with it. Content management systems like Oracle's Universal Content Management (UCM) natively allow you to create and save advanced searches that can use content metadata like folders, author, expiration date, security settings and custom metadata to pull back listings of content for examination. Additionally, analytics are available for all content items, allowing us to determine whether usage is meeting the success criteria that may have been outlined during the Request Phase. The lists produced by these approaches can be quickly reviewed for each project with the content owners and, based on the nature of the content and the success criteria, the content can undergo archiving or deletion.
    Impact - Retiring content that is no longer relevant will allow end users to have fast and relevant access to information across your enterprise. As we mentioned in our first post in this series - it is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use in the third week and during the third year. The light level of effort that was placed into the Request Phase of this process will set us up to keep content clean and relevant for a long time to come. With an up-to-date content repository, users will be able to quickly find the information that is critical to their work processes. You might not get a holiday named in your honor for managing the content system, but your users will appreciate their quick access to quality information.

    Read the article

  • Defining Discovery: Core Concepts

    - by Joe Lamantia
    Discovery tools have had a referenceable working definition since at least 2001, when Ben Shneiderman published 'Inventing Discovery Tools: Combining Information Visualization with Data Mining'. Dr. Shneiderman suggested that the combination of the two distinct fields of data mining and information visualization could manifest as a new category of tools for discovery, an understanding that remains essentially unaltered over ten years later. An industry analyst report titled Visual Discovery Tools: Market Segmentation and Product Positioning from March of this year, for example, reads, "Visual discovery tools are designed for visual data exploration, analysis and lightweight data mining." Tools should follow from the activities people undertake (a foundational tenet of activity-centered design), however, and Dr. Shneiderman does not in fact describe or define discovery activity or capability. As I read it, discovery is assumed to be the implied sum of the separate fields of visualization and data mining as they were then understood. As a working definition that catalyzes a field of product prototyping, it's adequate in the short term. In the long term, it makes the boundaries of discovery both derived and temporary, and leaves a substantial gap in the landscape of core concepts around discovery, making consensus on the nature of most aspects of discovery difficult or impossible to reach. I think this definitional gap is a major reason that discovery is still an ambiguous product landscape. To help close that gap, I'm suggesting definitions of four core aspects of discovery. These come out of our sustained research into discovery needs and practices, and have the goal of clarifying the relationship between discovery and other analytical categories. They are suggested, but should be internally coherent and consistent.
    Discovery activity is: "Purposeful sense-making activity that intends to arrive at new insights and understanding through exploration and analysis (and for these we have specific definitions as well) of all types and sources of data."
    Discovery capability is: "The ability of people and organizations to purposefully realize valuable insights that address the full spectrum of business questions and problems by engaging effectively with all types and sources of data."
    Discovery tools: "Enhance individual and organizational ability to realize novel insights by augmenting and accelerating human sense-making to allow engagement with all types of data at all useful scales."
    Discovery environments: "Enable organizations to undertake effective discovery efforts for all business purposes and perspectives, in an empirical and cooperative fashion."
    Note: applicability to a world of Big Data is assumed - thus the references to all scales / types / sources - rather than stated explicitly. I like that Big Data doesn't have to be written into this core set of definitions, because I think it's a transitional label - the new version of Web 2.0 - and goes away over time. References and Resources: Inventing Discovery Tools; Visual Discovery Tools: Market Segmentation and Product Positioning; Logic versus usage: the case for activity-centered design; A Taxonomy of Enterprise Search and Discovery

    Read the article

  • How to bill a client for frequently-interrupted time

    - by Greg
    I find that when I'm working on hourly-billable projects (in particular, those that are research/design/architecture-oriented as opposed to straight coding) that I'm easily distracted by any number of things (email, grab a drink (loss of focus, but nature happens), link off the webpage I was reading, wandering mind (easy when the job calls for a lot of thinking), etc.) This results in very fragmented time, far too incremental IMO to accurately track with a timeclock, and some time very gray. I frequently end up billing for only some fraction of the elapsed time I spent in order to feel fair, but sometimes it takes a really long time to put in an 8-hour day. By contrast, when I've worked for salary I've not worried about whether I'm actively working at any given minute, I just get the job done, and I've never had anything but stellar reviews/feedback from past salaried employers, so I think I get the job done well. I personally believe in an 80/20 cycle: I get 80% of my work done during an inspired 20% of my time. But I have to screw around the other 80% of the time in order to get that first 20%. So the question: what billing/time-tracking policy can I adopt in order to be fair to my hourly customers without having to write off my own less-productive 80% that a salaried employer is willing to overlook in light of the complete package? Note: This question is not about how to be more productive or focused. It's about how to work around whatever salient limitations that I have in a way that's both fair to me and to my customers. Update: A little clarification (to pre-emptively stop some righteous indignation): I currently have a half dozen different project/client groups. It's not a great situation and I'm working at reducing it down to two, but that's my current reality. It's very easy to get off on a thread related to a different project than the one I'm clocking, and I'm not always conscious of it at the time. [I did not intend the question to mean that I was off playing games or making personal calls, etc., and have adjusted wording above to be clearer. Most of the time. I am only human, and sometimes the mind does force you to take a break! :-)]

    Read the article

  • Should I be wary of signing a non-disclosure agreement with someone I just met?

    - by Thomas Levine
    tl;dr: Some guy I just met says he wants me to join his company. Before he shows me what they do, he wants me to sign a non-disclosure agreement. Is this weird? I'm traveling right now. Someone who saw me coding seemed to think I was smart or something and started talking with me. He explained that he owns a software company, told me a bit about what it does and told me that he was looking for a programmer who would work for a stake in the company. He explained that the company's product is being developed rather secretly, so he couldn't tell me much. But he did tell enough about the product to convince me that he's not completely making this up, which is a decent baseline. He suggested that he show me more of what he's been working on and, after seeing that, I decide whether I want to join. Because of the secrecy behind the product, he wants me to sign a non-disclosure agreement before we talk. I'm obviously somewhat skeptical because of the random nature by which we met. In the short term, I'm wondering if I should be wary of signing such an agreement. He said it would be easier to show me the product in person rather than over the internet, and I'm leaving town tomorrow, so I'd have to figure this out by tomorrow. If I decide to talk with him, I could decide later whether I trust that it's worth spending any time on this company. The concept of being able to avoid telling a secret seems strange to me for the same reason that things like certain aspects of copyright seem strange. Should I be wary of signing a non-disclosure agreement? Is this common practice? I don't know the details of the agreement of which he was thinking, (If I end up meeting with him, I'll of course read over the agreement before I decide whether to sign it.) so I could consider alternatives according to the aspects of the agreement. Or I could just consider the case of an especially harsh agreement. This question seems vaguely related. Do we need a non-disclosure agreement (NDA)? Thanks

    Read the article

  • The Legend of the Filtered Index

    - by Johnm
    Once upon a time there was a big and bulky twenty-nine million row table. He tempestuously hoarded data like a maddened shopper amid a clearance sale. Despite his leviathan nature and eager appetite, he loved to share his treasures. Multitudes from all around would embark upon an epiphanous journey to sample the contents of his mythical purse of knowledge. After a long day of performing countless table scans, the table was overcome with fatigue. After a short period of unavailability, he decided that he needed to consider a new way to share his prized possessions in a more efficient manner. Thus, a non-clustered index was born. She dutifully directed the pilgrims that sought the table's data - no longer would those despicable table scans darken the doorsteps of this quaint village. And yet, the table's voracious appetite did not wane. Any bit or byte that wandered near him was consumed with vigor. His columns and rows continued to expand beyond the expectations of even the most liberal estimation. As his rows grew grander they became more difficult to organize and maintain. The once bright and cheerful disposition of the non-clustered index began to dim. The wait time for those who sought the table's treasures began to increase. Some of those who came to nibble upon the banquet of knowledge even timed out and never realized the enlightenment they aspired to. After a period of heart-wrenching introspection, the table decided to drop the index and attempt another solution. At the darkest hour of the table's desperation came a grand flash of light. As his eyes regained their vision, there stood several creatures who looked very similar to his former, beloved, non-clustered index. They all spoke in unison as they introduced themselves: "Fear not, for we come to organize your data and direct those who seek to partake in it. We are the filtered index." Immediately, the filtered indexes began to scurry about. One took control of the past quarter's data. Another took control of the previous quarter's data. All of the remaining filtered indexes followed suit. As the nearly gluttonous habits of the table scaled forward, more filtered indexes appeared. Regardless of the table's size, all of the eagerly awaiting data seekers were delivered data as quickly as a Jimmy John's sandwich. The table was moved to tears. All in the land of data rejoiced and all lived happily ever after, at least until the next data challenge crept from the fearsome cave of the unknown. The End.
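
    For readers who want the moral in code, a minimal T-SQL sketch (table and column names are hypothetical) of the feature the fable describes: a nonclustered index restricted by a WHERE clause to one slice of a large table:

    ```sql
    -- Minimal sketch (hypothetical names): index only last quarter's rows,
    -- so queries against that slice avoid scanning the whole 29-million-row table.
    CREATE NONCLUSTERED INDEX IX_Orders_2010Q4
        ON dbo.Orders (OrderDate, CustomerId)
        WHERE OrderDate >= '20101001' AND OrderDate < '20110101';
    ```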

    Read the article

  • Not Dead, Just Busy

    - by MOSSLover
    So I didn’t die in a freak smelting accident yet, but I have been dealing with a lot of different things.  I had to take a bit of a break to deal with the cat death issue.  I am not fully recovered, because well it just happened a few months ago.  It kind of sucked.  Plus the apartment feels a lot bigger. Then you have the whole New York Comic Con thing where I had to plan some cosplay costumes.  I have been trying to find time to hang out with friends and have a social life.  That plus I built an entire presentation for iOS development for New York Code Camp.  I am also planning a couple MS Community dinners (namely one a week from Tuesday) plus a give camp.  I am also planning a vacation around SPS UK plus I will be at SPC.  Life is just incredibly hectic and when you factor in dating to the mix it’s gotten insane to the point where some day I just have to go dark.  Hence the lack of blogging.  I am just trying to keep up with everything and everyone without losing myself. If you guys will be at SPC or SPS UK I will be at both places this year.  Stop by the Planet Technologies booth and see me or I’ll be around somewhere.  I am really sorry if I don’t remember you from an event or if you are someone following me on twitter.  I am trying to get better at the mnemonic memory devices, but I think things broke down around the 47th event I attended or spoke at or something to that nature.  If anyone wants to talk to Cathy, Lori, or I about Women in SharePoint definitely find us at the event.  Anyway good night and good luck guys.  I promise to check back at least once before the year ends.  In the meantime twitter stalking is always possible.  Sometimes I even respond back. Technorati Tags: SPC,SPS UK,NYCC,NYC Code Camp,MOSSLover

    Read the article

  • Going Metro

    - by Tony Davis
    When it was announced, I confess was somewhat surprised by the striking new "Metro" User Interface for Windows 8, based on Swiss typography, Bauhaus design, tiles, touches and gestures, and the new Windows Runtime (WinRT) API on which Metro apps were to be built. It all seemed to have come out of nowhere, like field mushrooms in the night and seemed quite out-of-character for a company like Microsoft, which has hung on determinedly for over twenty years to its quaint Windowing system. Many were initially puzzled by the lack of support for plug-ins in the "Metro" version of IE10, which ships with Win8, and the apparent demise of Silverlight, Microsoft's previous 'radical new framework'. Win8 signals the end of the road for Silverlight apps in the browser, but then its importance here has been waning for some time, anyway, now that HTML5 has usurped its most compelling use case, streaming video. As Shawn Wildermuth and others have noted, if you're doing enterprise, desktop development with Silverlight then nothing much changes immediately, though it seems clear that ultimately Silverlight will die off in favor of a single WPF/XAML framework that supports those technologies that were pioneered on the phones and tablets. There is a mystery here. Is Silverlight dead, or merely repurposed? The more you look at Metro, the more it seems to resemble Silverlight. A lot of the philosophies underpinning Silverlight applications, such as the fundamentally asynchronous nature of the design, have moved wholesale into Metro, along with most the Microsoft Silverlight dev team. As Simon Cooper points out, "Silverlight developers, already used to all the principles of sandboxing and separation, will have a much easier time writing Metro apps than desktop developers". Metro certainly has given the framework formerly known as Silverlight a new purpose. It has enabled Microsoft to bestow on Windows 8 a new "duality", as both a traditional desktop OS supporting 'legacy' Windows applications, and an OS that supports a new breed of application that can share functionality such as search, that understands, and can react to, the full range of gestures and screen-sizes, and has location-awareness. It's clear that Win8 is developed in the knowledge that the 'desktop computer' will soon be a very large, tilted, touch-screen monitor. Windows owes its new-found versatility to the lessons learned from Windows Phone, but it's developed for the big screen, and with full support for familiar .NET desktop apps as well as the new Metro apps. But the old mouse-driven Windows applications will soon look very passé, just as MSDOS character-mode applications did in the nineties. Cheers, Tony.

    Read the article
