Search Results

Search found 4493 results on 180 pages for 'price comparison'.

Page 157/180

  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children.  It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving would have to with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight.  It does not happen with the purchase of the needed technology. Not one of us would go out, buy a parachute, the harnesses, helmet and all the gear and be able to convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process and so is the case with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind and especially when it is as mission-critical to the success of your business as Enterprise Content Management. To become a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation.  When you consider the amount of time and effort involved, it's not unlike getting a college degree and anyone that has trusted their lives to one of these instructors will no doubt appreciate their dedication to the curriculum.  Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what?  Humans are involved and that means that mistakes can happen and that rules change.  This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible".  His over-arching point was that with information management and security, environments change and people are involved meaning the work is never done.  He stated that you can never claim your compliance efforts are complete because of the following reasons. People are involved.  And lets face it, some are more trustworthy than others. Change is Constant. There is always some new technology coming along that is disruptive. Consumer grade cloud file sharing and sync tools come to mind here. Compliance is interpreted, not defined.  Laws and the judges that read them are always on the move. Technology is a tool, not a complete solution. There is no magic pill. The skydiving analogy holds true here as well.  Ultimately, a single person packs your parachute.  For obvious reasons, you prefer that this person be trustworthy but there are no absolute guarantees of a 100% error-free scenario.  Weather and wind conditions are never a constant and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control.  Rules and regulations vary by location and may be updated at any time and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.  
Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment and maybe even your company at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic.  In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps.  That’s 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012.  Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately.  Consider all the factors involved in your organization as you make your content management plans.  For additional areas of consideration, be sure to download our free whitepaper on the topic entitled "The Top 10 Criteria for Choosing an ECM System" which is available for download here.

    Read the article

  • Subterranean IL: The ThreadLocal type

    - by Simon Cooper
    I came across ThreadLocal<T> while I was researching ConcurrentBag. To look at it, it doesn't really make much sense. What's all those extra Cn classes doing in there? Why is there a GenericHolder<T,U,V,W> class? What's going on? However, digging deeper, it's a rather ingenious solution to a tricky problem. Thread statics Declaring that a variable is thread static, that is, values assigned and read from the field is specific to the thread doing the reading, is quite easy in .NET: [ThreadStatic] private static string s_ThreadStaticField; ThreadStaticAttribute is not a pseudo-custom attribute; it is compiled as a normal attribute, but the CLR has in-built magic, activated by that attribute, to redirect accesses to the field based on the executing thread's identity. TheadStaticAttribute provides a simple solution when you want to use a single field as thread-static. What if you want to create an arbitary number of thread static variables at runtime? Thread-static fields can only be declared, and are fixed, at compile time. Prior to .NET 4, you only had one solution - thread local data slots. This is a lesser-known function of Thread that has existed since .NET 1.1: LocalDataStoreSlot threadSlot = Thread.AllocateNamedDataSlot("slot1"); string value = "foo"; Thread.SetData(threadSlot, value); string gettedValue = (string)Thread.GetData(threadSlot); Each instance of LocalStoreDataSlot mediates access to a single slot, and each slot acts like a separate thread-static field. As you can see, using thread data slots is quite cumbersome. You need to keep track of LocalDataStoreSlot objects, it's not obvious how instances of LocalDataStoreSlot correspond to individual thread-static variables, and it's not type safe. It's also relatively slow and complicated; the internal implementation consists of a whole series of classes hanging off a single thread-static field in Thread itself, using various arrays, lists, and locks for synchronization. ThreadLocal<T> is far simpler and easier to use. ThreadLocal ThreadLocal provides an abstraction around thread-static fields that allows it to be used just like any other class; it can be used as a replacement for a thread-static field, it can be used in a List<ThreadLocal<T>>, you can create as many as you need at runtime. So what does it do? It can't just have an instance-specific thread-static field, because thread-static fields have to be declared as static, and so shared between all instances of the declaring type. There's something else going on here. The values stored in instances of ThreadLocal<T> are stored in instantiations of the GenericHolder<T,U,V,W> class, which contains a single ThreadStatic field (s_value) to store the actual value. This class is then instantiated with various combinations of the Cn types for generic arguments. In .NET, each separate instantiation of a generic type has its own static state. For example, GenericHolder<int,C0,C1,C2> has a completely separate s_value field to GenericHolder<int,C1,C14,C1>. This feature is (ab)used by ThreadLocal to emulate instance thread-static fields. Every time an instance of ThreadLocal is constructed, it is assigned a unique number from the static s_currentTypeId field using Interlocked.Increment, in the FindNextTypeIndex method. The hexadecimal representation of that number then defines the specific Cn types that instantiates the GenericHolder class. That instantiation is therefore 'owned' by that instance of ThreadLocal. 
    This gives each instance of ThreadLocal its own ThreadStatic field through a specific unique instantiation of the GenericHolder class. Although GenericHolder has four type variables, the first one is always instantiated to the type stored in the ThreadLocal<T>. This gives three free type variables, each of which can be instantiated to one of 16 types (C0 to C15). This puts an upper limit of 4096 (16³) on the number of ThreadLocal<T> instances that can be created for each value of T. That is, there can be a maximum of 4096 instances of ThreadLocal<string>, and separately a maximum of 4096 instances of ThreadLocal<object>, etc. However, there is an upper limit of 16384 enforced on the total number of ThreadLocal instances in the AppDomain. This is to stop too much memory being used by thousands of instantiations of GenericHolder<T,U,V,W>, as once a type is loaded into an AppDomain it cannot be unloaded, and will continue to sit there taking up memory until the AppDomain is unloaded. The total number of ThreadLocal instances created is tracked by the ThreadLocalGlobalCounter class. So what happens when either limit is reached? Firstly, to try and stop this limit being reached, it recycles GenericHolder type indexes of ThreadLocal instances that get disposed using the s_availableIndices concurrent stack. This allows GenericHolder instantiations of disposed ThreadLocal instances to be re-used. But if there aren't any available instantiations, then ThreadLocal falls back on a standard thread local slot using TLSHolder. This makes it very important to dispose of your ThreadLocal instances if you'll be using lots of them, so the type instantiations can be recycled. The previous way of creating arbitrary thread-static variables, thread data slots, was slow, clunky, and hard to use. In comparison, ThreadLocal can be used just like any other type, and each instance appears from the outside to be a non-static thread-static variable. It does this by using the CLR type system to assign each instance of ThreadLocal its own instantiated type containing a thread-static field, and so delegating a lot of the bookkeeping that thread data slots had to do to the CLR type system itself! That's a very clever use of the CLR type system.
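    To make the trick concrete, here is a minimal sketch of the idea (my own illustration, not the actual BCL source - the Holder class and the C0/C1 marker types are invented stand-ins for the real GenericHolder and Cn types): each distinct generic instantiation carries its own thread-static field.

        using System;
        using System.Threading;

        class C0 { }  class C1 { }                  // marker types standing in for the Cn classes

        static class Holder<T, U>                   // each distinct (T, U) combination gets its own s_value
        {
            [ThreadStatic] public static T s_value;
        }

        class Program
        {
            static void Main()
            {
                // Two different instantiations => two completely independent thread-static fields.
                Holder<string, C0>.s_value = "first";
                Holder<string, C1>.s_value = "second";
                Console.WriteLine(Holder<string, C0>.s_value);   // first
                Console.WriteLine(Holder<string, C1>.s_value);   // second

                // And each field is still per-thread: a new thread sees its own (unset) copy.
                var t = new Thread(() => Console.WriteLine(Holder<string, C0>.s_value ?? "(null on this thread)"));
                t.Start();
                t.Join();
            }
        }

    ThreadLocal<T> effectively does this bookkeeping for you, choosing the marker types from the index it was assigned at construction time.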

    Read the article

  • Virtualized data centre&ndash;Part three: Architecture

    - by marc dekeyser
    Having the basics (as discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all!   From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, which processor and how much memory is an activity you will spend quite some time doing research on. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :). Processors As a rule of thumb you want VT-D (virtualization) technology built in to the processor allowing you to have 64 bit machines running on your host. Memory The more the better! If you are building a home lab don’t bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it! Motherboard Depends on what you are going to do with storage: If you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage it obviously matters how many SATA ports you will have, alternatively you could use a RAID controller (but these set you back a pretty penny if you want one. DELL 6i’s are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a separate card just adds more heat, power usage and possible points of failure. Networking Just like your choice of motherboard the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card, a cluster or server which uses iSCSI storage tends to have more than one teamed up :). Storage The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple, my friend: having the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just have a bunch of disks sitting in a server and manually distribute the virtual machines over the disks. Platform I’m a Microsoftee so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won’t stop you but the rest of my posts will be oriented towards Server 2012 Hyper-V (aka 3.0)! What did I use? Before someone asks me this in the comments I’ll give you a quick run down of what I am using. - Intel 2.4 quad core processors (i something something) - 24 GB DDR3 Memory - Single disk in each server (might look at this as I move the servers to 2012) - Synology DS1812+ NAS - 3 network interfaces where possible - HP1800 procurve managed switch I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360 I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each so I can imagine you can build the same or better for a lower price. Next article will be diagramming what I want to achieve and starting a build on the Hyper-V 3.0 cluster!

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here’s a very simple example table to illustrate the issue: Customers CustomerId INT, NOT NULL, Primary Key CustomerName nvarchar(100) NOT NULL SalesRegionId INT NULL   The ‘SalesRegionId’ column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows: CustomerId CustomerName SalesRegionId 1 Customer A 1 2 Customer B NULL 3 Customer C 4 4 Customer D 2 5 Customer E 3   How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this: 1: SELECT 2: CustomerId, 3: CustomerName, 4: SalesRegionId 5: FROM Customers 6: WHERE SalesRegionId NOT IN (2,4)   Will this work? In short, no; at least not in the way that you might expect. Here’s what this query will return given the example data we’re working with: CustomerId CustomerName SalesRegionId 1 Customer A 1 5 Customer E 3   I was expecting that this query would also return ‘Customer B’, since that customer has a NULL SalesRegionId. In my mind, a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn’t suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test and I simply made a poor assumption that the NOT IN would work the way that I thought it should. So why doesn’t this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator: If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression. The key phrase out of that quote is, “… is equal to any expression from the comma-separated list…”. The NULL SalesRegionId isn’t included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS: The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name. In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in IN sub-queries or expressions that are used with the IN operator: Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in combination with IN or NOT IN can produce unexpected results.
If I were to include a ‘SET ANSI_NULLS OFF’ command right above my SELECT statement I would get ‘Customer B’ returned in the results, but that’s definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause: 1: SELECT 2: CustomerId, 3: CustomerName, 4: SalesRegionId 5: FROM Customers 6: WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)   This query works and properly includes ‘Customer B’ in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we’d end up with: 1: DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT) 2: INSERT @regionsToIgnore values (2),(4) 3:  4: SELECT 5: c.CustomerId, 6: c.CustomerName, 7: c.SalesRegionId 8: FROM Customers c 9: LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId 10: WHERE r.IgnoredRegionId IS NULL By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large and it also will correctly include any customers that do not yet have a sales region.
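    For anyone who wants to see the behaviour first-hand, here is a self-contained repro of the example data from the post (written against a table variable rather than a permanent table, purely for convenience):

        DECLARE @Customers TABLE (
            CustomerId    INT           NOT NULL PRIMARY KEY,
            CustomerName  NVARCHAR(100) NOT NULL,
            SalesRegionId INT           NULL
        );

        INSERT @Customers VALUES
            (1, N'Customer A', 1),
            (2, N'Customer B', NULL),
            (3, N'Customer C', 4),
            (4, N'Customer D', 2),
            (5, N'Customer E', 3);

        -- Returns only Customers A and E; Customer B silently disappears because
        -- NULL NOT IN (2, 4) evaluates to UNKNOWN rather than TRUE.
        SELECT CustomerId, CustomerName, SalesRegionId
        FROM @Customers
        WHERE SalesRegionId NOT IN (2, 4);

        -- Returns Customers A, B and E by handling the NULL explicitly.
        SELECT CustomerId, CustomerName, SalesRegionId
        FROM @Customers
        WHERE SalesRegionId NOT IN (2, 4) OR SalesRegionId IS NULL;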

    Read the article

  • CS, SE, HCI, Information Science, Please recommendation for further education of the former performing art manager seeking career in IT industries? [on hold]

    - by Baek Seungjoo
    IT specialists there! Thank you very much for your collective efforts here, and I got huge help reading your professional comments and advice on each question I have searched so far! This time, I would like to ask for your practical advice or recommendations on what I am struggling with at this moment. I am currently seeking higher education for my career transition from performing art manager and director to “IT software and/or service development and management specialist”. However, as this field is quite new to me, and there are lots of different work positions, I have no idea which grad major I should pursue in order to get qualification. Of course I know this question could sound weird as it is a kind of personal choice. But given my lack of understanding of how IT software companies work in general, your practical and experience-based advice will be a great help to me, having spent more than two months on self-research on the net. OK. Before my question, here is my plan and history, which are quite different from those of people currently in the IT industry, I think… 1) Target Firstly, make a career transition into IT service or product companies and get experience. Eventually, pursue IT entrepreneurship in combination with my arts and cultural production and business expertise. 2) Background Career: performing arts director and manager in theatre-based scale opera and musical Art education in youth BA in literature and Chinese studies (Art & Humanities) MA in Cultural & Creative Industries (Art & Humanities) – dissertation with focus on digital prosumption and the lived experience of the prosumer. (a qualitative research on the agents in the digital world) 3) Personally Huge interest in IT hardware and software, and their trends. Skills to build, repair and tune PCs - of course this is no more than a personal hobby, but it shows my interest in this field. 4) Problem I encounter the question “So, what do you think you can contribute practically in this position?”. This question turns me down every time I go through job interviews, and I decided to get more education in the relevant area. Here are my questions. 1) In terms of work positions in IT software companies, I wonder if I can draw the comparison that what “Artist” is to “Arts Manager or Director”, “Developer” is to “Product Manager”. (Of course, this stereotypical division of Artist-Art Manager is rough because the domains overlap to some extent, and are blurring at least in my field, and they are in different contexts, but just to speak simply.) Normally, artists come with special arts education, and they live in their own world of artistic inspiration and creation, and they feel alive in practice and on stages. Meanwhile, from the point of staging and managing productions, the role of art manager is critical as well. Our role cares about how the production appeals to the audience in an effective way, how to make profit and future sustainable management through that, how to set up future strategy in consideration of the external conditions such as political and social circumstances, audience trend and level, other production trends from on-going and historical perspectives, how and what voice the production gives to society from political, economic, humanitarian stances. So, we need keen eyes on the economic, political, and societal environment, have to understand human beings and their desires, must know how to make presentations and attract investors, must have sense in managing and fighting over limited financial resources, how to extend networking and so on.
    It is common that the two agents create productions in collaboration (normally not in that ideal way but in conflict and fighting, though). So, we need to know each other’s expertise to some extent, for a better production. What are the work positions in IT software industries equivalent to the role of “art manager” in performing arts? From my view, considering developers come with special education in the world of computer science, software engineering, or others (self-education sometimes), and they express themselves with the art of coding, computer languages on the black screen, and present a sort of artistic production online to the audience, I guess there might be someone who collaborates with developers in creating, managing, and launching IT services or products. 2) Which education among CS, SE, HCI, Information Science, is needed for those seeking such a work position? Especially for a person like me. (At this moment, Information Science has the highest possibility of getting in, since I lack Calculus and Math in my undergrad education. But please let me know irrespective of this concern; I think there are ways to back it up if a CS or SE education is needed in my case.) 3) Which field between Information Science and HCI would be a more practical background regarding job hunting? And which of them has more demand in the job market? As I checked, HCI is closer to CS than IS in its focus of study. Thank you very much for your patience reading such a long inquiry, and I appreciate your efforts in advance. Have a nice day in this beautiful summer.

    Read the article

  • Documentation Changes in Solaris 11.1

    - by alanc
    One of the first places you can see Solaris 11.1 changes are in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release, and thought some would be interesting to blog about, but didn't review all the changes (not by a long shot), and am not going to cover all the changes here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new break down will make it easier to get straight to the sections you need when a task is at hand. Packaging System Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so should be easier to find, and easier to share links to specific pages the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged, and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades & installations. The Adding and Updating Oracle Solaris 11.1 Software Packages was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or viewing the lists before you've installed the system to pick which one you want to initially install. X Window System We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages, and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Adminstrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OS'es share (more about that another time). Security One of the things Oracle likes to do for its products is to publish security guides for administrators & developers to know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security relevant configuration options were documented. 
    The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding additional guidelines for existing features, such as how to limit the size of tmpfs filesystems, to avoid users driving the system into swap thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has been the source for years for documentation of security-relevant Solaris API's such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard API's. Solaris-specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Library Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc. so that developers following our examples start out with safer code. The Writing Device Drivers guide even had the appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI. Little Things Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but there were also a lot of smaller things that got fixed as well in the nearly year-long gap between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about & found interesting or useful: The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there. A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide. The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1. The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12). The README file in that directory was updated to show the correct paths for installing both kernel & userspace modules, including the 64-bit variants.
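    As a small illustration of the strcpy()-to-strlcpy() and sprintf()-to-snprintf() change mentioned above (my own sketch, not an excerpt from the Solaris docs - the function and buffer names are made up), the bounded variants look like this:

        #include <stdio.h>
        #include <string.h>

        /* strlcpy() is provided by libc on Solaris; on other platforms it may need an extra library. */
        void greet(const char *name)
        {
            char copy[32];
            char msg[64];

            /* Instead of: strcpy(copy, name); and sprintf(msg, "Hello, %s", name); */
            (void) strlcpy(copy, name, sizeof (copy));             /* always NUL-terminates        */
            (void) snprintf(msg, sizeof (msg), "Hello, %s", copy); /* never writes past the buffer */

            (void) printf("%s\n", msg);
        }

        int main(void)
        {
            greet("Solaris 11.1");
            return 0;
        }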

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-Suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with their return on the money the invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.   First I want to start with a base definition I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and the equity capital respectively, assessing how efficient financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus. Now that the challenge has been made and items have been defined, let’s go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.   Most UPK ROI’s I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced Help Desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing/upgrading an enterprise application. Let’s assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good base value to a measure the current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users that are accessing UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those that don’t.  You do some research and discover each minute saved in processing a claim saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt or paid to the shareholders.   With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the type of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.

    Read the article

  • k-d tree implementation [closed]

    - by user466441
    When I run my code under the debugger, I get this error:

        this 0x00093584 {_Myproxy=0x00000000 _Mynextiter=0x00000000 } std::_Iterator_base12 * const
        - _Myproxy 0x00000000 {_Mycont=??? _Myfirstiter=??? } std::_Container_proxy *
            _Mycont      CXX0017: Error: symbol "" not found
            _Myfirstiter CXX0030: Error: expression cannot be evaluated
        + _Mynextiter 0x00000000 {_Myproxy=??? _Mynextiter=??? } std::_Iterator_base12 *

    but I don't know what it means. The code is this:

        #include<iostream>
        #include<vector>
        #include<algorithm>
        using namespace std;

        struct point
        {
            float x,y;
        };

        vector<point>pointleft(4);
        vector<point>pointright(4);

        //we are going to implement two comparison functions for x and y coordinates, we need them in the
        //calculation of the median (we should sort the vector by x or y according to the depth information,
        //i.e. is depth even or odd).
        bool sortby_X(point &a,point &b)
        {
            return a.x<b.x;
        }
        bool sortby_Y(point &a,point &b)
        {
            return a.y<b.y;
        }

        //so i am going to implement two median finding algorithms, one for finding the median by x and another by y
        point medianx(vector<point>points)
        {
            point temp;
            sort(points.begin(),points.end(),sortby_X);
            temp=points[(points.size()/2)];
            return temp;
        }
        point mediany(vector<point>points)
        {
            point temp;
            sort(points.begin(),points.end(),sortby_Y);
            temp=points[(points.size()/2)];
            return temp;
        }

        //now construct basic tree structure
        struct Tree
        {
            float x,y;
            Tree(point a)
            {
                x=a.x;
                y=a.y;
            }
            Tree *left;
            Tree *right;
        };

        Tree * build_kd( Tree *root,vector<point>points,int depth)
        {
            point temp;
            if(points.size()==1) // that point is as a leaf
            {
                if(root==NULL) root=new Tree(points[0]);
                return root;
            }
            if(depth%2==0)
            {
                temp=medianx(points);
                root=new Tree(temp);
                for(int i=0;i<points.size();i++)
                {
                    if (points[i].x<temp.x) pointleft[i]=points[i];
                    else pointright[i]=points[i];
                }
            }
            else
            {
                temp=mediany(points);
                root=new Tree(temp);
                for(int i=0;i<points.size();i++)
                {
                    if(points[i].y<temp.y) pointleft[i]=points[i];
                    else pointright[i]=points[i];
                }
            }
            return build_kd(root->left,pointleft,depth+1);
            return build_kd(root->right,pointright,depth+1);
        }

        void print(Tree *root)
        {
            while(root!=NULL)
            {
                cout<<root->x<<" " <<root->y;
                print(root->left);
                print(root->right);
            }
        }

        int main()
        {
            int depth=0;
            Tree *root=NULL;
            vector<point>points(4);
            float x,y;
            int n=4;
            for(int i=0;i<n;i++)
            {
                cin>>x>>y;
                points[i].x=x;
                points[i].y=y;
            }
            root=build_kd(root,points,depth);
            print(root);
            return 0;
        }

    I am trying to implement this pseudo code in C++:

        tuple function build_kd_tree(int depth, set points):
            if points contains only one point:
                return that point as a leaf.
            if depth is even:
                Calculate the median x-value.
                Create a set of points (pointsLeft) that have x-values less than the median.
                Create a set of points (pointsRight) that have x-values greater than or equal to the median.
            else:
                Calculate the median y-value.
                Create a set of points (pointsLeft) that have y-values less than the median.
                Create a set of points (pointsRight) that have y-values greater than or equal to the median.
            treeLeft = build_kd_tree(depth + 1, pointsLeft)
            treeRight = build_kd_tree(depth + 1, pointsRight)
            return(median, treeLeft, treeRight)

    Please help me - what does this error mean?
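    For comparison, here is a minimal sketch of that pseudo code (my own illustration, not a fix of the asker's program) that partitions into local vectors rather than the fixed-size global pointleft/pointright vectors, which avoids the out-of-range writes that typically produce this kind of corrupted-iterator debugger output:

        #include <algorithm>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        struct Point { float x, y; };

        struct Node
        {
            Point p;
            Node *left;
            Node *right;
            explicit Node(Point pt) : p(pt), left(NULL), right(NULL) {}
        };

        bool lessByX(const Point &a, const Point &b) { return a.x < b.x; }
        bool lessByY(const Point &a, const Point &b) { return a.y < b.y; }

        Node *build_kd(std::vector<Point> points, int depth)
        {
            if (points.empty()) return NULL;

            // Sort by x on even depths and by y on odd depths, then split around the median element.
            std::sort(points.begin(), points.end(), depth % 2 == 0 ? lessByX : lessByY);
            std::size_t mid = points.size() / 2;

            Node *node = new Node(points[mid]);
            std::vector<Point> leftPts(points.begin(), points.begin() + mid);
            std::vector<Point> rightPts(points.begin() + mid + 1, points.end());
            node->left = build_kd(leftPts, depth + 1);
            node->right = build_kd(rightPts, depth + 1);
            return node;
        }

        void print(const Node *node)   // pre-order traversal; 'if' rather than 'while', so it terminates
        {
            if (node == NULL) return;
            std::cout << node->p.x << " " << node->p.y << "\n";
            print(node->left);
            print(node->right);
        }

        int main()
        {
            std::vector<Point> points;
            Point pt;
            while (std::cin >> pt.x >> pt.y)   // read points until end of input
                points.push_back(pt);
            print(build_kd(points, 0));
            return 0;
        }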

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle Open World last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle’s commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business.   As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on premise and the various  on demand options as business needs arise.    In today’s business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes. The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility – such as future upgrades and product updates – to Oracle technology experts, at an affordable and expected price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer’s overall cost of ownership with its size and anticipated growth and business needs. In addition to providing PeopleSoft HCM applications' world class business functionality and Oracle On Demand's embassy-grade security, Oracle’s hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers’ business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and time to production less than a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and secured URL all within a private cloud. 
Customers will not share databases, environments, platforms, or access portals with other customers because we value how mission critical your data are to your business. Oracle’s On Demand also provides a full breadth of disaster recovery services to provide customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in the case of an emergency. Currently we have over 50 PeopleSoft customers delegating us with the management of their applications through Oracle On Demand. If you are a customer interested in learning more about the PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize your variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • The Social Enterprise: Gangnam Style

    - by Mike Stiles
    Are only small and medium businesses able to put social strategies in place, generate consistent, compelling content for customers, and be nimble enough to listen and respond to the social communities they build? Or are enterprise organizations eagerly and effectively adopting social as well? It depends on whom inside the organization you ask. A study from Attensity looked at who “gets” social inside enterprise organizations. The results were unsurprising. Mostly, Generation X and Y employees who came of age with social as part of their lives and as a key communications vehicle understand it. Imagine being a 25-year-old at a company that bans employees from accessing Facebook at work. You may as well tell them they can’t use phones and must do all calculations on an abacus. To them, such policy is absent of real-world logic and signals to them the organization is destined to be the victim of an up-and-comer. After that, it’s senior management that gets social. You don’t get to be in senior management without reading a few things and paying attention. Most senior managers are well aware of the impact social has had and will have, though they may be unsure of what to do about it. The better ones will utilize those on the inside who do inherently know how to communicate and build virtual relationships using social. The very best will get the past out of the way for these social innovators, so the new communications can be enacted minus counterproductive dictums, double-clutching, meeting-creep, and all the other fading internal practices that water down content and impede change. Organizationally, the Attensity study found 81% of enterprise companies believe failing to embrace social will result in their being left behind. Yet our old friend fear still has many captive in its clutches. 79% feel overwhelmed by the volume of social data available, something a social technology partner with goal-oriented analytics expertise could go a long way toward alleviating. Then there’s the fear of social having a negative impact. This comes from a lack of belief in the product, the customer service, or both. The public uses social not to go out and slay brands. They’re using it to be honest. If the fear is that honesty will reflect badly on the brand, the brand has much bigger, broader problems than what happens on Facebook. Sadly, most enterprise organizations still see social as a megaphone, a one-way channel with which to hit people with ads. They either don’t understand social relationships, or don’t want any. The truly unenlightened manager will always say, “We help them by selling them our stuff.” “Brand affinity” is a term, it’s just not one assigned much value in enterprise organizations. Which brings us to Psy, the Korean performer whose Internet video phenom “Gangnam Style,” as of this writing, has been viewed 438,550,238 times on YouTube. It’s bigger than anything a brand will probably ever publish. Most brands would never have seen the point of making or publishing it. But a funny thing happened on the way to Internet success. The video literally doubled the stock price of Psy’s father’s software firm. NH Investment and Securities said, "The positive sentiment has attracted investors just because of the fact the company is owned by Psy's father and uncle.” The company wasn’t mentioned or seen in the video in any way, yet reaped tangible rewards just for being tangentially associated with it. 
Imagine your brand being visibly and directly responsible for such a smash and tell me it’s worthless. When enterprise organizations embrace the value of igniting passions, making people happier, solving their problems, informing them, helping them have fun, etc., then they will have fully embraced social, and will reap the brand affinity rewards of heightened awareness, brand loyalty and yes, sales.

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forth with interviews so as to weed out the group a little bit to find some people that can demonstrate some skill. I put together the following quiz to send to applicants, it focuses only on PHP, but that is because that is what about 95% of the work will be done in. I'm hoping to get some feedback on A. if its a good idea to send this to applicants and B. if it can be improved upon. # 1. FizzBuzz # Write a small application that does the following: # Counts from 1 to 100 # For multiples of 3 output "Fizz" # For multiples of 5 output "Buzz" # For multiples of 3 and 5 output "FizzBuzz" # For numbers that are not multiples of 3 nor 5 output the number. <?php ?> # 2. Arrays # Create a multi-dimensional array that contains # keys for 'id', 'lot', 'car_model', 'color', 'price'. # Insert three sets of data into the array. <?php ?> # 3. Comparisons # Without executing the code, tell if the expressions # below will return true or false. <?php if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False? if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False? if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False? ?> # 4. Bug Checking # The code below is flawed. Fix it so that the code # runs properly without producing any Errors, Warnings # or Notices, and returns the proper value. <?php //Determine how many parts are needed to create a 3D pyramid. function find_3d_pyramid($rows) { //Loop through each row. for ($i = 0; $i < $rows; $i++) { $lastRow++; //Append the latest row to the running total. $total = $total + (pow($lastRow,3)); } //Return the total. return $total; } $i = 3; echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces."; ?> # 5. Quick Examples # Create a small example to complete the task # for each of the following problems. # Create an md5 hash of "Hello World"; # Replace all occurances of "_" with "-" in the string "Welcome_to_the_universe." # Get the current date and time, in the following format, YYYY/MM/DD HH:MM:SS AM/PM # Find the sum, average, and median of the following set of numbers. 1, 3, 5, 6, 7, 9, 10. # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array. <?php ?>
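    For what it's worth, a reference answer to question 1 (one of many valid solutions, shown here purely as a grading aid) might look like this:

        <?php
        // Question 1: FizzBuzz reference solution.
        for ($i = 1; $i <= 100; $i++) {
            if ($i % 15 == 0) {
                echo "FizzBuzz\n";      // multiple of both 3 and 5
            } elseif ($i % 3 == 0) {
                echo "Fizz\n";
            } elseif ($i % 5 == 0) {
                echo "Buzz\n";
            } else {
                echo $i . "\n";
            }
        }
        ?>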

    Read the article

  • Formatting made easy - Silverlight 4

    - by PeterTweed
    One of the simplest tasks in business apps is displaying different types of data to be read in the format that the user expects them.  In Silverlight versions until Silverlight 4 this has meant using a Converter to format data during binding.  This involves writing code for the formatting of the data to bind, instead of simply defining the formatting to use for the data in question where you bind the data to the control.   In Silverlight 4 we find the addition of the StringFormat markup extension that allows us to do exactly this.  Of course the nice thing is the ability to use the common formatting conventions available in C# through the String.Format function.   This post will show you how to use three of the common formatting conventions - currency, a defined number of decimal places for a number and a date format.   Steps:   1. Create a new Silverlight 4 application   2. In the body of the MainPage.xaml.cs file replace the MainPage class with the following code:       public partial class MainPage : UserControl     {         public MainPage()         {             InitializeComponent();             this.Loaded += new RoutedEventHandler(MainPage_Loaded);         }           void MainPage_Loaded(object sender, RoutedEventArgs e)         {             info i = new info() { PriceValue = new Decimal(9.2567), DoubleValue = 1.2345678, DateValue = DateTime.Now };             this.DataContext = i;         }     }         public class info     {         public decimal PriceValue { get; set; }         public double DoubleValue { get; set; }         public DateTime DateValue { get; set; }     }   This code defines a class called info with different data types for the three properties.  A new instance of the class is created and bound to the DataContext of the page.   3.  In the MainPage.xaml file copy the following XAML into the LayoutRoot grid:           <Grid.RowDefinitions>             <RowDefinition Height="60*" />             <RowDefinition Height="28*" />             <RowDefinition Height="28*" />             <RowDefinition Height="30*" />             <RowDefinition Height="154*" />         </Grid.RowDefinitions>         <Grid.ColumnDefinitions>             <ColumnDefinition Width="86*" />             <ColumnDefinition Width="314*" />         </Grid.ColumnDefinitions>         <TextBlock Grid.Row="1" Height="23" HorizontalAlignment="Left" Margin="32,0,0,0" Name="textBlock1" Text="Price Value:" VerticalAlignment="Top" />         <TextBlock Grid.Row="2" Height="23" HorizontalAlignment="Left" Margin="32,0,0,0" Name="textBlock2" Text="Decimal Value:" VerticalAlignment="Top" />         <TextBlock Grid.Row="3" Height="23" HorizontalAlignment="Left" Margin="32,0,0,0" Name="textBlock3" Text="Date Value:" VerticalAlignment="Top" />         <TextBlock Grid.Column="1" Grid.Row="1" Height="23" HorizontalAlignment="Left" Name="textBlock4" Text="{Binding PriceValue, StringFormat='C'}" VerticalAlignment="Top" Margin="6,0,0,0" />         <TextBlock Grid.Column="1" Grid.Row="2" Height="23" HorizontalAlignment="Left" Margin="6,0,0,0" Name="textBlock5" Text="{Binding DoubleValue, StringFormat='N3'}" VerticalAlignment="Top" />         <TextBlock Grid.Column="1" Grid.Row="3" Height="23" HorizontalAlignment="Left" Margin="6,0,0,0" Name="textBlock6" Text="{Binding DateValue, StringFormat='yyyy MMM dd'}" VerticalAlignment="Top" />   This XAML defines three textblocks that use the StringFormat markup extension.  
    The three examples use C for currency, N3 for a number with 3 decimal places and yyyy MMM dd for a date that displays a four-digit year, three-letter month and two-digit day.   4. Run the application and see the data displayed with the correct formatting. It's that easy!
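    The same format strings work with String.Format in code-behind, which can be handy for a quick comparison (values taken from the example above; the exact output depends on the current culture):

        string price  = String.Format("{0:C}", 9.2567m);                // e.g. "$9.26" under en-US
        string number = String.Format("{0:N3}", 1.2345678);             // "1.235"
        string date   = String.Format("{0:yyyy MMM dd}", DateTime.Now); // e.g. "2010 May 04"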

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications. Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn’t your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company’s employees don’t behave much differently at work where information is concerned than they do in their personal lives. They “tune in” to information that’s immediately relevant to them, that peaks their interest, and/or that’s presented in a visually engaging way. That currently makes an internal social platform the most ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It’s a collaboration tool on steroids. If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex…probably. But what does it mean to do an internal social platform “right”? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc. And don’t worry, you won’t be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has “The Nest.” Red Robin’s got one. I myself got an in-depth look at McGraw-Hill’s internal social platform at Blogwell NYC. Some of these companies are building their own platforms, others are buying them off the shelf or customizing readymade solutions. But you won’t be the last either. Prescient Digital Media and the IABC learned 39% of companies don’t offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That’s pretty astonishing since social has become as essential a modern day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged. Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars that is useful, too. I can extrapolate what I need from that. NOTE: Your anecdotal evidence from your own job will not be very helpful. I'm looking for industry average statistics from a credible source. EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer) so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best. Firstly, let me respond to the idea that us "management types" shouldn't use these types of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive. And this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that on average a software shops spends 5% (making up that number) of their total development budget (salaries being the large portion of the other 95%, for arguments sake), and I'm only spending 3%, it helps in the justification process. So, it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the necessary justification to spend what I need to spend on developers to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling. Next, let's examine the idea that even considering the relationship between a spend on developer salaries and developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on.". Well, yes, it is like saying that, and from a budgeting perspective, this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies or less? Why might this be? If you are a coffee supply manager, that seems like a useful thought process. The same seems to hold true for developers. Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, the same business, the same application architecture, and the same development frameworks. I guess if I could find such a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic. 
Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation. But the generalized statistic does help in this evaluation. You can tell whether your budget is grossly off from what others are doing, and that can help you figure out why. So, I will concede the point that it would be better to find statistics that align with my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality, because large enterprises have very heterogeneous IT environments, but if I had to pick one technology as our focus, that would be it. Still, if you were to point me at statistics for a Linux shop doing embedded Java applications, I would find them useful as a point of comparison. SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't much care whether it is expressed as a percentage of total budget, as X dollars per dev, or as Y percent of salary. Any metric would be useful. If there are metrics that are specific to ASP.NET dev shops in the Northeast US, all the better, but I would be happy to find anything.
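    To make the arithmetic above concrete, here is a tiny C# sketch of the comparison being described. Every number in it is a placeholder (the 5% figure is the made-up one from the question itself), and the class name is invented; it only illustrates the calculation, not real industry data.

      using System;

      // Back-of-the-envelope tooling-spend comparison; all figures are placeholders.
      class ToolingBudgetCheck
      {
          static void Main()
          {
              decimal totalDevBudget = 1200000m; // salaries + tooling per year (assumed)
              decimal toolingSpend = 36000m;     // hardware + software licences (assumed)
              decimal industryShare = 0.05m;     // placeholder benchmark, not a real statistic

              decimal ourShare = toolingSpend / totalDevBudget;
              Console.WriteLine("Our tooling share: {0:P1}", ourShare);
              Console.WriteLine("Benchmark share:   {0:P1}", industryShare);
              Console.WriteLine(ourShare < industryShare
                  ? "Below the benchmark - possible room to justify more tooling spend."
                  : "At or above the benchmark.");
          }
      }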

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and memory profiler, which costs 614€ and is currently on sale with a special offer for 306€. It includes one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data, the All Methods grid shows a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code. But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really do miss the thread stack window from YourKit. When much memory is allocated and the CPU consumption is high, how am I supposed to get a clue about which places I should look at? The Snapshot summary gives a rough overview, which is ok for a first impression. Next is the Assemblies view. This gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you did check in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft. By default mscorlib and System are not checked. That is the reason why my By Type window looked so empty the first time. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized and what will happen if you have thousands of objects referencing you. That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more will not help them, because the sheer amount of information would overwhelm them, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • How Does a 724% Return on Your Salesforce Automation Investment Sound?

    - by Brian Dayton
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that, a 724% return on investment (ROI) when they implemented these Oracle Cloud solutions in their fast-moving, rapidly-growing business. Congratulations Apex IT! Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent. Fast Facts: Return on Investment - 724%; Payback - 2 months; Average annual benefit - $91,534; Cost: Benefit Ratio - 1:48. Business Benefits: In addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations—across both Sales and Marketing teams: Improved ability to identify new opportunities and focus sales resources on higher-probability deals; Reduced administration and manual lead tracking—resulting in more time selling and a net new client increase of 46%; Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs; Improved margins with more structured and disciplined sales processes—resulting in more effective deal negotiations. Please join us in congratulating Apex IT on this award and their business achievements. Want More Details? Don't take our word for it. Read the full Apex IT ROI Case Study and learn more about Apex IT's business—including their work with Oracle Sales and Marketing Cloud on behalf of their clients in leading Sales organizations. Learn More About Oracle Sales Cloud: www.oracle.com/salescloud www.facebook.com/oraclesalescloud www.youtube.com/oraclesalescloud Oracle Customer Experience and Complementary Sales Solutions: Oracle Configure, Price and Quote (CPQ) Cloud, Oracle Marketing Cloud, Oracle Customer Experience

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future. Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. 
For instance: #include <stdlib.h> int callback(int status) { if (status == 0) return status; exit(status); } would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as: #pragma does_not_return(exit) #pragma does_not_return(panic) Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than Guid, string, int, etc. Can this really be advisable in a distributed system? Long version I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type. Now egg type is broken down by the species—ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options. Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) is identifying egg types with an industry-standard integer representing the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions: Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever). Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service. I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by some people who understand DDD better than I do, in the hopes that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity and as such should not have a Guid for an ID. However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.) Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion? Summary Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
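    For reference, here is a minimal C# sketch of what solution #2's identifier could look like, built from the speciesCode and eggSizeCode fields named above. The class name, the "speciesCode-eggSizeCode" string format, and the Parse helper are illustrative assumptions, not part of the actual system.

      using System;

      // Illustrative value-object identifier combining the two codes from the question.
      public sealed class EggTypeId : IEquatable<EggTypeId>
      {
          public int SpeciesCode { get; }
          public int EggSizeCode { get; }

          public EggTypeId(int speciesCode, int eggSizeCode)
          {
              SpeciesCode = speciesCode;
              EggSizeCode = eggSizeCode;
          }

          public bool Equals(EggTypeId other) =>
              other != null && SpeciesCode == other.SpeciesCode && EggSizeCode == other.EggSizeCode;

          public override bool Equals(object obj) => Equals(obj as EggTypeId);

          public override int GetHashCode() => (SpeciesCode * 397) ^ EggSizeCode;

          // The value object still has to cross service boundaries as plain text
          // (JSON bodies, URL segments), e.g. "3-2".
          public override string ToString() => SpeciesCode + "-" + EggSizeCode;

          public static EggTypeId Parse(string text)
          {
              var parts = text.Split('-');
              return new EggTypeId(int.Parse(parts[0]), int.Parse(parts[1]));
          }
      }

    Once the identifier is flattened to a string like "3-2" for URLs and JSON, the parsing and validation rules have to be re-implemented (or at least agreed on) in every consuming technology, which is exactly the encapsulation concern raised in the question.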

    Read the article

  • Oracle Database 11g R2: One Year On

    - by Yusuke.Yamamoto
    [Original post in Japanese; most of the text was lost to character-encoding damage. What remains legible:] Posted 2010/11/17, updated 2011/01/07. A first-anniversary retrospective of Oracle Database 11g Release 2, with a timeline of related milestones: 2009/11/11 Oracle Exadata Database Machine Version 2; 2009/11/17 Oracle Database 11g R2 released; 2010/02/01 …; 2010/03/31 SAP and Oracle Database 11g R2 (ISV-related announcement); 2010/05/18 Windows Server 2008 R2 / Windows 7 and Oracle Database 10g R2; 2010/06/23 Oracle Application Express 4.0; 2010/07/09 IDC Japan figures for the 2009 Windows RDBMS market; 2010/08/17 TPC-C Benchmark Price/Performance result; 2010/09/13 Patch Set 11.2.0.2 for Linux; 2010/10/20 Oracle Exadata Database Machine X2; 2010/11/17 Oracle Database 11g R2 one-year anniversary. The post also includes Oracle Database 11g feature summaries, customer-voice items referencing Oracle9i Database migration and Oracle ASM, and related articles on IT Leaders and Think IT.

    Read the article

  • How could I get the selected value from a dropdownlist in a Kendo UI grid in MVC

    - by Karthik Bammidi
    I am working on Kendo UI with ASP.NET MVC Razor. I am trying to bind database table data to a Kendo grid that supports CRUD operations. Here I need to populate a dropdownlist for one of my table fields. I have used the following code. View: @model IEnumerable<MvcApplication1.PriceOption> @(Html.Kendo().Grid(Model) .Name("Grid") .Columns(columns => { //columns.Bound(p => p.ProductTitle).ClientTemplate("<input type='checkbox' disabled='disabled'name='Discontinued' <#= Discontinued? checked='checked' : '' #> />"); columns.Bound(p => p.ProductTitle).EditorTemplateName("OptionalEmail"); columns.Bound(p => p.OptionTitle); columns.Bound(p => p.Price); columns.Bound(p => p.Frequency); columns.Command(command => { command.Edit(); command.Destroy(); }).Width(200); }) .ToolBar(toolbar => toolbar.Create()) .Editable(editable => editable.Mode(Kendo.Mvc.UI.GridEditMode.InLine)) .Pageable() .Sortable() .Scrollable() .DataSource(dataSource => dataSource .Ajax() .Events(events => events.Error("error_handler")) .Model(model => model.Id(p => p.ProductID)) .Create(create => create.Action("CreateOption", "ZiceAdmin")) .Read(read => read.Action("Read", "ZiceAdmin")) .Update(update => update.Action("UpdateOption", "ZiceAdmin")) .Destroy(update => update.Action("DeleteOption", "ZiceAdmin")) ) ) OptionalEmail.cshtml @model string @(Html.Kendo().DropDownList() .Name("ProductTitle") .Value(Model) .SelectedIndex(0) .BindTo(new SelectList(ViewBag.ProductTitle)) ) Here I need to store the selected item from the dropdownlist, but it always shows null. How could I get the selected value from the dropdownlist?
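    One thing worth checking (a hedged sketch, not a verified fix): with in-line grid editing, the selected value normally arrives on the posted model of the grid's update/create action, provided the editor template's Name matches the bound property (here both are "ProductTitle"). A rough controller-side sketch is below; PriceOption, ZiceAdmin, UpdateOption and ProductTitle come from the snippets above, while the Options action name, the placeholder list and the persistence stub are assumptions.

      using System.Collections.Generic;
      using System.Web.Mvc;
      using Kendo.Mvc.Extensions;   // ToDataSourceResult()
      using Kendo.Mvc.UI;           // DataSourceRequest
      using MvcApplication1;        // PriceOption, as in the question

      public class ZiceAdminController : Controller
      {
          // The editor template binds to ViewBag.ProductTitle, so it has to be filled
          // in the action that renders the grid view.
          public ActionResult Options()
          {
              ViewBag.ProductTitle = new List<string> { "Basic", "Premium" }; // placeholder data
              return View();
          }

          [HttpPost]
          public ActionResult UpdateOption([DataSourceRequest] DataSourceRequest request,
                                           PriceOption option)
          {
              // Because OptionalEmail.cshtml names its dropdown "ProductTitle" and the grid
              // column is bound to ProductTitle, MVC model binding should place the selected
              // dropdown value into option.ProductTitle here rather than leaving it null.
              if (option != null && ModelState.IsValid)
              {
                  // persist the updated option (omitted)
              }
              return Json(new[] { option }.ToDataSourceResult(request, ModelState));
          }
      }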

    Read the article

  • Send large JSON data to WCF Rest Service

    - by Christo Fur
    Hi, I have a client web page that is sending a large JSON object to a proxy service on the same domain as the web page. The proxy (an ashx handler) then forwards the request to a WCF REST service using a WebClient object (the standard .NET object for making an HTTP request). The JSON successfully arrives at the proxy via a jQuery POST on the client web page. However, when the proxy forwards this to the WCF service I get a Bad Request - Error 400. This doesn't happen when the size of the JSON data is small. The WCF service contract looks like this [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.Wrapped, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)] [OperationContract] CarConfiguration CreateConfiguration(CarConfiguration configuration); And the DataContract like this [DataContract(Namespace = "")] public class CarConfiguration { [DataMember(Order = 1)] public int CarConfigurationId { get; set; } [DataMember(Order = 2)] public int UserId { get; set; } [DataMember(Order = 3)] public string Model { get; set; } [DataMember(Order = 4)] public string Colour { get; set; } [DataMember(Order = 5)] public string Trim { get; set; } [DataMember(Order = 6)] public string ThumbnailByteData { get; set; } [DataMember(Order = 6)] public string Wheel { get; set; } [DataMember(Order = 7)] public DateTime Date { get; set; } [DataMember(Order = 8)] public List<string> Accessories { get; set; } [DataMember(Order = 9)] public string Vehicle { get; set; } [DataMember(Order = 10)] public Decimal Price { get; set; } } When the ThumbnailByteData field is small, all is OK. When it is large, I get the 400 error. What are my options here? I've tried increasing the MaxBytesRecived config setting but that is not enough. Any ideas?
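    A hedged guess at the usual cause: the WCF side's default 64 KB message size and reader quotas, rather than the client. The self-hosted C# sketch below (invented service and contract names, simplified payload) shows the WebHttpBinding properties involved; for an IIS-hosted service the same values map to maxReceivedMessageSize, maxBufferSize, and the readerQuotas maxStringContentLength / maxArrayLength attributes on the webHttpBinding element in web.config.

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Web;

      [ServiceContract]
      public interface ICarConfigService            // invented name
      {
          [OperationContract]
          [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.Wrapped,
                     RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
          string CreateConfiguration(string thumbnailByteData);   // simplified stand-in payload
      }

      public class CarConfigService : ICarConfigService
      {
          public string CreateConfiguration(string thumbnailByteData)
          {
              return "received " + thumbnailByteData.Length + " chars";
          }
      }

      class Program
      {
          static void Main()
          {
              const int tenMegabytes = 10 * 1024 * 1024;

              var binding = new WebHttpBinding();
              binding.MaxReceivedMessageSize = tenMegabytes;              // default is 65536 bytes
              binding.MaxBufferSize = tenMegabytes;                       // keep equal when buffered
              binding.ReaderQuotas.MaxStringContentLength = tenMegabytes; // large base64 thumbnail strings
              binding.ReaderQuotas.MaxArrayLength = tenMegabytes;

              var host = new WebServiceHost(typeof(CarConfigService),
                                            new Uri("http://localhost:8080/cars"));
              host.AddServiceEndpoint(typeof(ICarConfigService), binding, "");
              host.Open();
              Console.WriteLine("Listening on http://localhost:8080/cars - press Enter to stop.");
              Console.ReadLine();
              host.Close();
          }
      }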

    Read the article

  • iPhone Xcode: drop an annotation pin on slider value change, and also remove it

    - by chirag
    I have to add an annotation pin at a location when the UISlider value changes. This is the code where I add the annotation: - (MKAnnotationView *)mapView:(MKMapView *)mapView1 viewForAnnotation:(id<MKAnnotation>)annotation { MKAnnotationView* annotationView = nil; NSString* identifier = @"Pin"; MyAnnotationView* annView = (MyAnnotationView*)[mapView1 dequeueReusableAnnotationViewWithIdentifier:identifier]; // annotationView.leftCalloutAccessoryView = myImage; //myImage = [UIButton buttonWithType:UIButtonTypeCustom]; // [myImage setImage:[UIImage imageNamed:@"mark.png"]forState:UIControlStateNormal]; //Property_Photo UIButton *mybtn = [UIButton buttonWithType:UIButtonTypeCustom]; if([annotation isKindOfClass:[AddressAnnotation class]]){ AddressAnnotation *x = (AddressAnnotation *)annotation; mybtn.frame = CGRectMake(0, 0, 35, 35); mybtn.contentVerticalAlignment = UIControlContentVerticalAlignmentCenter; mybtn.contentHorizontalAlignment = UIControlContentHorizontalAlignmentCenter; [mybtn setImage:[UIImage imageNamed:@"btn.png"] forState:UIControlStateNormal]; [mybtn setTitle:[NSString stringWithFormat:@"%@",[x getID]] forState:UIControlStateDisabled]; [mybtn addTarget:self action:@selector(btnShowProperty:) forControlEvents:UIControlEventTouchUpInside]; ((IMOVEISAppDelegate*)[[UIApplication sharedApplication]delegate]).strPropertyPrice = [[myTblArray objectAtIndex:imgIndex]valueForKey:@"Property_Price"]; NSLog(@"property price: %@",((IMOVEISAppDelegate*)[[UIApplication sharedApplication]delegate]).strPropertyPrice); if(nil == annView) { ///if(annView!=nil && [annView retainCount]>0){ [annView release]; annView=nil; } annView = [[[MyAnnotationView alloc] initWithAnnotation:x reuseIdentifier:identifier] autorelease]; if(Objslider.value==10){ [myMapView removeAnnotations:myMapView.annotations]; } } NSURL *imgURL = [NSURL URLWithString:[[myTblArray objectAtIndex:imgIndex]valueForKey:@"Property_Photo"]]; UIImage *imgPhoto = [UIImage imageWithData:[NSData dataWithContentsOfURL:imgURL]]; UIImageView *pinImgView = [[UIImageView alloc]initWithFrame:CGRectMake(0, 0,35, 35)]; imgIndex++; [pinImgView setImage:imgPhoto]; annView.rightCalloutAccessoryView = mybtn; annView.leftCalloutAccessoryView = pinImgView; [annView setBackgroundColor:[UIColor clearColor]]; } [annView setEnabled:YES]; annView.canShowCallout = YES; annView.calloutOffset = CGPointMake(-5, 5); annotationView = annView; return annotationView; }

    Read the article

  • Should I buy Obout? Help, Please.

    - by Ramiz Uddin
    We started a new project, and the nature of the project is very interactive, so a rich UI is required. We would need a set of controls for rich UI development. I found Obout while googling. I had never heard of them, and I have never seen fellow members mention that name, only Telerik, ComponentOne, and NetAdvantage. Those are the famous names we hear, but not this one. Still, the controls give a positive feeling. But two things always matter when you are buying such services: how good is their customer support, and how reasonable is their price? Also, how quickly do they release patches/updates? What happens if we find a bug or an error during development? Do they provide a quick fix? I hope you understand my query. I'm a bit confused about making a decision here. I need your assistance, experience and feedback. Please assist! Thanks.

    Read the article

  • XElement.Load("~/App_Data/file.xml") Could not find a part of the path

    - by mahdiahmadirad
    Hi everybody, I am new to LINQ to XML. I want to use the XElement.Load() method, but it can't find my file. Can you help me write the correct path for my XML file? Note: I defined a class in App_Code, I want to use the XML file's data in one of its methods, and my XML file is located in App_Data. settings = XElement.Load("App_Data/AppSettings.xml"); I can't use Request.ApplicationPath, Page.MapPath() or Server.MapPath() to get the physical path for my file because I am not in a class inherited from the Page class. Brief error message: *Could not find a part of the path 'C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\App_Data\AppSettings.xml'*. As you can see, the resolved path is completely different from my project path (G:\MyProjects\ASP.net Projects\VistaComputer\Website\App_Data\AppSettings.xml). Full error message is here: System.IO.DirectoryNotFoundException was unhandled by user code Message="Could not find a part of the path 'C:\\Program Files\\Microsoft Visual Studio 9.0\\Common7\\IDE\\App_Data\\AppSettings.xml'." Source="mscorlib" StackTrace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize) at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials) at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) at System.Xml.XmlReader.Create(String inputUri, XmlReaderSettings settings, XmlParserContext inputContext) at System.Xml.XmlReader.Create(String inputUri, XmlReaderSettings settings) at System.Xml.Linq.XElement.Load(String uri, LoadOptions options) at System.Xml.Linq.XElement.Load(String uri) at ProductActions.Add(Int32 catId, String title, String price, String website, String shortDesc, String fullDesc, Boolean active, Boolean editorPick, String fileName, Stream image) in g:\MyProjects\ASP.net Projects\VistaComputer\Website\App_Code\ProductActions.cs:line 67 at CMS_Products_Operations.Button1_Click(Object sender, EventArgs e) in g:\MyProjects\ASP.net Projects\VistaComputer\Website\CMS\Products\Operations.aspx.cs:line 72 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) InnerException:
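    One approach that usually works from a class in App_Code (a small sketch; the helper class name is made up): System.Web.Hosting.HostingEnvironment.MapPath resolves an application-relative path without needing a Page, Server, or Request reference.

      using System.Web.Hosting;   // HostingEnvironment works outside Page-derived classes
      using System.Xml.Linq;

      public static class AppSettingsStore   // hypothetical helper living in App_Code
      {
          public static XElement Load()
          {
              // Maps the virtual path to the web application's physical root instead of the
              // process's current directory, which appears to be why the original relative
              // path resolved under the Visual Studio Common7\IDE folder.
              string physicalPath = HostingEnvironment.MapPath("~/App_Data/AppSettings.xml");
              return XElement.Load(physicalPath);
          }
      }

    When a request context is available, HttpContext.Current.Server.MapPath("~/App_Data/AppSettings.xml") would give the same result.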

    Read the article

  • How good is Dotfuscator Community Edition? What is "good enough obfuscator"?

    - by zendar
    I plan to release one small, low-priced utility. Since this is more hobby than business, I planned to use the Dotfuscator Community Edition that is shipped with VS2008. How good is it? I could also use a definition of "good enough obfuscator" - what features are missing from Dotfuscator Community Edition to make it good enough? Edit: I checked the pricing of a number of commercial obfuscators and they cost a lot. Is it worth it? Are the commercial versions that much better at protecting against reverse engineering? I'm not very afraid of my application being cracked (it would be disappointing if the application were so bad that no one is interested in cracking it). It's not heavily protected anyway: a not overly complex serial key and licence checks in a few places in the code. It just bugs me that without obfuscation, somebody can easily get the source code, rebrand it and sell it as their own. Does this happen a lot? Edit 2: Can somebody recommend a commercial obfuscator? I found lots of them, all of them are expensive, and some don't even have a price listed on their web site. Feature-wise, all the products seem more or less similar. What is the minimal set of features an obfuscator should have?

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >