Search Results



  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the T-SQL IN operator to exclude a certain set of values. Here's a very simple example table to illustrate the issue:

      Customers
        CustomerId    INT NOT NULL, Primary Key
        CustomerName  NVARCHAR(100) NOT NULL
        SalesRegionId INT NULL

    The 'SalesRegionId' column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

      CustomerId  CustomerName  SalesRegionId
      1           Customer A    1
      2           Customer B    NULL
      3           Customer C    4
      4           Customer D    2
      5           Customer E    3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

      SELECT
        CustomerId,
        CustomerName,
        SalesRegionId
      FROM Customers
      WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here's what this query will return given the example data we're working with:

      CustomerId  CustomerName  SalesRegionId
      1           Customer A    1
      5           Customer E    3

    I was expecting that this query would also return 'Customer B', since that customer has a NULL SalesRegionId. In my mind, a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn't suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should.

    So why doesn't this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator:

    "If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression."

    The key phrase out of that quote is "… is equal to any expression from the comma-separated list…". The NULL SalesRegionId isn't included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

    "The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name."

    In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator:

    "Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values together with IN or NOT IN can produce unexpected results."
    If I were to include a 'SET ANSI_NULLS OFF' command right above my SELECT statement I would get 'Customer B' returned in the results, but that's definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

      SELECT
        CustomerId,
        CustomerName,
        SalesRegionId
      FROM Customers
      WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes 'Customer B' in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we'd end up with:

      DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
      INSERT @regionsToIgnore VALUES (2),(4)

      SELECT
        c.CustomerId,
        c.CustomerName,
        c.SalesRegionId
      FROM Customers c
      LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
      WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also correctly includes any customers that do not yet have a sales region.

    Read the article

  • How do I morph between meshes that have different vertex counts?

    - by elijaheac
    I am using MeshMorpher from the Unify wiki in my Unity project, and I want to be able to transform between arbitrary meshes. This utility works best when there are an equal number of vertices between the two meshes. Is there some way to equalize the vertex count between a set of meshes? I don't mean that this would reduce the vertex count of a mesh, but would rather add redundant vertices to any meshes with smaller counts. However, if there is an alternate method of handling this (other than increasing vertices), I would like to know.
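
    A rough sketch of the padding idea, written in Python with numpy rather than Unity C# just to keep it short (and assuming that duplicating vertices is visually acceptable for your meshes, which is an assumption, not something MeshMorpher requires): pad the smaller vertex array with repeated vertices until the counts match, then interpolate.

      import numpy as np

      def pad_vertices(verts, target_count):
          """Duplicate existing vertices (round-robin) until the array has target_count rows."""
          extra = target_count - len(verts)
          if extra <= 0:
              return verts
          repeats = verts[np.arange(extra) % len(verts)]  # redundant vertices, harmless for morphing
          return np.vstack([verts, repeats])

      def morph(src, dst, t):
          """Linearly interpolate between two vertex arrays after equalizing their counts."""
          n = max(len(src), len(dst))
          src, dst = pad_vertices(src, n), pad_vertices(dst, n)
          return (1.0 - t) * src + t * dst

      # Example: a triangle morphing halfway toward a quad.
      tri = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
      quad = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
      print(morph(tri, quad, 0.5))

    The same duplication trick carries over to a MonoBehaviour that fills a Mesh.vertices array, but which source vertices you duplicate (nearest neighbours usually look better than round-robin) is the real design decision.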

    Read the article

  • Unity iOS optimization and draw calls

    - by vzm
    I am curious about what methods I should use to optimize my Unity project for iOS hardware. I have very few image effects running (a directional light with low-res shadows), and I used the Combine Children script from the Standard Assets to lessen the load on the CPU. My project currently runs with 45-57 draw calls in non-intensive segments and up to 178 in intensive segments. I heard that static batching relieves some of the stress, but the game moves the environment around the player instead of moving the player around the environment. Is there any alternative I may look towards for improving the draw call number?

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data.

    CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver.

    Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch.

    Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.

    Read the article

  • Suggestions on Calculating Advertising Rates [closed]

    - by Jonathan Wood
    Possible Duplicate: How do I set a price for advertising on my website?

    I recently launched a new developer website, Black Belt Coder. The site is only seeing about 150 visitors per day but, given that it's only been live two months, I feel I'm off to a good start. I'd like to start monetizing the site as soon as possible but consider AdSense and other programs largely a waste of time. I'd like to deal directly with the advertisers but have no idea what to charge. Are there any guidelines for determining advertising rates based on visitors or page views? I don't expect to do PPC. I want to charge by time period for banner ads, knowing that advertisers will be keeping an eye on the number of clicks.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • Slides and Scripts from Metalogix Webcast Master Your SharePoint Migration With PowerShell

    - by Brian Jackett
    Thanks to everyone who attended the Metalogix webcast “Master Your SharePoint Migration with PowerShell” I guest presented on today.  We had great attendance and no technical hitches which is always a plus.  A number of attendees asked for my slide deck which you can find at the link below.  As a bonus I am including a set of demo scripts that I typically use with the longer version of this presentation.  If you have any questions or comments please feel free to reach out to me.  A big thanks once again to Metalogix for giving me the opportunity to work with them. Scripts and Slidedeck Click Here         -Frog Out

    Read the article

  • Do you contribute to open-source software?

    - by pablo
    Recently John Resig (creator of the jQuery library) wrote on his Twitter that "When it comes to hiring, I'll take a Github commit log over a resume any day." As much as I respect that (motivating developers to give back to the community), it also puzzles me, as not all of us have the opportunity to do so, for a number of reasons (family, employer agreements, etc). How would you make a case for the non-contributing developer? Do you think that developers who do not contribute to open-source software are ruled out by certain kinds of organizations?

    Read the article

  • Webcast - C-level Perspectives on How HR Can Take on a Bigger Role in Strategic Planning

    - by Scott Ewart
    The Economist Intelligence Unit (EIU), on behalf of IBM and Oracle, recently surveyed a number of C-level executives in North America and Western Europe to understand how HR can take on a bigger role in driving growth. The resulting reports highlight the actions senior HR leaders can take to place themselves at the heart of the debate on a company's strategic direction.

    In this session, IBM and Oracle HCM specialists will review the findings of the EIU research reports and provide guidance on how technology innovation can help to align talent strategies with long term business goals. Participants will gain an understanding of the following:

      - Results of the Economist Intelligence Unit study around "Executive Perceptions of the HR Function"
      - Differences in perspective between CEOs and CFOs
      - How the HR professional can take a bigger role in driving business growth

    Join us on Thursday, October 25 for a live webcast.

    Speakers:
      Gina Wells, Global Oracle HCM Leader, IBM Global Business Services
      Michelle Newell, Senior Director, HCM Applications Marketing, Oracle

    Register Here for the webcast on Thursday, October 25.

    Read the article

  • Identity verification

    - by acjohnson55
    On a site that I'm working on, I'm trying to find ways of enforcing a one-person, one-account rule. In general, we'd like to do this by providing options to authenticate users with third-party services that provide this assurance. For example, it's possible to authenticate with Facebook and check whether the user is considered "verified" by Facebook (which means they must have provided either a phone number or credit card). This is roughly the level of identity verification we require--we're not doing banking or anything like that. But we want to give the user options. My question is, who else, besides Facebook, provides this? (uncertain of the proper SE forum, please comment if there's a better SE site to ask this)

    Read the article

  • Where can I find game postmortems with a programmer perspective [on hold]

    - by Ken
    There are a number of interesting game post-mortems in places like the GDC Vault or gamasutra.com. The post-mortems are generally given from a CEO/manager perspective or a designer perspective, or, more often, a combination of both, e.g. the DOOM postmortem. But I have not been able to find many post-mortems which are primarily from the programmer's perspective. I'm looking for discussions of, and the rationale for, technical choices and tradeoffs, and how technical problems were overcome. The motivation here is to learn what kind of problems real game programmers encounter and how they go about solving them. A perfect example of what I'm looking for is Renaud Bédard's excellent GDC talk on the development of Fez, "Cubes all the way down". Where can I find more like that?

    Read the article

  • sun.com - SMTP 521

    - by alexismp
    reason: 521 5.0.0 messages are no longer accepted for sun.com

    It's been planned for a while now - sun.com email addresses are no longer accepted and no longer forwarded to oracle.com. So check your contacts and update old Sun email addresses. While this will probably cut down the spam for a number of us, you may need the new stable email address - most Oracle email addresses use the same first.last @ oracle.com pattern (but there are a few homonyms in a company with 100k+ employees). If you need to contact us (TheAquarium), the email address is in the "Contact Us" section on the blog.

    Read the article

  • Delivering estimates and client expectations?

    - by FishOrDie
    When a client asks for an estimate on how long it would take to develop different sections of an app, is it best to give them a total amount or what it would take for each section? Is it better/more common to give a range of hours/days or just a single number? Do you think most clients feel that if a programmer says it should take 50 hours that they should be billed for 50 hours? If I say it would take 50 and it actually takes 60, do I tell them in advance that I'm going over on my estimate or just charge what was originally quoted?

    Read the article

  • How do I hire testers by giving them a buggy app for testing their efficiency?

    - by Jay
    My boss wants to recruit testers based on their testing efficiency (number of bugs identified). So, he's shortlisted 5 people and I need to give them an app full of bugs and see how they fare in reporting obvious bugs, and hidden bugs. I know.... it kind of sounds weird. I guess, this is just like the coding world, where you hire a programmer by assessing his/her programming ability (which is a little easier). Once hired, these testers would be testing a java swing app, so their familiarity of testing frameworks/tools is not really required. So, my question here is - How do I go about finding buggy apps (web/non-web), preferably java ones, that I can have the shortlisted testers have a go at? How would you go about this task if your boss asks you to do so? I am kind of clueless at this point - I googled a bit, thought about finding new apps on sourceforge with lots of bugs, but both approaches didn't work for me.

    Read the article

  • Why is chunk size often a power of two?

    - by danijar
    There are many Minecraft clones out there and I am working on my own implementation. A principle of terrain rendering is tiling the whole world into fixed-size chunks to reduce the effort of localized changes. In Minecraft the chunk size is 16 x 16 x 256, as far as I know. And in clones I have also always seen chunk sizes that are a power of two. Is there any reason for that, maybe performance or memory related? I know that powers of 2 play a special role in binary computers, but what does that have to do with the chunk size?
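
    One concrete way to see it (a minimal Python sketch, assuming a 16-wide chunk as in Minecraft): with a power-of-two chunk size, turning a world coordinate into a chunk index plus an offset inside the chunk is a bit shift and a bit mask instead of a division and a modulo, and it behaves correctly for negative coordinates without any special casing.

      CHUNK_SIZE = 16                        # power of two
      SHIFT = CHUNK_SIZE.bit_length() - 1    # 4, because 16 == 2**4
      MASK = CHUNK_SIZE - 1                  # 0b1111

      def split(world_x):
          """Return (chunk index, offset inside the chunk) for one axis."""
          return world_x >> SHIFT, world_x & MASK

      # Floor semantics hold for negative coordinates too:
      for x in (-17, -1, 0, 15, 16, 33):
          print(x, split(x))

    With a non-power-of-two size you would need a floor division and a floored modulo, which are slightly slower and easier to get wrong at the negative boundary; array layouts and texture atlases also tend to pack more neatly at powers of two.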

    Read the article

  • Scrum - how to carry over a partially complete User Story to the next Sprint without skewing the backlog

    - by Nick
    We're using Scrum and occasionally find that we can't quite finish a User Story in the sprint in which it was planned. In true Scrum style, we ship the software anyway and consider including the User Story in the next sprint during the next Sprint Planning session. Given that the User Story we are carrying over is partially complete, how do we estimate for it correctly in the next Sprint Planning session? We have considered: a) Adjusting the number of Story Points down to reflect just the work which remains to complete the User Story. Unfortunately this will mess up reporting the Product Backlog. b) Close the partially-completed User Story and raise a new one to implement the remainder of that feature, which will have fewer Story Points. This will affect our ability to retrospectively see what we didn't complete in that sprint and seems a bit time consuming. c) Not bother with either a or b and continue to guess during Sprint Planning saying things like "Well that User Story may be X story points, but I know it's 95% finished so I'm sure we can fit it in."

    Read the article

  • How to let Google know about dynamic content?

    - by Yaniv
    I'm looking for the best practice to let Google know about a vast amount of dynamically created content. Let's say (I mean - dream) that I'm Facebook, and I want Google to index all the users' posts. Two options come to mind:

    1. Sitemap.xml may be the answer for this, but sitemaps are limited to 50,000 URLs each. I know that I can create 500 sitemaps and a sitemap index for them, but that is also limited; 25,000,000 URLs sounds like quite enough at the moment, but could cause problems in the future. E.g., Stack Overflow already has 3 million posts, so a sitemap is probably not the solution for them.

    2. Creating a page with paging, and links to all the dynamic data. I guess this is what Stack Overflow did by creating this page here: http://stackoverflow.com/questions

    So I think that option 2 is the answer, but it seems to me that sitemaps might have some added value. So what should I do?
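
    If the sitemap-index route turns out to be enough, splitting the URLs into 50,000-entry files plus one index is mechanical; here is a minimal sketch (the domain and file names are placeholders, not a recommendation for a specific layout):

      URLS_PER_FILE = 50000  # protocol limit per sitemap file

      def write_sitemaps(urls, prefix="sitemap"):
          """Write urls into 50,000-entry sitemap files plus one index that references them."""
          names = []
          for i in range(0, len(urls), URLS_PER_FILE):
              name = "%s-%d.xml" % (prefix, i // URLS_PER_FILE)
              names.append(name)
              with open(name, "w") as f:
                  f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
                  f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
                  for url in urls[i:i + URLS_PER_FILE]:
                      f.write("  <url><loc>%s</loc></url>\n" % url)
                  f.write("</urlset>\n")
          with open("%s-index.xml" % prefix, "w") as f:
              f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
              f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
              for name in names:
                  f.write("  <sitemap><loc>https://example.com/%s</loc></sitemap>\n" % name)
              f.write("</sitemapindex>\n")

      write_sitemaps(["https://example.com/posts/%d" % i for i in range(120000)])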

    Read the article

  • Social media exchange strategies

    - by Wladimir Ivanov
    Recently I've stumbled upon some Facebook/Twitter/G+ and other social site tools which offer "like for like". As I know from personal experimenting, following certain people/pages on Twitter also gains you followers. What's your opinion on this type of social media exchange (I know the fans/followers you get are only a number, which doesn't help much with growing your site)? Which of these sites are proven to boost some statistics? Are there other, better exchange tactics? Thanks in advance.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general).

    A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer.

    On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on, otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries.

    Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem). At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples.

    I'll also use OS-level observability tools to look for the existence of bottlenecks where threads contend for locks; other situations where threads are blocked; and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected.

    Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first steps in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
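
    To make the "take a set of pstack samples" step concrete, here is a poor-man's sampling profiler sketch in Python; it assumes pstack is on the PATH and that the output puts each thread's top frame directly under a thread header line, which is roughly true on both Solaris and Linux but is still an assumption to verify on your system.

      import subprocess
      import sys
      import time
      from collections import Counter

      def sample_top_frames(pid, samples=20, interval=0.5):
          """Run pstack repeatedly and tally the top frame of every thread's stack."""
          counts = Counter()
          for _ in range(samples):
              out = subprocess.run(["pstack", str(pid)],
                                   capture_output=True, text=True).stdout
              lines = out.splitlines()
              for prev, line in zip([""] + lines, lines):
                  # A frame line directly below a thread/lwp header is that thread's top frame.
                  if prev.lower().startswith(("thread", "-----")) and line.strip():
                      counts[line.strip()] += 1
              time.sleep(interval)
          return counts

      if __name__ == "__main__":
          for frame, n in sample_top_frames(int(sys.argv[1])).most_common(10):
              print("%5d  %s" % (n, frame))

    A handful of runs of something like this usually points at the same hot or blocked frames you would spot by reading the raw pstack output, just with less eyeballing.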

    Read the article

  • How to reduce errors in dynamic languages such as Python, and improve my code quality

    - by Martin Luo
    I posted the original question on Stack Overflow, and some people suggested I post it here. I've always had trouble with dynamic languages like Python. Several problems:

    - Typo errors: I can use pylint to reduce some of these errors, but there are still some errors that pylint cannot figure out.
    - Object type errors: I often forget what type a parameter is. int? str? Some object? I also forget the types of some objects in my own code.

    Unit tests might help me sometimes, but I don't always have enough time to write them. When I need a script to do a small job, the code is 100-200 lines, not big, but I don't have time to do the unit testing, because I need to use the script as soon as possible. So, many errors appear.

    So, any ideas on how to reduce the number of these problems?
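
    One cheap habit that attacks both problem types without a full unit-test suite (this is a sketch of the general idea; the function and numbers are made up): state the expected types in the docstring, assert them at the function boundary, and let a doctest run the example every time the script executes, so a typo or a wrong argument type fails immediately instead of deep inside the job.

      def average_latency(samples):
          """Return the mean of a non-empty sequence of numbers.

          >>> average_latency([10, 20, 30])
          20.0
          """
          # Fail fast on the two mistakes described above: wrong object type, wrong contents.
          assert isinstance(samples, (list, tuple)), "expected a sequence, got %r" % type(samples)
          assert len(samples) > 0, "expected at least one sample"
          return sum(samples) / float(len(samples))

      if __name__ == "__main__":
          import doctest
          doctest.testmod()  # runs the docstring example on every invocation
          print(average_latency([12, 18, 21]))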

    Read the article

  • Using RegEx's in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel which utilizes RegExes in the path steps to function -- it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation:

    - The funnel is tracking conversions, defined as when a user completes 4 steps to sign up
    - Steps are not "required"
    - Default URL is set to https://example.com
    - There is a 302 redirect set up on our site that leads from http://example.com to https://example.com
    - Within the funnel, steps switch from non-secure pages (unless the browser is set to secure browsing) to secure pages once the user moves from the landing page to the second page of the sign-up process (account placeholder has been created)
    - The URL at that point contains the variable of publisher number within (but not at the end of) the URL
    - My RegExes are all properly written, as tested on rubular.com
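
    Independently of what Analytics does with the funnel, it is worth confirming locally that each step expression matches the request paths you expect, including the publisher-number segment; a tiny sketch (the pattern and paths below are invented placeholders, not the real funnel steps):

      import re

      # Hypothetical step-2 pattern: a signup page with the publisher number inside the path.
      step2 = re.compile(r"^/signup/\d+/account$")

      for path in ("/signup/12345/account",    # should match
                   "/signup/account",          # missing publisher number
                   "/Signup/12345/account"):   # case differs
          print(path, "->", bool(step2.match(path)))

    Running the real step expressions against real request paths pulled from the content reports is a quick way to rule the regexes in or out as the cause of the empty funnel.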

    Read the article

  • Migrating Virtual Iron guest to Oracle VM 3.x

    - by scoter
    As stated on the official site, Oracle in 2009 acquired a provider of server virtualization management software named Virtual Iron; you can find all the acquisition details at this link. In the FAQ on the official site you can also see that, for the future, Oracle plans to fully integrate Virtual Iron technology into Oracle VM products, and any enhancements will be delivered as a part of the combined solution; this is what is going on with Oracle VM 3.x. So, customers started asking us to migrate Virtual Iron guests to Oracle VM.

    IMPORTANT: This procedure needs a dedicated OVM-Server with no guests running on top; be careful while executing this procedure on production environments.

    In these little steps you will find how to migrate, as fast as possible, your guests between VI (Virtual Iron) and Oracle VM; keep in mind that Oracle VM has a built-in P2V utility (see the official documentation) that you can use to migrate guests between VI and Oracle VM.

    Concepts: VI repositories

    On VI we have the same "repository" concept as in Oracle VM; the difference between these two products is that VI uses a raw LUN as repository (instead of using ocfs2 and its capabilities, like ref-links). The VI "raw-lun" repository can be pictured from a pure operating-system perspective and from the hypervisor perspective (the original post includes diagrams for both); in fact, on this raw LUN VI creates an LVM2 volume group. So, the relationships are:

      LVM2 Volume-Group   <->  VI Repository
      LVM2 Logical-Volume <->  VI guest virtual-disk

    The first step is to present the VI repository (raw LUN) to your dedicated OVM-Server.

    Prepare the dedicated OVM-Server

    On the OVM-Server (OVS) you need to discover the new LUN and, after that, discover the volume-group and logical-volumes contained in the VI repository; due to the default OVS configuration you need to edit the lvm2 configuration file /etc/lvm/lvm.conf and comment out the line starting with "filter":

      # By default for OVS we restrict every block device:
      # filter = [ "r/.*/" ]

    Now you have to discover the presented raw LUN and, next, activate the volume-group and logical-volumes:

      #!/bin/bash
      # Rescan all SCSI hosts so the newly presented LUN shows up.
      for HOST in `ls /sys/class/scsi_host`; do
        echo '- - -' > /sys/class/scsi_host/$HOST/scan
      done
      CPATH=`pwd`
      cd /dev
      # Rescan every sd* block device.
      for DEVICE in `ls sd[a-z] sd?[a-z]`; do
        echo '1' > /sys/block/$DEVICE/device/rescan
      done
      cd $CPATH
      cd /dev/mapper
      # Re-read partition tables on all device-mapper devices.
      for PARTITION in `ls *[a-z] *?[a-z]`; do
        partprobe /dev/mapper/$PARTITION
      done
      cd $CPATH
      # Activate all volume groups (including the VI repository VG).
      vgchange -a y

    After that you will see a new device:

      [root@ovs01 ~]# cd /dev/6000F4B00000000000210135bef64994
      [root@ovs01 6000F4B00000000000210135bef64994]# ls -l 6000F4B0000000000061013*
      lrwxrwxrwx 1 root root 77 Oct 29 10:50 6000F4B00000000000610135c3a0b8cb -> /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb

    With your OVM-Manager, create a guest server with the same definition as on VI:

      - same core number as the VI source guest
      - same memory as the VI source guest
      - same number of disks as the VI source guest (you can create the OVS virtual disks with a small size of 1GB because the "clone" will eventually extend the size of your new virtual disks)

    Summarizing:

      source virtual-disk path (VI):
        /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
      dest virtual-disk path (OVS) **:
        /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img

      ** = to identify your virtual disk you have to verify its name in the "vm.cfg" file of your new guest.

    Clone the VI virtual-disk to the OVS virtual-disk:

      dd if=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb of=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img

    Clean up unsupported parameters and changes on OVS:

      1. Restore the original /etc/lvm/lvm.conf and uncomment the line starting with "filter":

           # By default for OVS we restrict every block device:
           filter = [ "r/.*/" ]

      2. Force-stop the lvm2-monitor service:

           # service lvm2-monitor force-stop

      3. Restore the original /etc/lvm directories (archive, backup and cache):

           # cd /etc/lvm
           # rm -fr archive backup cache; mkdir archive backup cache

      4. Reboot OVS.

    Refresh the OVS repository and start your guest: by OracleVM Manager refresh your repository, then by OracleVM Manager start your "migrated" guest.

    Comments and corrections are welcome.

    Simon COTER

    Read the article

  • What steps to take in resolving/fixing/optimizing a long boot, with possible looping errors as the culprit

    - by Tchalvak
    So my boot time has been slowing and slowing as time has gone on... I am running a number of services (e.g. apache/mysql, postgresql), but it has seen a drastic slowing lately, while I've only been applying updates as normal. I happened to check out my /var/log/boot.log and it is spammed with many lines of this: init: upstart-udev-bridge main process (2738) terminated with status 1 init: upstart-udev-bridge main process ended, respawning I wasn't able to find any solutions to that issue in google, or much talk of it at all, and I'm not really certain that error is the problem, but it is the only lead that I have. What steps should I go through to diagnose boot problems/a slow bootup?
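
    One quick way to quantify how much of the log that respawn loop accounts for (a small sketch; the log path is the one mentioned above, and collapsing digits is just a convenience so lines that differ only by pid group together):

      import re
      from collections import Counter

      counts = Counter()
      with open("/var/log/boot.log", errors="replace") as log:
          for line in log:
              counts[re.sub(r"\d+", "N", line.strip())] += 1  # collapse pids and other numbers

      # The respawning upstart-udev-bridge messages will float to the top if they dominate.
      for message, n in counts.most_common(10):
          print("%6d  %s" % (n, message))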

    Read the article

  • Why would a web site keep my signup information for a limited time only?

    - by Alois Mahdal
    I have just created an account at (some web service - well, actually it was Transifex, a localization service). The registration form requested typical things: account name, e-mail address, password (twice), and, optionally, company name and phone number. What confused me was this sentence on the confirmation page (the one right after submitting the form):

    "We will store your signup information for 7 days on our server."

    Can anybody explain what this means? What exactly are they referring to by "signup information", if it's something that should be kept for only 7 days? Or is my account going to be destroyed after that time? (Well, that could make sense for some special services, but not for this one.)

    Read the article

  • asp.net website development component / APIs

    - by Haseeb Asif
    I have been assigned a new website project to work on in my organization, where my role demands that I finalize all the tools/technologies/controls/APIs etc. The website will be something like an online store, where every user has his online store as a subdomain, e.g. user1.myprojectdomain.com. I have been researching a number of things to use and need your suggestions on the following:

    - ASP.NET Web Forms vs ASP.NET MVC: preferring ASP.NET Web Forms with an N-tier architecture, due to rapid application development, the large set of toolbox controls, and mainly due to our team's skill set
    - Error logging: ELMAH seems to be a nice library
    - Forums: Forums / Yetanotherforums
    - Online live chat: still looking for something (working on SignalR)
    - Signups with social media: Engage by Janrain

    And I need help with how to manage the subdomains. Do we create a virtual directory/application for every user in IIS at runtime, or can we do something else?

    Read the article
