Search Results


  • Multi-tenancy - single database vs multiple database

    - by RichardW1001
    We have a number of clients whose systems share some functionality, but also have quite a degree of diversity. The number of clients is growing - always a healthy thing! - and the diversity between their businesses is also increasing. At present there is a single ASP.Net (Web Forms) Web Site (as opposed to web project), which has sub-folders for each tenant containing that tenant's non-standard pages. There is a separate model project, which deals with database access and business logic. Which is preferable - and most importantly, why: (a) one database per client, containing only the features associated with that client; or (b) a single database shared by all clients, where only a subset of tables is used by any one client? The main concerns within the business are over maintenance of multiple assets (backups, version control and the like) and promoting re-use as much as possible. How would you ensure these concerns are addressed, which solution is preferable, and why? (I have also been compiling responses to similar questions.)

    Read the article

  • How TiVo is messing up customer support.

    - by James Fleming
    OK, so I've gotten a TiVo and overall I'm happy, but there have been issues and I suspect I have a defective unit. Now, the nice folks, after many service calls, were happy to swap it out, and to ensure continuity of service they sent me a new unit (after a $109 deposit). That was yesterday. Today, when we go to watch a little TV and wait for our replacement unit to arrive, we find our TiVo service has been suspended. WTF? They have an exchange program, but the unit you're waiting to exchange is as dead as a doornail until the replacement arrives. How hard is it to keep the old unit active for an extra week? Here is the exchange w/TiVo below...

    You are currently number 1 in the queue. We apologize for the delay. We will assign you to an agent as soon as one is available. The average amount of time a customer has to wait is 00:13.
    Kaylene (Listening)
    Kaylene: Thank you for contacting TiVo! My name is Kaylene. So that I may better assist you, are you an existing customer?
    james Fleming: yes I am, but I'm now having second thoughts about being one
    Kaylene: Thank you for verifying your information. How may I assist you today James?
    james Fleming: I've been having issues w/a TiVo box & I'm getting a replacement sent out to me (after paying an additional deposit) and now my current unit is no longer activated
    Kaylene: I can help you today!
    Kaylene: When we process an exchange we do transfer over the service to the replacement box so it is active and ready to go when you receive it.
    james Fleming: which is to say you also make my current box worthless until such time I receive a new box?!?!?
    Kaylene: I apologize that your original box was deactivated so we could activate your replacement box.
    james Fleming: Why on Earth would I bother to pay in advance for a new box if you were going to kill my existing box.
    Kaylene: What features are you needing to use on your current box?
    james Fleming: I need to be able to access my Netflix subscription (if I'm lucky enough to have it work without rebooting)
    Kaylene: Can I have you verify the TiVo Service Number of your TiVo box please?
    james Fleming: 7460011906979b4
    Kaylene: We have your current box on temporary service, but not all features are available with temporary service as it is not paid-for service.
    Kaylene: If you like I can transfer your service back to your current box for now. Then once you receive the new box you will have to call in and have the service transferred back to the new box.
    james Fleming: Not paid for? Let's see> one TiVo box + 3 year service plan + monthly service + $109 deposit on a second box = what?
    Kaylene: Would you like me to transfer your service back to your current box?
    james Fleming: Yes - that would be helpful
    Kaylene: All you will need to do is contact us again once you receive the new box so we can transfer it back.
    Kaylene: I have put your service back on TiVo box 7460011906979b4.
    james Fleming: What would also be helpful is your firm informing me as to how you'd be cutting service in the interim.
    james Fleming: Again - I opted to pay to have a second box delivered BEFORE returning the box I have - thus trying to have a continuity of service.
    Kaylene: This is not something we normally do, so it is important when you contact us to transfer the service back to the new box when you receive it that you reference this case number: 110622-006089.
    Kaylene: I apologize about the inconvenience. You may need to force a few connections for the box to recognize the service again.
    james Fleming: If it's not something you normally do then WHY would you have a $109 fee and a term for the service.
    james Fleming: I am not mad at you, but your company is not impressing me and I'm blogging about this experience
    Kaylene: Again I apologize about the inconvenience but you should be good to go now. Is there anything else I can help you with today?
    james Fleming: so I need to go through the re-activate process or is that something you do
    Kaylene: When you receive the new TiVo box you need to contact us so we can transfer the service to the new box for you.
    james Fleming: sure
    Kaylene: Is there anything else I can help you with today James?
    james Fleming: Nope - please email this transcript to me
    Kaylene: I apologize but we do not have the ability to e-mail you a copy of this transcript. You can view it online at http://www.tivo.com when you sign into your account, or you can copy and paste it now to save it.
    Kaylene: Thank you for contacting TiVo today. Your reference number for our conversation is 110622-006089. You can save this for your records, and if necessary, provide this to a later agent to pull up what we discussed. There will be a brief satisfaction survey emailed to you. We would appreciate any feedback on your TiVo Chat Support experience today.
    Kaylene: Thank you for using TiVo Chat and have a great day James! Good-bye.
    Kaylene has disconnected.

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
    ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are:

    <httpRuntime maxUrlLength="<number here>" />. This number should be an integer value (it defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without the query string.

    <httpRuntime maxQueryStringLength="<number here>" />. This number should be an integer value (it defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website.

    <httpRuntime requestPathInvalidCharacters="list of characters you need included in ASP.NET's validation checks" />. By default the characters are "<,>,*,%,&,:,\,?". However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character '!' to be included in ASP.NET's URL validation logic, so I set the following: <httpRuntime requestPathInvalidCharacters="<,>,*,%,&,:,\,?,!" />. A character could also be specified in its xml-encoded form ("&lt;" would mean the '<' sign). I could specify the '!' in its xml-encoded unicode format, requestPathInvalidCharacters="<,>,*,%,&,:,\,?,&#x0021;", or in its unicode-encoded form, the "<,>,*,%,&,:,\,?,%u0021" format.

    The following settings can be applied at the root web.config level, app web.config level, folder level, or within a location tag:

    <location path="some path here">
      <system.web>
        <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" />

    If any of the above settings fail request validation, an Http 400 "Bad Request" HttpException is thrown. This can be easily handled in the Application_Error handler in Global.asax.

    Also, a new attribute in <httpRuntime /> called "relaxedUrlToFileSystemMapping" has been added, with a default of false: <httpRuntime … relaxedUrlToFileSystemMapping="true|false" />. When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc.); Urls must be valid Windows file paths. A url like "http://digg.com/http://cnn.com" should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url.
    For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta, ASP.NET QA Team
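    As a rough sketch of that handler (my own illustration - the response handling shown is an assumption, not from the post), in Global.asax.cs:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Request validation failures surface as an HttpException with status code 400.
            Exception ex = Server.GetLastError();
            HttpException httpEx = ex as HttpException;
            if (httpEx != null && httpEx.GetHttpCode() == 400)
            {
                // Clear the error and return a friendly response instead of the default error page.
                Server.ClearError();
                Response.StatusCode = 400;
                Response.Write("The request URL or query string failed validation.");
            }
        }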

    Read the article

  • What’s new in IIS8, Perf, Indexing Service - Week 49

    - by OWScott
    You can find this week’s video here. After some delays in the publishing process, week 49 is finally live. This week I'm taking Q&A from viewers, starting with what's new in IIS8, a question on enable32BitAppOnWin64, performance settings for ASP.NET, the ARR Helper, and Indexing Services. Starting this week, for the remaining four weeks of the 52-week series, I'll be taking questions and answers from the viewers. Already a number of questions have come in. This week we look at five topics.

    Pre-topic: We take a look at the new features in IIS8. Last week Internet Information Services (IIS) 8 Beta was released to the public. This week's video touches on the upcoming features in the next version of IIS. Here’s a link to the blog post which was mentioned in the video.

    Question 1: In a number of places (http://learn.iis.net/page.aspx/201/32-bit-mode-worker-processes/, http://channel9.msdn.com/Events/MIX/MIX08/T06), I've seen that enable32BitAppOnWin64 is recommended for performance reasons. I'm guessing it has to do with memory usage... but I never could find a detailed explanation of why this is recommended (even Microsoft books are vague on this topic - they just say: do it, but provide no reason why it should be done). Do you have any insight into this? (Predrag Tomasevic)

    Question 2: Do you have any recommendations on modifying aspnet.config and machine.config to deliver better performance when it comes to "high number of concurrent connections"? I've implemented recommendations for modifying machine.config from this article (http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx - ASP.NET Process Configuration Optimization section)... but I would gladly listen to more recommendations if you have them. (Predrag Tomasevic)

    Question 3: Could you share more of your experience with the ARR Helper? I'm specifically interested in configuring the ARR Helper (for example - how to accept X-Forwarded-For only from certain IPs (proxies you trust)). (Predrag Tomasevic)

    Question 4: What is the replacement for Indexing Service to use in coding web search pages on a Windows 2008 R2 server? (Susan Williams) Here’s the link that was mentioned: http://technet.microsoft.com/en-us/library/ee692804.aspx

    This is now week 49 of a 52-week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week’s video here.
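    For reference, and as my own example rather than something from the video, the enable32BitAppOnWin64 setting mentioned in Question 1 is set per application pool and can be toggled with appcmd:

        rem Run a 64-bit application pool's worker process in 32-bit (WOW64) mode
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true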

    Read the article

  • Add a Scrollable Multi-Row Bookmarks Toolbar to Firefox

    - by Asian Angel
    If you keep a lot of bookmarks available in your Bookmarks Toolbar then you know that accessing some of them is not as easy as you would like. Now you can simplify the access process with the Multirow Bookmarks Toolbar for Firefox.

    Before: As you can see, it has not taken long to fill up our “Bookmarks Toolbar”, and use of the drop-down list is required. If you do not keep too many bookmarks in the “Bookmarks Toolbar” then that may not be a bad thing, but what if you have a very large number of bookmarks there?

    Multirow Bookmarks Toolbar in Action: As soon as you have installed the extension and restarted Firefox you will see the default three-row display. If you are not worried about UI space then you are good to go. Those of you who like keeping the UI space to a minimum will want to have a look at this next part… You are not locked into a “three rows setup” with this extension. If you are ok with two rows then you can select that in the “Options” and enjoy a mini scrollbar on the right side. For our example we still had easy access to all three rows. Two rows still too much? Not a problem. Set the number of rows to one in the “Options” and still enjoy that scrolling goodness. If you do select only one row, do not panic when you do not see a scrollbar…it is still there. Hold your mouse over where the scrollbar is shown in the image above and use your middle mouse button to scroll through the multiple rows. You can see the transition between the second and third rows on our browser here… Nice, huh?

    Options: The “Options” are extremely easy to work with…just enable/disable the extension here and set the number of rows that you want visible.

    Conclusion: While the Multirow Bookmarks Toolbar extension may not seem like much at first glance, it does provide some nice flexibility for your “Bookmarks Toolbar”. You can save space and access your bookmarks easily without those drop-down lists. If you are looking for another great way to make the best use of the space available in your “Bookmarks Toolbar” then be sure to read our article on the Smart Bookmarks Bar extension for Firefox here.

    Links: Download the Multirow Bookmarks Toolbar extension (Mozilla Add-ons)

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

    There is a reason it’s called Default

    The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

    Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

    Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved, since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.

    I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases…

    What event just fired?! How about now?! Now?!

    If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
    This option only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant.

    Avoid forgotten events by increasing your memory

    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. To optimize for server performance and help ensure that Extended Events doesn’t block the server, events that can’t be published to a buffer because the buffer is full are dropped. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

    Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment.

    If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected.

    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss, and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section.

    Some events are too big to fail

    A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a larger event buffer. To specify a larger event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
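    Both checks come down to the same DMV. As a small sketch (the session name here is hypothetical):

        -- Check for dropped events and the largest event that would not fit
        SELECT name, dropped_event_count, largest_event_dropped_size
        FROM sys.dm_xe_sessions
        WHERE name = 'MySession';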
    Partitioning: moving your events to a sub-division

    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

    You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

    None: 3 buffers (fixed)
    Node: 3 * number_of_nodes
    CPU: 2.5 * number_of_cpus

    Here are some examples of what this means for different Node/CPU counts:

    Configuration        None        Node        CPU
    2 CPUs, 1 Node       3 buffers   3 buffers   5 buffers
    6 CPUs, 2 Nodes      3 buffers   6 buffers   15 buffers
    40 CPUs, 5 Nodes     3 buffers   15 buffers  100 buffers

    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while, since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

    - Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    - Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    - Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end-to-end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.

    Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
    You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category.

    Hey buddy – I think you forgot something

    OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior; it’s more about event analysis.

    - Mike
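    Pulling these options together, here is a minimal sketch of the DDL (the session name, event, target and values are my own illustration, not recommendations from the post):

        -- Hypothetical session showing the options discussed above
        CREATE EVENT SESSION [OptionDemo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (
            MAX_MEMORY = 8192 KB,              -- total event buffer memory
            MAX_DISPATCH_LATENCY = 5 SECONDS,  -- process buffers at least every 5 seconds
            EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- the default; keep it
            MEMORY_PARTITION_MODE = NONE,      -- the standard 3 buffers
            STARTUP_STATE = ON                 -- start the session when the server starts
        );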

    Read the article

  • Measuring Social Media Efforts

    - by David Dorf
    So you're on the bandwagon: you've created a Facebook page, you're tweeting every day, and maybe you've even got a YouTube channel. Now what? After you put any program in place, you need to measure, set new goals, then execute, and this is no different. But how does one measure social media efforts? First, I guess we need some goals. Typical ones might be to acquire customers, engage them, then convert them. So that translates to:

    - Increase Facebook fans and Twitter followers
    - Increase comments/postings and retweets
    - Increase redemption of offers via Facebook and Twitter

    Counting fans and followers is easy, and tracking the redemption of coupons isn't that hard either, but measuring engagement is a tough one. How do you know whether your fans are reading your posts, and whether your posts have any meaning to them? For Facebook, the fan page administrator has access to analytics called Facebook Insights. There you can check weekly metrics such as total fans, new fans, lost fans, demographics of fans, number of postings, number of clicks, etc. Not nearly as comprehensive as Google Analytics, but well on its way. For Twitter, getting information is a little tougher. Again, it's easy to track followers, and you can use tools like TweetMeme to encourage and track retweets. An interesting website called WeFollow tries to measure influence for certain topics. For example, the top three influencers for the topic "retail" are retailweek, retailwire, and retailerdaily. Other notables are #10 BestBuy, #11 GapOfficial, #12 JeffPR, and #17 OracleRetail. I assume influence is calculated based on number of followers, number of retweets, frequency of tweets, and perhaps depth of dialogs. If you want to get serious about monitoring and measuring social marketing efforts, you'd be wise to invest in a strong tool. Several are listed on this wiki, including big ones like Radian6, Nielsen, Omniture, and Buzzient. Buzzient might be particularly interesting because it's integrated with Oracle CRM OnDemand -- see the demo. As always, I'm interested in hearing how others approach goal setting and monitoring of social media efforts, so feel free to post comments.

    Read the article

  • New Skool Crosstabbing

    - by Tim Dexter
    A while back I spoke about having to go back to BIP's original crosstabbing solution to achieve a certain layout. Hok Min has provided a 'man' page for the new crosstab/pivot builder for 10.1.3.4.1 users. This will make the documentation drop, but for now, get it here! The old hand method is still available, but this new approach is more efficient and flexible. That said, you may need to get into the crosstab code to tweak it where the crosstab dialog cannot help. I had to do this this week, but more on that later. The following explains how the crosstab wizard builds the crosstab and what the fields inside the resulting template structure are there for. To create the crosstab, a new XDO command "<?crosstab:...?>" has been created.

    XDO Command: <?crosstab: ctvarname; data-element; rows; columns; measures; aggregation?>

    The parameters are:

    - ctvarname: Crosstab variable name. This is automatically generated by the Add-in. Example: "C123".
    - data-element: The XML data element that contains the data. Example: "//ROW".
    - Rows: A list of XML elements for row headers. The ordering information is specified within "{" and "}". The first attribute is the sort element; leaving it blank means the sort element is the same as the row header element. The attribute "o" means order; its value can be "a" for ascending or "d" for descending. The attribute "t" means type; its value can be "t" for text or "n" for numeric. There can be more than one sort element, for example "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}", which will sort employees by last name and then first name. Example: "Region{,o=a,t=t}, District{,o=a,t=t}" - here the first row header is "Region", sorted by "Region", ascending, text; the second row header is "District", sorted by "District", ascending, text.
    - Columns: A list of XML elements for column headers, using the same ordering syntax as Rows. Example: "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}" - here the first column header is "ProductsBrand", sorted by "ProductsBrand", ascending, text; the second column header is "PeriodYear", sorted by "PeriodYear", ascending, text.
    - Measures: A list of XML elements for measures. Example: "Revenue, PrevRevenue".
    - Aggregation: The aggregation function name. Currently, we only support "sum".

    Using the Oracle BI Publisher Template Builder for Word add-in, we are able to construct the following Pivot Table. The generated XDO command for this Pivot Table is as follows:

    <?crosstab:c547; "//ROW";"Region{,o=a,t=t}, District{,o=a,t=t}"; "ProductsBrand{,o=a,t=t},PeriodYear{,o=a,t=t}"; "Revenue, PrevRevenue";"sum"?>

    Running the command on the given XML data file generates the XML file "cttree.xml". Each XPath in "cttree.xml" is described below (element, XPath, count, description):

    - C0 (/cttree/C0, 1): Contains elements which are related to columns.
    - C1 (/cttree/C0/C1, 4): The first-level column "ProductsBrand". There are four distinct values; they are shown in the label H element.
    - CS (/cttree/C0/C1/CS, 4): The column-span value. It is used to format the crosstab table.
    - H (/cttree/C0/C1/H, 4): The column header label. There are four distinct values: "Enterprise", "Magicolor", "McCloskey" and "Valspar".
    - T1 (/cttree/C0/C1/T1, 4): The sum for measure 1, which is Revenue.
    - T2 (/cttree/C0/C1/T2, 4): The sum for measure 2, which is PrevRevenue.
    - C2 (/cttree/C0/C1/C2, 8): The second-level column "PeriodYear", which is the second group-by key. There are two distinct values, "2001" and "2002".
    - H (/cttree/C0/C1/C2/H, 8): The column header label. There are two distinct values, "2001" and "2002". Since it is under C1, the total number of entries is 4 x 2 => 8.
    - T1 (/cttree/C0/C1/C2/T1, 8): The sum for measure 1, "Revenue".
    - T2 (/cttree/C0/C1/C2/T2, 8): The sum for measure 2, "PrevRevenue".
    - M0 (/cttree/M0, 1): Contains elements which are related to measures.
    - M1 (/cttree/M0/M1, 1): Contains the summary for measure 1.
    - H (/cttree/M0/M1/H, 1): The measure 1 label, which is "Revenue".
    - T (/cttree/M0/M1/T, 1): The sum of measure 1 for the entire xpath from "//ROW".
    - M2 (/cttree/M0/M2, 1): Contains the summary for measure 2.
    - H (/cttree/M0/M2/H, 1): The measure 2 label, which is "PrevRevenue".
    - T (/cttree/M0/M2/T, 1): The sum of measure 2 for the entire xpath from "//ROW".
    - R0 (/cttree/R0, 1): Contains elements which are related to rows.
    - R1 (/cttree/R0/R1, 4): The first-level row "Region". There are four distinct values; they are shown in the label H element.
    - H (/cttree/R0/R1/H, 4): The row header label for "Region". There are four distinct values: "CENTRAL REGION", "EASTERN REGION", "SOUTHERN REGION" and "WESTERN REGION".
    - RS (/cttree/R0/R1/RS, 4): The row-span value. It is used to format the crosstab table.
    - T1 (/cttree/R0/R1/T1, 4): The sum of measure 1, "Revenue", for each distinct "Region" value.
    - T2 (/cttree/R0/R1/T2, 4): The sum of measure 2, "PrevRevenue", for each distinct "Region" value.
    - R1C1 (/cttree/R0/R1/R1C1, 16): Elements from combining R1 and C1. There are four distinct values for "Region" and four distinct values for "ProductsBrand", so the combination is 4 x 4 => 16.
    - T1 (/cttree/R0/R1/R1C1/T1, 16): The sum of measure 1, "Revenue", for each combination of "Region" and "ProductsBrand".
    - T2 (/cttree/R0/R1/R1C1/T2, 16): The sum of measure 2, "PrevRevenue", for each combination of "Region" and "ProductsBrand".
    - R1C2 (/cttree/R0/R1/R1C1/R1C2, 32): Elements from combining R1, C1 and C2. There are four distinct values for "Region", four distinct values for "ProductsBrand", and two distinct values of "PeriodYear", so the combination is 4 x 4 x 2 => 32.
    - T1 (/cttree/R0/R1/R1C1/R1C2/T1, 32): The sum of measure 1, "Revenue", for each combination of "Region", "ProductsBrand" and "PeriodYear".
    - T2 (/cttree/R0/R1/R1C1/R1C2/T2, 32): The sum of measure 2, "PrevRevenue", for each combination of "Region", "ProductsBrand" and "PeriodYear".
    - R2 (/cttree/R0/R1/R2, 18): Elements from combining R1 "Region" and R2 "District". Since the list of values in R2 has a dependency on R1, the number of entries is not just a simple multiplication.
    - H (/cttree/R0/R1/R2/H, 18): The row header label for R2, "District".
    - R1N (/cttree/R0/R1/R2/R1N, 18): The R2 position number within R1. This is used to check if it is the last row, and draw the table border accordingly.
    - T1 (/cttree/R0/R1/R2/T1, 18): The sum of measure 1, "Revenue", for each combination of "Region" and "District".
    - T2 (/cttree/R0/R1/R2/T2, 18): The sum of measure 2, "PrevRevenue", for each combination of "Region" and "District".
    - R2C1 (/cttree/R0/R1/R2/R2C1, 72): Elements from combining R1, R2 and C1.
    - T1 (/cttree/R0/R1/R2/R2C1/T1, 72): The sum of measure 1, "Revenue", for each combination of "Region", "District" and "ProductsBrand".
    - T2 (/cttree/R0/R1/R2/R2C1/T2, 72): The sum of measure 2, "PrevRevenue", for each combination of "Region", "District" and "ProductsBrand".
    - R2C2 (/cttree/R0/R1/R2/R2C1/R2C2, 144): Elements from combining R1, R2, C1 and C2, which gives the finest level of detail.
    - M1 (/cttree/R0/R1/R2/R2C1/R2C2/M1, 144): The sum of measure 1, "Revenue".
    - M2 (/cttree/R0/R1/R2/R2C1/R2C2/M2, 144): The sum of measure 2, "PrevRevenue".

    Lots to read and digest, I know!

    Customization

    One new feature I discovered this week is the ability to show one column and sort by another. I had a data set that was extracting month abbreviations; we wanted to show the months across the top and some row headers to the side. As you may know, XSL is not great with dates, especially recognising month names. It just wants to sort them alphabetically, so Apr comes before Jan, etc. A way around this is to generate a month number alongside the month and use that to sort. We can do that in the crosstab; sadly it's not exposed in the UI yet, but it's doable. Go back up and take a look at the initial crosstab command, especially the Rows and Columns entries. In there you will find the sort criteria: "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}". Notice those leading commas inside the curly braces? Because there is no field preceding them, it means that the crosstab should sort on the column before the brace, i.e. PeriodYear. But you can insert another column in the data set to sort by. To get my sort working how I needed:

    <?crosstab:c794;"current-group()";"_Fund_Type_._Fund_Type_Display_{_Fund_Type_._Fund_Type_Sort_,o=a,t=n}";"_Fiscal_Period__Amount__._Amt_Fm_Disp_Abbr_{_Fiscal_Period__Amount__._Amt_Fiscal_Month_Sort_,o=a,t=n}";"_Execution_Facts_._Amt_";"sum"?>

    Excuse the horribly verbose XML tags, good ol' BIEE :0) The emboldened columns are not in the crosstab but are in the data set. I just opened up the field, dropped them in and changed the type (t) value to 'n', for number, instead of the default 't', and my crosstab started sorting how I wanted it. If you find other tips and tricks, please share in the comments.

    Read the article

  • Now It’s Personal (Although It Should Always Be): Campus Recruitment

    - by user769227
    One of the things that I think is important, and that I want our Campus Recruitment Team here at Oracle to be known for, is outstanding customer service. When I say customer service, I mean both students and hiring managers should feel they have had a great experience in our campus hiring process. I think one of the keys to providing outstanding customer service is being able to provide, as best as we can, a personalised experience where the students who are interviewing with us feel like individuals in our process and not just part of a 'campus drive'. In the campus world this can be challenging at times, especially in countries where there is high-volume hiring. It can be tricky to create a personal experience when you are hiring for a large number of open graduate roles at one time. I think Campus Recruitment is one of the areas in the recruitment industry that is just waiting for a change. We have all seen the proliferation of Social Media in Recruitment over the past 4-6 years. Every Recruiter has a LinkedIn account or uses Twitter or G+ or FB, etc… and some individuals and organisations do it really well. Even in Campus Hiring there are great Social Media initiatives where companies reach out to students and talk to them. However, one thing that has not really changed (and this is a generalisation) is the campus hiring interview process. Do these words inspire enthusiasm in you: "Group Interview, Assessment Centre, On-Campus Drive, Off-Campus Drive, etc..."? I don't know about you, but to me these words don't really sound very personal or individual to students. It almost conjures up images of a factory production line or those long queues you see where the person behind the counter says 'take a number'. Campus Recruitment has come a long way, don't get me wrong - companies can share data with and talk to students in so many different ways now that it really has become a much more transparent and open process. There are some times, such as at IITs in India, where it really is a bit old school, with students running from company to company interviewing on campus over the course of a few days, but I want students talking to Oracle to have as great an experience as possible (the outcome of getting a job or not is separate from the customer experience). As students, what are your thoughts? Do you feel like 'just a number' when you are interviewing, or are there ways that companies can make the process more personalised? Let us know your thoughts. If you are interviewing with Oracle and have questions, want to talk to us or want to know what it is like working here - email us and we will help where we can. If you can't reach your local Recruiter in your region, email me at [email protected] and I will put you in touch with the appropriate person.

    Read the article

  • C# coding standards: "Use the const directive only on natural constants"

    - by Nathan Wilfert
    I've seen these 2 guidelines in a C# coding standard and I'm not sure what the 2nd one means:

    - With the exception of zero and one, never hard-code a numeric value; always declare a constant instead.
    - Use the const directive only on natural constants, such as the number of days of the week.

    First, what is the definition of a natural constant? And if a number is not a natural constant, then given the 1st rule, how does one declare a constant in C# without the const directive? See http://www.scribd.com/doc/10731655/IDesign-C-Coding-Standard-232 for reference.
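    For what it's worth, and as my own illustration rather than anything from the linked standard, the usual alternative to const in C# is a static readonly field, which is resolved at run time instead of being baked into calling assemblies at compile time:

        public static class Limits
        {
            // A "natural" constant: its value can never change, so const is safe.
            public const int DaysPerWeek = 7;

            // Not a natural constant: the value is a policy choice that may change,
            // so static readonly avoids compile-time inlining into other assemblies.
            public static readonly int MaxRetries = 5;
        }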

    Read the article

  • The Numerical ‘Magic’ of Cyclic Numbers

    - by Akemi Iwaya
    If you love crunching numbers or are just a fan of awesome number ‘tricks’ to impress your friends with, then you will definitely want to have a look at cyclic numbers. Dr Tony Padilla from the University of Nottingham shows how these awesome numbers work in Numberphile’s latest video. Cyclic Numbers – Numberphile [YouTube] Want to learn more about cyclic numbers? Then make sure to visit the Wikipedia page linked below! Cyclic number [Wikipedia]     
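    As a quick illustration of the trick (my own sketch, not from the video), the classic cyclic number 142857 - the repeating block of 1/7 - can be demonstrated in a few lines of Python:

        # Multiplying 142857 by 1..6 only rotates its digits,
        # which is exactly what makes it a cyclic number.
        n = 142857
        for k in range(1, 7):
            print(k, n * k)  # 142857, 285714, 428571, 571428, 714285, 857142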

    Read the article

  • User Lockout & WLST

    - by Bala Kothandaraman
    WebLogic Server provides an option to lock out users to protect accounts against password-guessing attacks. It is implemented with a realm-wide Lockout Manager. This feature can be used with a custom authentication provider as well. But if you implement your own authentication provider and wish to implement your own lockout manager, that is possible too. If your domain is configured to use the user lockout manager, the following WLST script will help you to:

    - check whether a user is locked
    - find out the number of locked users in the realm

    #Define constants
    url='t3://localhost:7001'
    username='weblogic'
    password='weblogic'
    checkuser='test-deployer'

    #Connect
    connect(username,password,url)

    #Get Lockout Manager Runtime
    serverRuntime()
    dr = cmo.getServerSecurityRuntime().getDefaultRealmRuntime()
    ulmr = dr.getUserLockoutManagerRuntime()

    print '-------------------------------------------'
    #Check whether a user is locked
    if (ulmr.isLockedOut(checkuser) == 0):
        islocked = 'NOT locked'
    else:
        islocked = 'locked'
    print 'User ' + checkuser + ' is ' + islocked

    #Print number of locked users
    print 'No. of locked users - ', Integer(ulmr.getUserLockoutTotalCount())
    print '-------------------------------------------'
    print ''

    #Disconnect & Exit
    disconnect()
    exit()
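    If it helps, a saved script like this is typically run with the WLST launcher (the script filename here is illustrative):

        java weblogic.WLST check_lockout.py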

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimizes for server performance and help ensure that the Extended Events doesn’t block the server if to drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events?Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine you session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low.You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created with fairly obvious implication. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed)Node: 3 * number_of_nodesCPU: 2.5 * number_of_cpus Here are some examples of what this means for different Node/CPU counts: Configuration None Node CPU 2 CPUs, 1 Node 3 buffers 3 buffers 5 buffers 6 CPUs, 2 Node 3 buffers 6 buffers 15 buffers 40 CPUs, 5 Nodes 3 buffers 15 buffers 100 buffers   Aside: Buffer size on multi-processor computersAs the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact cause by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is  largely a matter of experimentation. On the bright side you probably won’t need to to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. 
You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category.

Hey buddy – I think you forgot something

OK, there are two options I didn't cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn't cover.) I'm going to leave causality for another post since it's not really related to session behavior; it's more about event analysis.

- Mike

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance. The first and most common step if you suspect high CPU utilization (or are alerted to it) is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization as shown below:

    Next, we need to determine which process is responsible for the high CPU consumption. The Processes tab of the Task Manager will show this information:

    Note that to see all processes you should select "Show processes from all users". In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for maximum CPU utilization is about 50-60%). Next we examine the scheduler data. The scheduler is a component of SQLOS which distributes load evenly amongst CPUs. The query below returns the important columns for CPU troubleshooting. Note – if your server is under severe stress and you are unable to log in with SSMS, you can use another machine's SSMS to log in to the server through the DAC – Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using the DAC).

    SELECT scheduler_id
          ,cpu_id
          ,status
          ,runnable_tasks_count
          ,active_workers_count
          ,current_tasks_count
          ,load_factor
          ,yield_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 1048576

    See below for the BOL definitions of the above columns.

    scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id – ID of the CPU with which this scheduler is associated.
    status – Indicates the status of the scheduler.
    runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count – Number of current tasks that are associated with this scheduler.
    load_factor – Internal value that indicates the perceived load on this scheduler.
    yield_count – Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries, as they are all ONLINE. There are 294 active tasks in the output as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers. When a task is complete, this number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; new tasks are allocated to the least loaded scheduler by SQLOS. The very high value of this column indicates that all the schedulers have a high load. There are 268 runnable tasks, which means these tasks have been assigned a worker and are waiting on the runnable queue to be scheduled.

    The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note that, in its current form, it only shows the top 10 records).
    SELECT TOP 10 st.text
          ,st.dbid
          ,st.objectid
          ,qs.total_worker_time
          ,qs.last_worker_time
          ,qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time to show the most expensive queries and their plans at the top:

    Note the BOL definitions for the important columns:

    total_worker_time – Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    last_worker_time – CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds and got the output below. The stored procedure dbo.TestProc1 now appears in fourth place, and once again its last_worker_time is the highest. This means the procedure TestProc1 continues to consume CPU time each time it executes.

    In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load. I used SQL Server 2008 (SP1) to test all the queries in this article.
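    As a variation on the query above (my own sketch, not from the original article), normalizing by execution count helps separate queries that are individually expensive from queries that are merely executed very often:

    SELECT TOP 10 st.text
          ,qs.execution_count
          ,qs.total_worker_time
          ,qs.total_worker_time / qs.execution_count AS avg_worker_time  -- microseconds per execution
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY avg_worker_time DESC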

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. Dan is running a big system with Database Resource Manager (DBRM), and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

    Q: How do I control statements with very high DOPs driven from user hints in queries?

    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object – and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

    Q: What happens if I set parallel_max_servers higher than processes (i.e. the maximum number of processes allowed to run on my machine)?

    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than processes, you can never go over either...
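    To make the DBRM approach concrete, here is a hedged PL/SQL sketch of a directive that caps a consumer group at DOP 32, plus the table decoration alternative – the plan name, group name, and table are hypothetical, and the sketch assumes the plan and consumer group already exist:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                     => 'DAYTIME_PLAN',       -- hypothetical plan
        group_or_subplan         => 'REPORTING_GROUP',    -- hypothetical consumer group
        comment                  => 'Cap parallel statements at DOP 32',
        parallel_degree_limit_p1 => 32);
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    -- Table decoration (preferred over per-statement hints)
    ALTER TABLE sales PARALLEL 64;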

    Read the article

  • Silverlight Cream for December 28, 2010 -- #1017

    - by Dave Campbell
    In this Issue: Davide Zordan, Alex Golesh, Michael S. Scherotter, Andrej Tozon, Alex Knight, Jeff Blankenburg(-2-), Jeremy Likness, and Laurent Bugnion.

    Above the Fold:
    Silverlight: "My “What’s new in Silverlight 4 demo” app" – Andrej Tozon
    WP7: "Taking a screenshot from within a Silverlight #WP7 application" – Laurent Bugnion
    Expression Blend: "PathListBox: getting started" – Alex Knight

    Shoutouts: If you haven't seen this SurfCube app demo on YouTube yet... check it out now: SurfCube V1.0 Windows Phone 7 Browser. Want to get a free WP7 class from Shawn Wildermuth? Check this out: Webinar: Writing your first Windows Phone 7 Application. Koen Zwikstra announced the next preview of his great tool: Silverlight Spy Preview 2.

    From SilverlightCream.com:
    Using the Multi-Touch Behavior in a Windows Phone 7 Multi-Page application – Davide Zordan has a post up responding to questions he receives about multi-touch on WP7 in applications spanning more than one page.
    Silverlight for Windows Phone 7 Quick Tip: Fix missing icons while using DatePicker/TimePicker controls – Alex Golesh discusses the use of the DatePicker control from the WP7 toolkit, found an unpleasant surprise associated with the Done/Cancel icons in the ApplicationBar, and has a solution for us.
    Updated SMF Thumbnail Scrubbing Sample Code – Michael S. Scherotter has a post up about an update he's done to Silverlight 4 of code that allows thumbnail views of a video while 'scrubbing'... don't know what that is? Read the post :)
    My “What’s new in Silverlight 4 demo” app – Andrej Tozon admits he's a little behind with this post, but as he points out, it might be a good time to review Silverlight 4 features, on the eve of 5.
    PathListBox: getting started – One half of the Knight team, Alex Knight this time, has the first post of a series on the PathListBox up... some real Expression Blend goodness.
    What I Learned in WP7 – Issue #9 – Two more from Jeff Blankenburg today; in his number 9 he starts off demonstrating passing data between pages when navigating and finishes up with some excellent info for submitting apps to the marketplace.
    What I Learned in WP7 – Issue #10 – Jeff Blankenburg's number 10 elaborates on the query string data he discussed in number 9.
    Using Sterling in Windows Phone 7 Applications – Who better than the author?? Jeremy Likness has an end-to-end WP7/Sterling app up on his blog... begin with downloading Sterling, discuss what's needed to support Tombstoning, even custom serialization.
    Taking a screenshot from within a Silverlight #WP7 application – Laurent Bugnion has a post up describing something people have been looking for: getting a screenshot of a WP7 application's page.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal axis in arbitrary 'tick' time units. Click the image to see it full sized.

    Pooled connections

    Beginning with the top-left graph: at tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP:

    $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled');

    A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom-left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top-right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes, but this is almost too small to see on the graph.

    Non-pooled connections

    Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections:

    $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl');

    This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top-right graph.
    There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it – hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

    Summary

    Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.
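    If you want to reproduce this kind of test yourself, here is a hedged SQL sketch of the server-side DRCP setup and monitoring – the pool sizes are illustrative (they match the 40-process maximum used in the demo), and the commands assume a suitably privileged connection:

    -- Configure and start the default DRCP pool
    BEGIN
      DBMS_CONNECTION_POOL.CONFIGURE_POOL(
        pool_name          => 'SYS_DEFAULT_CONNECTION_POOL',
        minsize            => 4,     -- default pool floor
        maxsize            => 40,    -- cap seen in the demo graphs
        inactivity_timeout => 300);  -- seconds before idle servers are reclaimed
      DBMS_CONNECTION_POOL.START_POOL();
    END;
    /

    -- Watch pool usage while the load test runs
    SELECT num_open_servers, num_busy_servers, num_requests, num_hits, num_misses
    FROM   v$cpool_stats;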

    Read the article

  • Live Support Webinar for Oracle Primavera Customers

    - by karl.prutzer
    Hi all, Our Customer Support team is hosting another Live Support Webinar for Oracle Primavera customers, scheduled for May 6, 2010 at 11am Eastern Time. The webinar covers the following topics:

    - Best practices when submitting an SR
    - My Oracle Support overview
    - Support resources – lifetime support policy, My Oracle Support Speed training resources, etc.

    Both the conference key for the web conference and the audio passcode for the call are...

    Primavera Audio Conference Details
    Toll-free dial-in number = 1.877.808.5067
    International toll dial-in number = 1.706.902.0289
    Web conference link: https://strtc.oracle.com/imtapp/app/sch_mtg_details.uix?mID=6761278

    Read the article

  • Platform Builder: Building Cloned Code from Multiple OS Versions

    - by Bruce Eitman
    My career goal is to delete more code than I write, and so far I have been fairly successful. But of course once in a while I need to clone code from the public tree, which is contrary to my goal. Usually what follows is a new OS release. To help reach my goal, my team uses the same BSP code for multiple versions of the OS. That means that we need to handle the cloned code so that the correct code builds for the OS version that we are working on. To handle this we could use SKIPBUILD in the sources file, but that gets messy if the cloned code contains multiple folders. The solution that we use is to have a parent folder with subfolders that contain the OS version number. Example:

    PM
    |- PM500
    |- PM600
    |- PM700

    The version number corresponds to the environment variable _WINCEOSVER. Then we have a simple DIRS file in the parent folder:

    DIRS=PM$(_WINCEOSVER)

    This automatically selects the folder that matches the OS version being built.

    Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years. The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database. Our object model is written in such a way that each public method validates security before performing requested actions, so there is a significant number of queries executed to get information about file cabinets, retrieve images, create workflows, etc. (PaperWise is a document management and workflow system.) A common factor for these customers is that they have remote offices connected via MPLS networks.

    Naturally, the first thing we looked at was the query performance in SQL Profiler. All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero. After getting nowhere with SQL Profiler, the situation was escalated to me. I decided to take a peek with Process Monitor. Procmon revealed some "gaps" in the TCP/IP traffic. There were notable delays between send and receive pairs. The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send. You might expect some delay because, presumably, the application is doing some thinking in-between the pairs. But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed. Procmon also showed a high number of disconnects.

    Wireshark traces showed that connections to the database were taking between 75ms and 150ms. Not only that, but connections to a file share containing images were taking 2 seconds! So, I asked about a trust. Sure enough there was a trust between two domains and the file share was on the second domain. Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share. Removing the trust had no effect on the connections to the database.

    Microsoft Network Monitor includes filters that parse TDS packets. TDS is the protocol that SQL Server uses to communicate. There is a certificate exchange and some SSL that occurs during authentication. All of this was evident in the network traffic. After staring at the network traffic for a while, and examining packets, I decided to call it a night. On the way home that night, something about the traffic kept nagging at me. Then it dawned on me… at the beginning of the dance of packets between the client and the server all was well. Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port. After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports. SQL Server would execute a single query and respond on a port, then open a new port and execute the next query. Connection pooling appeared to be broken.

    The next morning I wrote a test to confirm my hypothesis. It turns out that the sequence causing the connection nastiness goes something like this:

    1. Make a connection to the database.
    2. Open a result set that returns enough records to require multiple roundtrips to the server.
    3. For each result, query for some other data in the database (this will open a new implicit connection).
    4. Close the inner result set and repeat for every item in the original result set.
    5. Close the original connection.

    Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop. Originally, I thought this might be due to Microsoft's denial-of-service (DoS) attack protection. After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors. Server-side cursors are the default, by the way. Voila! After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected.

    While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away. Believe it or not, this is actually documented by Microsoft, and rather difficult to find. (At least it was while we were trying to troubleshoot the problem!) So, if you're noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor. If you notice a high number of disconnects, and you're using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away.

    Most likely, Microsoft believes this to be appropriate behavior, because ADODB can't guarantee that all of the data has been retrieved when you execute the inner queries. I'm not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in-between each of the inner queries. In that case, there doesn't seem to be a reason why ADODB can't use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two. Instead ADO appears to make an assumption about the state of the connection.

    I've reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem. Maybe they can explain to us why this is appropriate behavior. :)
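    If you suspect the same cursor-location issue in your own application, a quick way to see the connection storm from the SQL Server side is to count sessions per client while you reproduce it (a sketch of mine, not from the original post):

    SELECT login_name, host_name, program_name, COUNT(*) AS connection_count
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
    GROUP BY login_name, host_name, program_name
    ORDER BY connection_count DESC;

    With server-side cursors triggering the behavior described above, the count for the affected client climbs as the loop runs; after switching to client-side cursors it should settle at the expected two connections.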

    Read the article

  • Specifying a file name for the FTP and File based transports in OSB

    - by [email protected]
    A common question I receive is how to incorporate a variable value into a file name when using the FTP, SFTP, or File transports in Oracle Service Bus. For example, if one of the fields in a message being written to a file by the File transport is an order number variable, how can you make the order number part of the file name? Another example might be if you want to include the date in the file name. The transport configuration wizard in OSB does not have an option to allow for this, other than allowing you to specify a static prefix or suffix.

    Read the article

  • Modelling photo-realistic grass in realtime

    - by sebf
    Hello, I see a number of tutorials on how to create good-looking grass for 3D renders, but I can't see how to model it for realtime use in a game's scenery. Sure, simple models with alpha cutouts can be used to create plants and trees in really awesome scenery, but what about a lawn? Are there any good tricks to achieve this effect? I tried with a simple 4-sided box and a small texture, and the number of objects needed for a decent appearance made Max grind to a halt. (I am thinking it may be possible with a shader, but that is a whole other area, so I thought I would just ask about anyone's experience with modelling it here.) Thanks!

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I was working on an issue with the list view threshold. The problem is that when the user navigates to a view of the document library, it displays the error message "list view threshold is exceeded". However, the view returns no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result requires the database to access more than 5000 items. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold in the web application from 5000 to 2000, so that the problem can be shown without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the message shown below, although no data is returned for this view.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, I can now access the approved view; as you can see, it does not return any data.

    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx

    Read the article

< Previous Page | 233 234 235 236 237 238 239 240 241 242 243 244  | Next Page >