Search Results

Search found 36169 results on 1447 pages for 'boost program options'.

  • How do I program a hardware button on my laptop?

    - by daniel11
So I have a set of touch screen buttons at the top of my laptop's keyboard (one turns on power save mode, one disables/enables the mouse touch pad, one toggles the wifi adapter, and two turn the volume up/down). However, there's another one that opened a file backup utility that came with my laptop. Since I bought my laptop, I've uninstalled that utility, rendering this button useless. Would it be possible to reprogram that button to open another file or program on my computer? And if so, what are the requirements and how would I go about doing it? Thanks in advance!

    Read the article

  • Preview hidden options on web sites

    - by veili_13
I want to have hidden options on my web site that are previewed only when the user chooses. I want to animate them in from the left and right (to about 15% width). At the moment I have them visible in my HTML but with the width set to 0, and I animate it to 15%. In these containers are many options which should disappear to the left/right (with a slide effect). But I am not so sure about having all of this in my HTML, so I am thinking of creating them with jQuery, with 0 width, animating them to their intended width and setting the other elements to 0 width. Is this approach good, or should I create them and hide the overflow so I would animate the position? Or is there a better approach?
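A rough sketch of the first approach described above (panels kept in the markup at zero width and animated open with jQuery), using hypothetical element IDs:

    // the panel starts collapsed (width: 0, overflow: hidden) in the stylesheet or markup
    $('#show-left').on('click', function () {
        $('#left-panel').animate({ width: '15%' }, 300); // slide the left panel in
    });

    $('#hide-left').on('click', function () {
        $('#left-panel').animate({ width: 0 }, 300);     // slide it back out
    });

Whether the panels are present in the HTML from the start or built with jQuery at runtime, the animation code is the same; the main difference is how much markup the page carries before the user asks for it.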

    Read the article

  • How to Easily Add Custom Right-Click Options to Ubuntu’s File Manager

    - by Chris Hoffman
Use Nautilus-Actions to easily and graphically create custom context menu options for Ubuntu’s Nautilus file manager. If you don’t want to create your own, you can install Nautilus-Actions-Extra to get a package of particularly useful user-created tools. Nautilus-Actions is simple to use – much simpler than editing the Windows registry to add Windows Explorer context menu options. All you really have to do is name your option and specify a command or script to run.

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

There is a reason it’s called Default

The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms, this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases…

What event just fired?! How about now?! Now?!

If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
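As a minimal sketch of what that could look like (the session name and event below are placeholders, not taken from the original post), session-level options such as this one go in the WITH clause:

    -- Sketch: process event buffers at most 5 seconds after events are buffered
    CREATE EVENT SESSION [demo_low_latency] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);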
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant.

Avoid forgotten events by increasing your memory

Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, events that can’t be published to a buffer because the buffer is full are dropped. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment.

If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected.

Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section.

Some events are too big to fail

A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
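A quick way to check both of the conditions described above is to query the DMV mentioned in the post; this is only a sketch and the session name is a placeholder:

    -- Sketch: check a running session for dropped events and oversized events
    SELECT s.name,
           s.dropped_event_count,
           s.largest_event_dropped_size,
           s.regular_buffer_size          -- roughly MAX_MEMORY / 3 by default
    FROM sys.dm_xe_sessions AS s
    WHERE s.name = N'demo_low_latency';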
Partitioning: moving your events to a sub-division

Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

None: 3 buffers (fixed)
Node: 3 * number_of_nodes
CPU: 2.5 * number_of_cpus

Here are some examples of what this means for different Node/CPU counts:

Configuration       None        Node        CPU
2 CPUs, 1 Node      3 buffers   3 buffers   5 buffers
6 CPUs, 2 Nodes     3 buffers   6 buffers   15 buffers
40 CPUs, 5 Nodes    3 buffers   15 buffers  100 buffers

Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.

Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category.

Hey buddy – I think you forgot something

OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis.

- Mike
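Pulling the options covered in this post together, a session that overrides several of the defaults might be declared roughly as follows. This is only a sketch: the session name, event and target are placeholders, and the values are illustrative rather than recommendations:

    -- Sketch: combining the session-level options discussed above
    CREATE EVENT SESSION [demo_tuned_session] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 16 MB,                             -- bigger total event buffer
        MAX_DISPATCH_LATENCY = 5 SECONDS,               -- process buffers more often
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- the default, usually the right choice
        MEMORY_PARTITION_MODE = PER_CPU,                -- partition buffers per CPU
        STARTUP_STATE = ON                              -- start the session when the server starts
    );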

    Read the article

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
[Notes from Pinal]: When it is about High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when the user has to select the most optimal solution for their organization they are often confused. Most people even know the salient features of the various options, but when they have to settle on one single option they are often not sure which to use. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data driven world. In this 26th episode of the Notes from the Field series database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server. Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible. To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options. This could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons. No single solution fits all scenarios, so understanding which solution meets which need is important. As with anything IT related, you need to fully understand your requirements before trying to solve the problem. When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues. For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica. Failover clustering and virtualization with shared storage do not provide redundancy of the data. I recently created a chart outlining some pros and cons of each of the technologies that I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain number of benefits. Each of these solutions carries with it some level of cost and complexity. As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Shrinking Database

    Read the article

  • Magento Checkout options

    - by graham barnes
Hi, I want to add some options to my Magento store. Let's say I print on clothing: a customer buys some t-shirts, shirts and jackets from me, and it totals £60 + VAT in the checkout area where they sign up, and not before. I need to add an option where I can add a text box and an upload field. Can I do this? Ideally I then want to add some pricing options if the user has chosen to add branding to a product or multiple products, e.g. if the branding is on the top right of the shirt it will cost £5.00, if on the back it costs £7.00, etc., all, if possible, managed via the admin CP. I also want an option so that when they upload their logo for the first time they are charged a one-off charge, like a setup fee, but if the customer has already sent in their logo then no charge applies. Thanks, Graham

    Read the article

  • Joomla and Google Analytics advanced options in tracking code

    - by miako
I want to insert the Google Analytics tracking code in my Joomla site, so I registered on the official Google site and saw there is an Advanced tab with three more options than Standard. Do I have to check "I want to track dynamic pages" and "I want to track PHP pages"? Do these options provide me with better results, or are they necessary for a dynamic site based on PHP like Joomla? Does anyone know the process of installing? Because I didn't manage to make it work by following this. Also, where do I place the tracking code? Because of some bugs, some say it is better just after the tag <body> whereas others say just before the tag </body>. Thank you
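For reference, the asynchronous snippet Google Analytics handed out at the time looked roughly like the sketch below (UA-XXXXXX-X is a placeholder for your own property ID). The asynchronous ga.js version was meant to go just before the closing </head> tag; the advice about placing code just before </body> applied to the older synchronous snippet:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXX-X']);  // your web property ID
      _gaq.push(['_trackPageview']);

      (function() {
        // load ga.js asynchronously so it does not block page rendering
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>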

    Read the article

  • Where did all of the options go? [closed]

    - by Devan
    In the last version of Ubuntu I used (10.04) I had the following options, which I can not find on 11.10: Set amount of time before screen dims Show percent instead of time in battery icon Change/add screensaver Change default applications (especially in the new launcher) Also, is there any way to make it so unity opens to more apps/all instead of the current default? Basically I just need to know if there's any way to customize Ubuntu that little bit more. Any help with any of the options above would be greatly appreciated! Note: I'm a noob and don't know much about command line, so gui apps would be preferred, if possible.

    Read the article

  • No options for sound output

    - by Chef Flambe
    Newbie here. Trying out Ubuntu for kicks. I have a 10.04 Ubuntu system dual booting on my Windows 7 laptop, fully updated, an Asus A53S. I have an external Samsung monitor/tv hooked up via HDMI. I'm getting sound from the external monitor fine when using Win7 but can't get anything with Ubuntu. I've read threads here about similar stuff and they say go check my sound options but I don't have any additional options than the one listed - Internal Audio. What else can I check/do?

    Read the article

  • are there compiler options in clang? [on hold]

    - by Deohboeh
I am learning from the C++ Primer. One of the exercises is to compile a program with arguments in main(). For this I am trying to use the Mac terminal. I need to compile a C++11 Unix executable file named "main" which takes "f" as an argument. I am using Xcode 4.6.3 on OS X Lion. I compiled the program with clang++ -std=c++11 -stdlib=libc++ main.cpp -o main, but I don't know what to do next. I found -frecord-gcc-switches while searching compiler options on Google. It does what I need to do. Is there a clang version of this? Please use simple language. I have never used the command line before. I tried going through the clang user guide but a lot of it is out of my depth.
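If the immediate goal is just to see the "f" argument reach the program, a minimal sketch of main.cpp (hypothetical contents, not from the original question) could be the following; after building with the clang++ command quoted above, it is run from the terminal as ./main f:

    #include <iostream>
    #include <string>

    // argc counts the command-line arguments; argv[0] is the program name and
    // argv[1] is the first real argument ("f" in this exercise).
    int main(int argc, char* argv[])
    {
        if (argc > 1 && std::string(argv[1]) == "f")
            std::cout << "received the argument f\n";
        else
            std::cout << "usage: ./main f\n";
        return 0;
    }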

    Read the article

  • How to give an error when no options are given with optparse

    - by Acorn
I'm trying to work out how to use optparse, but I've come to a problem. My script (represented by this simplified example) takes a file, and does different things to it depending on options that are passed to it. If no options are passed, nothing is done. It makes sense to me that because of this, an error should be given if no options are given by the user. I can't work out how to do this. Am I using options in the wrong way? If so, how should I be doing it instead?

    #!/usr/bin/python
    from optparse import OptionParser

    dict = {'name': 'foo', 'age': 'bar'}  # defaults (these were undefined names in the original)

    parser = OptionParser()
    parser.add_option("-n", "--name", dest="name")
    parser.add_option("-a", "--age", dest="age")
    (options, args) = parser.parse_args()

    if options.name:
        dict['name'] = options.name
    if options.age:
        dict['age'] = options.age

    print dict
    #END
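One minimal way to raise an error when the user supplies no options at all, sketched against the example above and using optparse's own error handling, would be:

    (options, args) = parser.parse_args()

    # If every option is still None, the user supplied nothing: report it via optparse,
    # which prints the usage message and exits with a non-zero status.
    if options.name is None and options.age is None:
        parser.error("at least one of --name or --age is required")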

    Read the article

  • Merit and demerits for various Linux fiberchannel multipath options

    - by wzzrd
On our Linux servers, we currently use HP's qla2xxx drivers, because they have multipathing (active/passive) built in. There are, however, various other options, like Red Hat's device-mapper-multipath with the stock qla2xxx drivers (multibus and failover) and things like SecurePath and PowerPath (both of which can do trunking, iirc). Can someone tell me what the merits and demerits of the various options are (if I can ask such a question), besides the obvious fact that the {Secure,Power}Path options cost vast amounts of money? I'm mainly interested in the freely available options, like HP's qla2xxx vs. Red Hat's multipathd and possible other open source solutions, but I would like to hear good reasons to go for the commercial solutions too. UPDATE: I'll be benchmarking the various options over the coming few days (the average of 10 runs of iozone for each option, the options being native qla2xxx failover, native qla2xxx multibus, HP qla2xxx failover). I'll post a summary of results here for those interested.

    Read the article

  • Break in Class Module vs. Break on Unhandled Errors (VB6 Error Trapping, Options Setting in IDE)

    - by Erx_VB.NExT.Coder
Basically, I'm trying to understand the difference between the "Break in Class Module" and "Break on Unhandled Errors" options that appear in the Visual Basic 6.0 IDE under the following path: Tools --> Options --> General --> Error Trapping. The three options appear to be:

1. Break on All Errors
2. Break in Class Module
3. Break on Unhandled Errors

Now, apparently, according to MSDN, the second option (Break in Class Module) really just means "Break on Unhandled Errors in Class Modules". Also, this option appears to be set by default (i.e. I think it's set to this out of the box). What I am trying to figure out is, if I have the second option selected, do I get the third option (Break on Unhandled Errors) for free? In that, does it come included by default for all scenarios outside of the Class Module spectrum? To advise, I don't have any Class Modules in my currently active project. I have .bas modules though. Also, is it possible that by Class Modules they may be referring to normal .bas Modules as well? (this is my second sub-question). Basically, I just want the setting to ensure there won't be any surprises once the exe is released. I want as many errors to display as possible while I am developing, and none to be displayed when in release mode. Normally, I have two types of On Error Resume Next on my forms where there isn't explicit error handling, they are as follows:

On Error Resume Next ' REQUIRED
On Error Resume Next ' NOT REQUIRED

The required ones are things like checking to see if an array has any length: if a call to its UBound errors out, that means it has no length; if it returns a value of 0 or more, then it does have length (and therefore exists). These types of error statements need to remain active even while I am developing. However, the NOT REQUIRED ones shouldn't remain active while I am developing, so I have them all commented out to ensure that I catch all the errors that exist. Once I am ready to release the exe, I do a CTRL+H to find all occurrences of:

'On Error Resume Next ' NOT REQUIRED

(You may have noticed they are commented out.) And replace them with:

On Error Resume Next ' NOT REQUIRED

... the uncommented version, so that in release mode, if there are any leftover errors, they do not show to users. For more on the description by MSDN of the three options (which I've read twice and still don't find adequate) you can visit the following link: http://webcache.googleusercontent.com/search?q=cache:yUQZZK2n2IYJ:support.microsoft.com/kb/129876&hl=en&lr=lang_en%7Clang_tr&gl=au&tbs=lr:lang_1en%7Clang_1tr&prmd=imvns&strip=1 I'm also interested in hearing your thoughts if you feel like volunteering them (and this would be my tentative/totally optional third sub-question, that being, your thoughts on fall-back error handling techniques). Just to summarize, the first two questions were: do we get option 3 included in all non-class scenarios if we choose option 2? And, is it possible that when they use the term "Class Module" they may be referring to .bas Modules as well? (Since a .bas Module is really just a class module that is pre-instantiated in the background during start-up.) Thank you.
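To illustrate the "REQUIRED" case described above (probing an array with UBound under On Error Resume Next), a rough VB6 sketch with made-up variable names might be:

    On Error Resume Next ' REQUIRED
    Dim upperBound As Long
    upperBound = -1
    upperBound = UBound(myArray) ' raises error 9 if the array has never been dimensioned
    On Error GoTo 0

    If upperBound >= 0 Then
        ' the array exists and has at least one element
    End If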

    Read the article

  • How can I get cmake to find my boost installation

    - by BD at Rivenhill
    I have installed the most recent version of boost in /usr/local (with includes in /usr/local/boost and libraries in /usr/local/lib/boost) and I am now attempting to install Wt from source, but cmake (version 2.6) can't seem to find the boost installation. It tries to give helpful suggestions about setting BOOST_DIR and Boost_LIBRARYDIR, but I haven't been able to get it to work by tweaking these variables. The most recent error message that I get is that it can't find the libraries, but it seems to indicate that it is using "/usr/local/include" for the include path, which isn't correct (and I can't seem to fix it). Does anybody have a solution for this off the top of their head, or do I need to go mucking around inside cmake to figure it out?

    Read the article

  • Lucene boost: I need to make it work better

    - by zvikico
    I'm using Lucene to index components with names and types. Some components are more important, thus, get a bigger boost. However, I cannot get my boost to work properly. I sill get some components appear later (get worse score), even though they have a higher boost. Note that the indexing is done on one field only and I've set the boost to that field alone. I'm using Lucene in Java. I don't think it has anything to do with the field length. I've seen components with the same name (but different type) get the wrong score.

    Read the article

  • LNK2005: delete already defined error in VC++

    - by user333422
    Hi, I am newbie to VC++, so please forgive me if this is stupid. I have got 7 solutions under one project. Six of these build static libraries which is linked inside 7th to produce an exe. The runtime configuration of all the projects is MultiThreaded Debug. sln which is used to produce exe is using MFC other slns use standard runtiem library. I tried changing those to MFC, but still I am getting the same error. All the six slns build successfully. When I try to build exe - get the follwoing error: nafxcwd.lib(afxmem.obj) : error LNK2005: "void __cdecl operator delete(void *,int,char const *,int)" (??3@YAXPAXHPBDH@Z) already defined in tara_common.lib(fileStream.obj) This is weird because tara_common is one of the libs that I generated and fileStream.cpp is file that is just using delete on a pointer. I built it in verbose mod, so I am attaching the output. ENVIRONMENT SPACE _ACP_ATLPROV=C:\Program Files\Microsoft Visual Studio 9.0\VC\Bin\ATLProv.dll _ACP_INCLUDE=C:\Program Files\Microsoft Visual Studio 9.0\VC\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include _ACP_LIB=C:\fta\tara\database\build\Debug;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib\i386;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft Visual Studio 9.0\;C:\Program Files\Microsoft Visual Studio 9.0\lib _ACP_PATH=C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\tools;C:\Program Files\Microsoft Visual Studio 9.0\Common7\ide;C:\Program Files\HTML Help Workshop;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\;C:\WINDOWS\SysWow64;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\GnuWin32\bin;C:\Python26 ALLUSERSPROFILE=C:\Documents and Settings\All Users CLIENTNAME=Console CommonProgramFiles=C:\Program Files\Common Files ComSpec=C:\WINDOWS\system32\cmd.exe FP_NO_HOST_CHECK=NO HOMEDRIVE=C: INCLUDE=C:\Program Files\Microsoft Visual Studio 9.0\VC\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include LIB=C:\fta\tara\database\build\Debug;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib\i386;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft Visual Studio 9.0\;C:\Program Files\Microsoft Visual Studio 9.0\lib LIBPATH=c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib LOGONSERVER=\xxx NUMBER_OF_PROCESSORS=1 OS=Windows_NT PATH=C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Program Files\Microsoft Visual Studio 
9.0\Common7\Tools\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\tools;C:\Program Files\Microsoft Visual Studio 9.0\Common7\ide;C:\Program Files\HTML Help Workshop;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\;C:\WINDOWS\SysWow64;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\GnuWin32\bin;C:\Python26 PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH PROCESSOR_ARCHITECTURE=x86 PROCESSOR_IDENTIFIER=x86 Family 6 Model 23 Stepping 10, GenuineIntel PROCESSOR_LEVEL=6 PROCESSOR_REVISION=170a ProgramFiles=C:\Program Files SESSIONNAME=Console SystemDrive=C: SystemRoot=C:\WINDOWS VisualStudioDir=C:\Documents and Settings\sgupta\My Documents\Visual Studio 2008 VS90COMNTOOLS=C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\ WecVersionForRosebud.710=2 windir=C:\WINDOWS COMMAND LINES: Creating temporary file "c:\fta\tools\channel_editor\IvoDB\Debug\RSP00011018082288.rsp" with contents [ /VERBOSE /OUT:"C:\fta\tools\channel_editor\Builds\IvoDB_1_35_Debug.exe" /INCREMENTAL /LIBPATH:"......\3rdparty\boost_1_42_0\stage\lib" /MANIFEST /MANIFESTFILE:"Debug\IvoDB_1_35_Debug.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DELAYLOAD:"OleAcc.dll" /DEBUG /PDB:"C:\fta\tools\channel_editor\Builds\IvoDB_1_35_Debug.pdb" /SUBSYSTEM:WINDOWS /DYNAMICBASE /NXCOMPAT /MACHINE:X86 ......\tara\database\build\Debug\tara_database.lib ......\tara\common\build\Debug\tara_common.lib ......\3rdparty\sqliteWrapper\Debug\sqliteWrapper.lib ......\3rdparty\sqlite_3_6_18\Debug\sqlite.lib ......\stsdk\modules\win32\Debug\modules.lib ......\stsdk\axeapi\win32\Debug\axeapi.lib DelayImp.lib ".\Debug\AntennaSettings.obj" ".\Debug\AudioVideoSettings.obj" ".\Debug\CMDatabase.obj" ".\Debug\CMSettings.obj" ".\Debug\ColorFileDialog.obj" ".\Debug\ColorStatic.obj" ".\Debug\DragDropListCtrl.obj" ".\Debug\DragDropTreeCtrl.obj" ".\Debug\FavouriteEdit.obj" ".\Debug\FavTab.obj" ".\Debug\FindProgram.obj" ".\Debug\HyperLink.obj" ".\Debug\IvoDB.obj" ".\Debug\IvoDBDlg.obj" ".\Debug\IvoDBInfo.obj" ".\Debug\IvoDBInfoTab.obj" ".\Debug\IvoDbStruct.obj" ".\Debug\LayoutHelper.obj" ".\Debug\MainTab.obj" ".\Debug\OperTabCtrl.obj" ".\Debug\ParentalLock.obj" ".\Debug\ProgramEdit.obj" ".\Debug\ProgramTab.obj" ".\Debug\PVRSettings.obj" ".\Debug\SatTab.obj" ".\Debug\SettingsBase.obj" ".\Debug\SettingsTab.obj" ".\Debug\STBSettings.obj" ".\Debug\stdafx.obj" ".\Debug\TimeDate.obj" ".\Debug\TransponderEdit.obj" ".\Debug\TreeTab.obj" ".\Debug\UserPreferences.obj" ".\Debug\Xmodem.obj" ".\Debug\IvoDB.res" ".\Debug\IvoDB_1_35_Debug.exe.embed.manifest.res" ] Creating command line "link.exe @c:\fta\tools\channel_editor\IvoDB\Debug\RSP00011018082288.rsp /NOLOGO /ERRORREPORT:PROMPT" **Processed /DEFAULTLIB:atlsd.lib Processed /DEFAULTLIB:ws2_32.lib Processed /DEFAULTLIB:mswsock.lib Processed /DISALLOWLIB:mfc90d.lib Processed /DISALLOWLIB:mfcs90d.lib Processed /DISALLOWLIB:mfc90.lib Processed /DISALLOWLIB:mfcs90.lib Processed /DISALLOWLIB:mfc90ud.lib Processed /DISALLOWLIB:mfcs90ud.lib Processed /DISALLOWLIB:mfc90u.lib Processed /DISALLOWLIB:mfcs90u.lib Processed /DISALLOWLIB:uafxcwd.lib Processed /DISALLOWLIB:uafxcw.lib Processed /DISALLOWLIB:nafxcw.lib Found "void __cdecl operator delete(void *)" (??3@YAXPAX@Z) Referenced in axeapi.lib(ipcgeneric.obj) Referenced in 
axeapi.lib(ipccommon.obj) Referenced in axeapi.lib(activeobject.obj) Referenced in nafxcwd.lib(nolib.obj) Referenced in sqliteWrapper.lib(DbConnection.obj) Referenced in sqliteWrapper.lib(Statement.obj) Referenced in axeapi.lib(nvstorage.obj) Referenced in axeapi.lib(avctrler.obj) Referenced in tara_common.lib(trace.obj) Referenced in tara_common.lib(ssPrintf.obj) Referenced in tara_common.lib(taraConfig.obj) Referenced in tara_common.lib(stream.obj) Referenced in tara_common.lib(STBConfigurationStorage.obj) Referenced in tara_common.lib(STBConfiguration.obj) Referenced in tara_common.lib(configParser.obj) Referenced in tara_common.lib(fileStream.obj) Referenced in tara_database.lib(SatStream.obj) Referenced in tara_database.lib(Service.obj) Referenced in tara_database.lib(ServiceList.obj) Referenced in tara_common.lib(playerConfig.obj) Referenced in UserPreferences.obj Referenced in Xmodem.obj Referenced in tara_database.lib(init.obj) Referenced in tara_database.lib(Satellite.obj) Referenced in TransponderEdit.obj Referenced in TreeTab.obj Referenced in TreeTab.obj Referenced in UserPreferences.obj Referenced in stdafx.obj Referenced in TimeDate.obj Referenced in TimeDate.obj Referenced in TransponderEdit.obj Referenced in SettingsTab.obj Referenced in SettingsTab.obj Referenced in STBSettings.obj Referenced in STBSettings.obj Referenced in SatTab.obj Referenced in SatTab.obj Referenced in SettingsBase.obj Referenced in SettingsBase.obj Referenced in ProgramTab.obj Referenced in ProgramTab.obj Referenced in PVRSettings.obj Referenced in PVRSettings.obj Referenced in ParentalLock.obj Referenced in ParentalLock.obj Referenced in ProgramEdit.obj Referenced in ProgramEdit.obj Referenced in MainTab.obj Referenced in MainTab.obj Referenced in OperTabCtrl.obj Referenced in OperTabCtrl.obj Referenced in IvoDBInfoTab.obj Referenced in IvoDbStruct.obj Referenced in LayoutHelper.obj Referenced in LayoutHelper.obj Referenced in IvoDBDlg.obj Referenced in IvoDBInfo.obj Referenced in IvoDBInfo.obj Referenced in IvoDBInfoTab.obj Referenced in HyperLink.obj Referenced in IvoDB.obj Referenced in IvoDB.obj Referenced in IvoDBDlg.obj Referenced in FavTab.obj Referenced in FavTab.obj Referenced in FindProgram.obj Referenced in FindProgram.obj Referenced in DragDropTreeCtrl.obj Referenced in DragDropTreeCtrl.obj Referenced in FavouriteEdit.obj Referenced in FavouriteEdit.obj Referenced in ColorFileDialog.obj Referenced in ColorStatic.obj Referenced in DragDropListCtrl.obj Referenced in DragDropListCtrl.obj Referenced in CMDatabase.obj Referenced in CMDatabase.obj Referenced in CMSettings.obj Referenced in CMSettings.obj Referenced in AntennaSettings.obj Referenced in AntennaSettings.obj Referenced in AudioVideoSettings.obj Referenced in AudioVideoSettings.obj Loaded nafxcwd.lib(afxmem.obj) nafxcwd.lib(afxmem.obj) : error LNK2005: "void __cdecl operator delete(void *,int,char const *,int)" (??3@YAXPAXHPBDH@Z) already defined in tara_common.lib(fileStream.obj)** Please help me out, I have already wasted 3 days looking on net and trying all the possible solutions I found. thanks in advance, SG

    Read the article

  • Adding Volcanos and Options - Earthquake Locator, part 2

    - by Bobby Diaz
    Since volcanos are often associated with earthquakes, and vice versa, I decided to show recent volcanic activity on the Earthquake Locator map.  I am pulling the data from a website created for a joint project between the Smithsonian's Global Volcanism Program and the US Geological Survey's Volcano Hazards Program, found here.  They provide a Weekly Volcanic Activity Report as an RSS feed.   I started implementing this new functionality by creating a new Volcano entity in the domain model and adding the following to the EarthquakeService class (I also factored out the common reading/parsing helper methods to a separate FeedReader class that can be used by multiple domain service classes):           private static readonly string VolcanoFeedUrl =             ConfigurationManager.AppSettings["VolcanoFeedUrl"];           /// <summary>         /// Gets the volcano data for the previous week.         /// </summary>         /// <returns>A queryable collection of <see cref="Volcano"/> objects.</returns>         public IQueryable<Volcano> GetVolcanos()         {             var feed = FeedReader.Load(VolcanoFeedUrl);             var list = new List<Volcano>();               if ( feed != null )             {                 foreach ( var item in feed.Items )                 {                     var quake = CreateVolcano(item);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates a <see cref="Volcano"/> object for each item in the RSS feed.         /// </summary>         /// <param name="item">The RSS item.</param>         /// <returns></returns>         private Volcano CreateVolcano(SyndicationItem item)         {             Volcano volcano = null;             string title = item.Title.Text;             string desc = item.Summary.Text;             double? latitude = null;             double? longitude = null;               FeedReader.GetGeoRssPoint(item, out latitude, out longitude);               if ( !String.IsNullOrEmpty(title) )             {                 title = title.Substring(0, title.IndexOf('-'));             }             if ( !String.IsNullOrEmpty(desc) )             {                 desc = String.Join("\n\n", desc                         .Replace("<p>", "")                         .Split(                             new string[] { "</p>" },                             StringSplitOptions.RemoveEmptyEntries)                         .Select(s => s.Trim())                         .ToArray())                         .Trim();             }               if ( latitude != null && longitude != null )             {                 volcano = new Volcano()                 {                     Id = item.Id,                     Title = title,                     Description = desc,                     Url = item.Links.Select(l => l.Uri.OriginalString).FirstOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault()                 };             }               return volcano;         } I then added the corresponding LoadVolcanos() method and Volcanos collection to the EarthquakeViewModel class in much the same way I did with the Earthquakes in my previous article in this series. 
Now that I am starting to add more information to the map, I wanted to give the user some options as to what is displayed and allowing them to choose what gets turned off.  I have updated the MainPage.xaml to look like this:   <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:basic="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>           <DataTemplate x:Key="VolcanoTemplate">             <Polygon Fill="Gold" Stroke="Black" StrokeThickness="1" Points="0,10 5,0 10,10"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center"                      MouseLeftButtonUp="Volcano_MouseLeftButtonUp">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="Click icon for more information..." 
/>                     </StackPanel>                 </ToolTipService.ToolTip>             </Polygon>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">               <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />               <bing:MapItemsControl ItemsSource="{Binding Volcanos}"                                   ItemTemplate="{StaticResource VolcanoTemplate}" />         </bing:Map>           <basic:TabControl x:Name="tabs" VerticalAlignment="Bottom" MaxHeight="25" Opacity="0.7">             <basic:TabItem Margin="90,0,-90,0" MouseLeftButtonUp="TabItem_MouseLeftButtonUp">                 <basic:TabItem.Header>                     <TextBlock x:Name="txtHeader" Text="Options"                                FontSize="13" FontWeight="Bold" />                 </basic:TabItem.Header>                   <StackPanel Orientation="Horizontal">                     <TextBlock Text="Earthquakes:" FontWeight="Bold" Margin="3" />                     <StackPanel Margin="3">                         <CheckBox Content=" &lt; 4.0"                                   IsChecked="{Binding ShowLt4, Mode=TwoWay}" />                         <CheckBox Content="4.0 - 4.9"                                   IsChecked="{Binding Show4s, Mode=TwoWay}" />                         <CheckBox Content="5.0 - 5.9"                                   IsChecked="{Binding Show5s, Mode=TwoWay}" />                     </StackPanel>                       <StackPanel Margin="10,3,3,3">                         <CheckBox Content="6.0 - 6.9"                                   IsChecked="{Binding Show6s, Mode=TwoWay}" />                         <CheckBox Content="7.0 - 7.9"                                   IsChecked="{Binding Show7s, Mode=TwoWay}" />                         <CheckBox Content="8.0 +"                                   IsChecked="{Binding ShowGe8, Mode=TwoWay}" />                     </StackPanel>                       <TextBlock Text="Other:" FontWeight="Bold" Margin="50,3,3,3" />                     <StackPanel Margin="3">                         <CheckBox Content="Volcanos"                                   IsChecked="{Binding ShowVolcanos, Mode=TwoWay}" />                     </StackPanel>                 </StackPanel>               </basic:TabItem>         </basic:TabControl>       </Grid> </UserControl> Notice that I added a VolcanoTemplate that uses a triangle-shaped Polygon to represent the Volcano locations, and I also added a second <bing:MapItemsControl /> tag to the map to bind to the Volcanos collection.  The TabControl found below the map houses the options panel that will present the user with several checkboxes so they can filter the different points based on type and other properties (i.e. Magnitude).  Initially, the TabItem is collapsed to reduce it's footprint, but the screen shot below shows the options panel expanded to reveal the available settings:     I have updated the Source Code and Live Demo to include these new features.   Happy Mapping!

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service.  If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options?

PaaS

Option 1: Worker Roles
Use a worker role to schedule and execute actions at specific time periods.  There are a few frameworks available to assist with this:
http://azuretoolkit.codeplex.com
https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler
http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily).

I found the Azure Toolkit option the simplest to implement.

Step 1: Create a domain entity implementing IJob for each job to schedule.  In this sample, I asynchronously call a WCF service method.

namespace Acme.WorkerRole.Jobs
{
    using AzureToolkit;
    using ScheduledTasksService;

    public class UploadEmployeesJob : IJob
    {
        public void Run()
        {
            // Call Tasks Service
            var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
            client.UploadEmployees();
            client.Close();
        }
    }
}

Step 2: In the worker role Run method, add the jobs to the toolkit engine.

namespace Acme.WorkerRole
{
    using AzureToolkit.Engine;
    using Jobs;

    public class WorkerRole : WorkerRoleEntryPoint
    {
        public override void Run()
        {
            var engine = new CloudEngine();

            // Add Scheduled Jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference).

            // 1. Upload Employee job - 8.00 PM every weekday (Mon-Fri)
            engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
            // 2. Purge Data job - 10 AM every Saturday
            engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
            // 3. Process Exceptions job - Every 5 minutes
            engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

            engine.Run();
            base.Run();
        }
    }
}

Pros: The Azure Toolkit option is simple to implement.
Cons: For the AzureToolkit option, you are limited to a single worker role; otherwise, the jobs will be executed multiple times, once for each worker role instance.  You are also paying for a continuously running worker role, even if it just processes a single job once a week.  If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient.  However, even an extra small worker role still costs $14.40/month (03/09/2012).

Option 2: Use Scheduled Task on Azure Web Role calling a console app
Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here:
http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/
http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/
http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx

Pros: Fairly easy to implement.
Cons: Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started. I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern. Scalability - multiple instances would trigger multiple tasks. You can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: if we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console & task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

SaaS

Option 3: Azure Marketplace
I thought that someone might be offering this type of service via the Azure Marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/
Cons: Nobody currently offers this on the Azure Marketplace.

Option 4: Online Job Scheduling Service Provider
There are plenty of online providers that offer this type of service on a pay-as-you-go approach.  Some of these are free for small usage.  Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron
Pros: No bespoke development for the scheduler.
Cons: Reliance on a third party.

IaaS

Option 5: Setup Scheduling Software on Azure IaaS VMs
One of the job scheduling software offerings could be installed and configured on Azure VMs.  A list of software options can be found here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software
Pros: Enterprise distributed\resilient task scheduling service.
Cons: VM setup and maintenance; software licence costs.

Option 6: VM Gallery
At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.
Cons: No current VM template.

Summary
For my current project, which had a small handful of tasks to schedule and a limited project budget, I chose Option 1 (a worker role using the Azure Toolkit to schedule tasks).  If I were building an enterprise-scale solution for the future, options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.
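One detail worth spelling out for Option 2: the console application invoked by the Windows Scheduled Task can be a very thin wrapper that simply calls the same WCF operations used by the Option 1 jobs. The sketch below is an assumption (the project name, the error handling, and the reuse of the ScheduledTasksServiceClient service reference are mine, not from the original post):

using System;
using ScheduledTasksService;   // assumed: the same WCF service reference used by the worker-role jobs

namespace Acme.TaskRunner
{
    // Hypothetical console entry point that the Windows Scheduled Task would call.
    internal static class Program
    {
        private static int Main()
        {
            var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
            try
            {
                client.UploadEmployees();   // same operation the UploadEmployeesJob calls
                client.Close();
                return 0;                   // success exit code recorded in the task history
            }
            catch (Exception)
            {
                client.Abort();             // clean up the channel on failure
                return 1;                   // non-zero so the scheduled task is recorded as failed
            }
        }
    }
}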

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best – Part 2

    - by pinaldave
This blog post is part 2 of the earlier article SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best. Paulo R. Pereira has left an excellent comment on that article, once again proving the point that the SQL Server engine is smart enough to figure out the best plan itself and use it for the query. Let us go over the comment he posted. “I think IN or EXISTS is the best choice, because there is a little difference between ‘Merge Join’ of query with JOIN (Inner Join) and the others options (Left Semi Join), and JOIN can give more results than IN or EXISTS if the relationship is 1:0..N and not 1:0..1. And if I try use NOT IN and NOT EXISTS the query plan is different from LEFT JOIN too (Left Anti Semi Join vs. Left Outer Join + Filter). So, I found a case where EXISTS has a different query plan than IN or ANY/SOME:”

USE AdventureWorks
GO
-- use of SOME
SELECT * FROM HumanResources.Employee E
WHERE E.EmployeeID = SOME (
    SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA )
-- use of IN
SELECT * FROM HumanResources.Employee E
WHERE E.EmployeeID IN (
    SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA )
-- use of EXISTS
SELECT * FROM HumanResources.Employee E
WHERE EXISTS (
    SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA )

When we look into the execution plans of the queries listed above, we indeed get different plans, and the SQL Server engine creates the best (least cost) plan for each query. Click on the image to see a larger version. Thanks Paulo for your wonderful contribution. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
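If you want to reproduce this comparison yourself, the plans can also be captured in T-SQL rather than through the graphical plan. The snippet below uses standard SQL Server SET options (it is not from the original post); run each of the three queries under it and compare the output:

-- Return the estimated XML plan instead of executing the query
SET SHOWPLAN_XML ON;
GO
SELECT * FROM HumanResources.Employee E
WHERE EXISTS (
    SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA );
GO
SET SHOWPLAN_XML OFF;
GO
-- Or execute the queries and compare logical reads and CPU/elapsed time
SET STATISTICS IO ON;
SET STATISTICS TIME ON;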

    Read the article

  • How To Restore Firefox Options To Default Without Uninstalling

    - by Gopinath
Firefox plugins are awesome, and they are the pillars of the huge success of the Firefox browser. Plugins vary from simple ones, like changing the color scheme of the browser, to powerful ones, like changing the behavior of the browser itself. Recently I installed one of the powerful Firefox plugins and played around with it to tweak the behavior of the browser. After half an hour of playing, Firefox had become completely useless and stopped rendering web pages properly. To continue using Firefox I had to restore it to its default settings. But I didn't want to uninstall and then install it again, as that is a time-consuming process and I would also lose all the plugins I'm using. How did I restore the default settings in a single click?

Default Settings Restore Through Safe Mode Options
It's very easy to restore the default settings of Firefox with the safe mode options. All we need to do is:
1. Close all the Firefox browser windows that are open
2. Launch Firefox in safe mode
3. Choose the option "Reset all user preferences to Firefox defaults"
4. Click on the "Make Changes and Restart" button

Note: When Firefox restores the default settings, it erases all the stored passwords, browser history and other customizations you have made. That's all. This excellent feature of Firefox saved me from great pain, and I hope it's going to help you too. Join us on Facebook to read all our stories right inside your Facebook news feed.
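A quick note on step 2 above: Firefox can be started in safe mode either by holding down the Shift key while launching it on Windows, or from the command line with the safe-mode switch. The one-liner below assumes the firefox executable is on your PATH; adjust the path for your system as needed:

firefox -safe-mode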

    Read the article

  • Oracle WebCenter Partner Program

    - by kellsey.ruppel
    In competitive marketplaces, your company needs to quickly respond to changes and new trends, in order to open opportunities and build long-term growth. Oracle has a variety of next-generation services, solutions and resources that will leverage the differentiators in your offerings. Name your partnering needs: Oracle has the answer. This week we’d like to focus on Partners and the value your organization can gain from working with the Oracle PartnerNetwork. The Oracle PartnerNetwork will empower your company with exceptional resources to distinguish your offerings from the competition, seize opportunities, and increase your sales. We’re happy to welcome Christine Kungl, and Brian Buzzell, from Oracle’s World Wide Alliances & Channels (WWA&C) WebCenter Partner Enablement team, as today’s guests on the Oracle WebCenter blog. Q: What is the Oracle PartnerNetwork (OPN)?A: Christine: Oracle’s PartnerNetwork (OPN) is a collaborative partnership which allows registered companies specific added value resources to help differentiate themselves from their competition. Through OPN programs it provides companies the ability to seize and target opportunities, educate and train their teams, and leverage unparalleled opportunity given Oracle’s large market footprint. OPN’s multi-level programs are targeted at different levels allowing companies to grow and evolve with Oracle based on their business needs.  As part of their OPN memberships partners are encouraged to become OPN Specialized allowing those partners additional differentiation in Oracle’s Partner Network Community.  Q: What is an OPN Specialization and what resources are available for Specialized Partners?A: Brian: Oracle wanted a better way for our partners to differentiate their special skills and expertise, as well a more effective way to communicate that difference to customers.  Oracle’s expanding product portfolio demanded that we be able to identify partners with significant product knowledge—those who had made an investment in Oracle and a continuing commitment to deliver Oracle solutions. And with more than 30,000 Oracle partners around the world, Oracle needed a way for our customers to choose the right partner for their business. So how did Oracle meet this need? With the new partner program:  Oracle PartnerNetwork (OPN) Specialized. In this new program, Oracle partners are: Specialized :  Differentiating themselves from the competition with expertise that set them apart Recognized:  Being acknowledged for investing in becoming Oracle experts in specialized areas. Preferred :  Connecting with potential customers who are seeking  value-added solutions for their business OPN Specialized provides all partners with educational opportunities, training, and tools specially designed to build competency and grow business.  Partners can serve their customers better through key resources:OPN Specialized Knowledge Zones – Located on the updated and enhanced OPN portal— provide a single point of entry for all education and training information for Oracle partners. Enablement 2.0 Resources —Enablement 2.0 helps Oracle partners build their competencies and skills through a variety of educational opportunities and expanded training choices. These resources include: Enablement 2.0 “Boot camps” provide three-tiered learning levels that help jump-start partner training The role-based training covers Oracle’s application and technology products and offers a combination of classroom lectures, hands-on lab exercises, and case studies. 
Enablement 2.0 Interactive guided learning paths (GLPs) with recommendations on how to achieve specialization
Upgraded partner solution kits
Enhanced, specialized business centers available 24/7 around the globe on the OPN portal

OPN Competency Center—Tracking Progress
The OPN Competency Center keeps track as a partner applies for and achieves specialization in selected areas. You start with an assessment that compares your organization’s current skills and experience with the requirements for specialization in the area you have chosen. The OPN Competency Center then provides a roadmap that itemizes the skills and the knowledge you need to earn specialized status. In summary, OPN Specialization not only includes key training resources but also a way to track and show progression for your partner organization.

Q: What are the OPN Membership Levels and what are the benefits?
A: Christine: The base OPN membership levels are:

Remarketer: At the Remarketer level, retailers can choose to resell select Oracle products with the backing of authorized, regionally located, value-added distributors (VADs). The Remarketer level has no fees and no partner agreement with Oracle, but does offer online training and sales tools through the OPN portal. Program Details: Remarketer

Silver Level: The Silver level is for Oracle partners who are focused on reselling and developing business with products ordered through the Oracle 1-Click Ordering Program. The Silver level provides a cost-effective, yet scalable way for partners to start an OPN Specialized membership and offers a substantial set of benefits that lets partners increase their competitive positioning. Program Details: Silver

Gold Level: Gold-level partners have the ability to specialize, helping them grow their business and create differentiation in the marketplace. Oracle partners at the Gold level can develop, sell, or implement the full stack of Oracle solutions and can apply to resell Oracle Applications. Program Details: Gold

Platinum Level: The Platinum level is for Oracle partners who want the highest level of benefits and are committed to reaching a minimum of five specializations. Platinum partners are recognized for their expertise in a broad range of products and technology, and receive dedicated support from Oracle. Program Details: Platinum

In addition, we recently introduced a new level:

Diamond Level: This level is the most prestigious level of OPN Specialized. It allows companies to differentiate further because of the focused depth and breadth of their expertise. Program Details: Diamond

So as you can see, there are various cost-effective ways, at every level, for partners to get assistance and differentiation through OPN membership.

Q: What role do Oracle's World Wide Alliances & Channels (WWA&C), the Partner Enablement teams and the WebCenter Community play?
A: Brian: Oracle’s WWA&C teams are responsible for managing relationships, educating their teams, creating go-to-market solutions and fostering communities for Oracle partners worldwide.  The WebCenter Partner Enablement Middleware Team is tasked with creating, managing and distributing Specialization resources for the WebCenter Partner community.
Q: What WebCenter Specializations are currently available?
A: Christine: As of now, here are the WebCenter Specializations and their availability:

Oracle WebCenter Portal Specialization (Oracle WebCenter Portal): Available Now
The Oracle WebCenter Specialization provides insight into the following products: WebCenter Services, WebCenter Spaces, and WebLogic Portal. Oracle WebCenter Specialized Partners can efficiently use Oracle WebCenter products to create social applications, enterprise portals, communities, composite applications, and Internet or intranet Web sites on a standards-based, service-oriented architecture (SOA). The suite combines the development of rich internet applications; a multi-channel portal framework; and a suite of horizontal WebCenter applications, which provide content, presence, and social networking capabilities to create a highly interactive user experience.

Oracle WebCenter Content Specialization: Available Now
The Oracle WebCenter Content Specialization provides insight into the following products: Universal Content Management, WebCenter Records Management, WebCenter Imaging, WebCenter Distributed Capture, and WebCenter Capture. Oracle WebCenter Content Specialized Partners can efficiently build content-rich business applications, reuse content, and integrate hundreds of content services with other business applications. This allows our customers to decrease costs, automate processes, reduce resource bottlenecks, share content effectively, minimize the number of lost documents, and better manage risk.

Oracle WebCenter Sites Specialization: Available Q1 2012
Oracle WebCenter Sites is part of the broader Oracle WebCenter platform that provides organizations with a complete customer experience management solution.  Partners that align with the new Oracle WebCenter Sites platform allow their customers' organizations to: leverage customer information from all channels and systems; manage interactions across all channels; unify commerce, merchandising, marketing, and service across all channels; provide personalized, choreographed consumer journeys across all channels; and integrate order orchestration, supply chain management and order fulfillment.

Q: What criteria does the Partner organization need to achieve Specialization? What about individual Sales, PreSales & Implementation Specialist/Technical consultants?
A: Brian: Each Oracle WebCenter Specialization has unique Business Criteria that must be met in order to achieve that Specialization.  This includes a unique number of transactions (co-sell, re-sell, and referral), customer references, and a unique number of specialists as part of a partner team (Sales, Pre-Sales, Implementation, and Support).  Each WebCenter Specialization provides training resources (GLPs, BootCamps, Assessments and Exams) for individuals on a partner's staff to fulfill those requirements.  The criteria can be found for each Specialization on the Specialize tab of each WebCenter Knowledge Zone.  Here are the sample criteria, recommended courses, and exams for the WebCenter Portal Specialization: WebCenter Portal Specialization Criteria

Q: Do you have any suggestions on the best way for partners to get started if they would like to know more?
A: Christine: The best way for partners to start is to look at their business and core Oracle team focus and then look to become specialized in one or more areas.  Once you have selected the Specializations that are right for your business, you need to follow the first 3 key steps described below.
The fourth step outlines the additional process to follow if you meet the criteria to be Advanced Specialized. Note that Step 4 may not be done without first following Steps 1-3.

1. Join the Knowledge Zone(s) where you want to achieve Specialized status
Go to the Knowledge Zone
Click on the "Why Partner" tab
Click on the "Join Knowledge Zone" link

2. Meet the Specialization criteria - Define and implement plans in your organization to achieve the competency and business criteria targets of the Specialization. (Note: Worldwide OPN members at the Gold, Platinum, or Diamond level and their Associates at the Gold, Platinum, or Diamond level may count their collective resources to meet the business and competency criteria required for specialization in this area.)

3. Apply for Specialization – when you have met the business and competency criteria required, inform Oracle by completing the following steps:
Click on the "Specialize" tab in the Knowledge Zone
Click on the "Apply Now" button
Complete the online application form
Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Specialized status. Need more information? Access our Step by Step Guide (PDF)

4. Apply for Advanced Specialization (Optional) – If your company has on staff 50 unique Certified Implementation Specialists in your company's approved Specialization's product set, let Oracle know by following these steps:
Ensure that you have 50 or more unique individuals that are Certified Implementation Specialists in the specific Specialization awarded to your company
If you are pooling resources from another Associate or Worldwide entity, ensure you know that company's name and country
Have your Oracle PRM Administrator complete the online Advanced Specialization Application
Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Advanced Specialized status.

There are additional resources on OPN as well as the broader WebCenter Community.

    Read the article
