Search Results

Search found 9744 results on 390 pages for 'k means'.


  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world, though, there are going to be a handful of "special cases" that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

    There is a reason it's called Default

    The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: the default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

    Aside: What's up with the 1.3 MB? I thought you said the buffer was 4 MB. The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into three equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

    Using this configuration, the Extended Events engine can "keep up" with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed, there is an economy of scale, since most targets support bulk processing of events: they are processed at the buffer level rather than the individual event level. When all this is working together, it's more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.

    I know what you're going to say: "My server is exceptional! My workload is so massive it defies categorization!" OK, maybe you weren't going to say that exactly, but you were probably thinking it. The point is that there are situations that won't be covered by the default, but that's a good place to start, and this post assumes you've started there so that you have something to look at in order to determine whether you have a special case that needs different settings. So let's get to the special cases…

    What event just fired?! How about now?! Now?!

    If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you're one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you're comfortable waiting. If you find yourself in this situation, consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. It only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you're using one of the synchronous targets this option isn't relevant.

    Avoid forgotten events by increasing your memory

    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. To optimize for server performance and to help ensure that Extended Events doesn't block the server, events that can't be published to a buffer, because the buffer is full, are dropped. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

    Aside: Should you care if you're dropping events? Maybe not – think about why you're collecting data in the first place and whether you're really going to miss a few dropped events. For example, if you're collecting query duration stats over thousands of executions of a query, it won't make a huge difference to miss a couple of executions. Use your best judgment.

    If you find that your session is dropping events, it means that the event buffer is not large enough to handle the volume of events being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over-collecting. Do you need all the actions you've specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you've collected.

    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss, and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You'll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge, since it can result in blocking on the server. If you're worried that you're losing events, you should be increasing your event buffer memory as described in this section.

    Some events are too big to fail

    A less common reason for dropping an event is when an event is so large that it can't fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again; this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is MAX_MEMORY / 3], then you need a larger event buffer. To specify a larger event buffer, set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option, the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge), the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: this is just a side effect, not the intended use. If you're dropping many normal events then you should increase your normal event buffer size.)

    Partitioning: moving your events to a sub-division

    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

    You can configure partitioning in three ways: None, Per NUMA Node, and Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers, specific to the mode used:

        None: 3 buffers (fixed)
        Node: 3 * number_of_nodes
        CPU:  2.5 * number_of_cpus

    Here are some examples of what this means for different Node/CPU counts:

        Configuration       None        Node        CPU
        2 CPUs, 1 Node      3 buffers   3 buffers   5 buffers
        6 CPUs, 2 Nodes     3 buffers   6 buffers   15 buffers
        40 CPUs, 5 Nodes    3 buffers   15 buffers  100 buffers

    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller, because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while, since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you'll either see events being dropped or you'll get an error when you create your session because you're below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [yes, there is a reasonably expected level of work required to collect events], then partitioning might be an answer. Before you partition you might want to check a few other things:

    - Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    - Are you over-collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    - Are you writing the file target to the same slow disk that you use for TempDB and your other high-activity databases? <kidding> <not really> It's always worth considering the end-to-end picture – if you're writing events to a file you can be impacted by I/O, network; all the usual stuff.

    Assuming you've ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it's possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that's where the art comes in. This is largely a matter of experimentation. On the bright side, you probably won't need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the "reasonably expected impact" category.

    Hey buddy – I think you forgot something

    OK, there are two options I didn't cover: STARTUP_STATE and TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn't cover.) I'm going to leave causality for another post, since it's not really related to session behavior; it's more about event analysis.

    - Mike
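
    To make the options concrete, here is a sketch of the DDL discussed above. It is illustrative only: the event, action, and target names assume the standard SQL Server 2008 packages, and the option values are examples rather than recommendations.

        -- Illustrative session: raise MAX_MEMORY, process buffers at least
        -- every 5 seconds, keep the default retention mode, and start the
        -- session automatically when the server starts.
        CREATE EVENT SESSION [options_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text))
        ADD TARGET package0.ring_buffer
        WITH (
            MAX_MEMORY = 8 MB,                              -- divided into (at least) 3 buffers
            MAX_DISPATCH_LATENCY = 5 SECONDS,               -- for the anxious DBA
            EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- the default; keep it
            MEMORY_PARTITION_MODE = NONE,                   -- or PER_NODE / PER_CPU to partition
            STARTUP_STATE = ON                              -- start with the server
        );

        -- Check for dropped events and oversized events as described above.
        SELECT name, dropped_event_count, largest_event_dropped_size
        FROM sys.dm_xe_sessions;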

    Read the article

  • Visual Studio Shortcut: Surround With

    - by Jeff Widmer
    I learned a new Visual Studio keyboard shortcut today that is really awesome; the “Surround With” shortcut.  You can trigger the Surround With context menu by pressing the Ctrl-K, Ctrl-S key combination when on a line of code. Ctrl-K, Ctrl-S means to hold down the Control key and then press K and then while still holding down the Control key press S. Here is where this comes in handy: You type a line of code and then realize you need to put it within an if statement block. So you type “if” and hit tab twice to insert the if statement code snippet.  Then you highlight the previous line of code that you typed, and then either drag and drop it into the if-then block or cut and paste it.  That is not too bad but it is a lot of extra key clicks and mouse moves. Now try the same with the Surround With keyboard shortcut.  Just highlight that line of code that you just typed and press Ctrl-K, Ctrl-S and choose the if statement code snippet, hit tab, and POW!... you are done!  No more code moving/indenting required. Here is what the Surround With context menu looks like: Just up or down arrow inside the drop down list to the code snippet that you want to surround your currently selected text with.  Did I mention this is AWESOME! Now it is so simple to surround lines of code with an if-then block or a try-catch-finally block... things that usually took several key clicks and maybe one or two mouse moves. And this works in both Visual Studio 2008 and Visual Studio 2010 which means it has been around for a long time and I never knew about it.   Technorati Tags: Visual Studio Keyboard Shortcut
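
    For illustration, here is roughly what the round trip looks like in C# (a sketch; the exact expansion depends on the snippet you pick, and the built-in C# "if" snippet defaults its condition to true):

        // Before: the line you just typed, now selected
        DoSomething();

        // After Ctrl-K, Ctrl-S and choosing the "if" snippet:
        if (true)
        {
            DoSomething();
        }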

    Read the article

  • Free Windows Azure event next Monday in London (29th March)

    - by Eric Nelson
    I just heard that we still have spaces for this event happening next week (29th March 2010). Whilst the event is designed for start-ups, I’m sure nobody would notice if you snuck in :-) Just keep it to yourself ;-) Register using invitation code: 79F2AB. Hope to see you there. The agenda is looking pretty swish:

    09:00 – 09:30  Registration
    09:30 – 10:15  Keynote: ‘I’ve looked at clouds from both sides now....’ – John Taysom, Active Seed Investor
    10:15 – 10:45  The Microsoft Vision for Cloud Computing – Steve Clayton, Director Software + Services, EMEA
    10:45 – 11:00  Break
    11:00 – 12:30  “Windows Azure in the Real World” – hear from startups that have built their business around the Azure platform, moderated by Alistair Beagley, Azure UK Developer and Platform Lead
    12:30 – 13:15  Lunch and networking
    13:15 – 14:15  Breakout Tracks, moderated by our Azure Experts:
                   1. Windows Azure Technical Overview – David Gristwood, Application Architect, Microsoft
                   2. SQL Azure Technical Overview – Eric Nelson, Application Architect, Microsoft
                   3. Commercial insight into Windows Azure and what this means for BizSpark Start-ups – Simon Karn, Commercial Lead, UK Windows Azure Incubation Team, Microsoft
    14:15 – 14:30  Session changeover
    14:30 – 15:30  Breakout Tracks, moderated by our Azure Experts:
                   1. SQL Azure Technical Overview (repeat) – Eric Nelson, Application Architect, Microsoft
                   2. Deep dive into Windows Azure – Neil Kidd, Architect, Microsoft Technology Centre
                   3. Lessons Learnt – Windows Azure in the Real World interactive session – two customers hosted by Matt Deacon, Enterprise Architect, Microsoft
    15:30 – 16:00  Break & session changeover
    16:00 – 17:00  Breakout Tracks, moderated by our Azure Experts:
                   1. PHP / Ruby on Azure – Simon Davies, Architect, UK Windows Azure Incubation Team, Microsoft
                   2. Commercial insight into Windows Azure and what this means for BizSpark Start-ups (repeat) – Simon Karn, Commercial Lead, UK Windows Azure Incubation Team, Microsoft
                   3. Lessons Learnt – Windows Azure in the Real World interactive session #2 – two customers hosted by Matt Deacon, Enterprise Architect, Microsoft
    17:00 – 18:00  Pitches and Judging
    18:15          Wrap-up and close
    18:15 – 20:00  Drinks & Networking

    Read the article

  • My software is hosted on a "bad" website. Can I do anything about it?

    - by Abluescarab
    The software I've created is hosted on what you could call a "bad" website. It's hard to explain, so I'll just provide an example. I've made a free password generator. This, along with most of my other FREE software, is available on this website. This is their description of my software:

        Platform: 7/7 x64/Windows 2K/XP/2003/Vista
        Size: 61.6 Mb
        License: Trial
        File Type: .7z
        Last Updated: June 4th, 2011, 15:38 UTC
        Avarage Download Speed: 6226 Kb/s
        Last Week Downloads: 476
        Toatal Downloads: 24908

    Not only is the size completely skewed, it is not trial software, it's free software. The thing is that it's not the description I'm worried about--it's the download links. The website is a scam website. They apparently link to "cracks" and "keygens", but not only is that in itself illegal, they actually link to fake download websites that give you viruses and charge your credit card. Just to list things that are wrong with this website: they claim all software is paid software, then offer downloads for keygens and cracks; they fake all details about the program and any program reviews and ratings; they and the download site they link to are probably run by the same person, so they make money off of these lies. I'm only a teenager with no means to pursue legal action. This means that, unfortunately, I can't do anything that will actually get results. I'd like my software to only be downloaded off my personal website. I have links to four legitimate locations to download my software and that's it. Essentially, is there anything I can do about this? As I said above, I can't pursue legal action, but is there some way I can discourage traffic to that website by blacklisting it or something? Can I make a claim on MY website to only download my software from the links I provide? Or should I just pay no mind? Because, honestly, it's a bit of a ways back in Google results. Thank you ahead of time.

    Read the article

  • Time Travel 101

    - by Jim Duffy
    I’m thinking maybe I should have used Time Crunching 101 as the title instead… or maybe “Duh Duffy, where have you been? Everyone knows that!” Ok, so maybe you won’t actually learn how to travel through time from this post but you will learn how to cram more learning into one day. We all know you can’t make it to every conference, every presentation, or every training session. The good news is that many of those events make their content available to either watch online or to download for off-line viewing. The problem is who has time to sit and watch all those presentations in real time? Not me. One trick I use is to view the content at an increased play rate. Why listen to a boring speaker like me drone on for the entire length of the session when you can listen to them drone on in almost half the time? :-) I view nearly all off-line content with Windows Media Player, though I’m sure you can implement this idea with any media playback software. The idea is changing the playback speed you view the content at. With Windows Media Player you can change the play speed from the menu system. Once you have the Play Speed Settings panel open you can specify the playback speed. Depending on the content and the presenter, I can typically listen at between 1.6 and 2.0 times normal speed. My Florida edumacation taught me that playing the video back at twice the speed means I’ll listen to it twice as fast, and that means I can view it in almost 1/2 the time. Too bad it won’t make me twice as smart. :-) I hope this helps you speed your way through more training content. Have a day. :-|

    Read the article

  • Lucene and .NET Part I

    - by javarg
    I’ve been playing around with Lucene.NET, trying to get a feel for what is required to develop and implement a full business application using it. As you would imagine, many things are required to implement a robust solution for indexing content and searching it afterwards. Lucene is a great and robust solution for indexing content. It offers a fast, performance-enhanced search engine library available in Java and .NET. You will want to use this library in several particular scenarios:

    - In Windows Azure, to support Full Text Search (a functionality not currently supported by SQL Azure)
    - When storing files outside of, or not managed by, your database (as in large document storage solutions that use the file system)
    - When Full Text Search is not really what you need

    Lucene is more than a Full Text Search solution. It has several analyzers that let you process and search content in different ways (decomposing sentences, deriving words, removing articles, etc.). When deciding to implement indexing using Lucene, you will need to take the following into account:

    - How content is to be indexed by Lucene and when: using a service that runs after a specific interval, or immediately when content changes
    - When content is to be available for searching (the availability of indexed content, as in real-time content search): immediately when content changes (near real-time searching), or after a few minutes
    - Ease of maintainability and development

    Some technical concerns: when indexing content, indexes are locked for writing operations by the Index Writer. This means that Lucene is best designed to index content using a single-writer approach. When searching, Index Readers take a snapshot of indexes. This has the following implications:

    - Setting up an index reader is a costly task. You are not supposed to create one for each query or search. A good practice is to create readers and reuse them for several searches.
    - The latter means that even when the content gets updated, you won't be able to see the changes. You will need to recycle the reader.

    In the second part of this post we will review some alternatives and design considerations.
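
    As a rough illustration of the single-writer / recycled-reader pattern described above, here is a minimal C# sketch. It assumes the Lucene.Net 2.9-style API (FSDirectory, IndexWriter, IndexSearcher, IndexReader.Reopen); the paths and names are invented, and error handling is omitted.

        using System.IO;
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Index;
        using Lucene.Net.Search;
        using Lucene.Net.Store;

        public class SearchService
        {
            private readonly IndexWriter writer;  // single writer: the index is locked for writes
            private IndexSearcher searcher;       // reused across queries; one per query is too costly

            public SearchService(string indexPath)
            {
                var dir = FSDirectory.Open(new DirectoryInfo(indexPath));
                writer = new IndexWriter(dir,
                    new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29),
                    IndexWriter.MaxFieldLength.UNLIMITED);
                searcher = new IndexSearcher(dir, true);  // snapshot of the index at this point
            }

            // Call after a batch of writes: commit, then recycle the reader
            // so searches can see the new content.
            public void RecycleSearcher()
            {
                writer.Commit();
                IndexReader oldReader = searcher.GetIndexReader();
                IndexReader newReader = oldReader.Reopen();  // cheap if nothing has changed
                if (newReader != oldReader)
                {
                    searcher = new IndexSearcher(newReader);
                    oldReader.Close();
                }
            }
        }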

    Read the article

  • C#.NET vs VB.NET, Which language is better?

    Features

    I cannot say any language is good or bad, as long as its compiler can produce MSIL that runs under the .NET CLR. If someone says C# has more features, you should understand that those new features belong to the C# compiler, not to .NET, because if C# had a feature specific to itself, the CLR could not understand it. The new features of C# have to compile down to code understood by the CLR eventually; they exist to let developers write their code in a better way. So there is no difference in the feature list between C# and VB.NET if you think from the CLR's perspective.

    Ease of writing code

    I feel writing code in C# is easy, because my background is C, C++ and Java, and the syntaxes are very similar. I assume most developers feel the same.

    Readability

    Some people say VB.NET code is more readable for team members who come from a non-technical background, because its keywords are generally English words rather than special characters.

    Number of projects in the market

    I assume 80 percent of the market uses C# in their .NET development. For example, in my company there are many .NET projects and all of them use C#.

    Productivity & experience

    Though the feature list is the same, developers generally want to write code in the language they are familiar with, because it increases productivity. I hope this helps you choose the language which suits you.

    Read the article

  • links for 2010-05-20

    - by Bob Rhubart
    @pevansgreenwood: People don’t like change. (Or do they?) "Creating a culture that embraces change, means changing the way we think about and structure our organisations and our careers. It means rethinking the rules of enterprise IT." -- Peter Evans Greenwood (tags: enterprisearchitecture change innovation) Karim Berrah: After IRON MAN 2 "Nice demo of a robot serving a cup of coffee, from a Swiss based engineering company, NOSAKI, I visited last week. This movie is not a fiction (like IRON MAN 2) and is really powered by an Oracle Database." -- Karim Berrah (tags: oracle solaris ironman2 nosake) @myfear: Spring and Google vs. Java EE 6 "While Spring and Rod Johnson in particular have been extremely valuable in influencing the direction of Java (2)EE after the 1.4 release to the new, much more pragmatic world of Java EE 5, Spring has also caused polarization and fragmentation. Instead of helping forge the Java community together, it has sought to advanced its own cause." Oracle ACE Director Markus Eisele (tags: google javaee spring oracleace java) Arup Nanda: Mining Listener Logs Listener logs contain a wealth of information on security events. Oracle ACE Director Arup Nanda shows you how to create an external table to read the listener logs using simple SQL. (tags: otn oracle oracleace sql security)

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard locking up issue in Visual Studio.  So far it looks like not a lot of people have replied to provide concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices.  I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I’m going to do both, so while I’m waiting for more possible bug reports, I’m going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I’ve chosen to go with this path since given the information in the bug reports, it seems people hit the issue with many different configurations in many different scenarios.  This means that me sitting down without any solid repro steps is likely not going to be a good use of time.  Instead, I’m going to go with a model-based testing approach where I will define a series of actions that a user in VS can do, and then proceed to run my model.  I’ll let you guys know how this works out for isolating bugs :) I’m using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off).  One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure means.
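
    To illustrate the model-based loop described above, here is a hypothetical C# sketch: pick random user actions, execute them, and check an oracle after every step. The action stubs and the oracle are invented for illustration; the real model engine is an internal tool, and the UI driving would go through AutoIt.

        using System;
        using System.Collections.Generic;

        static class VsUserModel
        {
            // Stand-ins for AutoIt-driven UI actions.
            static void TypeText() { /* send keystrokes to the editor */ }
            static void OpenFile() { /* open a file via Solution Explorer */ }
            static void InvokeIntelliSense() { /* trigger a completion list */ }
            static void BuildSolution() { /* Ctrl+Shift+B */ }

            // The oracle: does the keyboard still respond?
            static bool KeyboardResponds() { return true; }

            static void Main()
            {
                var actions = new List<Action> { TypeText, OpenFile, InvokeIntelliSense, BuildSolution };
                var rng = new Random(42);  // fixed seed so a failing run can be replayed

                while (true)
                {
                    actions[rng.Next(actions.Count)]();  // random walk over the model's actions

                    if (!KeyboardResponds())
                        throw new Exception("Keyboard lock-up reproduced; replay the action trace");
                }
            }
        }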

    Read the article

  • Paper Gold Rush

    - by Chris G. Williams
    The last few days at the shop have been reminiscent of a marathon of Pawn Stars. Quite a few people have come in wanting to trade for store credit. Most of them have left disappointed. We did pick up a few things here and there (which hopefully I can sell.) The problem, in a nutshell, is that people get it in their head that a (YuGiOh) card is worth X amount because they looked it up 2-3 years ago, or someone told them it was valuable... then they play it in their deck for a year without sleeves, and cram it in a binder covered in duct tape. By the time they bring the cards in to me, new sets have come out which often de-value the tournament usefulness of the card from $20 to *maybe* 50 cents, in mint condition. Which means I can offer them about 10-15 cents... only they are almost never in mint condition, which means I usually offer them nothing at all. Most of the time, you can watch their smile fade as I start going through their cards. It's kinda sad, really, since I know they think they've spent the last two years walking around with the keys to their own personal gold mine. I don't really enjoy seeing that look on a child's face. I like kids and I remember those moments when perception and reality crashed headlong into each other. It was seldom pretty. So, when I'm talking to a child, I try to take it easy on them and give them some suggestions on how to better preserve their cards. Sometimes though, it's an adult. Depending on the situation, my response to them varies pretty broadly. Most of the time though, I still feel pretty bad when it doesn't go their way.

    Read the article

  • How to obtain flow while pair programming in agile development?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi. In short, it is what most people call getting into the "zone": you feel immersed in the task you are doing, you are in deep focus and concentration, and the task difficulty is just right for you while still being challenging. When people achieve flow, their productivity shoots up. Programming requires a great deal of mental focus, and programmers need to juggle several things in their mind at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes, sometimes hours, to get back into flow. I understand that one agile way of doing software development is called pair programming. This is promoted in Extreme Programming too. It means you put the whole software development team in one room so that communication is seamless, and you program with your pair because this way you get instant code reviews and fix bugs sooner. However, I have always had problems obtaining flow while doing pair programming because of the constant stream of interruptions. I'm thinking deeply about an issue, then all of a sudden someone from another pair asks me a question. My train of thought is completely lost. How can you obtain and keep flow while doing agile pair programming?

    Read the article

  • Cost to license characters or ships for a game

    - by Michael Jasper
    I am producing a game pitch document for a university game design class, and I am looking for examples of licensing costs for using characters or ships from other IP holders in a game. For example:

    - cost of using an X-Wing in a game, licensing from Lucas
    - cost of using the Enterprise in a game, licensing from Paramount
    - cost of using the Space Shuttle (if any), licensing from NASA

    EDIT: The closest information I can find is from an article about Knights of the Old Republic, but it isn't nearly specific enough for my needs:

        What Kotick means by Lucas being the principal beneficiary of the success of The Old Republic is that there are most likely clauses in the license agreement that give percentages, points, or another denomination of revenue out to Lucas and his people just for the Star Wars name, and that amount is presumed to be a great deal of money. Kotick is saying that because the cost of the license is so prohibitive, as he has personally had experience with in his position as CEO of Activision Blizzard, that EA will not be able to be profitable because of the hemorrhaging of money to the licensor.

    EDIT 2: Another vague source states that FOX uses a "five-figure rule" (presumably between $10,000 and $99,999):

        It seems FOX, like most studios, will not license individuals to create new works based upon their products. They will only commission individuals of their choosing if they elect to branch out into expanded product lines related to those licenses. Alternately, they are open to making the licensing available to large corporations with access to global markets, but only if those corporations agree to what Ms Friedman called a "five-figure guarantee". Presumably this means that the corporation seeking the licensing must agree to pay a 5-figure sum for that license, and be confident that their product will sell enough volume to recoup that fee, and to produce sufficient profits to make the acquisition worth their while.

    Thank you!

    Read the article

  • LinkedIn Woopsie with the Outlook 2010 Social Media Connector

    - by Martin Hinshelwood
    I have always used the LinkedIn toolbar for Outlook to sort out, upload and sync my contacts. Because of this I have over 2000 contacts in my contacts list that I sync with my phone, Plaxo, Live, Google and others. I got a surprise the other day when my LinkedIn account was suspended and I was unable to log in.

    Figure: Bad, account suspended

    So I contacted LinkedIn customer services to find out what the problem was, and here is the response:

        Dear Martin,

        We have recently noticed a large number of page searches and profile views through your LinkedIn account. We are aware that you may be using an automated or manual process to systematically view LinkedIn web pages. The information within LinkedIn is provided by our users for usage on the site only. In order to protect user privacy, our User Agreement prohibits using:

        1. Automated or manual means to view an excessively high number of profiles or mini-profiles.
        2. Automated means to run searches to collect or store data obtained from our site.

        We have placed a restriction on your account until you agree to stop using these or similar methods to view pages on LinkedIn. We look forward to your reply to discuss this further.

        Sincerely,
        LinkedIn Privacy Team

    It looks like LinkedIn has suspended my account because of something that their own component is doing! I do not know if this is an isolated case, or if it will happen more as more users get on Outlook 2010 and update to the new software, but watch out. Has anyone else been suspended who has installed the Office 2010 RTM and the LinkedIn add-on?

    Technorati Tags: Fail, LinkedIn, Outlook 2010

    Read the article

  • Is rotating the lead developer a good or bad idea?

    - by NickC
    I work on a team that has been flat organizationally since its creation several months ago. My manager is non-technical, and this means that our whole team is responsible for decision-making. My manager is beginning to realize that there are several benefits to having a lead developer, both for his sake (a single point of contact and a single responsible party for tasks) and ours (dispute resolution, organized technical guidance, etc.). Because the team has been flat, one concern is that picking one lead developer may discourage the others. A non-developer suggested to my manager that rotating the lead developer is a possible way to avoid this issue. One developer would be lead one month, another the next, and so on. Is this a good idea? Why or why not? Keep in mind that this means all developers would take a turn as lead — all developers are good, but not necessarily equally suited to leadership. And if it is not a good idea, how do I recommend that we avoid this approach without seeming like it's merely for selfish reasons?

    Read the article

  • snmpd agent sends duplicate traps

    - by jsnmp
    I am on Ubuntu 10.04.4 LTS, and I cannot upgrade to a higher version. I have installed the snmpd agent (NET-SNMP version 5.4.2.1) with an apt-get install snmpd command. When an event occurs which sends a trap, two traps are sent for each such event instead of one. For example, when I shut down the agent with command /etc/init.d/snmpd stop, two shutdown traps are sent to the destination host. If I then start the agent back up with command /etc/init.d/snmpd start, then two cold start traps are sent to the destination host. Is this a known issue? Is there a fix for this, or is there a configuration change that is needed to prevent the sending of the duplicate trap?

    These are the contents of the /etc/snmp/snmpd.conf file:

        rocommunity public
        authtrapenable 1
        trap2sink <trap destination hostname> public

    These are the contents of the /etc/default/snmpd file:

        # This file controls the activity of snmpd and snmptrapd

        # MIB directories. /usr/share/snmp/mibs is the default, but
        # including it here avoids some strange problems.
        export MIBDIRS=/usr/share/snmp/mibs

        # snmpd control (yes means start daemon).
        SNMPDRUN=yes

        # snmpd options (use syslog, close stdin/out/err).
        SNMPDOPTS='-Ls3d -Lf /dev/null -u snmp -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'

        # snmptrapd control (yes means start daemon). As of net-snmp version
        # 5.0, master agentx support must be enabled in snmpd before snmptrapd
        # can be run. See snmpd.conf(5) for how to do this.
        TRAPDRUN=no

        # snmptrapd options (use syslog).
        TRAPDOPTS='-Lsd -p /var/run/snmptrapd.pid'

        # create symlink on Debian legacy location to official RFC path
        SNMPDCOMPAT=yes

    Read the article

  • Why can't we capture the design of software more effectively?

    - by Ira Baxter
    As engineers, we all "design" artifacts (buildings, programs, circuits, molecules...). That's an activity (design-the-verb) that produces some kind of result (design-the-noun). I think we all agree that design-the-noun is a different entity than the artifact itself. A key activity in the software business (indeed, in any business where the resulting product artifact needs to be enhanced) is to understand the "design (the-noun)". Yet we seem, as a community, to be pretty much complete failures at recording it, as evidenced by the amount of effort people put into rediscovering facts about their code base. Ask somebody to show you the design of their code and see what you get. I think of a design for software as having:

    - An explicit specification for what the software is supposed to do and how well it does it
    - An explicit version of the code (this part is easy, everybody has it)
    - An explanation of how each part of the code serves to achieve the specification
    - A rationale as to why the code is the way it is (e.g., why a particular choice rather than another)

    What is NOT a design is a particular perspective on the code. For example [not to pick on UML specifically], UML diagrams are not designs. Rather, they are properties you can derive from the code, or arguably, properties you wish you could derive from the code. But as a general rule, you can't derive the code from UML. Why is it that after 50+ years of building software we don't have regular ways to express this? My personal opinion is that we don't have good ways to express it. Even if we do, most of the community seems so focused on getting "code" that design-the-noun gets lost anyway. (IMHO, until design becomes the purpose of engineering, with the artifact extracted from the design, we're not going to get around this.) What have you seen as means for recording designs (in the sense I have described)? Explicit references to papers would be good. Why do you think specific and general means have not been successful? How can we change this?

    Read the article

  • Is rotating the lead developer a good or bad idea?

    - by Renesis
    I work on a team that has been flat organizationally since its creation several months ago. My manager is non-technical, and this means that our whole team is responsible for decision-making. My manager is beginning to realize that there are several benefits to having a lead developer, both for his sake (a single point of contact and a single responsible party for tasks) and ours (dispute resolution, organized technical guidance, etc.). Because the team has been flat, one concern is that picking one lead developer may discourage the others. A non-developer suggested to my manager that rotating the lead developer is a possible way to avoid this issue. One developer would be lead one month, another the next, and so on. Is this a good idea? Why or why not? Keep in mind that this means all developers would take a turn as lead — all developers are good, but not necessarily equally suited to leadership. And if it is not a good idea, suppose I am likely the best candidate for lead developer — how do I recommend that we avoid this approach without looking like it's merely for selfish reasons? (In other words, the team is small enough that anyone recommending a single leader is likely to appear to be recommending themselves — especially those who have been part of the team longer.)

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: "orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as orthogonal instruction set." Jason Coffin has defined software orthogonality as: "Highly cohesive components that are loosely coupled to each other produce an orthogonal system." C. Ross has defined software orthogonality as: "the property that means 'Changing A does not change B'. An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa." Now there is a hypothesis published in the ACM Queue by Tim Bray – one that some have called the Bánffy-Bray Type System Criteria – which he summarises as:

    - Static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size.
    - Dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability.

    Now Stuart Halloway has reformulated Bánffy-Bray as: "the more your APIs exceed orthogonality, the better you will like static typing". My question is: What is the evidence that an API has exceeded its orthogonality in the context of types?

    Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with Strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API – because the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, which are all 'different' from the web server API that he was working on previously. Because you're not just dealing with Strings, but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).

    Read the article

  • Global vs. Local Monthly Searches in Adwords keyword tool

    - by Gregory
    I'm trying to learn how to use the keyword tool in AdWords. Here's what I entered:

        Country: Russia
        Language: Russian
        Devices: Desktop and laptop devices

    The keyword was туры в Израиль ("tours to Israel", in Russian Cyrillic letters), as a broad match type. Now... the results that I got were:

        Global monthly: 60,500
        Local monthly: 40,500

    If I got it right, "Global monthly" means in this context: worldwide average monthly searches for this search term in ANY language on any Google search site (google.ru, google.com.ua, google.com, google.fr, etc.). It's all nice, BUT... Then I made a query for tours to Israel in English in the US... and I got:

        Global monthly: 60,500
        Local monthly: 27,100

    That doesn't make any sense to me though! How come the total sum (the global) is actually a smaller number than the combined sum of just TWO countries??? (27,100 + 40,500 = 67,600 > 60,500) By "any language" do they mean a translation of the term into ANY possible language??? Or maybe by "language" Google means the language of the searchers' operating system? Or of their browsers?

    Read the article

  • Aligning text to the bottom of a div: am I confused about CSS or about blueprint? [closed]

    - by larsks
    I've used Blueprint to prototype a very simple page layout... but after reading up on absolute vs. relative positioning and a number of online tutorials regarding vertical positioning, I'm not able to get things working the way I think they should. Here's my html:

        <div class="container" id="header">
          <div class="span-4" id="logo">
            <img src="logo.png" width="150" height="194" />
          </div>
          <div class="span-20 last" id="title">
            <h1 class="big">TITLE</h1>
          </div>
        </div>

    The document does include the blueprint screen.css file. I want TITLE aligned with the bottom of the logo, which in practical terms means the bottom of #header. This was my first try:

        #header { position: relative; }
        #title {
          font-size: 36pt;
          position: absolute;
          bottom: 0;
        }

    Not unexpectedly, in retrospect, this puts TITLE flush left with the left edge of #header... but it failed to affect the vertical positioning of the title. So I got exactly the opposite of what I was looking for. So I tried this:

        #title { position: relative; }
        #title h1 {
          font-size: 36pt;
          position: absolute;
          bottom: 0;
        }

    My theory was that this would align the h1 element with the bottom of the containing div element... but instead it made TITLE disappear, completely. I guess this means that it's rendering off the visible screen somewhere. At this point I'm baffled. I'm hoping someone here can point me in the right direction. Thanks!

    Read the article

  • How do I properly use String literals for loading content?

    - by Dave Voyles
    I've been using verbatim string literals for some time now, and never quite thought about using a regular string literal until I started to come across them in Microsoft-provided XNA samples. With that said, I'm trying to implement a new AudioManager class from the Net Rumble sample. I have two (2) issues here.

    Question 1: In the code for my GameplayScreen screen I have a folder location written as the following, and it works fine:

        menuButton = content.Load<SoundEffect>(@"sfx/menuButton");
        menuClose = content.Load<SoundEffect>(@"sfx/menuClose");

    If you notice, you'll see that I'm using a verbatim string, with a forward slash "/". In the AudioManager class, they use a regular string literal, but with two backslashes "\\". I understand that one is used as an escape, but why are they BACK instead of FORWARD? (See below.)

        soundList[soundName] = game.Content.Load<SoundEffect>("audio\\wav\\" + soundName);

    Question 2: I seem to be doing everything correctly in my AudioManager class, but I'm not sure what this line means:

        audioFileList = audioFolder.GetFiles("*.xnb");

    I suppose that the *.xnb means look for everything BUT files that end in .xnb? I can't figure out what I'm doing wrong with my file locations, as the sound effects are not playing. My code is not much different from what I've linked to above.

        private AudioManager(Game game, DirectoryInfo audioDirectory)
            : base(game)
        {
            try
            {
                audioFolder = audioDirectory;
                audioFileList = audioFolder.GetFiles("*.mp3");
                soundList = new Dictionary<string, SoundEffect>();

                for (int i = 0; i < audioFileList.Length; i++)
                {
                    string soundName = Path.GetFileNameWithoutExtension(audioFileList[i].Name);
                    soundList[soundName] = game.Content.Load<SoundEffect>(@"sfx\" + soundName);
                    soundList[soundName].Name = soundName;
                }

                // Plays this track while the GameplayScreen is active
                soundtrack = game.Content.Load<Song>("boomer");
            }
            catch (NoAudioHardwareException)
            {
                // silently fall back to silence
            }
        }
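
    For reference, the two literal forms differ only in how a backslash is written in source code; here is a small sketch of the C# language rules involved (it does not settle which separator XNA itself expects):

        // These two literals contain exactly the same characters at runtime;
        // only the source notation differs.
        string a = @"audio\wav\menuButton";   // verbatim: backslash is taken literally
        string b = "audio\\wav\\menuButton";  // regular: backslash must be escaped
        bool same = (a == b);                 // true

        // GetFiles("*.xnb") is a wildcard match, not an exclusion: it returns
        // all files whose names END in .xnb, so a "*.mp3" pattern will never
        // find compiled .xnb content files.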

    Read the article

  • Preview Chitika Premium Ads On Your Website Quickly

    - by Gopinath
    Google AdSense is an excellent option for publishers like us to monetize traffic. As Google AdSense allows only 3 ad units per page, we have a good amount of space left empty on the blog. Why not use this empty space to earn some revenue (make sure that you are not annoying your visitors with too many ads)? On Tech Dreams we today started experimenting with Chitika Premium Ads to display advertisements for visitors landing on us through search engines. Chitika Premium Ads are displayed only to US visitors who find our pages through search engines. Visitors from outside the USA do not see these ads anywhere on our site. We being in India, how do we preview the Chitika ads on our site? To preview Chitika ads, add #chitikatest at the end of the URL. For example, to preview the ads on Tech Dreams I use the URL http://techdreams.org/#chitikatest. The above URL displays the default list of ads Chitika serves. But if you want to see a preview of ads for a specific keyword, you can append it at the end of the URL. Here is another example: http://www.techdreams.org/#chitikatest=ipad

    Do You Know What The Word “Chitika” Means? When Chitika co-founders Venkat Kolluri and Alden DoRosario left Lycos in 2003 to start their own company, they sought a name that would suggest the speed with which its customers would be able to put up ads on their Web sites. Chitika, which means “snap of the fingers” in Telugu (a South Indian language), captured this sentiment and Chitika Inc. was born. (via)

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
    It can be a communication problem, a timeout, a temporary shutdown of one of the systems, or the second system cannot be synchronized for some internal reason. There were several potential problems that prevented enclosing the whole synchronization process in one transaction. The decision was to restart the whole sync process if it did not finish (in case of an error). For this purpose an additional service was created; let's call it the Resync service. We still keep the id pairs in both systems, but only for fast access, not for the synchronization process. For synchronization, these id-s are now kept in one main place, the Resync service database. The Resync service keeps records as:

    - Id.A
    - Id.B
    - Entity.Type
    - Operation (Create, Update, Delete)
    - IsSyncStarted (true/false)
    - IsSyncFinished (true/false)

    The example now looks like:

    1. System A creates id.A. id.A is saved on A. id.A is sent to BizTalk.
    2. BizTalk sends id.A to the Resync service and to B. id.A is saved on the Resync service.
    3. System B creates id.B. id.A+id.B are saved on B. id.A+id.B are sent to BizTalk.
    4. BizTalk sends id.A+id.B to the Resync service and to A. id.A+id.B are saved on the Resync service.
    5. id.A+id.B are saved on A.

    The Resync service changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods:

    - Save(id.A, Entity.Type, Operation)
    - Save(id.A, id.B, Entity.Type, Operation)
    - Resync()

    The two Save() methods are used to save id-s to the service storage; see steps 2 and 4 in the example above. What about Resync()? It is the method that finishes the interrupted synchronization processes. While Save() is started by a trigger event, Resync() works as an independent process. It periodically scans the Resync storage to find "unfinished" records, then restarts the synchronization processes. It tries to synchronize them several times, then gives up.

    One more thing: both systems A and B must tolerate duplicates within one synchronizing process. Say at step 3 system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process, which will send id.A to B a second time. In this case system B must simply send back the already created id.A+id.B pair again, without errors. That is what "tolerate duplicates" means.

    Fourth draft

    The next draft was created purely for aesthetics. As it always happens, aesthetics gave a significant performance gain to the whole system. First came a stupid question: why do we need this additional service with a special database? Can we just get BizTalk to do what the Resync() does? So the Resync orchestration does the same thing as the Resync service. It is started by the id.A message and finished by the id.A+id.B message. The first works as a Start message, the second as a Finish message.

    Here is a diagram of the whole process without errors. It is pretty straightforward. The Resync orchestration waits for the Finish message for a specific period of time, then resubmits the id.A message. It resubmits the id.A message a specific number of times, then gives up and gets suspended. It can be resubmitted, and then it starts the whole process again: waiting [, resubmitting [, getting suspended]], finishing.

    Tuning up

    The Resync orchestration resubmits the id.A message with a special "Resubmitted" flag. The subscription filter on the Resync orchestration includes a predicate (Resubmit_Flag != "Resubmitted").
    That means only the first Sync orchestration starts the Resync orchestration. The Sync orchestrations instantiated by resubmitting can finish this Resync orchestration but cannot start another instance of the Resync.

    Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A twice. Then system B returned the id.A+id.B response, and this finished the Resync execution. What is interesting about this: several identical id.A messages were submitted, but only one id.A+id.B message. Because of this, system B and the Resync must tolerate duplicate messages. We already stated this requirement for system B; now the same requirement applies to the Resync.

    Let's assume system B was very slow with the first response, and the Resync service had time to resubmit two id.A messages. System B responded, not as in the previous case with one id.A+id.B, but with two id.A+id.B messages. The first of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where does it go? So we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. It is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one Receive shape) that just takes these messages and does nothing.

    Real design

    The real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also the Update and Delete operations, and these operations relate to each other. Say, an Update operation after a Delete does not mean the same as an Update after a Create. In reality there are entities related to each other, say the Order and its Order Items; a change to one of them can start a series of operations on the other. Moreover, the system internals are "black boxes" and we cannot predict the exact content and order of the operation series. It is worth saying that I had to spend time managing the zombie message problems. The zombies are still here, but they are not a problem now – and that is another story.

    What is interesting in the last design? One orchestration works to help another be more reliable. Why is a two-orchestration design more reliable – isn't that strange? The Sync orchestration handles all the message exchange between the systems; this is the area where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, any other ideas?
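
    For illustration, here is a hypothetical C# shape for the Resync record described above. The field names follow the article; the type itself is invented, and a real implementation would live in the Resync service's database rather than in memory.

        // Hypothetical shape of one Resync record; one row per sync attempt.
        public class ResyncRecord
        {
            public string IdA { get; set; }         // entity id in system A
            public string IdB { get; set; }         // entity id in system B (null until B responds)
            public string EntityType { get; set; }  // e.g. "Order", "OrderItem"
            public string Operation { get; set; }   // "Create", "Update" or "Delete"
            public bool IsSyncStarted { get; set; }
            public bool IsSyncFinished { get; set; }
        }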

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver management component. In my multitier architecture I have:

    - Base class: DAL.Device (my entity)
    - Interfaces: BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), and BL.IDriverFactory (handles the driver creation requests)

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) a code change within the DriverFactory and b) a reference to the new IDriver implementation / assembly. From a customer's point of view that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy):

    - BL.RestDriver
    - BL.RestDriverCreator
    - DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
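
    A minimal sketch of the convention-based lookup (the names follow the question: XxxDevice maps to XxxDriverCreator; assembly scanning, caching and error handling are trimmed for brevity):

        using System;
        using System.Linq;

        public static class ConventionDriverFactory
        {
            public static object CreateDriverFor(object device)
            {
                // "RestDevice" -> prefix "Rest" -> creator type "RestDriverCreator"
                string deviceName = device.GetType().Name;
                string prefix = deviceName.Substring(0, deviceName.Length - "Device".Length);
                string creatorName = prefix + "DriverCreator";

                // Scan the assemblies already loaded into the AppDomain
                // (driver DLLs could also be loaded explicitly via Assembly.LoadFrom).
                Type creatorType = AppDomain.CurrentDomain.GetAssemblies()
                    .SelectMany(a => a.GetTypes())
                    .FirstOrDefault(t => t.Name == creatorName && !t.IsAbstract);

                if (creatorType == null)
                    throw new InvalidOperationException("No driver creator found for " + deviceName);

                return Activator.CreateInstance(creatorType);
            }
        }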

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually as parameters for functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name plus a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually implies some sort of protocol, which I don't have. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference; I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).
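
    As an aside, the split/rename/rejoin round trip itself can avoid manual concatenation entirely with System.IO.Path (a small sketch; the method and variable names are mine):

        using System.IO;

        static string AddIndex(string fullPath, int index)
        {
            string dir  = Path.GetDirectoryName(fullPath);            // "c:\somedir\somedir\somedir"
            string name = Path.GetFileNameWithoutExtension(fullPath); // "somefile"
            string ext  = Path.GetExtension(fullPath);                // ".txt"

            // Rejoin without string concatenation, so separators stay correct.
            return Path.Combine(dir, name + index.ToString("0000") + ext);
        }

        // AddIndex(@"c:\somedir\somedir\somedir\somefile.txt", 1)
        //   -> "c:\somedir\somedir\somedir\somefile0001.txt"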

    Read the article
