Search Results

Search found 1282 results on 52 pages for 'overhead'.

Page 32/52

  • Is anyone doing "real" TDD with Visual-C++, and if yes, how do they do it?

    - by Martin
    Test Driven Development implies writing the test before the code and following a certain cycle: Write Test Check Test (run) Write Production Code Check Test (run) Clean up Production Code Check Test (run) As far as I'm concerned, this is only possible if your development solution allows you to very quickly switch between the production and test code, and to execute the test for a certain production code part extremely quickly. Now, while there exist lots of Unit Testing Frameworks for C++ (I'm using Boost.Test atm.) it does seem that there doesn't really exist any decent (for native C++) Visual Studio (Plugin) solution that makes the TDD cycle bearable regardless of framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly. So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: No framework recommendations. (Unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin.) Edit Note: The answers so far have provided links on how to integrate a Unit Testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first Tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the Unit Tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# thing to enable Unit Tests and TDD, and for a good reason. This should be possible with C++ given the right tools, but it seems (this question is about) that there are very few tools for TDD/C++/VS. Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, afaik, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test etc.). Edit: I would like to add a comment to the context for this question. I think it does a good job of outlining (part of) the problem: (comment by Billy ONeal) Visual Studio does not use "build scripts" that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary -- the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing).
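    One low-overhead pattern, sketched here with invented file and class names (a sketch of the idea only, not a specific plugin recommendation), is to keep the tests in the same vcproj as the production code behind a preprocessor symbol, with a dedicated "Test" build configuration that defines the symbol and excludes the normal main(). Running the tests is then just a configuration switch plus F5. Using Boost.Test's header-only runner so no extra link step is needed:

        // calculator.h -- hypothetical production code under test
        #pragma once
        struct Calculator {
            int add(int a, int b) const { return a + b; }
        };

        // calculator_tests.cpp -- lives in the SAME project as calculator.h.
        // The file is empty unless RUN_UNIT_TESTS is defined (by the "Test"
        // build configuration), so release builds carry no test code at all.
        #ifdef RUN_UNIT_TESTS
        #define BOOST_TEST_MODULE CalculatorTests
        #include <boost/test/included/unit_test.hpp>  // header-only variant generates main()
        #include "calculator.h"

        BOOST_AUTO_TEST_CASE(add_returns_sum)
        {
            Calculator c;
            BOOST_CHECK_EQUAL(c.add(2, 3), 5);
        }
        #endif

    The trade-off is that the header-only runner recompiles Boost.Test whenever the test file changes, so link times stay short at the cost of longer compiles; whether that keeps the cycle "bearable" depends on how the tests are partitioned.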

    Read the article

  • Using jQuery to Make a Field Read Only in SharePoint

    - by Mark Rackley
    Okay… this will be my shortest blog post EVER. Very little rambling.. I promise, and I’m sure this has been blogged more than once, so I apologize for adding to the noise, but like I always say, I blog for myself so I have a global bookmark. So, let’s say you have a field on a SharePoint Form and you want to make it read only. You COULD just open it up in SPD and easily make it read only, but some people are purists and don’t like to use SPD or modify the default new/edit/disp forms. Put me in the latter camp, I try to avoid modifying these forms and it seemed like such a simple task that I didn’t want to create a new un-ghosted form.  So.. how do you do it? It’s only one line of jQuery. All you need to do is find the id for your input field and capture the keypress on it so that it cannot be modified (you should probably capture clicks for dropdowns/checkboxes/etc. but I didn’t need to).  Anyway, here’s the entire script: <script src="jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> jQuery(document).ready(function($){ //capture keypress on our read only field and return false $('#idOfInputField').keypress(function() { return false; }); }) </script>   You can find the ID of your input field by viewing the source; this ID stays consistent as long as you don’t muck with the list or form in the wrong way.  Please note, you CANNOT disable the input field as an alternative to capturing the keypress. If you do this and save the form, any data in the disabled fields will be wiped out. There are probably a dozen ways to make a field read-only and if all you are using jQuery for is to make a field read-only, then you might want to question your use of adding the overhead (although it’s really not that much). Hey.. it’s another tool for your tool belt.

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure, we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model.  ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups.  We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS. The NFS and SMB protocol services are also integrated with the identity mapping service and shares are not restricted to UNIX permissions or Windows permissions.  All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols and our model is native access to any share from either protocol. Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise not because they offer a benefit.  Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server.  Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model. With a single, integrated sharing and access control model: If you connect from Windows or another SMB/CIFS client: The system creates a credential containing both your Windows identity and your UNIX/NIS identity.  The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups. If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs). If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity. If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner.  Identity mapping also supports access checking if you are being assessed for access via the ACL. If you connect via NFS (typically from a UNIX client): The system creates a credential containing your UNIX/NIS identity (including groups). Files you create will be owned by your UNIX/NIS identity. If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner.  Again, mapping is fully supported during ACL processing. The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system.  This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • How to apply effects that occur (or change) over time to characters in a game?

    - by Joshua Harris
    So assume that I have a system that applies Effects to Characters like so: public class Character { private Collection<Effect> _effects; public void AddEffect (Effect e) { e.ApplyTo(this); _effects.Add(e); } public void RemoveEffect (Effect e) { e.RemoveFrom(this); _effects.Remove(e); } } public interface Effect { public void ApplyTo (Character character); public void RemoveFrom (Character character); } Example Effect: Armor Buff for 5 seconds. void someFunction() { // Do Stuff ... Timer armorTimer = new Timer(5 seconds); ArmorBuff armorBuff = new ArmorBuff(); character.AddEffect(armorBuff); armorTimer.Start(); // Do more stuff ... } // Some where else in code public void ArmorTimer_Complete() { character.RemoveEffect(armorBuff); } public class ArmorBuff implements Effect { public void ApplyTo(Character character) { character.changeArmor(20); } public void RemoveFrom(Character character) { character.changeArmor(-20); } } Ok, so this example would buff the Character's armor for 5 seconds. Easy to get working. But what about effects that change over the duration of the effect being applied. Two examples come to mind: Damage Over Time: 200 damage every second for 3 seconds. I could mimic this by applying an Effect that lasts for 1 second and has a counter set to 3, then when it is removed it could deal 200 damage, clone itself, decrement the counter of the clone, and apply the clone to the character. If it repeats this until the counter is 0, then you've got a damage over time ability. I'm not a huge fan of this approach, but it does describe the behavior exactly. Degenerating Speed Boost: Gain a speed boost that degrades over 3 seconds until you return to your normal speed. This is a bit harder. I can basically do the same thing as above except having timers set to some portion of a second, such that they occur fast enough to give the appearance of degenerating smoothly over time (even though they are really just stepping down incrementally). I feel like you could get away with only 12 steps over a second (maybe less, I would have to test it and see), but this doesn't seem very elegant to me. The only other way to implement this effect would be to change the system so that the Character checks the _effects collection for effects that alter any of the properties any time that they are being used. I could handle this in functions like getCurrentSpeed() and getCurrentArmor(), but you can imagine how much of a hassle it would be to have that kind of overhead every time you want to do a calculation with movement speed (which would be every time you move your character). Is there a better way to deal with these kinds of effects or events?
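    One alternative, sketched below in C++ rather than the pseudocode above and with names invented for illustration, is to let the game loop tick every active effect with the frame's delta time; damage-over-time and a decaying speed boost then fall out of the same update hook without any timer cloning:

        #include <algorithm>
        #include <memory>
        #include <vector>

        struct Character {                      // minimal stub just for this sketch
            float health = 1000.0f;
            float speed  = 5.0f;
            void takeDamage(float d) { health -= d; }
            void addSpeed(float s)   { speed  += s; }
        };

        class Effect {
        public:
            virtual ~Effect() = default;
            virtual void update(Character& c, float dt) = 0;  // dt = seconds this frame
            bool expired() const { return remaining <= 0.0f; }
        protected:
            float remaining = 0.0f;
        };

        // 200 damage per second for 3 seconds, applied continuously each frame.
        class DamageOverTime : public Effect {
        public:
            DamageOverTime(float dps, float duration) : dps(dps) { remaining = duration; }
            void update(Character& c, float dt) override {
                c.takeDamage(dps * std::min(dt, remaining)); // never overshoot the total
                remaining -= dt;
            }
        private:
            float dps;
        };

        // Speed bonus that decays linearly back to zero over its duration.
        class DecayingSpeedBoost : public Effect {
        public:
            DecayingSpeedBoost(float bonus, float duration)
                : bonus(bonus), duration(duration) { remaining = duration; }
            void update(Character& c, float dt) override {
                remaining -= dt;
                float target = bonus * std::max(remaining, 0.0f) / duration;
                c.addSpeed(target - applied);   // nudge by the delta, so removal is exact
                applied = target;
            }
        private:
            float bonus, duration, applied = 0.0f;
        };

        // Character-side tick: run every effect, then drop the expired ones.
        void updateEffects(Character& c, std::vector<std::unique_ptr<Effect>>& effects, float dt) {
            for (auto& e : effects) e->update(c, dt);
            effects.erase(std::remove_if(effects.begin(), effects.end(),
                              [](const std::unique_ptr<Effect>& e) { return e->expired(); }),
                          effects.end());
        }

    The speed boost never touches getCurrentSpeed() on every read; it just adjusts the stat by the difference from last frame, so there is no per-query overhead and the character ends up exactly where it started when the effect expires.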

    Read the article

  • Why do old programming languages continue to be revised?

    - by SunAvatar
    This question is not, "Why do people still use old programming languages?" I understand that quite well. In fact the two programming languages I know best are C and Scheme, both of which date back to the 70s. Recently I was reading about the changes in C99 and C11 versus C89 (which seems to still be the most-used version of C in practice and the version I learned from K&R). Looking around, it seems like every programming language in heavy use gets a new specification at least once per decade or so. Even Fortran is still getting new revisions, despite the fact that most people using it are still using FORTRAN 77. Contrast this with the approach of, say, the typesetting system TeX. In 1989, with the release of TeX 3.0, Donald Knuth declared that TeX was feature-complete and future releases would contain only bug fixes. Even beyond this, he has stated that upon his death, "all remaining bugs will become features" and absolutely no further updates will be made. Others are free to fork TeX and have done so, but the resulting systems are renamed to indicate that they are different from the official TeX. This is not because Knuth thinks TeX is perfect, but because he understands the value of a stable, predictable system that will do the same thing in fifty years that it does now. Why do most programming language designers not follow the same principle? Of course, when a language is relatively new, it makes sense that it will go through a period of rapid change before settling down. And no one can really object to minor changes that don't do much more than codify existing pseudo-standards or correct unintended readings. But when a language still seems to need improvement after ten or twenty years, why not just fork it or start over, rather than try to change what is already in use? If some people really want to do object-oriented programming in Fortran, why not create "Objective Fortran" for that purpose, and leave Fortran itself alone? I suppose one could say that, regardless of future revisions, C89 is already a standard and nothing stops people from continuing to use it. This is sort of true, but connotations do have consequences. GCC will, in pedantic mode, warn about syntax that is either deprecated or has a subtly different meaning in C99, which means C89 programmers can't just totally ignore the new standard. So there must be some benefit in C99 that is sufficient to impose this overhead on everyone who uses the language. This is a real question, not an invitation to argue. Obviously I do have an opinion on this, but at the moment I'm just trying to understand why this isn't just how things are done already. I suppose the question is: What are the (real or perceived) advantages of updating a language standard, as opposed to creating a new language based on the old?

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog, we want to focus on the content lifecycle and how important it is to have the tools in place to be able to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick. Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high value information, helping workers to get their jobs done with greater accuracy and in less time is exciting. It is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use on the third week and during the third year.   What can be done to bridge this gap? Over the next few blog entries let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead and is a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one section of the content lifecycle process as outlined below: Content Lifecycle Stages 1. Request - Understand the education, purpose, resource and success criteria for content 2. Create - Determine access and workflow for content 3. Manage - Understand ownership and review cycles 4. Retire - Act on thresholds established during the request stage Within each stage we will also elaborate as to 1. Why - why would we entertain doing this? 2. How - the steps that are needed to make it happen 3. Impact - what is the net benefit or loss based on the process Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each to make some meaningful gains in the improvement of user experience and productivity in their search for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for this webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!

    Read the article

  • How far should an entity take care of its properties values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, which is an entity that I'm using through Entity Framework. - InvoiceHeader - BilledAmount (property, decimal) - PaidAmount (property, decimal) - Balance (property, decimal) I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here: Updating the balance amount every time BilledAmount and PaidAmount are updated (through their setters) Having a UpdateBalance() method that the callers would run on the object when appropriate. I am aware that I can just calculate the Balance in its getter. However, it isn't really possible because this is an entity field that needs to be saved back to the database, where it has an actual column, and where the calculated amount should be persisted to. My other worry about the automatically updating approach is that the calculated values might be a little bit different from what was originally saved to the database, due to rounding values (an older version of the software was using floats, but now decimals). So, loading, let's say 2000 entities from the database could change their status and make the ORM believe that they have changed and be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems, if the calculation methods changed (the entities fetched would lose their old values to be replaced by freshly recalculated ones, simply by being loaded). Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider the entity should take care of itself, any change of the child details should cause the Invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals, every time one of the properties changes. It probably would be fine for just a record, but if a lot of entities were fetched at once, it could create a significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them, just to recalculate the balances. So far, I went with the UpdateBalance() method approach, mainly for the reasons I explained above, but I wonder if it was right. I'm noticing I have to keep calling these methods quite often and at different places in my code and it is a potential source of bugs. It also has a detrimental effect on data-binding because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?
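    For what it's worth, here is a minimal sketch of the "recalculate in the setters" variant with a guard that suppresses recalculation while an entity is being hydrated, so loading 2000 rows never marks them as changed. The original question is about C#/Entity Framework; the sketch below is C++ with invented names, but the shape carries over directly:

        // Invented names for illustration; in the real application these would be ORM entities.
        class InvoiceHeader {
        public:
            // Called by the data layer while hydrating, so loading never marks the row dirty.
            void loadFromDatabase(double billed, double paid, double storedBalance) {
                loading = true;
                setBilledAmount(billed);
                setPaidAmount(paid);
                balance = storedBalance;      // keep whatever was persisted, rounding and all
                loading = false;
            }

            void setBilledAmount(double v) { billedAmount = v; recalc(); }
            void setPaidAmount(double v)   { paidAmount   = v; recalc(); }

            double getBalance() const { return balance; }
            bool   isDirty()    const { return dirty; }

        private:
            void recalc() {
                if (loading) return;          // hydration must not trigger recalculation
                balance = billedAmount - paidAmount;
                dirty = true;                 // only genuine edits mark the entity changed
            }

            double billedAmount = 0, paidAmount = 0, balance = 0;
            bool loading = false, dirty = false;
        };

    The same flag idea extends to the header/detail case: a detail's setter asks its parent to recalc, and the parent skips the work while it is still in the loading state, which at least removes the mass-update worry without scattering UpdateBalance() calls around the codebase.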

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? this needs refactoring... so does this, refactor, refactorrr" This worries me as I'm about to start a game project, and I'm not sure whether trying to mvc/tdd the dev process is going to hinder us or help us, as I don't see many game examples that use this or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seemed exactly the attitude many game devs seem to use when writing production game code: Mistake #4: Building a system, not a game ...if you ever find yourself working on something that isn’t directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and be able to handle every situation. We find that itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it’s not about the code, it’s about the game you ship in the end. Don’t write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don’t ever, ever, use the argument “if we take some extra time and do this the right way, we can reuse it in the game”. EVER. is it because games are (mostly) visually oriented so it makes sense that the code will be weighted heavily in the view, thus any benefits from moving stuff out to models/controllers are fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation, and that there'd be more important performance issues to tackle before you worry about MVC overheads (eg render pipeline, AI algorithms, datastructure traversal, etc). Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (eg operate within physics simulations). Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (ie game devs come from a different background and thus have different coding habits)?

    Read the article

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence to my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this: SceneHandler Contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate(); messages to SceneListeners. InputHandler Polls the input once per tick, and sends a simple "onKeyPressed" message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera(); events based on what the input is. AgentHandler Calls default actions on any Agents (AI) once per tick, and will check a stack for any new events that are registered, dispatching them to specific Agents as needed. So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it a good practice to have many, small event handlers? I imagine going the way I am that I'd have to implement some kind of CollisionHandler. Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers? I realize in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene. Thus I feel my isolation of those systems is justified. However, I'm asking this question specifically approaching game logic type events.
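    If the sub-handlers keep multiplying, one middle ground (a sketch with invented event types, not a prescription) is to let them talk through a small typed event bus rather than a coordinating EntityHandler, so each handler only ever sees the event types it subscribes to:

        #include <functional>
        #include <typeindex>
        #include <typeinfo>
        #include <unordered_map>
        #include <vector>

        // Minimal publish/subscribe bus: SceneHandler, InputHandler, CollisionHandler,
        // etc. each subscribe to the events they care about and publish the ones they
        // produce, so no single class has to know about every subsystem.
        class EventBus {
        public:
            template <typename Event>
            void subscribe(std::function<void(const Event&)> fn) {
                handlers[std::type_index(typeid(Event))].push_back(
                    [fn](const void* e) { fn(*static_cast<const Event*>(e)); });
            }

            template <typename Event>
            void publish(const Event& e) const {
                auto it = handlers.find(std::type_index(typeid(Event)));
                if (it == handlers.end()) return;
                for (const auto& h : it->second) h(&e);
            }

        private:
            std::unordered_map<std::type_index,
                               std::vector<std::function<void(const void*)>>> handlers;
        };

        // Hypothetical event types for this sketch.
        struct CollisionEvent { int entityA, entityB; };
        struct KeyPressed    { int keyCode; };

        int main() {
            EventBus bus;
            bus.subscribe<CollisionEvent>([](const CollisionEvent& e) {
                // resolve the overlap between e.entityA and e.entityB here
            });
            bus.publish(CollisionEvent{1, 2});  // a CollisionHandler would publish this
        }

    Whether this beats a consolidated EntityHandler mostly comes down to how many cross-subsystem interactions there are; with only two or three, the explicit handler is simpler.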

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: Attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this: // Character.as package { public class Character extends Sprite { public var _strength:int; // etc. public function Character(_class:Object) { _strength = _class._strength; // etc. } } } // MonsterClasses.as package { public final class MonsterClasses extends Object { public static const Monster1:Object={ _strength:50, // etc. } // etc. } } // Some other class in which characters/monsters are created. // Create a new instance of Character var myMonster = new Character(MonsterClasses.Monster1); Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure if it would be efficient or even make sense considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method? As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. Seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?

    Read the article

  • How best to look up objects by label?

    - by dsollen
    I am writing the server backed by a pre-written API. I'm going to get a number of strings representing ports, signals, paths, etc etc etc. I need to look up the object associated with a given label, these objects are all in memory (no sql magic to do this for me). My question is, how best do I associate a given unique label with the mutable object it represents? I have enough objects that looking through every signal or every port to find the one that matches is possible, but may be slightly too slow. To be honest the direct 'look at every object' method is probably good enough for so small a body of objects and anything else is premature optimization, but I still am curious what the proper solution would be if I thought my signals were going to grow a bit larger. As I see it there are two options available. First would be to to create a 'store' that is a simple map between object and label. I could have it so that every time I call addObject the object is automatically saved into a hashmap or the like. This works, but relies on my properly adding and deleting each object so the map doesn't grow indefinitely. The biggest issue to me is that this involves having some hidden static map in my ModelObject class that just feels...wrong somehow. The other option is to have some method that can interpret the labels. All of these labels are derived from the underlying objects. So I can look at the signal label, for instance, and say "these 20 characters are the port" to figure out what port I need. This would allow me to quickly figure out what I need. However, if the label method is changed the translateLabelToObject method needs to be updated as well or everything breaks. Which solution is cleaner, or possibly a cleaner solution than either of above? For the record I'm working with sufficient number of variables to make direct comparison a little slow, but not enough to be concerned about memory overhead, written in java. All objects that have labels I need to look up extend the same parent class.
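    As a sketch of the first option without the hidden static: the label-to-object map lives in an explicit registry object that owns the add/remove lifecycle, so lookups are O(1) and nothing global hides inside the model class. The question is Java, where this would be a HashMap held by a registry; the same shape in C++ with invented names:

        #include <memory>
        #include <string>
        #include <unordered_map>

        class ModelObject {                       // stand-in for signals, ports, paths, ...
        public:
            explicit ModelObject(std::string label) : label(std::move(label)) {}
            const std::string& getLabel() const { return label; }
        private:
            std::string label;
        };

        // Explicit registry instead of a hidden static map inside ModelObject.
        class ObjectRegistry {
        public:
            void add(const std::shared_ptr<ModelObject>& obj) {
                byLabel[obj->getLabel()] = obj;
            }
            void remove(const std::string& label) { byLabel.erase(label); }

            std::shared_ptr<ModelObject> find(const std::string& label) const {
                auto it = byLabel.find(label);
                if (it == byLabel.end()) return nullptr;
                return it->second;
            }
        private:
            std::unordered_map<std::string, std::shared_ptr<ModelObject>> byLabel;
        };

    The maintenance burden (keeping add/remove in sync) is the same as with the static map, but it is visible, injectable and testable, and it avoids coupling lookups to the format of the labels the way a translateLabelToObject parser would.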

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low level events over a period of 3-4 hours.  During analysis of the data I found that a common pattern I was using was to find a batch with a duration that was longer than average and follow all the events it produced.  This pattern got me thinking that I was discarding a substantial amount of event data that had been collected, and that it would be great to be able to reduce the collection overhead on the server if I could still get all activity from some batches. In the past I’ve used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mikes post here).  This isn’t exactly what I want though as there would certainly be events from a particular batch that wouldn’t pass the predicate.  What I need is a way to identify streams of work and select say one in ten of them to watch, and sql server provides just such a mechanism: session_id.  Session_id is a server assigned integer that is bound to a connection at login and lasts until logout.  So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection, and still get all the events in batches for investigation. CREATE EVENT SESSION session_10_percent ON SERVER ADD EVENT sqlserver.sql_statement_starting(     WHERE (package0.divides_by_uint64(sqlserver.session_id,10))), ADD EVENT sqlos.wait_info (        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))), ADD EVENT sqlos.wait_info_external (        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))), ADD EVENT sqlserver.sql_statement_completed(     WHERE (package0.divides_by_uint64(sqlserver.session_id,10))) ADD TARGET ring_buffer WITH (MAX_DISPATCH_LATENCY=30 SECONDS,TRACK_CAUSALITY=ON) GO   There we go; event collection is reduced while still providing enough information to find the root of the problem.  By the way the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.        

    Read the article

  • What's a good way to get an IT internship? [closed]

    - by user1419715
    I'm a second year CS student who's worked really hard to build and expand my skills. I've spent the past week now trying to find a place to volunteer (i.e. work for FREE) so I can get a little bit of in-the-door experience with web development. I have a portfolio with several decent projects, a handful of languages and other hard/soft skills that employers constantly say they're clamoring for. I can't even get people to take my calls. This is me offering to work for them for FREE, remember. I'm in a reputable program at a respected school, get decent grades and...yeah, I've worked really hard to be presentable. On the rare occasions I actually get to speak to somebody at a design firm they hedge and do everything they can to get me off the phone. Nobody's ever expressed even the slightest interest in taking me on. The answer to the experience problem is supposed to be "you need to spend a year or two building up a big portfolio of projects on your own" so that employers will be impressed. I've done that. Websites, standalone apps, etc.. Nobody will even look at my resume, though. Question: Why does there seem to be so little interest in taking on unpaid interns in the world of IT? Update: Sorry you all think I'm too aggressive or angry. It wasn't my intent to be a jerk to people while asking them for their opinions. That said, how would you feel if employer after employer turned you down cold when you offered yourself to them without asking for remuneration? One can't even get an unpaid job in this economy now, it seems. How am I going about my search? I find web firms in my area and contact them via email with a brief sales pitch of myself and a resume attached. Then a couple of days later I follow up with a phone contact. Nobody--anywhere--is advertising for interns of any kind. If there were I'm sure there'd be about 500 resumes per position, even unpaid. I've had good experiences in the past with cold-calling firms for actual paid jobs in other industries (hiring is a pain in the ass process and a call like this can show initiative while reducing a busy employer's need to do all the hiring overhead work), so I thought volunteering would work at least as well. My skills are pretty good for a CS student and include the usual suspects: HTML/CSS/Javascript, Python, Java, C, C#/.Net etc etc. I made a point on my resume to tie each ability claim to a project as well. Oh, and regarding the "working for free still costs the employer money" argument: that's an excellent point I hadn't thought of. But it means...what? I have to pay the employer for the privilege of working there now?

    Read the article

  • Efficient way to find unique elements in a vector compared against multiple vectors

    - by SyncMaster
    I am trying to find the number of unique elements in a vector compared against multiple vectors using C++. Suppose I have, v1: 5, 8, 13, 16, 20 v2: 2, 4, 6, 8 v3: 20 v4: 1, 2, 3, 4, 5, 6, 7 v5: 1, 3, 5, 7, 11, 13, 15 The number of unique elements in v1 is 1 (i.e. number 16). I tried two approaches. Added vectors v2,v3,v4 and v5 into a vector of vectors. For each element in v1, checked if the element is present in any of the other vectors. Combined all the vectors v2,v3,v4 and v5 using merge sort into a single vector and compared it against v1 to find the unique elements. Note: sample_vector = v1 and all_vectors_merged contains v2,v3,v4,v5 //Method 1 unsigned int compute_unique_elements_1(vector<unsigned int> sample_vector,vector<vector<unsigned int> > all_vectors_merged) { unsigned int duplicate = 0; for (unsigned int i = 0; i < sample_vector.size(); i++) { for (unsigned int j = 0; j < all_vectors_merged.size(); j++) { if (std::find(all_vectors_merged.at(j).begin(), all_vectors_merged.at(j).end(), sample_vector.at(i)) != all_vectors_merged.at(j).end()) { duplicate++; } } } return sample_vector.size()-duplicate; } // Method 2 unsigned int compute_unique_elements_2(vector<unsigned int> sample_vector, vector<unsigned int> all_vectors_merged) { unsigned int unique = 0; unsigned int i = 0, j = 0; while (i < sample_vector.size() && j < all_vectors_merged.size()) { if (sample_vector.at(i) > all_vectors_merged.at(j)) { j++; } else if (sample_vector.at(i) < all_vectors_merged.at(j)) { i++; unique ++; } else { i++; j++; } } if (i < sample_vector.size()) { unique += sample_vector.size() - i; } return unique; } Of these two techniques, I see that Method 2 gives faster results. 1) Method 1: Is there a more efficient way to find the elements than running std::find on all the vectors for all the elements in v1? 2) Method 2: Extra overhead in comparing vectors v2,v3,v4,v5 and sorting them. How can I do this in a better way?
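    A third variant worth measuring against the two above (a sketch, not a drop-in replacement for the exact signatures in the question): build one hash set from all the other vectors, then count the elements of sample_vector it does not contain. Expected cost is linear in the total number of elements, and nothing has to be sorted or merged first:

        #include <unordered_set>
        #include <vector>

        // Counts how many elements of `sample` appear in none of the `others`.
        unsigned int count_unique(const std::vector<unsigned int>& sample,
                                  const std::vector<std::vector<unsigned int>>& others)
        {
            std::unordered_set<unsigned int> seen;
            for (const auto& v : others)
                seen.insert(v.begin(), v.end());   // one pass over every other vector

            unsigned int unique = 0;
            for (unsigned int x : sample)
                if (seen.count(x) == 0)            // O(1) expected lookup
                    ++unique;
            return unique;
        }

    If the inputs are already sorted, std::set_difference over the merged vector is the standard-library spelling of Method 2 and removes the hand-rolled index juggling.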

    Read the article

  • WebLogic embedded LDAP crashes

    - by Spiff
    Our production admin server (WebLogic 10.3.5 running on Solaris 10) crashes from time to time. Logs show tons of these errors (several each minute): <1-Jun-2012 2:28:34 o'clock AM EDT> <Critical> <EmbeddedLDAP> <BEA-000000> <java.lang.NullPointerException at weblogic.socket.DevPollSocketMuxer.cleanupSocket(DevPollSocketMuxer.java:150) at weblogic.socket.DevPollSocketMuxer.cancelIo(DevPollSocketMuxer.java:166) at weblogic.socket.SocketMuxer.deliverExceptionAndCleanup(SocketMuxer.java:836) at weblogic.socket.SocketMuxer.deliverEndOfStream(SocketMuxer.java:760) at weblogic.ldap.MuxableSocketLDAP$LDAPSocket.close(MuxableSocketLDAP.java:128) at com.octetstring.vde.Connection.close(Connection.java:166) at com.octetstring.vde.WorkThread.executeWorkQueueItem(WorkThread.java:89) at weblogic.ldap.LDAPExecuteRequest.run(LDAPExecuteRequest.java:50) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209) at weblogic.work.ExecuteThread.run(ExecuteThread.java:178) Eventually, the admin server runs out of memory: <1-Jun-2012 12:29:59 o'clock PM EDT> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed java.lang.OutOfMemoryError: GC overhead limit exceeded. One does not necessarily cause the other, but it seems like a pretty good fit. When inspecting the WebLogic code, we see this: void cleanupSocket(MuxableSocket paramMuxableSocket, SocketInfo paramSocketInfo) { this.sockRecords[paramSocketInfo.getFD()] = null; // DevPollSocketMuxer.java:150 super.cleanupSocket(paramMuxableSocket, paramSocketInfo); } protected void cancelIo(MuxableSocket paramMuxableSocket) { super.cancelIo(paramMuxableSocket); cleanupSocket(paramMuxableSocket, paramMuxableSocket.getSocketInfo()); // DevPollSocketMuxer.java:166 } So paramMuxableSocket.getSocketInfo() would be null. I'm at a loss for explaining this... Anyone have an idea? Thanks!

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to only create one cache per account / site. This can be done with Fastcgi (last update 2006…) but with Fastcgid APC will have to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per process, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case then shouldn't APC have to use this user and not have access to other pools' cache? An article here (in 2011) suggests that you would need to run one process per pool creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so what would be the impact of running say 800 processes of php-fpm? Would it be mainly memory? If so how can I work out what the memory impact would be? I guess that it would be better to run 800 times php-fpm than to have accounts creating multiple APC caches for a single site? If on average an account creates a 50MB cache and creates 3 caches per account that makes 150MB per account which makes 120GB… However if each account uses on average only 50MB that would make 40GB. We will have at least 128GB of ram on our next server so 40GB is acceptable if running 800 x PHP-FPM does not create an overhead of more than 20GB! What do you think: is PHP-FPM the best way to go to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that: Allows my users to share large files with: The general public securely to 1 or more people (notification via email, optionally with a token that gives them x period of time to download) Allows anyone in the general public to share files with my users. Perhaps by invitation. Has to be user friendly enough to allow my users to use this without having to bug me as the admin. It needs to be a system that we can install on our own server (we don't want shared data sitting on anyone else's server). A web based solution. Using some kind of secure comms channel would be good too, eg, ssh Files to share could be over 1 GB. I found the question below. WebDav does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen EDIT Sorry Tom and Jeff but Glen specifically says that he's looking for a 'product' so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • zfs setup question

    - by Staale
    Currently I have a linux storage box and server with 4x750gb harddrives in raid-5 with ext3. I have ordered 3x1.5tb disks to upgrade this. Here is my planned upgrade: Backup: Format the 1.5 tb disks Copy all data from the raid-5 disks to the 1.5tb disks Destroy the raid-5 array. New setup: Create a VirtualBox system and install Nexenta (OpenSolaris + ubuntu) on it. Create a zfs pool with zraid1 with the 4 750gb disks. Copy from 1.5tb disks to the virtualbox zfs pool Format the 1.5tb disks. Replace 3 of the 750gb disks with 1.5tb disks. Reuse the 750gb disks elsewhere. The reason I wish to use one 750gb disk is since I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750gb of storage. Would the ZFS performance be good running through virtualbox? Or will the performance overhead be too large? Will I get 1.5tb+1.5tb+750gb storage on the zraid? Or just 750gbx3 until all disks are 1.5tb?

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL starting here and I am confused by how the Reed-Solomon encoding for ECC is limiting the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following: A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps. How is this nearly halving the throughput? Is all that extra bandwidth overhead of the RS encoding? 15240000 bps (15.24Mbps) - 8160000 bps (8.16Mbps) = 7080000 bps (7.08Mbps). Where has that 7Mbps of throughput gone? EDIT: I tried to read the wiki page on Reed-Solomon but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255 byte codewords, because that may be the max codeword size whilst still maintaining accuracy during transmission; But I don't understand why that means less data is sent?
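    For what it's worth, one reading that makes both quoted figures line up (a back-of-the-envelope interpretation, not something taken from the linked pdf): the 15.24Mbps figure is the pure modulation ceiling, while the 8.16Mbps figure is what the framing can carry if each of the 4000 DMT data frames per second holds at most one 255-byte Reed-Solomon codeword:

        254 sub-carriers x 15 bits x 4000 frames/s = 15,240,000 bit/s = 15.24 Mbps
        255 bytes x 8 bits x 4000 frames/s         =  8,160,000 bit/s =  8.16 Mbps

    On that reading the missing ~7Mbps is not parity overhead as such; it is modulation capacity the frame format simply cannot fill, because a frame never carries more than one maximum-size codeword (and the RS check bytes still have to fit inside those 255 bytes, so the usable payload is lower again).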

    Read the article

  • How to lock Windows7 from installing anything.

    - by Andy
    A family member continually needs me to reinstall his PC after it gets viruses and spyware. He claims that he never downloads anything but evidence suggests otherwise. Unless there is some way that watching 'videos' can get spyware on there. AFAIK the computer is kept up to date. Possible solutions? Make a login where it's impossible for him to install anything. Windows 7 standard account doesn't appear to be enough. Is a standard account enough here? I tried this once before and he still seemed to get IE toolbars installed up the wazoo Somehow make an automated image where if he 'messes up' or even on log off the computer restores the whole drive image. Similar to what I've seen in Kinko's Something I've not thought of.... And I know you are all going to say 'stop fixing it you are an enabler'... yes I know but I'm not going to have another fight with my wife over helping her family... right now I'm doing this to keep her happy not the idiot with the 'video' addiction ;-) So the name of the game is minimizing my overhead.

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good Day Everyone, I have an issue with OpenFiler, a Linux-based operating system that converts a computer system into a SAN/NAS appliance. Here is the problem. In my environment we have two Netapp Storevault 500 appliances to which I normally perform backups over an NFS share. There are two backup cronjobs that use ghettoVCB to back up two groups of VM's. One group is a pool of 3 VMs. This takes 13 mins to complete. A second job that backs up a pool of 5 VMs to a 2nd Storevault appliance takes 2 hours. We then installed Openfiler on an old server that has 2 core Xeon processors. There is a software RAID 5 process in place. When performing the same backups to an NFS Openfiler share, the first backup job, which takes 13 mins, takes around 4 hours. The second backup job, which takes 2 hours, takes almost 10 hours to complete. This is unacceptable!!!! Especially considering the strain placed on the host ESX Server. I assumed that because of the software RAID 5, the overhead on the CPU explained the long backup times. I then installed Openfiler on a 2nd server, an IBM x306 machine which has a P4 Intel processor. This time no software RAID or any RAID at all. A single 750GB hard drive that contained the OS and the rest of the disk used to back up VMs to an NFS share. I performed the first backup job of the pool of 3 VMs. This time the backup job took 1 and 1/2 hours to complete instead of 13 mins!!!!!!!!!! Is Openfiler simply poor at being an NFS Server!!!!!!!!!!!!! Has anyone else had these issues with Openfiler?

    Read the article

  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application that is on a virtual machine (RHEL 6) and its database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the db server mounted and accessible. It needs to write images to that server that are uploaded via the app. Background processes on the db server need to read and write to the same directory, as they perform resizing operations on the uploaded files. Right now none of this is working, because the user ids are different between the two systems. I only need this to work for this one application, so it is way too much overhead to put an LDAP system in place. Can I simply change the user id of this one user in one of the systems, or will that cause mass chaos? UPDATE: The fix worked, at least on local devices. Unfortunately the device I have mounted to the main db server still thinks my user id is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?

    Read the article

  • Server cost for smartphone app with web service

    - by FrankieA
    Hello, I am working on a smartphone application that will require a backend web service - but I am absolutely clueless as to how much it will cost. Web Service will handle: - login of users - cataloging of our user base - holding minimal profile information for users (the only binary data is a display picture which will be < 20k each) - performing some very minor calculation/algorithm before returning results - All the above will be communicated to the server from a smartphone (iPhone/BlackBerry/Android) Bandwidth Requirements: - We want to handle up to 10k users throughout the day. - I predict 10k * 50 HTTP requests a day = 500,000 requests a day * 30 = 15 million requests a month Space Requirements: - Data will be in SQL database. - I predict 1MB/user * 10k = 10GB + overhead. In other words - space is not a big issue. Software Requirements: (unless someone knows an alternative) - Windows Server 2008 + IIS - MSFT SQL Server Note: This is 100% new to me, so please hit me with all you got. Do I need Windows Server or are there alternatives? Is it better to get multiple cheap servers to distribute load? Will Amazon S3 work for me? How about Windows Azure? Thank you!!

    Read the article

  • Convert HTACCESS mod_rewrite directives to nginx format?

    - by Chris
    I'm brand new to nginx and I am trying to convert the app I wrote over from Apache as I need the ability to serve a lot of clients at once without a lot of overhead! I'm getting the hang of setting up nginx and FastCGI PHP but I can't wrap my head around nginx's rewrite format just yet. I know you have to write some simple script that goes in the server {} block in the nginx config but I'm not yet familiar with the syntax. Could anyone with experience with both Apache and nginx help me convert this to nginx format? Thanks! # ------------------------------------------------------ # # Rewrite from canonical domain (remove www.) # # ------------------------------------------------------ # RewriteCond %{HTTP_HOST} ^www.domain.com RewriteRule (.*) http://domain.com/$1 [R=301,L] # ------------------------------------------------------ # # This redirects index.php to / # # ------------------------------------------------------ # RewriteCond %{THE_REQUEST} ^[A-Z]+\ /(index|index\.php)\ HTTP/ RewriteRule ^(index|index\.php)$ http://domain.com/ [R=301,L] # ------------------------------------------------------ # # This rewrites 'directories' to their PHP files, # # fixes trailing-slash issues, and redirects .php # # to 'directory' to avoid duplicate content. # # ------------------------------------------------------ # RewriteCond %{DOCUMENT_ROOT}/$1.php -f RewriteRule ^(.*)$ $1.php [L] RewriteCond %{DOCUMENT_ROOT}/$1.php -f RewriteRule ^(.*)/$ http://domain.com/$1 [R=301,L] RewriteCond %{THE_REQUEST} ^[A-Z]+\ /[^.]+\.php\ HTTP/ RewriteCond %{DOCUMENT_ROOT}/$1.php -f RewriteRule ^([^.]+)\.php$ http://domain.com/$1 [R=301,L] # ------------------------------------------------------ # # If it wasn't redirected previously and is not # # a file on the server, rewrite to image generation # # ------------------------------------------------------ # RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^([a-z0-9_\-@#\ "'\+]+)/?([a-z0-9_\-]+)?(\.png|/)?$ generation/image.php?user=${escapemap:$1}&template=${escapemap:$2} [NC,L]

    Read the article
