Search Results

Search found 2264 results on 91 pages for 'sounds'.

Page 59/91

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies with tight budgets that are short of technical skills or don't have the means to provide long-term IT support. Essentially, they can outsource their services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry, without the usual high buy-in costs or the task of finding and hiring the right specialists, it would seem the way to go, particularly when the salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services". It sounds too good to be true, and so it is.

    Whereas the costs will initially have been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. Once you take into account the extra fees for customization and upgrades, it can end up costing far more than the conventional solution. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that, you can then lose the power to determine your own priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous.

    There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • Alternatives to multiple inheritance for my architecture (NPCs in a Realtime Strategy game)?

    - by Brettetete
    Coding isn't actually that hard. The hard part is writing code that makes sense and is readable and understandable. So I want to become a better developer and build some solid architecture. Specifically, I want to design an architecture for NPCs in a video game. It is a real-time strategy game like Starcraft, Age of Empires, Command & Conquer, etc. I'll have different kinds of NPCs. An NPC can have one to many of these abilities (methods): Build(), Farm() and Attack(). Examples: a Worker can Build() and Farm(); a Warrior can Attack(); a Citizen can Build(), Farm() and Attack(); a Fisherman can Farm() and Attack(). I hope everything is clear so far.

    So now I have my NPC types and their abilities. But let's come to the technical/programmatic aspect: what would be a good programmatic architecture for my different kinds of NPCs? I could have a base class. Actually I think this is a good way to stick with the DRY principle: I can put methods like WalkTo(x,y) in my base class, since every NPC will be able to move. But now comes the real problem: where do I implement my abilities (remember: Build(), Farm() and Attack())? Since the abilities will consist of the same logic, it would be annoying, and would break the DRY principle, to implement them for each NPC (Worker, Warrior, ...). I could implement the abilities within the base class, but that would require some kind of logic that verifies whether an NPC can use ability X (IsBuilder, CanBuild, ...). I think it is clear what I want to express, but I don't feel good about this idea; it sounds like a bloated base class with too much functionality. I use C# as my programming language, so multiple inheritance isn't an option here. That means extra base classes like Fisherman : Farmer, Attacker won't work.
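
    A common alternative to inheritance here is composition: implement each ability exactly once as its own component, and assemble each NPC type from the components it has. Below is a minimal hedged sketch; the interface and class names (IAbility, BuildAbility, etc.) are made up for illustration, not taken from the question:

        using System.Collections.Generic;
        using System.Linq;

        // Each ability's shared logic lives in exactly one place (DRY),
        // and an NPC is just assembled from the abilities it actually has.
        public interface IAbility
        {
            string Name { get; }
            void Execute(Npc self);
        }

        public class BuildAbility : IAbility
        {
            public string Name => "Build";
            public void Execute(Npc self) { /* shared build logic, implemented once */ }
        }

        public class FarmAbility : IAbility
        {
            public string Name => "Farm";
            public void Execute(Npc self) { /* shared farm logic */ }
        }

        public class AttackAbility : IAbility
        {
            public string Name => "Attack";
            public void Execute(Npc self) { /* shared attack logic */ }
        }

        public class Npc
        {
            private readonly Dictionary<string, IAbility> abilities;

            public Npc(params IAbility[] abilities)
            {
                this.abilities = abilities.ToDictionary(a => a.Name);
            }

            public bool Can(string ability) => abilities.ContainsKey(ability);

            public void Use(string ability)
            {
                if (abilities.TryGetValue(ability, out var a)) a.Execute(this);
            }

            public void WalkTo(int x, int y) { /* shared movement logic for all NPCs */ }
        }

        // Assembling the NPC types from the question:
        // var worker    = new Npc(new BuildAbility(), new FarmAbility());
        // var warrior   = new Npc(new AttackAbility());
        // var citizen   = new Npc(new BuildAbility(), new FarmAbility(), new AttackAbility());
        // var fisherman = new Npc(new FarmAbility(), new AttackAbility());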

    Read the article

  • "static" as a semantic clue about statelessness?

    - by leoger
    This might be a little philosophical, but I hope someone can help me find a good way to think about this. I've recently undertaken a refactoring of a medium-sized project in Java to go back and add unit tests. When I realized what a pain it was to mock singletons and statics, I finally "got" what I've been reading about them all this time. (I'm one of those people that needs to learn from experience. Oh well.) So, now that I'm using Spring to create the objects and wire them around, I'm getting rid of static keywords left and right. (If I could potentially want to mock it, it's not really static in the same sense that Math.abs() is, right?) The thing is, I had gotten into the habit of using static to denote that a method didn't rely on any object state. For example:

        // Before
        import com.thirdparty.ThirdPartyLibrary.Thingy;

        public class ThirdPartyLibraryWrapper {
            public static Thingy newThingy(InputType input) {
                return new Thingy.Builder().withInput(input).alwaysFrobnicate().build();
            }
        }
        // called as...
        ThirdPartyLibraryWrapper.newThingy(input);

        // After
        public class ThirdPartyFactory {
            public Thingy newThingy(InputType input) {
                return new Thingy.Builder().withInput(input).alwaysFrobnicate().build();
            }
        }
        // called as...
        thirdPartyFactoryInstance.newThingy(input);

    So, here's where it gets touchy-feely. I liked the old way because the capital letter told me that, just like Math.sin(x), ThirdPartyLibraryWrapper.newThingy(x) did the same thing the same way every time. There's no object state to change how the object does what I'm asking it to do. Here are some possible answers I'm considering. Nobody else feels this way, so there's something wrong with me; maybe I just haven't really internalized the OO way of doing things! Maybe I'm writing in Java but thinking in FORTRAN or some such. (Which would be impressive, since I've never written FORTRAN.) Maybe I'm using staticness as a sort of proxy for immutability, for the purposes of reasoning about code. That being said, what clues should I leave in my code so that whoever comes along to maintain it knows what's stateful and what's not? Perhaps this should just come for free if I choose good object metaphors? E.g., thingyWrapper doesn't sound like it has state independent of the wrapped Thingy, which may itself be mutable. Similarly, a thingyFactory sounds like it should be immutable but could have different strategies that are chosen among at creation. I hope I've been clear, and thanks in advance for your advice!
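
    One conventional clue that doesn't rely on static is to make the statelessness structural: no fields (or only final ones), so the compiler enforces what naming can only hint at. Documentation-only annotations can back this up. The sketch below uses @Immutable from the jcip-annotations library (popularized by Java Concurrency in Practice); that library being on the classpath is an assumption, not a standard-library feature:

        import net.jcip.annotations.Immutable;

        // Hypothetical sketch: with no fields at all, every call must behave
        // like Math.sin(x) - same input, same output - and the annotation
        // tells the maintainer that this is deliberate, not accidental.
        @Immutable
        public final class ThirdPartyFactory {
            // No fields: nothing to mutate, nothing stateful to worry about.
            public Thingy newThingy(InputType input) {
                return new Thingy.Builder().withInput(input).alwaysFrobnicate().build();
            }
        }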

    Read the article

  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up. I'm a senior in a good computer science program, and I'm graduating early. I'm about to start interviews and all that. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling.

    So today I'm meeting a fellow student for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, and asks if we know where he can go to sign up to recruit developers. It quickly evolves into a long conversation: he's the CEO of a seems-to-be-doing-well start-up, hiring passionate interns and full-timers. Sounds great! I take one look at his site on my own computer later and immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god: it's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug.

    But that sets me thinking: how could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints; I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings. Somewhere in there is a question. Can you see it? I'm asking for general advice here, maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood that a decent but only moderately experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.

    Read the article

  • Modelling highly specific business requirements

    - by AndyBursh
    How can one go about modelling highly specific business requirements which have no precedent in the system? Take for example the following requirement:

        When a purchase order contains N lines, is over X value in total and is being
        recorded against project Y, an email needs to be sent to persons A and B with
        the details.

    This requirement supplements other requirements surrounding purchase orders, but comes in at a much later date, in response to some ongoing problem elsewhere in the business. Persons A and B are not part of any role or group in the system, and don't hold any specific responsibility; they are simply the two people the business has appointed to receive these emails in this very specific case. Projects are also data driven, so project Y has no special properties to distinguish it from any other project; the only way to identify it is to compare its identifier to a magic number. How can one go about modelling this kind of case without introducing too much additional complexity? Off the top of my head, there are a couple of options:

    1. Perform the checks and actions inline with the existing code. Here we find the correct spot in the code, check the conditions in the requirement and send the emails to hardcoded addresses. Of course this is fraught with issues. At the very least it stops working if one of these people leaves or changes their email address. At worst, you have to ensure that any tests and test data are aware that additional actions are taken for a specific set of criteria.

    2. Introduce some form of events system. Here we introduce an eventing system, so that we can react to some event and fulfil the requirement outside the usual path of execution (see the sketch below). This sounds like a cleaner solution than option 1, but the work involved is probably overkill for this one small requirement. That said, having it in place would allow the system to handle these kinds of specific requirements consistently and easily in the future.

    Are there any other (good/better) ways of handling highly specific requirements? I mean other than telling the other parts of the business no!
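
    For a sense of what option 2 could look like, here is a hedged C# sketch: a domain event raised when a purchase order is recorded, with the one-off rule isolated in a single handler and the magic values (project identifier, thresholds, recipients) pulled from configuration rather than scattered through the core code. All names here are hypothetical:

        using System;

        // Raised by the core purchase-order code; it knows nothing about the rule.
        public record PurchaseOrderRecorded(Guid ProjectId, int LineCount, decimal Total);

        public record NotificationSettings(Guid ProjectId, int MinLines, decimal MinTotal, string[] Recipients);

        public interface IEmailSender
        {
            void Send(string[] to, string subject, string body);
        }

        public interface IHandles<TEvent>
        {
            void Handle(TEvent e);
        }

        // The one-off business rule, isolated in its own handler and driven by config,
        // so persons A and B can change without touching the purchase-order code.
        public class ProjectNotificationHandler : IHandles<PurchaseOrderRecorded>
        {
            private readonly NotificationSettings settings;
            private readonly IEmailSender email;

            public ProjectNotificationHandler(NotificationSettings settings, IEmailSender email)
            {
                this.settings = settings;
                this.email = email;
            }

            public void Handle(PurchaseOrderRecorded e)
            {
                if (e.ProjectId == settings.ProjectId
                    && e.LineCount >= settings.MinLines
                    && e.Total > settings.MinTotal)
                {
                    email.Send(settings.Recipients, "Purchase order alert", $"PO details: {e}");
                }
            }
        }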

    Read the article

  • Turn-Based RPG Battle Instance Layout For Larger Groups

    - by SoulBeaver
    What a title, eh? I'm currently designing a video game; a turn-based RPG like Final Fantasy (because everybody knows Final Fantasy). It's a 2D sprite game. These are my ideas for combat:

    - The player has a group of 15 members (main character included).
    - During battle, five of the group are designated as active, and appear in the battle.
    - These five may be switched out at leisure, or when one of the five dies.
    - At any time, the waiting members can cast buffs, be healed by the active members, or perform special attacks.
    - Battles should contain 10+ monsters at least. I'm aiming for 20, but I'm not sure if that's possible yet.
    - Battles should feel larger than normal due to the interaction of waiting members, active members and the increased number of monsters per battle.
    - The player has two rows in which to put the active members: front and back.
    - Depending on the implementation, I might allow comboing of player attacks and skills.

    These are just design ideas, so beware! I have not been able to test this out yet; I have no idea whether any of these ideas bunched together will make for a compelling game. What sounds good on paper isn't necessarily good in practice! What I'm asking now is how to create the layout for this. My starting point is the battles in Final Fantasy VI, with up to 5-6 monsters on the left and the characters on the right, or monsters on both sides if it's a pincer attack. However, this view would not be feasible with my goal of 20 monsters and 5 characters. All the monsters on the left would appear cluttered unless I scale them far, far back. If I create a pincer-like map, then there would be no real pincer attack possible. If I space the monsters out, I force the player to scroll the screen, a game mechanic I've come across before and not enjoyed, imho. My question is: does anybody have any layouts or guides for designing battle maps in turn-based RPGs, especially with a larger number of enemies taken into consideration? How should it look? I am not asking for specific combat mechanics, just the layout for the moment.
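
    Purely as an illustration of one way to attack the clutter problem (not a recommendation, and all names are made up), here is a small C# sketch that packs N enemies into staggered columns on a fixed-size screen, shrinking deeper columns slightly to fake depth, so 20 sprites can fit without scrolling:

        using System;
        using System.Collections.Generic;

        // Hypothetical layout sketch: stagger enemies into columns on the left
        // half of a fixed 640x360 battle screen, scaling deeper columns down.
        public static class BattleLayout
        {
            public record Placement(float X, float Y, float Scale);

            public static List<Placement> PlaceEnemies(int count, int perColumn = 5)
            {
                var placements = new List<Placement>();
                const float columnWidth = 70f, rowHeight = 56f, topMargin = 60f;

                for (int i = 0; i < count; i++)
                {
                    int col = i / perColumn;                       // deeper column = further "back"
                    int row = i % perColumn;
                    float stagger = (col % 2) * rowHeight * 0.5f;  // offset alternate columns
                    float scale = 1.0f - 0.12f * col;              // fake depth by shrinking

                    placements.Add(new Placement(
                        X: 40f + col * columnWidth,
                        Y: topMargin + row * rowHeight + stagger,
                        Scale: Math.Max(scale, 0.6f)));
                }
                return placements;
            }
        }
        // e.g. BattleLayout.PlaceEnemies(20) yields four staggered columns of five.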

    Read the article

  • I didn't mean to become a database developer, but now I am. Should I stop or try to get better?

    - by pretlow majette
    20 years ago I couldn't afford even a cheap POS program when I opened my first surf shop in the Virgin Islands. I bought a copy of Paradox (remember that?) in 1990 and spent months in a back room scratching out a POS application. Over many iterations, including a switch to Access (2000)/SQL Server (2003), I built a POS and back-office solution that runs four stores with multiple cash registers, a warehouse and an office. Until recently, all my stores were connected to the same LAN (in a small shopping center) and performance wasn't an issue. Now that we've opened a location in the States, that's changed. Connecting to my local server via the internet has slowed that location's application to a crawl. This is partly due to the slow and crappy DSL service we have in the Virgin Islands, and partly due to my less-than-professional code and SQL. With other far-away stores in the works, I need a better solution.

    I like my application. My staff knows it well, and I'm not inclined to take on the expense of a proper commercial solution. So where does that leave me? I should probably host my SQL online to sidestep the slow DSL here. I think I can handle cleaning up my SQL queries to speed that up a bit. What about Access? My version seems so old, but I don't like the newer versions with the 'ribbon'. There are so many options... Should I be learning Visual Studio with an eye on moving completely to the web? Will my VBA skills help me at all there? I don't have the luxury of a year at the keyboard to figure it out anymore. What about DotNetNuke, SharePoint, or LightSwitch? They all seem like possibilities, but even understanding their capabilities is daunting. I'm pretty deep into it, but maybe I should bail and hire a consultant or programmer. That sounds expensive though, and there's no guarantee there either... Any advice would be greatly appreciated. Or, if anybody is interested in buying a small chain of surf shops...

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion and the input and output are a mess: you can only enter values in the US system, but you can choose the output to be US or metric, and the code to do the conversions is everywhere. So I want to organize this and put together some simple rules:

    - The user can enter values in either US or metric, and the user interface will take care of marking this properly.
    - All units will be stored internally as US, since the majority of the system already stores its data like that and depends on this. It shouldn't matter, I suppose, as long as you don't mix units.
    - All output will be in US or metric, depending on the user's selection/preference.

    In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that is already returned like this: 4 x 13/16" screws, which means "four 13/16-inch screws". I need the 13/16" to be in either US or metric. Where exactly do I put the conversion code for this unit? The above already mixes presentation and data, and the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x " and the " screws", but the question remains: where do I put the conversion code? Different locations for the conversion routines:

    1. Right now the string is in the class where it's produced. I can put conversion code right into that class, and it may be a good solution. Except then, to be consistent, I will be putting conversion procedures everywhere in the code at the data source, or right after reading from the database. The problem, though, is that I think my code would then have to deal with two systems throughout the entire codebase.

    2. According to the rules above, my idea was to put it in the view script, aka the last chance to modify the data before it is shown to the user. That may be the right thing to do, but it strikes me that it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up, or do extra parsing, as in my case above.)

    3. Another solution is to do this somewhere in the data prep step before the view, aka somewhere in the middle, after the data source but before the view. This strikes me as messy, and that could be the reason why my codebase is in such a mess right now.

    It seems that there is no best solution. What do I do?
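
    As a hedged sketch of one way to enforce rule 2 (canonical US units internally) while keeping rule 3 (conversion only at output), the C# value type below stores inches and converts only when the presentation layer asks for a rendering. All names are made up for illustration:

        using System;

        // Hypothetical sketch: values are stored internally in US units (inches),
        // and conversion happens only when the view formats them for display.
        public enum UnitSystem { US, Metric }

        public readonly struct Length
        {
            private readonly double inches;   // canonical internal representation

            private Length(double inches) => this.inches = inches;

            public static Length FromInches(double inches) => new Length(inches);
            public static Length FromMillimeters(double mm) => new Length(mm / 25.4);

            public string Format(UnitSystem system) =>
                system == UnitSystem.US
                    ? $"{inches:0.###}\""        // renders in inches
                    : $"{inches * 25.4:0.#} mm"; // renders in millimetres

            public static void Main()
            {
                var screw = Length.FromInches(13.0 / 16.0);
                // The same stored value, rendered per user preference at the view layer:
                Console.WriteLine($"4 x {screw.Format(UnitSystem.US)} screws");
                Console.WriteLine($"4 x {screw.Format(UnitSystem.Metric)} screws");
            }
        }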

    Read the article

  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the internet, and I'm trying to figure out the safest way to do that. The game is tile based; you can only move in 4 directions, and every move moves the sprite 32px (over time, of course). Now, if I simply sent each move action to the server, which would broadcast it to all players, then while the walk key is held down I would have to get each next command to the server, and out to all clients, in time, or the movement won't be smooth anymore. I've seen this in other games, and it can get ugly pretty quickly, even without lag, so I'm wondering if this is even a viable option. It does seem like a very good method for single player, though, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) to add a path to a queue that's walked along.

    The other thing that came to my mind was sending the information that someone started moving in some direction, and again once they stopped or changed direction, together with the position, so that the sprite will appear at the correct position, or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work out I'd need some kind of queue where incoming direction changes and the like are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it sounds overcomplicated, although it might be the only way to do this without risking stuttering: if a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there'll be stuttering as well, of course...

    I'm having a hard time deciding on a method, and I couldn't really find any examples for this yet. My main problem is keeping the tile movement smooth, which is why other topics regarding synchronization of pixel-based movement aren't helping much. What is the "standard" way to do this?
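
    To make the second approach concrete, here is a rough hedged sketch (C#, hypothetical types): each remote command carries the direction plus the authoritative tile it started from, the sprite always finishes its current 32px step, and queued commands are applied only on tile boundaries, which is what keeps the motion smooth:

        using System;
        using System.Collections.Generic;

        // Hypothetical client-side sketch of the "queue direction changes" idea.
        public enum Direction { None, Up, Down, Left, Right }

        public record MoveCommand(Direction Dir, int TileX, int TileY);

        public class RemotePlayer
        {
            private const int TileSize = 32;
            private readonly Queue<MoveCommand> pending = new();
            private Direction facing = Direction.None;
            private float x, y;              // render position in pixels
            private float targetX, targetY;  // pixel position of the tile being walked to

            public void OnNetworkCommand(MoveCommand cmd) => pending.Enqueue(cmd);

            public void Update(float dt, float speed)
            {
                if (x == targetX && y == targetY)          // step finished: pick next action
                {
                    if (pending.Count > 0)
                    {
                        var cmd = pending.Dequeue();
                        facing = cmd.Dir;
                        x = cmd.TileX * TileSize;          // snap to the authoritative tile
                        y = cmd.TileY * TileSize;
                        targetX = x;
                        targetY = y;
                    }
                    if (facing == Direction.None) return;  // stopped, nothing queued
                    targetX = x + TileSize * (facing == Direction.Right ? 1 : facing == Direction.Left ? -1 : 0);
                    targetY = y + TileSize * (facing == Direction.Down  ? 1 : facing == Direction.Up   ? -1 : 0);
                }
                x = MoveToward(x, targetX, speed * dt);    // walk smoothly toward the target tile
                y = MoveToward(y, targetY, speed * dt);
            }

            private static float MoveToward(float from, float to, float maxStep) =>
                Math.Abs(to - from) <= maxStep ? to : from + Math.Sign(to - from) * maxStep;
        }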

    Read the article

  • Website directory structure regarding subdomains, www, and "global" content

    - by Pawnguy7
    I am trying to make a homemade HTTP server. It occurs to me, though, that I never fully understood what you might call "relativity" among web pages. I have come across the fact that www. is a subdomain, and I understand its original purpose. It sounds like, in general, you would redirect (is that 301 or 302?) it to a non-subdomain, sort of; as in, redirecting www.example.com to example.com. I am not entirely sure how to make this work when retrieving files for an HTTP server, though. I would assume that example.com would be the root, and www manifests as a folder within it. I am unsure.

    There is also the question of multi-level subdomains, e.g. subdomain2.subdomain1.example.com. It seems to me they are structured "backwards", where you go leftwards from the root in the folder structure. In this situation, subdomain2 is a directory within subdomain1, which is a directory in the root.

    Finally, it occurs to me I might want a sort of global location. For example, maybe all subdomains still use one image as a logo. It makes more sense to me that there is one image, rather than each subdomain having a copy. In the same way, albeit more doubtfully, you might have global CSS (though that is a bit contrary to the idea of a subdomain in the first place), or a JavaScript file that is commonly used (more efficient than each having its own copy, and better for organization purposes). And maybe you have a global 404 page. I think this might be the case where you have user-created subdomains (say bloggername.example.com), where example.com still has a default 404 for when either a subdomain does not exist or a page does not exist under a valid blogger.

    I am confused about what the directory structure for this should be. To summarize: whether and how to have global files that aren't in a subdirectory; how www. (or a non-www or other subdomain) should be handled; and the pattern for root/subdomain, as well as subdomains within subdomains (order-wise). Sorry this is multiple questions, but I feel that at the root they are all related to the directory structure.
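
    Since the server is homemade, one way to see the moving parts is a toy sketch like the one below (Python, purely illustrative): the Host header, not the filesystem nesting, decides which document root is used, so subdomains can live side by side as sibling folders, with a shared folder for global assets and the fallback 404:

        # Toy sketch (not production code): pick a document root from the Host header.
        # Assumed layout, sibling folders rather than nested ones:
        #   sites/example.com/           <- bare domain (www is aliased onto it)
        #   sites/blog.example.com/      <- each subdomain is its own sibling folder
        #   sites/_global/               <- shared logo, CSS, 404 page, etc.
        import http.server
        import os

        SITES = "sites"
        DEFAULT = "example.com"

        class HostRoutingHandler(http.server.SimpleHTTPRequestHandler):
            def translate_path(self, path):
                host = self.headers.get("Host", DEFAULT).split(":")[0].lower()
                if host.startswith("www."):          # treat www.example.com as example.com
                    host = host[len("www."):]
                docroot = os.path.join(SITES, host)
                if not os.path.isdir(docroot):       # unknown subdomain -> global fallback
                    docroot = os.path.join(SITES, "_global")
                # Delegate path joining/sanitizing to the base class, rooted at docroot.
                self.directory = docroot
                return super().translate_path(path)

        if __name__ == "__main__":
            http.server.HTTPServer(("", 8080), HostRoutingHandler).serve_forever()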

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new ways of doing things confusing. I guess my first question is: does anyone have figures on the percentage of gamers that can run each version of OpenGL? What's the market share like for 2.x, 3.x, 4.x? I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they list a minimum of a GeForce 8 series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supporting up to a certain version, or OS X only supporting 3.2, or all kinds of other things. Overall, I'm just confused as to how much support there is for the various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features, or something. Basically, even within the learning material I'm seeing discrepancies, and I just don't know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read in various articles and reviews that you want to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing things is more complicated, yet the old way is bad. Just as a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well because, as far as I understand, that's only in 4.x? (Just btw, my desktop supports 4.2.)
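
    To make the "old vs. new" distinction concrete, here is a minimal hedged fragment showing both styles side by side. It is not a complete program: it assumes a 3.x+ context has been created, an extension loader (e.g. GLEW) has been included, and, for the modern path, a shader program has already been compiled and linked:

        /* Old, deprecated immediate mode (removed from the 3.x+ core profile): */
        glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();

        /* Modern way (3.x+ core): upload once into a buffer, draw with shaders. */
        float verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(0);

        glUseProgram(shaderProgram);   /* assumes a compiled/linked program exists */
        glDrawArrays(GL_TRIANGLES, 0, 3);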

    Read the article

  • Variable number of GUI Buttons

    - by Wakaka
    I have a generic HTML5 Canvas GUI Button class and a Scene class. The Scene class has a method called createButton(), which creates a new Button with an onclick parameter and stores it in a list of buttons. I call createButton() for all UI buttons when initializing the Scene. Because buttons can appear and disappear very often during rendering, the Scene first deactivates all buttons (temporarily removing their onclick, onmouseover etc. properties) before each render frame. During rendering, the renderer then activates the required buttons for that frame. The problem is that part of the UI requires a variable number of buttons, and their onclick, onmouseover etc. properties change frequently. An example is a buffs system: the UI will list all buffs as square sprites for the currently selected unit, and mousing over each square will bring up a tooltip with some information on the buff. But the number of buffs is variable, so I won't know how many buttons to create at the start. What's the best way to solve this problem?

    1. Create buttons on the fly during rendering. Thus I will only have buttons when I require them. After the render frame these buttons would be useless and removed.

    2. Create a fixed set of buttons, assuming the number of buffs per unit won't exceed some maximum. During each render frame, activate the buttons accordingly and set their onmouseover properties.

    3. Assign a button to each Buff instance. This sounds wrong, as the buff button is part of the GUI, which can only have one unit selected; assigning a button to every single Buff in the game seems to be overkill. Also, I would need to change the button's position every render frame, since its order in the unit's list of buffs matters.

    Any other solutions? I'm actually quite for idea (1), but am worried about the memory/time cost of creating a new Button() object every render frame. But this is JavaScript, where object creation is oh-so-common ({} mainly) thanks to automatic garbage collection. What is your take on this? Thanks!

    P.S. My game is in JavaScript, and I know I could use HTML buttons, but I would like to make my game purely Canvas-based.
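
    A common middle ground between ideas (1) and (2) is an object pool: the renderer "creates" buttons freely each frame, but old instances are recycled, so the per-frame new Button() churn mostly disappears. A minimal sketch with hypothetical names:

        // Hypothetical sketch: a tiny pool so per-frame button churn is recycled.
        function ButtonPool(createButton) {
            this.create = createButton; // factory, e.g. the Scene's createButton()
            this.free = [];
            this.inUse = [];
        }

        ButtonPool.prototype.acquire = function (x, y, onclick, onmouseover) {
            var b = this.free.pop() || this.create();
            b.x = x;
            b.y = y;
            b.onclick = onclick;
            b.onmouseover = onmouseover;
            this.inUse.push(b);
            return b;
        };

        // Call once at the start of each render frame: everything goes back to the
        // pool, and the frame re-acquires exactly the buttons it needs.
        ButtonPool.prototype.releaseAll = function () {
            for (var i = 0; i < this.inUse.length; i++) {
                var b = this.inUse[i];
                b.onclick = null;      // deactivate, as the Scene already does
                b.onmouseover = null;
                this.free.push(b);
            }
            this.inUse.length = 0;
        };

        // Usage per frame (names assumed):
        //   pool.releaseAll();
        //   selectedUnit.buffs.forEach(function (buff, i) {
        //       pool.acquire(10 + i * 34, 400, null, function () { showTooltip(buff); });
        //   });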

    Read the article

  • F# and the useful infinite Sequence (I think)

    - by MarkPearl
    So I have seen a few posts by other F# fans on solving Project Euler problems. They looked really interesting, and I thought that with my limited knowledge of F# I would attempt a few. The first one I had a look at was problem 5, which says: "2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest number that is evenly divisible by all of the numbers from 1 to 20?" So I jumped into coding it and straight away got stuck. The C# programmer in me wants to write a loop, starting at one and dividing every number by 1 to 20 to see if they all divide, and once a match is found, there is your solution. Obviously not the most elegant way, but a good old brute-force approach. However, I am pretty sure this is not the F# way...

    So after a bit of research I found sequences and how useful they are. Sequences seemed like the beginning of an approach to solve my problem. In my head I thought: create a sequence, then start at the beginning of it and move through it until you find a value that is divisible by 1 to 20. Sounds reasonable? So the question is: how would you create a sequence that you are sure will be large enough to hold the solution to the problem? Well... you can't know! Some more googling and I found what I would call infinite sequences, something that looks like this:

        let nums = 1 |> Seq.unfold (fun i -> Some (i, i + 1))

    My interpretation of this would be as follows: create a sequence, and whenever the next element is asked for, yield the current state and add 1 to it for next time (I would appreciate someone helping me word this right, functionally). Something I don't fully understand yet is the forward pipe operator (|>), which I think plays a key role in this code. With this in hand I was able to code a basic optimized solution to the problem. I'm going to go over it some more before I post the full code, just in case!
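
    For what it's worth, one hedged way the pieces could fit together is below: a brute-force sketch that lazily walks an infinite sequence and takes the first element divisible by everything from 1 to 20. Stepping by 20 (any answer must at least be a multiple of 20) keeps the brute force bearable; this is only a sketch, not necessarily the optimized solution the post refers to:

        // Brute-force sketch: first number evenly divisible by all of 1..20.
        // The sequence is lazy, so Seq.find only generates as many elements as it needs.
        let candidates = 20 |> Seq.unfold (fun i -> Some (i, i + 20))

        let answer =
            candidates
            |> Seq.find (fun n -> [1 .. 20] |> List.forall (fun d -> n % d = 0))

        printfn "%d" answer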

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History: We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details; the gateway then returns him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, entering their credit card details with an additional login on another site, etc.).

    Intention & problem description: We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cope with all (or rather most) of the things that may happen during processing, bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples:

    User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for the response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option, but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the user's connection?

    Database goes away: This sounds over-complicated, but with e.g. a web server in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, in PHP, to start something like an SQL "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? And should neither commit nor rollback have happened, I could somehow "lock" the user to prevent him from paying again or to properly account for payments, but how?

    And what else do I need to consider technically? None of the integration examples of e.g. WorldPay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
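
    One hedged pattern that addresses both examples (a sketch, not a vetted integration): journal the payment attempt with a 'pending' status before the XML request ever leaves the server, and record the outcome afterwards; a reconciliation job can later query the gateway by reference for any row left 'pending'. Table, column and function names below are made up:

        <?php
        // Hypothetical sketch: write a journal row first, then talk to the gateway.
        // If the user aborts, or the response-write fails, the 'pending' row remains
        // as evidence, and a reconciliation job can ask the gateway about it later.

        ignore_user_abort(true);   // keep running even if the browser goes away

        // Placeholder for whatever XML client the provider's API requires.
        function send_gateway_request(string $ref, int $orderId, float $amount): object
        {
            // ... build and POST the XML, parse the response ...
            return (object) ['success' => true, 'raw' => '<response/>'];
        }

        $orderId = 1234;    // from the order being paid (example values)
        $amount  = 59.95;

        $pdo = new PDO('mysql:host=dbhost;dbname=shop', 'user', 'pass', [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);

        // 1. Journal the attempt BEFORE contacting the provider.
        $reference = uniqid('pay_', true);
        $stmt = $pdo->prepare(
            'INSERT INTO payment_attempts (reference, order_id, amount, status)
             VALUES (?, ?, ?, "pending")'
        );
        $stmt->execute([$reference, $orderId, $amount]);

        // 2. Only now send the XML request.
        $response = send_gateway_request($reference, $orderId, $amount);

        // 3. Record the outcome; if THIS write fails, the row stays 'pending' and
        //    the reconciliation job asks the gateway for the result of $reference.
        $stmt = $pdo->prepare(
            'UPDATE payment_attempts SET status = ?, gateway_response = ? WHERE reference = ?'
        );
        $stmt->execute([$response->success ? 'paid' : 'failed',
                        $response->raw, $reference]);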

    Read the article

  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst case (XML data contract serialization) and the best case (in-memory MSMQ) for performance. The nature of the message means that our serialized data ends up at about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16GB of memory usage and growing, just for queueing, and performance also suffers when memory usage is high, as the machine starts swapping. We're already using MSMQ's self-cleanup behavior. I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queue an identifier, but the performance there was very slow (1000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need higher throughput[1]. The concept worked very well in theory, but performance was not up to the task.

    The usage pattern has one service acting as a router, which does all reads. The other services attach information based on their 3rd-party hook and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for a while until the 3rd parties respond appropriately. The services account for this and have appropriate sleeping behaviors; we utilize the priority field of the message for this reason.

    So, my question is: what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if the better way is to offload serialization to a document store. Hence, my question.

    [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While "1000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.
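
    On the serialization half of the question, a compact binary contract serializer often cuts a 12-15 KB XML payload substantially without touching the queueing layer. The sketch below uses protobuf-net, assuming it suits the data contracts; the message type is invented for illustration:

        using System;
        using System.IO;
        using ProtoBuf;   // protobuf-net NuGet package (assumed available)

        // Invented message type, standing in for the real 12-15 KB contract.
        [ProtoContract]
        public class WorkMessage
        {
            [ProtoMember(1)] public Guid Id { get; set; }
            [ProtoMember(2)] public int Priority { get; set; }
            [ProtoMember(3)] public string Payload { get; set; }
        }

        public static class SerializationDemo
        {
            public static byte[] ToBytes(WorkMessage msg)
            {
                using var ms = new MemoryStream();
                Serializer.Serialize(ms, msg);   // compact binary, versionable via field numbers
                return ms.ToArray();
            }

            public static WorkMessage FromBytes(byte[] data)
            {
                using var ms = new MemoryStream(data);
                return Serializer.Deserialize<WorkMessage>(ms);
            }
        }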

    Read the article

  • HD Tune warning for "Reallocated Event Count" with a new/unused drive. How serious is that?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that? The thing is that the drive is practically new: I pulled it out of a new laptop over a year ago and have never used it since. Right now it only has 53 "Power On" hours, which sounds about right, since I only had it running a few evenings overnight before switching it for something more performant. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed, since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it: it is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive without having it on my conscience, or had I better cancel the deal? In other words, can the drive be used safely for years to come, or should I throw it away? I'm running a sector test now to see if there are any real problems. I will post the results as soon as they're available.
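
    One way to cross-check what HD Tune reports is smartmontools, which prints the raw SMART attribute table (Reallocated_Event_Count is attribute 196) and can drive a full self-test like the sector scan mentioned above; for example (device name will differ on your system):

        smartctl -A /dev/sda          # print the SMART attribute table, raw values included
        smartctl -t long /dev/sda     # queue a full surface self-test
        smartctl -l selftest /dev/sda # view the self-test results afterwards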

    Read the article

  • error reading keytab file krb5.keytab

    - by Banjer
    I've noticed these kerberos keytab error messages on both SLES 11.2 and CentOS 6.3: sshd[31442]: pam_krb5[31442]: error reading keytab 'FILE: / etc/ krb5. keytab' /etc/krb5.keytab does not exist on our hosts, and from what I understand of the keytab file, we don't need it. Per this kerberos keytab introduction: A keytab is a file containing pairs of Kerberos principals and encrypted keys (these are derived from the Kerberos password). You can use this file to log into Kerberos without being prompted for a password. The most common personal use of keytab files is to allow scripts to authenticate to Kerberos without human interaction, or store a password in a plaintext file. This sounds like something we do not need and is perhaps better security-wise to not have it. How can I keep this error from popping up in our system logs? Here is my krb5.conf if its useful: banjer@myhost:~> cat /etc/krb5.conf # This file managed by Puppet # [libdefaults] default_tkt_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC default_tgs_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC preferred_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC default_realm = FOO.EXAMPLE.COM dns_lookup_kdc = true clockskew = 300 [logging] default = SYSLOG:NOTICE:DAEMON kdc = FILE:/var/log/kdc.log kadmind = FILE:/var/log/kadmind.log [appdefaults] pam = { ticket_lifetime = 1d renew_lifetime = 1d forwardable = true proxiable = false retain_after_close = false minimum_uid = 0 debug = false banner = "Enter your current" } Let me know if you need to see any other configs. Thanks. EDIT This message shows up in /var/log/secure whenever a non-root user logs in via SSH or the console. It seems to only occur with password-based authentication. If I do a key-based ssh to a server, I don't see the error. If I log in with root, I do not see the error. Our Linux servers authenticate against Active Directory, so its a hearty mix of PAM, samba, kerberos, and winbind that is used to authenticate a user.

    Read the article

  • i5 540M or i7 720QM for laptop running VMs and software development tools?

    - by Donald Hughes
    I'm a software developer, and the machine would primarily be running Windows 7 as the operating system. On a typical day, I might, at any given moment, be running Visual Studio, Expression Web, SQL Server Developer (and Management Console), IIS, Photoshop, a dozen browser tabs in 2-3 different browsers, Skype video chat, streaming music, and a couple of VMs (WinXP and Ubuntu) for testing/experimentation. Obviously, RAM is a concern, which is why I plan to use 8 GB, so I can devote enough to the VMs for them to be usable. I'm also tempted to use an ExpressCard SSD for storing the VM disks, to ease disk contention. And I know that's asking a lot from a laptop, and I should just use a desktop, but I need to be able to take my work with me between several locations.

    It seems that at a reasonable price point, it comes down to the i5 540M versus the i7 720QM. I'm leaning toward the i7, since it would allow me to dedicate a whole hyperthreaded core to each VM and still have two cores left for the primary OS. I've heard that the i5 has better battery life, but I'm curious whether, for my scenario, there would be a meaningful difference. I don't usually work without a plug, but I do occasionally ride the train or fly, and it would be nice to have at least 3 hours of juice for unusual circumstances. And, finally, for this usage scenario, would a dedicated video option be preferred over the i5's integrated video? It sounds like Visual Studio 2010 (and Windows 7) can take advantage of the video card.

    Read the article

  • Ideal Bacula appliance?

    - by Ricket
    I'm an intern at a small company, and we (the IT department of two) manage <100 client computers and a handful of servers. Currently we're using a company's appliance to handle backup: it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies, but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what serverfault is for, right? :)

    So anyway, I was looking at Bacula. The feature list sounds great and the documentation is plentiful, but it's only software. So my question is: what is the ideal backup server to run the Bacula server software on? And not just the server, but other related appliances too. Our current backup appliance uses only hard drives, not tape drives. It has several plugged in at one time, in hot-swap bays on the front of the machine. I couldn't help but notice, though, that it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with 2 more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? Maybe I'm being naive; I'm sure Dell (and any other computer company) sells them in the small business section of their website, but I wanted to make sure there isn't some other, more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.

    Read the article

  • Audio Static/Interference regardless of audio interface?

    - by Tom
    I am currently running a media center/server on a Lubuntu machine. The machine specs:

    - Core 2 Duo Extreme
    - EVGA SLI 680i motherboard
    - 2 GB DDR2 RAM
    - 3 hard drives, no RAID: WD Caviar Black, WD Caviar Green, and Samsung Spinpoint
    - Galaxy GTX 220 1GB graphics card
    - External USB Creative X-Fi Extreme card
    - 550W power supply

    This machine is hooked up through an optical cable to an Onkyo HTR340 receiver via the X-Fi card. Whenever I play any audio, regardless of whether it is through XBMC, the default audio player, a flash video, etc., I get a horrible static sound that randomly gets louder. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4 The static comes in randomly, sometimes going away for short periods, but it always comes back eventually. So far I have tried everything I could think of:

    - Reinstalling the OS
    - Installing/upgrading/repairing PulseAudio/ALSA
    - Installing alternate OSes: straight Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7
    - Switching audio from the external card to internal: optical, audio out through HDMI, audio out through headphones
    - Different ports on the receiver (my main desktop sounds fine on the same sound system)
    - Different optical cables
    - Unplugging everything unnecessary from the motherboard (1 HD, 1 stick of RAM, 1 keyboard)
    - Swapping out the RAM
    - Swapping out the motherboard
    - Replacing the graphics card (it was replaced because the fan was noisy, not specifically for this problem)
    - Different hard drives
    - Swapping the power supply
    - Disabling onboard audio

    Pretty much everything short of swapping the CPU. I haven't been able to narrow down the problem, and it is getting frustrating. Is it possible that the CPU is faulty and might cause a problem such as this, or that the PC case is shorting out the motherboard? Any kind of suggestion will be appreciated.

    Read the article

  • Cisco ASA Act as a Hardware Security Module?

    - by Derek
    Hello, we have a partner that is requiring us to get an HSM for a web application that we host for them. This is something new for us; we've always installed our SSL certificates on our web servers and never needed a hardware device. We currently have 2 Cisco ASA 5510 firewalls in an active/standby configuration. Both ASAs have an ASA-SSM-10 security module installed. The web application is a standard HTTPS web page with no authentication required. I was wondering if we could use our Cisco ASAs to meet this requirement, or if we'll have to buy another device. I did some searching and read about Cisco's clientless WebVPN feature. It sounds like it might work, but I'm not sure. We basically want the ASA to handle the SSL and proxy the connection to our web servers. We do not want to prompt for a username or password to connect, or show any portals; just display the web page. If the ASA cannot do this, does anyone have any recommendations for network-attached hardware security modules? We are using VMware vCenter, so we'd rather have an external device attached to the network than buy HSM cards for every ESXi host. Thanks, Derek

    Read the article

  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two gmail accounts, and I want to configure my local postfix server as a client that does SASL authentication with smtp.gmail.com:587, using credentials that depend on the sender address. So, let's say my gmail accounts are [email protected] and [email protected]. If I send a mail with [email protected] in the From header field, then postfix should use the credentials [email protected]:passwd1 to do SASL authentication with the gmail SMTP server. Similarly with [email protected], it should use [email protected]:passwd2. Sounds fairly simple. Well, I followed the official postfix documentation at http://www.postfix.org/SASL_README.html, and I ended up with the following relevant configurations:

        /etc/postfix/main.cf:
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sender_dependent_authentication = yes
        sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
        smtp_tls_security_level = secure
        smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem
        smtp_tls_CApath = /etc/ssl/certs
        smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
        smtp_tls_session_cache_timeout = 3600s
        smtp_tls_loglevel = 1
        tls_random_source = dev:/dev/urandom
        relayhost = smtp.gmail.com:587

        /etc/postfix/sasl_passwd:
        [email protected]     [email protected]:passwd1
        [email protected]     [email protected]:passwd2
        smtp.gmail.com:587         [email protected]:passwd1

        /etc/postfix/sender_relay:
        [email protected]     smtp.gmail.com:587
        [email protected]     smtp.gmail.com:587

    After I was done with the configurations I did:

        $ postmap /etc/postfix/sasl_passwd
        $ postmap /etc/postfix/sender_relay
        $ /etc/init.d/postfix restart

    The problem is that when I send a mail from [email protected], the message ends up at the destination with sender address [email protected] and NOT [email protected], which means that postfix always ignores the per-sender configuration and sends the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I checked the configurations multiple times and even compared them to those in various blog posts addressing the same issue, but found them to be more or less the same as mine. So, can anyone point me in the right direction, in case I'm missing something? Many thanks.
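
    One quick check when per-sender lookups appear to be ignored: query the maps exactly as Postfix does, keeping in mind that the lookup key is the envelope sender, not necessarily the From: header your mail client displays. For example:

        postmap -q "[email protected]" hash:/etc/postfix/sasl_passwd    # should print [email protected]:passwd1
        postmap -q "[email protected]" hash:/etc/postfix/sender_relay   # should print smtp.gmail.com:587

    If either query returns nothing, Postfix falls through to the default relayhost credentials, which matches the symptom described (and gmail then rewrites the sender to the authenticated account).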

    Read the article

  • "Modern" Ethernet over coax

    - by Electrons_Ahoy
    So, I've just bought a house. It's reasonably new - built in the early '00s. One of the features that got built in was a cable TV drop in every room. The cabling is gorgeous - there's even a wiring cabinet of sorts in a closet where the cables all tie together into the splitter to the outside line. Of course, my problem is that I only own the one TV. I do, however, own a few computers. What I would love to be able to do is drop a switch in the wiring closet and run 100/1000BASE-T ethernet over the coax in the walls that I wouldn't otherwise be using. My fantasy would be some kind of adapter-plug-thing that would take a coax plug on one side and a cat5/RJ45 plug on the other. Has anyone else done this? Any suggestions? (There are a few other options that suggest themselves - first, I could just use the existing cabling channels and re-run cat5 or 6 through the walls. While tempting, that sounds like more work than I really want to put in, so I'm calling that Plan B. Also, I could just scare up a mess of old 10BASE2 cards and run the house on thinnet, all mid-90s style. While I think I'd get major style points for that, I don't think I can get a 10BASE2 adapter for the new laptop. Also, I have all these super-snazzy gigabit adaptors I'd like to be using. And so forth.)

    Read the article

  • Configure APE-Server on Ubuntu10.10 webserver

    - by sadmicrowave
    I'm having problems configuring my APE server. First, I reside behind a corporate firewall where our own DNS servers are maintained. I requested a domain name for my server and was provided uslonsweb003.us.mycompany.com by my IT group. Therefore, my website works and can be accessed (intranet only) at http://uslonsweb003.us.mycompany.com/test.php. I followed the instructions at ape-project.org and ran the Check Tool at the end, only to get an error stating:

        Running test : Contacting APE Server (adding frequency)
        Can't contact APE Server. Please check the folowing url is pointing to your APE server :
        http://0.uslonsweb003.us.mycompany.com:6969

    My /etc/apache2/apache2.conf VirtualHost entry looks as follows:

        <VirtualHost *:80>
            ServerName uslonsweb003.us.mycompany.com
            ServerAlias ape.uslonsweb003.us.mycompany.com
            ServerAlias *.ape.uslonsweb003.us.mycompany.com
            DocumentRoot "/var/www/"
        </VirtualHost>

    My /var/www/ape-jsf/Demos/config.js config section looks as follows:

        APE.Config.baseUrl = 'http://uslonsweb003.us.mycompany.com/ape-jsf';
        APE.Config.domain = 'uslonsweb003.us.mycompany.com';
        APE.Config.server = 'uslonsweb003.us.mycompany.com:6969';

    The instructions at ape-project.org tell me that APE.Config.server should be 'ape.mydomain.com:6969'; but that does not work (I'm assuming because my corporate DNS does not understand the 'ape' before the domain name, since 'ape' was not registered with the IT DNS). So I changed it to what you see above. Please help!! Thanks in advance.

    UPDATE 1: Per the installation instructions located at http://www.ape-project.org/wiki/index.php/Advanced_APE_configuration under 'Configure your Server/Computer' (I'm running it on a server, obviously), it says I need to add some lines to my DNS config file. It sounds like (since I'm within a corporate network) I should ask my IT group to add the following lines to the DNS configuration file on their end:

        ape     IN A      x.x.x.x   ; IP address of my APE server
        *.ape   IN CNAME  ape

    I just want to make sure this is all I have to have them add (or whether this is even correct) before I ask them.

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen busy, mission-critical mail servers. I've just learned about ZFS and I'm quite attracted to it, but I have a few questions:

    1. What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking on that.

    2. From what I've learned, when ZFS updates files it keeps the old blocks in snapshots and writes only the updated part for the new version. However, that would mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB of usable space; how much space can actually be used for the latest version of files one year later?

    3. Is FreeBSD with ZFS, hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSes? I ask because I may build another identical box for redundancy and will need to do some mirroring between each pair of the guest systems across the boxes.

    4. I'm trying to configure a Dell R710 for this. From what I've learned, I shouldn't choose any RAID at all; is that true? In that case, do the drives still arrive hot-swappable?

    5. This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600GB 15K RPM serial-attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger-capacity ones to expand the storage. There is no problem with that, right?
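
    On question 2, it may help to know that ZFS only retains old blocks while a snapshot references them, so the overhead is a function of how many snapshots are kept and how much data changes between them, not a fixed tax on every write. The accounting can be inspected directly with standard commands (pool/dataset names here are examples):

        zfs list -o name,used,refer,usedbysnapshots tank/mail   # space held only by snapshots
        zfs get compressratio tank/mail                         # compression savings, if enabled
        zpool list tank                                         # overall pool capacity and usage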

    Read the article
