Search Results

Search found 383 results on 16 pages for 'isolation'.


  • Session state provider and atomic operations

    - by vtortola
    Hi, I've been thinking about this and it is blowing my mind... How does a session state provider properly work internally? I mean, I tried to write a custom session state provider based on Azure Tables or Blobs, but I quickly realized that because there is no way to ensure an atomic operation or establish a lock, race conditions are bound to happen when several web servers operate on that shared information. I know that there is a SQL Server Session State Provider (SQLS-SSP) and people are happy with it, so I guess it uses some kind of transaction isolation level to achieve a degree of concurrency safety, like checking if the data is locked (a simple column), locking it if not, and returning the data in one atomic operation. But is that so? What happens if the data is locked? Does it return an error? Block the call for a while? Return the data in a read-only fashion? Cloud computing paradigms may be somewhat new, but web farms have been here for a while, and I'm pretty new to this... do you recommend any good reading on the topic? Thanks.
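
    To make my guess concrete, here is the lock-then-load pattern I imagine a SQL-backed provider using, sketched with Python's built-in sqlite3 as a stand-in (the table and column names are invented):

      import sqlite3

      # Autocommit mode: each statement executes as its own atomic operation.
      conn = sqlite3.connect(":memory:", isolation_level=None)
      conn.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT, locked INTEGER DEFAULT 0)")
      conn.execute("INSERT INTO sessions (id, data) VALUES ('abc', 'payload')")

      def try_acquire(session_id):
          # Atomic test-and-set: the UPDATE only succeeds if the row is still
          # unlocked, and the engine serializes writes, so two web servers
          # can never both "win" the lock.
          cur = conn.execute(
              "UPDATE sessions SET locked = 1 WHERE id = ? AND locked = 0",
              (session_id,))
          if cur.rowcount == 1:
              row = conn.execute(
                  "SELECT data FROM sessions WHERE id = ?", (session_id,)).fetchone()
              return row[0]
          return None  # someone else holds the lock; retry after a short wait

      print(try_acquire("abc"))  # -> 'payload'
      print(try_acquire("abc"))  # -> None until the first caller unlocks

    From what I've read, the built-in SQL provider does something like this and, on contention, polls and retries the request rather than failing outright -- but I'd love confirmation of that.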

    Read the article

  • Thick models vs. business logic: where do you draw the distinction?

    - by TokenMacGuy
    Today I got into a heated debate with another developer at my organization about where and how to add methods to database-mapped classes. We use sqlalchemy, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name, a nearly mechanical translation from database tables to python objects. In the argument, my position was that the primary value of using an ORM is that you can attach low-level behaviors and algorithms to the mapped classes. Models are classes first, and only secondarily persistent (they could be persisted as xml in a filesystem; you don't need to care). His view was that any behavior at all is "business logic", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only. I certainly do think that there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how it gets implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about in the previous paragraph; but I'm having a hard time putting my finger on what that distinction is. I have a better sense of what the API might be (which, in our case, is HTTP "ReSTful"), in that users invoke the API with what they want to do, distinct from what they are allowed to do and how it gets done. tl;dr: What kinds of things can or should go in a method in a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?
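
    To make the line I'm arguing for concrete, a rough sqlalchemy sketch (the class and column names are invented, and this assumes a recent SQLAlchemy with declarative mapping):

      from sqlalchemy import Column, Integer, Numeric, String
      from sqlalchemy.orm import declarative_base

      Base = declarative_base()

      class Account(Base):
          __tablename__ = "accounts"
          id = Column(Integer, primary_key=True)
          owner = Column(String)
          balance = Column(Numeric, default=0)

          # Domain logic: concerns only this object's own state and
          # invariants. My position: this belongs on the mapped class.
          def can_cover(self, amount):
              return self.balance >= amount

      # Business logic: spans multiple objects and policy decisions. Both of
      # us would keep this out of the mapped class, in a service layer
      # behind the ReSTful API.
      def transfer(session, source, target, amount):
          if not source.can_cover(amount):
              raise ValueError("insufficient funds")
          source.balance -= amount
          target.balance += amount
          session.commit()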

    Read the article

  • Verification vs validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked before and created a lot of controversy, so I tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp

    According to ISO 12207, testing is done in validation:
    • Prepare Test Requirements, Cases and Specifications
    • Conduct the Tests

    In verification, it mentions: "The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery." and "The software components and units of each software item have been completely and correctly integrated into the software item." Not sure how to verify that without testing, but testing is not listed there as a technique.

    From IEEE: Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]. Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]. At the end of the development process? That would mean UAT...

    So the question is: which testing (unit, integration, system, UAT) is considered verification, and which validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation. An example: I am testing an application. The system requirements say there are two fields with a max. length of 64 characters and a Save button. The use case says: the user will fill in first and last name and save. When I check the fields and the presence of the Save button, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.
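
    To put that example into code, a rough sketch (Python's unittest; the form class and all names are mine):

      import unittest

      class SaveForm:
          MAX_LEN = 64  # from the system requirements

          def __init__(self):
              self.first, self.last, self.saved = "", "", False

          def save(self):
              if len(self.first) <= self.MAX_LEN and len(self.last) <= self.MAX_LEN:
                  self.saved = True

      class FormTests(unittest.TestCase):
          def test_fields_conform_to_spec(self):
              # Verification: does the product of this phase satisfy the
              # stated condition (two fields capped at 64 chars, a Save)?
              self.assertEqual(SaveForm.MAX_LEN, 64)
              self.assertTrue(hasattr(SaveForm, "save"))

          def test_use_case_fill_names_and_save(self):
              # Validation: does following the use case give the user
              # what they actually required?
              form = SaveForm()
              form.first, form.last = "Jane", "Doe"
              form.save()
              self.assertTrue(form.saved)

      if __name__ == "__main__":
          unittest.main()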

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Do neither.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, both can be collapsed into a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:
    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or so. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil...

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized. They will not scribble on the same document at the same time. In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll about how cute a furry animal is, but not so good for business. The other write-concerns vary from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write-concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that a query against the ledger never shows customers a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time two ways: replication can catch up to all instances by then, and the validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1 ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
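
    For the curious, here's roughly what the ledger flow above looks like in code (a pymongo sketch; the database, collection, and field names are mine, and validation is the naive full-scan sum rather than the rolled-up "approved balance" optimization):

      from datetime import datetime, timezone
      from pymongo import MongoClient

      ledger = MongoClient()["bank"]["ledger"]

      def transfer(from_id, to_id, amount):
          # One document per transfer -- a single-document insert is atomic.
          entry = {"ts": datetime.now(timezone.utc), "from": from_id,
                   "to": to_id, "amount": amount, "validated": False}
          entry_id = ledger.insert_one(entry).inserted_id

          # Validation: sum debits and credits for the source account,
          # including the entry we just wrote.
          debits = sum(e["amount"] for e in ledger.find({"from": from_id}))
          credits = sum(e["amount"] for e in ledger.find({"to": from_id}))
          if credits - debits < 0:
              # "Roll back" with a compensating reverse entry; nothing in
              # the ledger is ever updated in place.
              ledger.insert_one({"ts": datetime.now(timezone.utc),
                                 "from": to_id, "to": from_id,
                                 "amount": amount, "reverses": entry_id})
              return False
          ledger.update_one({"_id": entry_id}, {"$set": {"validated": True}})
          return True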

    Read the article

  • How would you TDD the functionality of getting the corresponding process of a running windows service?

    - by Matt Spinelli
    Purpose: Over the last year or more I've been learning unit testing via books I've read recently, like The Art of Unit Testing, Working Effectively with Legacy Code, and others. I've also been using unit tests, mocking frameworks, and the like periodically at work, and I definitely see the value. However, I'm still having a hard time wrapping my mind around TDD (as opposed to TAD) when the situation calls for code that is going to mostly use external API calls.

    Problem to solve: Get the process associated with a windows service using the service name. Example: Function GetProcess(ByVal serviceName As String) As Process

    Rules:
    • Show each major iteration in production & test code using TDD.
    • No need to see any other code or configuration required to get things to run; I'm just curious about the interfaces, concrete classes, and test methods.
    • C# or VB.NET.
    • Must use the .NET framework regarding services/processes (i.e. System.Diagnostics.Process).
    • Test frameworks: NUnit or MSTest.
    • Isolation frameworks: Moq, Rhino Mocks, or Microsoft Moles.
    • Must write true unit tests (no integration tests).

    Additional notes: As far as I can tell there are two approaches, design-wise:
    1. Use an Inversion of Control approach along with the Adapter and/or Facade patterns to wrap the underlying .NET framework objects dealing with processes and services.
    2. Keep the .NET framework code in the class containing the GetProcess method and use code detouring (interception) via Microsoft Moles to isolate the hard dependencies from the method under test.
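
    To show the shape I have in mind for approach 1, a sketch in Python with unittest.mock (only for brevity -- the real thing would be C#/VB with Moq or NUnit per the rules above; all class and method names here are invented):

      from unittest import mock

      class ServiceProcessLookup:
          """Adapter wrapping the OS/service APIs -- the untestable edge."""
          def pid_for_service(self, service_name):
              raise NotImplementedError  # real impl queries the service manager

      def get_process(lookup, service_name):
          # The unit under test: pure logic, with the framework hidden
          # behind the injected adapter so a fake can stand in.
          pid = lookup.pid_for_service(service_name)
          if pid is None:
              raise LookupError("service not running: " + service_name)
          return pid

      def test_get_process_returns_pid_from_adapter():
          fake = mock.Mock(spec=ServiceProcessLookup)
          fake.pid_for_service.return_value = 4242
          assert get_process(fake, "MyService") == 4242
          fake.pid_for_service.assert_called_once_with("MyService")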

    Read the article

  • ExaLogic virtual datacenter live at Qualogy

    - by JuergenKress
    Just a quick post to celebrate another significant milestone for Exalogic! After a few days of preparation and some hard work we succeeded in upgrading our Exalogic quarter rack to the newly released Elastic Cloud version This version was just recently released, on July 25th. It turns your Exalogic into a virtual datacenter with many very neat cloud provisioning capabilities. There are many more possibilities in this version to provide strict network and vServer group isolation where needed, and it helps you manage multitenancy and delegate your cloud administration. How did we fare? Apart from some small inconveniences and minor issues, I can tell you it all went remarkably well, provided you do proper homework on the prerequisite requirements and stick to the instructions all the way through (there are some 37 steps to cover). We, as a specialized Exalogic partner, had early access to this version and did some early adopter work. As a customer this is all done for you, as Oracle will deliver a new Exalogic with this version from the factory if you so desire, or upgrade your current ones. Of course, Qualogy can do this for you as well! Interested? Contact the Qualogy team at [email protected]!

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: ExaLogic, Qualogy, ExaLogic demo, Exalogic datacenter, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Does unit testing lead to premature generalization (specifically in the context of C++)?

    - by Martin
    Preliminary notes: I'll not go into the distinction between the different kinds of test there are; there are already a few questions on these sites regarding that. I'll take what's there, which says: unit testing in the sense of "testing the smallest isolatable unit of an application", from which this question actually derives.

    The isolation problem: What is the smallest isolatable unit of a program? Well, as I see it, it (highly?) depends on what language you are coding in. Michael Feathers talks about the concept of a seam: [WEwLC, p31] "A seam is a place where you can alter behavior in your program without editing in that place." And without going into the details, I understand a seam -- in the context of unit testing -- to be a place in a program where your "test" can interface with your "unit".

    Examples: Unit tests -- especially in C++ -- require the code under test to add more seams than would strictly be called for by the problem itself. For example:
    • Adding a virtual interface where a non-virtual implementation would have been sufficient
    • Splitting -- generalizing(?) -- a (smallish) class further "just" to facilitate adding a test
    • Splitting a single-executable project into seemingly "independent" libs, "just" to facilitate compiling them independently for the tests

    The question: I'll try a few versions that hopefully ask about the same point:
    • Is the way that unit tests require one to structure an application's code "only" beneficial for the unit tests, or is it actually beneficial to the application's structure?
    • Is the generalization that code needs to exhibit to be unit-testable useful for anything but the unit tests?
    • Does adding unit tests force one to generalize unnecessarily?
    • Is the shape unit tests force on code "always" also a good shape for the code in general, as seen from the problem domain?

    I remember a rule of thumb that said don't generalize until you need to / until there's a second place that uses the code. With unit tests, there's always a second place that uses the code -- namely the unit test. So is this reason enough to generalize?

    Read the article

  • Adventures in Lab Management Configuration: Part 3 of 3

    - by Enrique Lima
    This is long overdue, but here it is. In the previous two sections I discussed how I got a CMMI v4.2 process template to take on the same fields as v5 and therefore be able to communicate with MTM and Lab Manager. And that was quite a success. Yet when I opened up Lab Management, while it was fully aware of the VMs being there, it refused to let me enroll them into an environment. It kept stating there was no suitable host to deploy the VM to, error TF259115. This was an indication that something was not matching the expected network configuration between TFS and Hyper-V/SCVMM. So, here are a couple of things that took place:
    • Verified the network segment specified for network isolation matched what was configured physically, for either DHCP or manually assigned IP addressing for the guest VMs
    • Made sure TFS was fully aware of the configuration settings for the network location name. For that I issued: tfsconfig lab /settings /networklocation:"<name of the network location configured in SCVMM>"
    That last item was key to making sure Lab Management communicated with the VMs and allowed enrollment into the new Virtual Environment.

    Read the article

  • What's wrong with circular references?

    - by dash-tom-bang
    I was involved in a programming discussion today where I made some statements that basically assumed axiomatically that circular references (between modules, classes, whatever) are generally bad. Once I got through with my pitch, my coworker asked, "What's wrong with circular references?" I've got strong feelings on this, but it's hard for me to verbalize concisely and concretely. Any explanation that I may come up with tends to rely on other items that I also consider axioms ("can't use in isolation, so can't test", "unknown/undefined behavior as state mutates in the participating objects", etc.), but I'd love to hear a concise reason why circular references are bad that doesn't take the kinds of leaps of faith that my own brain does, having spent many hours over the years untangling them to understand, fix, and extend various bits of code. Edit: I am not asking about homogeneous circular references, like those in a doubly-linked list or a pointer-to-parent. This question is really asking about "larger scope" circular references, like libA calling libB which calls back into libA. Substitute 'module' for 'lib' if you like. Thanks for all of the answers so far!
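
    To make the "can't use in isolation" axiom concrete, here is the smallest version of the problem I can write down (a Python sketch; the class names are invented):

      class LibA:
          def __init__(self, b=None):
              self.b = b                  # needs a LibB to do anything
          def frob(self):
              return self.b.helper() + 1

      class LibB:
          def __init__(self, a=None):
              self.a = a                  # ...which needs a LibA right back
          def helper(self):
              return self.a.frob() - 1

      # Neither object can be constructed complete: the cycle forces a
      # two-phase wire-up, so neither class is usable (or testable) alone.
      a, b = LibA(), LibB()
      a.b, b.a = b, a
      # The cycle also hides unbounded mutual recursion: calling a.frob()
      # here would bounce between the two objects until the stack blows.

    At lib scale the same shape means you can't build, version, or reason about either library without the other.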

    Read the article

  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL, this means we create a DL namespace, not a DL assembly. Benefits include:
    • faster compilation time
    • simpler deployment
    • faster startup time for your program
    • fewer assemblies to reference
    I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000... then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements. Developers are worried for a few reasons:
    A. People like to work in their own environment so they don't step on each other's toes.
    B. Microsoft tends to create new assemblies, e.g. ASP.NET has its own DLL, and so does WinForms, etc.
    C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact dependencies.
    My personal view: I view A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior. Second, my goal is not to put everything in common; rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?

    Read the article

  • handling multiple interviews / offers [closed]

    - by farble1670
    What's the best way to handle a situation where you have, or expect to have, multiple offers? The ideal situation is that your several offers come in at about the same time, and you make a choice. This is not how it happens, though. You may have an offer, and several near-final interviews lined up for the following days or weeks. One way to handle it would be to ask for a longer time to decide on the first offer you receive. Two weeks? This gives time to rush the rest of the things you have going through to an end. I question whether asking for two weeks to decide is reasonable, though. My guess is that an employer would see through that and force your hand. Another way to handle it would be to accept the first offer and ask for a reasonable period before your start date, then simply "quit" the first position before you ever start if something better comes along. On one hand, employment is at-will, and employers exercise this fact regularly. On the other hand, it seems morally the wrong thing to do, and it has the potential to burn some bridges. And of course the last option is to simply evaluate each offer in isolation, and accept or reject within the given time frame. Any thoughts?

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a java developer trying to gain more knowledge to become a better programmer. I've listed a number of technologies to learn. Beyond what I've listed, what would you suggest a Junior Java Developer learn as well? I realize there are a lot of things to study.

    Java:
    - how a garbage collector works
    - resource management
    - network programming: TCP/IP, HTTP
    - transactions, consistency
    - interfaces, classes, collections, hash codes, algorithms, computational complexity
    - concurrent programming: synchronizing, semaphores, stream management
    - mutability: thread-safety
    - byte code manipulation, reflection, Aspect-Oriented Programming as a base to understand frameworks such as Spring etc.

    Web stack:
    - servlets, filters, socket programming

    Libraries:
    - JDK, GWT, Apache Commons, Joda-Time
    - Dependency Injection: Spring, Nano

    Tools:
    - IDE: very good knowledge
    - debugger
    - profiler
    - web analyzers: Wireshark, Firebug
    - unit testing

    SQL/Databases:
    Basics:
    - SELECTing columns from a table
    - Aggregates Part 1: COUNT, SUM, MAX/MIN
    - Aggregates Part 2: DISTINCT, GROUP BY, HAVING
    Intermediate:
    - JOINs, ANSI-89 and ANSI-92 syntax
    - UNION vs UNION ALL
    - NULL handling: COALESCE & native NULL handling
    - Subqueries: IN, EXISTS, and inline views
    - Subqueries: correlated
    - WITH syntax: Subquery Factoring/CTE
    - Views
    Advanced topics:
    - Functions, Stored Procedures, Packages
    - Pivoting data: CASE & PIVOT syntax
    - Hierarchical Queries
    - Cursors: Implicit and Explicit
    - Triggers
    - Dynamic SQL
    - Materialized Views
    - Query Optimization: Indexes
    - Query Optimization: Explain Plans
    - Query Optimization: Profiling
    - Data Modelling: Normal Forms, 1 through 3
    - Data Modelling: Primary & Foreign Keys
    - Data Modelling: Table Constraints
    - Data Modelling: Link/Corollary Tables
    - Full Text Searching
    - XML
    - Isolation Levels
    - Entity Relationship Diagrams (ERDs), Logical and Physical
    - Transactions: COMMIT, ROLLBACK, Error Handling

    Read the article

  • What are the design principles that promote testable code? (designing testable code vs driving design through tests)

    - by bot
    Most of the projects that I work on treat development and unit testing in isolation, which makes writing unit tests at a later point a nightmare. My objective is to keep testing in mind during the high-level and low-level design phases themselves. I want to know if there are any well-defined design principles that promote testable code. One such principle that I have come to understand recently is Dependency Inversion, through dependency injection and inversion of control. I have read that there is something known as SOLID. I want to understand whether following the SOLID principles indirectly results in code that is easily testable, and if not, whether there are other well-defined design principles that promote testable code. I am aware that there is something known as Test Driven Development, although I am more interested in designing code with testing in mind during the design phase itself, rather than driving design through tests. I hope this makes sense. One more question related to this topic: is it all right to refactor an existing product/project, making changes to code and design, for the purpose of being able to write a unit test case for each module?
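
    As a concrete picture of the Dependency Inversion idea I mentioned, here is a small sketch (Python; all names invented) of the kind of design-for-testability I'm aiming at:

      class Notifier:
          """Abstraction the business code depends on (depend on
          interfaces, not concretions)."""
          def send(self, user, message):
              raise NotImplementedError

      class OrderService:
          def __init__(self, notifier):
              self.notifier = notifier  # injected, so tests can substitute a fake

          def place_order(self, user, item):
              # ... persist the order ...
              self.notifier.send(user, f"Order placed: {item}")

      class FakeNotifier(Notifier):
          def __init__(self):
              self.sent = []
          def send(self, user, message):
              self.sent.append((user, message))

      def test_place_order_notifies_user():
          fake = FakeNotifier()
          OrderService(fake).place_order("alice", "book")
          assert fake.sent == [("alice", "Order placed: book")]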

    Read the article

  • Multiple routers, subnets, gateways etc

    - by allentown
    My current setup is: a cable modem dishes out 13 static IPs (/28); a GB switch is plugged into the cable modem and has access to those 13 static IPs; I have about 6 "servers" in use right now. The cable modem is also a firewall, DHCP server, and 3-port 10/100 switch. I am using it as a firewall, but not currently as a DHCP server.

    I have plugged two network cables into the cable modem. One goes to the WAN port of a Linksys dual-band wireless 10/100/1000 router/switch. Into the Linksys are a few workstations, a few printers, and some laptops connecting to wifi. I set the Linksys to take a static IP, and enabled DHCP for the workstations, printers, etc. in 192.168.1.1/24. The network for the Linksys is mostly self-contained; backups go to a SAN on that network, and it all happens through that switch, over GB. But it also gets internet access via the cable modem, using one static IP. This all works; however, I can not "see" the static IP machines when I am on the Linksys. I can get to them via ssh and other protocols, and if I want to from "outside", I open holes, like 80, 25, 587, 143, 22, etc. The second wire from the cable modem/firewall/switch just uplinks to the managed GB switch.

    What are the pros and cons of this? I do not like giving up a static IP to the Linksys. I basically have a mixed network of public servers and internal workstations. I want the public servers on public IPs because I do not want to mess with port forwarding and mappings. Is it correct, also, that if someone breaches the Linksys wifi, they would still have a hard time getting to the static IP range, just by the nature of the network topology?

    Today, just for a test, I toggled on the DHCP in the firewall/cable modem in the 10.1.10.1/24 range; the Linksys is in the 192.168.1.100/24 range. At that point, all the static IP machines still had in and out access, but the Linksys was unreachable. The cable modem only has 10/100 ports, so I will not plug anything but the network drop into it, which is 50Mb/10Mb. Which makes me think this could be less than ideal, as transfers from the workstation network to the server network will be bottlenecked at 100Mb when I have 1000Mb available. I may not need to solve that, if isolation is better, though. I do not move a lot of data, if any, from the Linksys network to the server network, so for it to pretend to be remote is OK.

    Should I approach this any differently? I could enable DHCP on the cable modem/firewall; it should still send out the statics to the GB switch, but it will also be a DHCP server in the 10.1.10.1/24 range. I can then plug the Linksys into the GB switch, which is now picking up the statics and the 10.1.10.1/24 range, and tell the Linksys to use 10.1.10.5 or so. Now, do I disable DHCP on the Linksys, and have the cable modem/firewall pass through the statics and the 10.1.10.1/24 range as well? Or could I open a second DHCP pool on the Linksys? I guess doing so gives me network isolation again, but it is just the reverse of what I have now. But I get out of the bottleneck -- not that the Linksys could ever really touch real GB speeds anyway, but the managed switch certainly can.

    This is all because 13 statics are not that many. Right now, 6 "servers", the Linksys, a managed switch, a few SSL certs, and I am running out. I do not want to waste a static IP on the managed GB switch or the Linksys unless it provides me some type of benefit.

    Final question: under my current setup, if I am on a workstation sitting at 192.168.1.109 on the Linksys, with GB, and I send a file over ssh to a static IP machine, is that literally going out to the internet and coming back in, or does it stay local? To me it seems like: Workstation (192.168.1.109) -> Linksys DHCP -> Linksys static IP -> cable modem -> server, and it hits the 10/100 ports on the cable modem, slowing me down. But does it round-trip the network, leaving and coming back in, limiting me to the 50/10 internet speeds? (*These are all made-up numbers; I do not use default router IPs, as I will one day add a VPN and do not want collisions.)

    I need some recommendations: do I want one big network, or two isolated ones? Printers these days need an IP, everything does, and I can not get autoconf/Bonjour to be reliable on most printers. But I am also not sure I want the "server" side of my operation to be polluted by the workstation side. Unless there is some magic subnetting I have not learned yet, here is what I am thinking:
    • Cable modem, 10/100, has 13 static IPs, publicly accessible
    • Enable DHCP on the cable modem
    • Cable modem plugs into managed switch
    • Managed switch gets 10.1.10.1 as its ssh/telnet/https admin management address
    • Managed switch sends the static IPs to the servers
    • Plug the Linksys into the managed switch, giving it 10.1.10.2 statically in the Linksys admin
    • Linksys gets assigned 10.1.10.x as its DHCP sending range
    • Local printers, workstations, iPhones, etc. connect to this
    • (Do I enable or disable DHCP on the Linksys -- just define a non-overlapping range, or create an entirely new DHCP pool at 10.1.50.0/24? I think I am back to isolation again with that method too?)

    Thank you for any suggestions. This is the first time I have had to deal with less than a /24; most networks I deal with are larger than that, but this is just a drop to a cabinet. Otherwise, it's a router, a few repeaters, and soho stuff that is simple, with one IP. I know a few may suggest going all DHCP on the servers, and I may one day, just not now; there has been too much moving of gear for me to be interested in that, and I would want something in the Catalyst series to deal with it.

    Read the article

  • Running each JUnit test in a separate JVM in Eclipse?

    - by HenryR
    I have a project with nearly 500 individual tests in around 200 test classes. Some of these tests don't do a great job of tearing down their own state after they're finished, and in Eclipse this results in some tests failing. The tests all pass when running the test suite from the command line via Ant. Can I enable 'test isolation' somehow in Eclipse? I don't mind if it takes longer to run. Long term, I will clean up the misbehaving tests, but in the short term I'd like to get the tests working.

    Read the article

  • Killing a deadlocked Task in .NET 4 TPL

    - by Dan Bryant
    I'd like to start using the Task Parallel Library, as this is the recommended framework going forward for performing asynchronous operations. One thing I haven't been able to find is any means of forcible Abort, such as what Thread.Abort provides. My particular concern is that I schedule tasks running code that I don't wish to completely trust. In particular, I can't be sure this untrusted code won't deadlock and therefore I can't be certain if a Task I schedule using this code will ever complete. I want to stay away from true AppDomain isolation (due to the overhead and complexity of marshaling), but I also don't want to leave a Task thread hanging around, deadlocked. Is there a way to do this in TPL?

    Read the article

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled by optimistic locking and manual merge strategies. It would be very handy to have some more structured approach to this type of problem using standard transactions. Various long-running interactions, such as user registration and order confirmation, all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that there would be a major performance cost associated with keeping all the transactions open. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to allow them to take more time (read-only semantics on some data, optimistic locking semantics, etc.). Are there any solutions which do something like this?

    Read the article

  • Transitioning to Branching with TFS

    - by Rob
    Our team is currently using plain old TFS 2005: no branching, shared checkouts, etc. I would like to introduce a DEV/MAIN/PROD branching system similar to the basic flavor in the TFS Guidance document, so that we can do some parallel dev and isolation, and firm up our review and deployment processes. I have read most of the whitepapers. Do you have any practical advice, suggested tools, gotchas, or recommendations? Also, we plan to migrate to 2010 once it comes out -- not sure if that would affect anything. I appreciate all the suggestions and help I can get, as I am a branching neophyte. Thanks in advance.

    Read the article

  • Stop multiple sessions accessing the same file simultaneously

    - by Pablo
    Is it possible to lock a file to stop it being opened while the GD library is accessing it? What I am looking to achieve is similar to a database 'serializable' level of isolation: I want to ensure that only one session/user can access an image at a time, to stop a 'dirty read'. My application allows users to add an image of their choice to a bigger image. For example:
    1. The big image is empty.
    2. Raj & Janet upload their images.
    3. Raj's session opens the big image.
    4. 1 ms later, Janet's session opens the big image.
    5. Raj's session adds his image and saves the big image.
    6. 1 ms later, Janet's session adds her image and saves her version of the big image.
    As a result, Raj's image is not in the final image, because Janet's version overwrote it. I hope that makes it clear enough.
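
    The serialization I'm after looks roughly like this (sketched in Python with Unix advisory locks, since that's quickest for me to show -- PHP has flock() for the same job, and add_overlay here stands in for the GD compositing):

      import fcntl

      def update_big_image(path, add_overlay):
          # Hold an exclusive advisory lock across the whole
          # read-modify-write, so sessions are serialized: Janet's call
          # blocks here until Raj's save has finished.
          with open(path, "r+b") as f:
              fcntl.flock(f, fcntl.LOCK_EX)
              try:
                  data = f.read()            # read the current big image
                  data = add_overlay(data)   # composite this user's image onto it
                  f.seek(0)
                  f.write(data)
                  f.truncate()
              finally:
                  fcntl.flock(f, fcntl.LOCK_UN)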

    Read the article

  • Simple object creation with DIY-DI?

    - by Runcible
    I recently ran across this great article by Chad Parry entitled "DIY-DI", or "Do-It-Yourself Dependency Injection". I'm in a position where I'm not yet ready to use an IoC framework, but I want to head in that direction. It seems like DIY-DI is a good first step. However, after reading the article, I'm still a little confused about object creation. Here's a simple example. Using manual constructor dependency injection (not DIY-DI), this is how one must construct a Hotel object:
    PowerGrid powerGrid; // only one in the entire application
    WaterSupply waterSupply; // only one in the entire application
    Staff staff;
    Rooms rooms;
    Hotel hotel(staff, rooms, powerGrid, waterSupply);
    Creating all of these dependency objects makes it difficult to construct the Hotel object in isolation, which means that writing unit tests for Hotel will be difficult. Does using DIY-DI make it easier? What advantage does DIY-DI provide over manual constructor dependency injection?
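
    My current understanding of the DIY-DI answer, sketched in Python (class names from the hotel example; the injector functions are my reading of the pattern, not code from the article):

      # The classes themselves still use plain constructor injection.
      class PowerGrid: pass
      class WaterSupply: pass
      class Staff: pass
      class Rooms: pass

      class Hotel:
          def __init__(self, staff, rooms, power_grid, water_supply):
              self.staff, self.rooms = staff, rooms
              self.power_grid, self.water_supply = power_grid, water_supply

      # DIY-DI moves the wiring into hand-written injector functions: the
      # application-wide singletons live in one scope object, and clients
      # ask a factory for a Hotel instead of assembling the graph themselves.
      def application_scope():
          return {"power_grid": PowerGrid(), "water_supply": WaterSupply()}

      def make_hotel(scope):
          return Hotel(Staff(), Rooms(), scope["power_grid"], scope["water_supply"])

      hotel = make_hotel(application_scope())        # production: one call
      test_hotel = Hotel("staff", "rooms", "grid", "water")  # tests: pass fakes directly

    So, if I've got that right, the advantage is only in the calling code: tests keep using the bare constructor with fakes, while production code stops repeating the wiring -- is that the whole story?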

    Read the article

  • Experimenting with data to determine value - migration/methods?

    - by TCK
    Hey guys, I have a LOT of data available to me, and want to experiment with data that isn't currently being used in production. The obvious solution seems to be to make a copy of production data and integrate it with what I want to play around with, but I was wondering if there was a better (less expensive?) way to do this. Both isolation and integration are important. I'd like to be able to keep lightweight/experimental data assets apart from high volume production data, but also be able to integrate (RELATIVELY) painlessly if experimental assets are deemed useful. Thanks.

    Read the article

  • Any way to select without causing locking in MySQL?

    - by Shore
    Query: SELECT COUNT(online.account_id) cnt FROM online;
    But the online table is also modified by an event, so I frequently see locks when running SHOW PROCESSLIST. Is there any syntax in MySQL that can make a SELECT statement not cause locks? I forgot to mention above that this is on a MySQL slave database. After I added transaction-isolation = READ-UNCOMMITTED to my.cnf, the slave hits this error:
    Error 'Binary logging not possible. Message: Transaction level 'READ-UNCOMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'' on query
    So, is there a compatible way to do this?

    Read the article

  • Jasper error: Caused by SQLServerException: Transaction (Process ID 58) was deadlocked on thread | c

    - by Saky
    I got the above error in my Jasper report mail. The query used in the report is quite complicated (for me). Reading different posts, I conclude that to solve this I have to change the query to:
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    GO
    BEGIN TRANSACTION
    ... my query ...
    COMMIT TRANSACTION
    I wonder if this is the correct way to solve the error, and whether it has any side effects. Has this happened to anyone else with Jasper reports? Does anyone know if a better solution exists? (Although I have not yet tested the above solution, any insight on this would be helpful.)

    Read the article

  • In MS SQL Server, is there a way to "atomically" increment a column being used as a counter?

    - by Dan P
    Assuming a Read Committed Snapshot transaction isolation setting, is the following statement "atomic" in the sense that you won't ever "lose" a concurrent increment?
    update mytable set counter = counter + 1
    I would assume that in the general case, where this update statement is part of a larger transaction, it wouldn't be. For example, I think this scenario is possible:
    1. update the counter within transaction #1
    2. do some other stuff in transaction #1
    3. update the counter within transaction #2
    4. commit transaction #2
    5. commit transaction #1
    In this situation, wouldn't the counter end up being incremented by only 1? Does it make a difference if that is the only statement in a transaction? How does a site like stackoverflow handle this for its question view counter? Or is the possibility of "losing" some increments just considered acceptable?
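
    To show the lost-update shape I'm worried about (not SQL Server semantics -- just a deterministic Python/sqlite3 sketch emulating two sessions on one connection):

      import sqlite3

      db = sqlite3.connect(":memory:", isolation_level=None)
      db.execute("CREATE TABLE mytable (counter INTEGER)")
      db.execute("INSERT INTO mytable VALUES (0)")

      # The racy pattern: read, compute, write as separate steps.
      a = db.execute("SELECT counter FROM mytable").fetchone()[0]  # "session A" reads 0
      b = db.execute("SELECT counter FROM mytable").fetchone()[0]  # "session B" reads 0
      db.execute("UPDATE mytable SET counter = ?", (a + 1,))       # A writes 1
      db.execute("UPDATE mytable SET counter = ?", (b + 1,))       # B writes 1: A's increment is lost
      print(db.execute("SELECT counter FROM mytable").fetchone()[0])  # -> 1, not 2

      # The single-statement form reads and writes under one lock, so
      # concurrent executions serialize and nothing is lost:
      db.execute("UPDATE mytable SET counter = counter + 1")

    My question is whether the single-statement form stays safe once it's wrapped in a longer transaction, as in the numbered scenario above.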

    Read the article

  • Learning Spring MVC For web-projects

    - by John
    I have looked at Spring MVC briefly a few times and got the basic ideas. However, whenever I look closely, it seems to require that you already know a whole load of 'core Spring'. The book I have, for instance, has a few hundred pages before it gets onto Spring MVC... which seems a lot to wade through. I'm used to being able to jump in, but there's so much bean-related stuff and XML that it just looks like a mass of material to consume. Does it get simpler if you put the time in, or is Spring just a much bigger framework than I thought? Is it possible to learn this side of it in isolation?

    Read the article
