Search Results

Search found 6845 results on 274 pages for 'systems'.


  • How to avoid oscillation in async event-based systems?

    - by inf3rno
    Imagine a system in which several data sources must be kept in sync. A simple example is model-view data binding in MVC. I describe these kinds of systems in terms of data sources and hubs: data sources publish and subscribe to events, and hubs relay events between data sources. When a data source handles an event, it changes its state as described by the event; when it publishes an event, it puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected back from the hub or from the other data sources, which can put the system into an infinite oscillation (asynchronously) or an infinite loop (synchronously). For example, with data sources A and B and hub H:

    A -> H -> A (reflection from the hub)
    A -> H -> B -> H -> A (reflection from another data source)

    Synchronously, this issue is relatively easy to solve: compare the current state with the event, and if they are equal, don't change the state and don't raise the same event again. Asynchronously, I have not found a solution yet. State comparison does not work with async event handling, because there is only eventual consistency, and new events can be published from an inconsistent state, causing the same oscillation. For example:

    A(*->x) -> H -> B(y->x) can run in parallel with B(*->y) -> H -> A(x->y)

    So first A changes to state x while B changes to state y, then B changes to state x while A changes to state y, and so on for eternity. Do you think there is an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, and so on?

    Update: I don't think I can make this work without a lot of effort. This problem is essentially the same one we have when syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
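
    A minimal sketch of the synchronous echo-suppression idea described above, in Python with hypothetical Hub/DataSource names: a source only applies and re-publishes an event when it would actually change its state, so reflected copies of its own events die out.

    ```python
    class Hub:
        def __init__(self):
            self.sources = []

        def publish(self, sender, state):
            # Relay the event to every data source, including the sender;
            # echo suppression in the sources keeps this from looping.
            for source in self.sources:
                source.handle(state)


    class DataSource:
        def __init__(self, hub, state=None):
            self.state = state
            self.hub = hub
            hub.sources.append(self)

        def handle(self, state):
            # Echo suppression: ignore events that would not change our
            # state, so a reflected event stops here instead of looping.
            if state == self.state:
                return
            self.state = state
            self.hub.publish(self, state)


    hub = Hub()
    a, b = DataSource(hub), DataSource(hub)
    a.handle("x")  # propagates a -> hub -> b once, then stops
    assert a.state == b.state == "x"
    ```

    As the question notes, this check alone is not sufficient once handling is asynchronous; it only illustrates the synchronous case that does work.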

    Read the article

  • Single database, multiple system dependency

    - by davenewza
    Consider an environment where we have a single, core database, with many separate systems using this one database. All of these systems therefore share a common dependency, which ultimately introduces coupling between them, meaning we cannot always evolve the systems independently of each other. Structural changes to the database (even if intended for only one particular system) require a full sweep test of ALL systems, and may require that other systems be 'patched' and subsequently released. This is especially tricky when you want separate teams working on different projects. What is a good 'pattern' to help avoid such coupling? I would imagine that a database should be depended on exclusively by one system. If other systems require data for whatever reason, they should request it from an API service of some kind. A drawback of this approach which comes to mind is performance: routing data between high-throughput systems through service calls is much slower than through a database connection.
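
    A minimal sketch (Python with sqlite3; all names hypothetical) of the pattern described above: one system owns the database, and other systems depend on its API rather than on the schema.

    ```python
    import sqlite3

    class OrdersService:
        """The one system that owns the orders database."""
        def __init__(self, path=":memory:"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

        # Public API: other systems call this instead of querying the
        # tables, so the schema can change without breaking them.
        def get_order_total(self, order_id):
            row = self.db.execute(
                "SELECT total FROM orders WHERE id = ?", (order_id,)).fetchone()
            return row[0] if row else None

    class BillingSystem:
        """A separate system: depends on the API, not on the schema."""
        def __init__(self, orders_api):
            self.orders = orders_api

        def invoice(self, order_id):
            return self.orders.get_order_total(order_id)

    billing = BillingSystem(OrdersService())
    ```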

    Read the article

  • Why do we still have to use drive letters to identify file systems?

    - by Charles E. Grant
    A friend has run into a problem where they installed Windows 7 from an external drive, and the internal boot drive is now assigned to H:. Theoretically this shouldn't cause problems, because there are programming interfaces for getting the drive letter of the system drive. In practice, though, quite a few programs assume that C: is the only possible location for the system directories, and they refuse to run with the system directories on H:. That's not Microsoft's fault, but it's a pain nonetheless. The general consensus seems to be that a re-install, setting the internal boot drive to C:, is the only way to fix these problems. UNIX-like systems display all file systems in a single unified directory tree and mostly seem to avoid problems like this. Is it possible to configure a Windows system without reference to drive letters, or does the importance of backwards compatibility mean that Windows will be working with drive letters from now until doomsday?
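
    As an aside on the programming interfaces mentioned above: a well-behaved program can ask Windows where the system directories actually are instead of assuming C:. A minimal sketch in Python, using the standard SystemDrive/SystemRoot environment variables:

    ```python
    import os

    # On Windows, SystemDrive and SystemRoot hold the real locations,
    # whatever letter the boot drive happened to be assigned.
    system_drive = os.environ.get("SystemDrive", "C:")       # e.g. "H:"
    system_root = os.environ.get("SystemRoot", r"C:\Windows")

    print(f"System directories live on {system_drive} ({system_root})")
    ```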

    Read the article

  • Best cloud based IT Systems management services out there?

    - by Ryk
    Our startup organisation is growing fast across 2 different office locations, which brings new challenges and headaches. Our entire company is cloud based, and I am looking for a good product to manage our remote systems. Currently we do not have on-site AD servers; we are using the Windows Azure AD services, so we cannot rely on group policies at this stage. I would like to be able to achieve the following (they are all laptops):

    - Remote desktop support
    - Patch management
    - Locking down software on machines (restricting them)
    - Monitoring and managing systems

    Other benefits would be good, but if I can achieve the ones listed above, it will go a long way. We have a combination of Windows 7 Pro, Windows 8, and Windows 8.1 machines. I am currently using Windows Intune, but it is really limited: really just a glorified patch enforcer. Thank you in advance for your help.

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and a master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day.  Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance".  I realize that speakers often use a headline-grabbing title to create interest in their talk, and this one certainly got my attention.  I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me.  Clearly patching is important to all systems management and should be a part of any organization's security hygiene.  Further, PCI does require the patching of systems to maintain compliance.  So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent.

    So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software."

    Notice the word "appropriate" in the requirement.  This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question.  Haven't we all seen a vulnerability scanner throw a false positive, flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system?  Applying such a patch would obviously not be appropriate.  This does not mean an organization can ignore the fact that they need to apply security patches; it's pretty clear they must.  Of course, organizations have other options in terms of compliance when it comes to patching.  For example, they could remove a system from scope and make sure that system does not process or contain cardholder data.  [This may or may not be a significant undertaking.  I just wanted to point out that there are always options available.]

    PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months."

    Notice there is no mention of stopping the patching of one's systems.  The note also states that an organization may apply a risk-based approach [a smart approach, but also not mandated].  Such a risk-based approach is not intended to remove the requirement to patch one's systems.  It is meant, as stated, to allow one to prioritize patch installations.

    So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness?  I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture.  If patching is becoming an unbearable task, review why that is the case and possibly look for ways to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems.  Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening, and I don't think it's going to get better any time soon.
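
    As a toy illustration of the risk-based prioritization described in that note (Python; the asset names and dates are hypothetical):

    ```python
    from datetime import date, timedelta

    # Windows per the PCI DSS 6.1 note: high-priority systems within one
    # month, less-critical devices and systems within three months.
    PATCH_WINDOWS = {"critical": timedelta(days=30),
                     "internal": timedelta(days=90)}

    assets = [
        {"name": "public-web-frontend", "tier": "critical"},
        {"name": "cardholder-db", "tier": "critical"},
        {"name": "office-print-server", "tier": "internal"},
    ]

    released = date(2012, 10, 1)  # hypothetical patch release date
    for asset in sorted(assets, key=lambda a: PATCH_WINDOWS[a["tier"]]):
        deadline = released + PATCH_WINDOWS[asset["tier"]]
        print(f"{asset['name']}: patch by {deadline}")
    ```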

    Read the article

  • Team Foundation Server vs. SVN and other source control systems

    - by micha12
    We are currently looking for a version control system to use in our projects. Up to now we have been using VSS, but nowadays more powerful source control systems exist, like TFS, SVN, etc. We are planning to migrate our projects to Visual Studio 2010, so the first idea that comes to mind is to start using TFS 2010. I have never worked with SVN or other version control systems. My question is: how good is TFS compared to other source control systems? Is it a good idea to use it, or should we rather use SVN (or some other system)? Thank you.

    Read the article

  • What do the 4 keyboard input method systems in 10.04 mean?

    - by Android Eve
    I am trying to install support for another language (in addition to the default US English). Checking that language's checkbox in "Install / Remove Languages..." wasn't too difficult. :) But now I want to add keyboard support for that language, too. Again, I am prompted with a nice listbox with the following 4 options:

    - none
    - ibus
    - lo-gtk
    - th-gtk

    But I have no idea what these mean. I googled "ubuntu 10.04 keyboard input method system none ibus lo-gtk th-gtk", but all I could find were descriptions of problems, not an actual definition. Could you please point me to a webpage where I can learn about the meanings of these 4 different methods and the pros and cons of each?

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX, or DbContext if we go code-first). I can think of a few strategies right off the bat:

    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds complexity, although I assume EF makes this fairly straightforward.

    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do they go in a separate model that is used by every project?

    Don't split at all; one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, compile time, and possibly run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
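
    Since an EF snippet would be C#, here is the same "split by intent" idea sketched in Python with SQLAlchemy (table names hypothetical): each bounded context maps only the tables it needs, and a shared table such as users can be mapped in more than one small model instead of forcing one giant model.

    ```python
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    # Two small, intent-focused models instead of one 1000-table model.
    BillingBase = declarative_base()
    ReportingBase = declarative_base()

    class BillingUser(BillingBase):
        __tablename__ = "users"      # shared table, mapped in this model
        id = Column(Integer, primary_key=True)
        name = Column(String)

    class Invoice(BillingBase):
        __tablename__ = "invoices"
        id = Column(Integer, primary_key=True)
        user_id = Column(Integer)
        total = Column(Integer)

    class ReportingUser(ReportingBase):
        __tablename__ = "users"      # same table, read by the other model
        id = Column(Integer, primary_key=True)
        name = Column(String)
    ```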

    Read the article

  • How and why are operating systems bootable from a USB?

    - by user114638
    I've been told to install Ubuntu on my laptop for work in order to learn shell scripting. I've read that the best way is to install Ubuntu on a USB stick and partition my HDD. I'm curious how an OS is bootable from a USB stick. Is it literally just a small interface that can be put anywhere? This reminds me of a time I downloaded a game onto my USB stick; when I brought it to my friend's house, he told me it would run slowly if I didn't install it and only ran it from the USB stick. Is this different from running Ubuntu from a USB stick? Will Ubuntu be slow?

    Read the article

  • Why do operating systems do low level stuff in C and C++? Why not just C++?

    - by Cole Johnson
    On the Wikipedia page for Windows, it states that Windows is written in assembly for the bootloader and task switcher, and in C and C++ for kernel routines. This confuses me because, AFAIK, you can call C++ functions from a block marked extern "C", as C++ is just C with extra features (all of which can be rewritten in C if you wanted, AFAIK). I can understand using C for the kernel functions so pure C apps can use them (like printf and such), but if they can just be wrapped in an extern "C" block, then why code in C at all? So my question is: why would a kernel be written in both C and C++ instead of just C++?

    Read the article

  • How do you manage extensibility in your multi-tenant systems?

    - by Brian MacKay
    I've got a few big web-based multi-tenant products now, and very soon I can see that there will be a lot of customizations that are tenant-specific. An extra field here or there, maybe an extra page or some extra logic in the middle of a workflow - that sort of thing. Some of these customizations can be rolled into the core product, and that's great. Some of them are highly specific and would get in everyone else's way.

    I have a few ideas in mind for managing this, but none of them seem to scale well. The obvious solution is to introduce a ton of client-level settings, allowing various 'features' to be enabled on a per-client basis. The downside with that, of course, is massive complexity and clutter. You could introduce a truly huge number of settings, and over time various types of logic (presentation, business) could get way out of hand. Then there's the problem of client-specific fields, which begs for something cleaner than just adding a bunch of nullable fields to the existing tables.

    So what are people doing to manage this? Force.com seems to be the master of extensibility; obviously they've created a platform from the ground up that is super extensible. You can add on to almost anything with their web-based UI. FogBugz did something similar: they created a robust plugin model that, come to think of it, might have actually been inspired by Force. I know they spent a lot of time and money on it, and if I'm not mistaken the intention was to actually use it internally for future product development. Sounds like the kind of thing I could be tempted to build but probably shouldn't. :)

    Is a massive investment in pluggable architecture the only way to go? How are you managing these problems, and what kind of results are you seeing?

    EDIT: It does look as though FogBugz handled the problem by building a fairly robust platform and then using it to put together their screens. To extend it you create a DLL containing classes that implement interfaces like ISearchScreenGridColumn, and that becomes a module. I'm sure it was tremendously expensive to build, considering that they have a large number of devs and they worked on it for months, plus their surface area is perhaps 5% of the size of my application. Right now I am seriously wondering if Force.com is the right way to handle this. And I am a hard-core ASP.Net guy, so this is a strange position to find myself in.
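
    For concreteness, a tiny sketch (Python; all names hypothetical) of the plugin-model direction discussed above, loosely in the spirit of the ISearchScreenGridColumn example: tenant-specific extensions implement a small interface and are enabled per tenant, so custom logic stays out of the core product.

    ```python
    from abc import ABC, abstractmethod

    class GridColumnPlugin(ABC):
        """Extension point: one extra grid column per plugin."""
        @abstractmethod
        def header(self) -> str: ...
        @abstractmethod
        def cell_value(self, record: dict) -> str: ...

    class RegionColumn(GridColumnPlugin):
        """A tenant-specific customization packaged as a plugin."""
        def header(self) -> str:
            return "Region"
        def cell_value(self, record: dict) -> str:
            return record.get("region", "")

    # The core product only knows the interface; which plugins are
    # active is per-tenant configuration, not core code.
    TENANT_PLUGINS = {"acme": [RegionColumn()], "globex": []}

    def render_grid(tenant: str, records: list[dict]) -> list[list[str]]:
        plugins = TENANT_PLUGINS.get(tenant, [])
        header = ["Title"] + [p.header() for p in plugins]
        rows = [[r["title"]] + [p.cell_value(r) for p in plugins]
                for r in records]
        return [header] + rows
    ```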

    Read the article

  • ERP/CRM Systems. Desktop based? Web based?

    - by Parhs
    Hello guys. I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser?

    My first experience was with a web-based ERP when I was 14 years old. It was terribly slow: for the simplest task you had to do lots of clicks, there was no keyboard support, and pages took ages to load.

    Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The computer it ran on, which worked until today and still has no problem, was from 1993. The user interface, of course, was text-based. The speed at which those guys placed orders was amazing: just typing the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the ERP's page for placing orders (click sales orders) seems terribly slow at adding a product. No keyboard shortcut works to save what you added, and generally I believe you need 4 times more time to place an order compared to the text interface. Having to use both mouse and keyboard for this task is bad and sadistic. So how the heck can these people ever use a system like that?

    So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults that browsers use isn't cross-compatible, and that is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • Are your merchandise systems limiting growth? Oracle Retail's Merchandise Operations Management could be the answer

    - by user801960
    In this video, Lara Livgard, Director of Oracle Retail Strategy, introduces Oracle Retail Merchandise Operations Management (MOM), a set of integrated, modular solutions that support buying, pricing, inventory management and inventory valuation across a retailer's channels, countries, and business models. MOM is the backbone of successful retail operations, providing timely and accurate visibility across the entire enterprise and enabling efficient supply-chain execution driven by plans and forecasts. Its modular architecture facilitates tailored, high-value implementations, giving retailers the information they need in order to offer a quality customer experience through a truly integrated multi-channel approach. Further information is available on the Oracle Retail website regarding Merchandise Operations Management.

    Read the article

  • Oracle's Sun Blade modular systems: Can they give VBlock a run for their money?

    - by llaszews
    Check out these new hardware and software integrated solutions from Oracle Sun (they are not Exa* machines): http://www.oracle.com/us/products/servers-storage/servers/blades/index.html They offer a number of things not offered by VBlock: Oracle DB, Oracle WebLogic, the OS, and end-to-end management. It will be interesting to see how much adoption these machines get, given how long VBlock has been around and the Exa*-focused strategy at Oracle.

    Read the article

  • What are the recommended resources for learning about the Actor model of concurrent systems?

    - by Larry OBrien
    The Actor concurrency model is clearly gaining favor. Is there a good book that presents the patterns and pitfalls of the model? I am thinking about something that would discuss, for instance, the problems of consistency and correctness in the context of hundreds or thousands of independent Actors. It would be okay if it were associated with a specific language (Erlang, I would imagine, since that seems universally regarded as the proven implementation of Actors), but I am hoping for something more than an introductory chapter or two. I'm actually most interested in Actors as they are implemented in Scala, if there are any such resources available.

    Read the article

  • ERP/CRM Systems. Desktop based? Web based? [closed]

    - by Parhs
    I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser?

    My first experience was with a web-based ERP when I was 14 years old. It was terribly slow: for the simplest task you had to do lots of clicks, there was no keyboard support, and pages took ages to load.

    Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The computer it ran on, which worked until today and still has no problem, was from 1993. The user interface, of course, was text-based. The speed at which those guys placed orders was amazing: just typing the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the ERP's page for placing orders (click sales orders) seems terribly slow at adding a product. No keyboard shortcut works to save what you added, and generally I believe you need 4 times more time to place an order compared to the text interface. Having to use both mouse and keyboard for this task is bad and sadistic. So how the heck can these people ever use a system like that?

    So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults that browsers use isn't cross-compatible, and that is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • What are some alternatives to ASI iMIS Content Management Systems? [closed]

    - by SLY
    Possible Duplicate: Which Content Management System (CMS)/Wiki should I use?

    I am working with a team to select a new content management system for a large membership organization (around 25,000 members). The organization has revenue, so I'm not looking for a dirt-cheap solution. The site currently uses ASI iMIS, which is based on ColdFusion; it's difficult to work with and not flexible enough for our needs. What other possible alternatives to ASI iMIS are there? Ideally the solution would have some sort of support from the vendor. So far I've come up with:

    - Drupal/Acquia
    - SDL Tridion
    - Plone
    - Ellington (probably too news-like)
    - Pinax (probably not developed enough)

    Read the article

  • EZ Systems publishes three security patches for flaws in versions 4.1 and 4.2 of its CMS

    EZ Systems, publisher of the EZ Publish content management system, has just released a series of three security patches. These patches address flaws affecting versions 4.1 and 4.2 of the CMS, and applying them is strongly recommended.

    -> The patches can be found here: http://ez.no/developer/security/secu...y_in_ez_search
    -> Official announcement: http://share.ez.no/blogs/ez/security...lish-instances...

    Read the article

  • How do I dissuade users from using the same password with similar systems?

    - by Resorath
    I'm building a web application that connects to other web services (using strictly anonymous binding, so no user passwords are involved). However, the web application maintains its own users itself, and is required to ask for certain details such as e-mail addresses and public linking information to these other web services (for example, a username but not a password). I want to deter or prevent users from reusing passwords in my application that they have also used in the applications I'm linking to. For example, if I ask for their e-mail and they provide me with their Gmail address, I don't want them using their Gmail password for my system. Another example would be reusing a password from a linked system for which they also gave me their username. One idea I had was to simply take the information they gave me, along with the password they are trying to store, and try to log in to these external web applications to test the password, then immediately unbind if I was successful and ask the user to use a different password. However, I suspect there is a host of moral and legal issues there. The reason this is a big deal to me is accountability. My application is simply not funded well enough to invest properly in security around user passwords. A salted, hashed password in a public SQL-like database is as secure as it gets. So if passwords and linked usernames or e-mails get out, I don't want my user base compromised.
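
    On the "salted, hashed password" point above, a minimal storage-side sketch in Python (the iteration count and parameters are illustrative assumptions, not a recommendation):

    ```python
    import hashlib, hmac, os

    def hash_password(password: str, salt: bytes | None = None):
        # A fresh random salt per user means identical passwords
        # (including ones reused from other services) never share
        # a stored hash.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        # Constant-time comparison of the recomputed digest.
        return hmac.compare_digest(hash_password(password, salt)[1], digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify("correct horse battery staple", salt, digest)
    ```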

    Read the article

  • Is there an industry standard for systems' registered-user permissions, in terms of the database model?

    - by EASI
    I have developed many applications with registered-user access for my enterprise clients. Over the years I have changed my way of doing it, especially because I have used many programming languages and database types along the way. Some approaches were not as simple as view, create, and/or edit permissions for each module in the application; others were as light as "can or cannot access" a certain module. But now that I am developing a very extensive application with many modules and many kinds of users accessing them, I was wondering whether there is a standard model for doing this, because I can already see that the simple or light ways won't be enough.
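
    For what it's worth, the closest thing to an industry standard is role-based access control (RBAC, formalized in ANSI/INCITS 359): users get roles, roles get permissions. A minimal sketch of the usual table layout via Python's sqlite3 (table and module names are hypothetical):

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users       (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE roles       (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE permissions (id INTEGER PRIMARY KEY, module TEXT, action TEXT);
        -- Many-to-many junctions: users get roles, roles get permissions.
        CREATE TABLE user_roles  (user_id INTEGER, role_id INTEGER);
        CREATE TABLE role_perms  (role_id INTEGER, perm_id INTEGER);
    """)

    # "Can user 1 edit the invoices module?" becomes one join:
    CAN = """
        SELECT 1 FROM user_roles ur
        JOIN role_perms rp ON rp.role_id = ur.role_id
        JOIN permissions p ON p.id = rp.perm_id
        WHERE ur.user_id = ? AND p.module = ? AND p.action = ?
    """
    def allowed(user_id, module, action):
        return db.execute(CAN, (user_id, module, action)).fetchone() is not None
    ```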

    Read the article

  • What is the best agile project management technique for developing innovative software systems?

    - by user654019
    I am involved with the development of innovative software. The development is innovative in the sense that we don't yet know how to build it, we don't know which algorithms we should use to implement it, and nobody has done it before. The process consists of several stages of studying books and papers, suggesting algorithms, writing prototypes, and comparing the results with actual data. We hope that after some number of iterations we will converge to a valid software system. What is the best project management approach we can use? Is there any project management software for these types of projects?

    Read the article

  • Would there be any negative side-effects of sharing /var/cache/apt/ between two systems?

    - by ændrük
    In the interest of conserving bandwidth, I'm considering mounting a VirtualBox host's /var/cache/apt as /var/cache/apt in the guest. Both host and guest are Ubuntu 10.10 32-bit. Would there be any negative consequences to doing this? I'm aware of the more robust solutions like apt-proxy, but I'd prefer this simpler solution if it's possible in order to spare the host the overhead of running extra services.

    Read the article

  • Architecting persistence (and other internal systems). Interfaces, composition, pure inheritance or centralization?

    - by Vandell
    Suppose that you need to implement persistence. I think you're generally limited to four options (correct me if I'm wrong, please). Each persistent class can:

    - implement an interface (IPersistent);
    - contain a 'persist-me' object: a specialized object (or class) made only to be used by the class that contains it;
    - inherit from Persistent (a base class);

    or, as a fourth option, you can create a gigantic class (or package) called Database and put all your persistence logic there. What are the advantages and problems that come with each one? In a small (5 kloc) and algorithmically (or organisationally) simple app, which is probably the best option?
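
    For concreteness, a compact sketch (Python; all names hypothetical) of the first and fourth options side by side:

    ```python
    from abc import ABC, abstractmethod

    # Option 1: each persistent class implements an interface.
    class IPersistent(ABC):
        @abstractmethod
        def save(self) -> None: ...

    class Customer(IPersistent):
        def __init__(self, name: str):
            self.name = name
        def save(self) -> None:
            # Stand-in for real storage logic owned by the class itself.
            print(f"INSERT INTO customers (name) VALUES ('{self.name}')")

    # Option 4: one central Database class owns all persistence logic,
    # and domain classes stay free of storage concerns.
    class Order:
        def __init__(self, total: float):
            self.total = total

    class Database:
        def save_order(self, order: Order) -> None:
            print(f"INSERT INTO orders (total) VALUES ({order.total})")
    ```

    Under these assumptions, the centralized option keeps domain classes free of storage concerns, while the interface option lets each class own its persistence; which reads better in a 5 kloc app is largely a matter of taste.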

    Read the article

  • Could it be more efficient for systems in general to do away with Stacks and just use Heap for memory management?

    - by Dark Templar
    It seems to me that everything that can be done with a stack can be done with the heap, but not everything that can be done with the heap can be done with the stack. Is that correct? Then, for simplicity's sake, and even if we do lose a small amount of performance with certain workloads, couldn't it be better to just go with one standard (i.e., the heap)? Think of the trade-off between modularity and performance. I know that isn't the best way to describe this scenario, but in general it seems that simplicity of understanding and design could be the better option, even if there is potential for better performance.

    Read the article

  • Hardware refresh of Solaris 10 systems? Try this!

    - by mgerdts
    I've been seeing quite an uptick in people wanting to install Solaris 11 when they are doing hardware refreshes.  I applaud that effort: Solaris 11 (and 11.1) have great improvements that advance the state of the art and make the best use of the latest hardware. Sometimes, however, you really don't want to disturb the OS or upgrade to a later version of an application that is certified with Solaris 11.  That's a great use of Solaris 10 Zones.  If you are already using Solaris Cluster, or would like to have more protection as you put more eggs in an ever-growing basket, check out solaris10 Brand Zone Clusters.

    Read the article
